Preprint
Article

This version is not peer-reviewed.

Information-Theoretic Framework for Quantum State Purification and Error Correction via Entropy Compression Mechanisms

Submitted:

28 April 2026

Posted:

29 April 2026


Abstract
The extreme sensitivity of qubits to environmental noise constitutes the central bottleneck impeding the practical deployment of near-term quantum computing. Quantum error correction suppresses physical errors by encoding logical qubits into redundant physical qubits, yet its corrective gain deteriorates sharply as physical error rates approach the fault-tolerant threshold. This paper proposes a purification-assisted quantum error correction framework that systematically embeds a purification preprocessing module between the encoding layer and the physical layer, founded on the permutation symmetry of multiple noisy state copies. From an information-theoretic perspective, the purification step acts as an entropy reduction mechanism: by projecting the joint state of identical copies onto the symmetric subspace, it filters noise entropy out of the quantum state, concentrates quantum information into a lower-entropy subspace, and exponentially compresses the effective physical error rate before it enters the error-correcting code, establishing a compound exponential suppression of the logical error rate under ideal assumptions. Analytical derivations under depolarizing noise yield closed-form expressions for the purification fidelity and the enhanced equivalent fault-tolerant threshold, showing that purification with three copies raises the surface code threshold from approximately 1.1% to approximately 2.0%. Building on this framework, an Iterative Purification-assisted Error Correction (IPEC) algorithm is designed that dynamically adjusts the purification depth via real-time syndrome feedback, achieving an adaptive balance between fidelity gain and resource consumption.
Monte Carlo simulations under both independent depolarizing noise and circuit-level noise models validate the theoretical predictions: the IPEC algorithm reduces the logical error rate by approximately 46-fold at code distance 7 with a physical error rate of 1.0%, achieves an approximately 8-fold improvement on quantum LDPC codes, and maintains robust performance in non-stationary noise environments through its adaptive mechanism. From an entropy-theoretic perspective, the proposed framework constitutes an entropy management mechanism for quantum computation, in which the purification process compresses state entropy at the input stage and error correction maintains the low-entropy condition during computation, together forming a closed-loop entropy flow control system.

1. Introduction

Quantum computing, as a revolutionary computing paradigm in the field of information science, has its core advantage in exploiting the superposition and entanglement of qubits to achieve exponential speedup for specific problems. However, qubits are extremely sensitive to environmental noise; decoherence and quantum gate operation errors accumulate rapidly with increasing circuit depth, severely constraining the practical usability of current quantum processors. On the path toward large-scale fault-tolerant quantum computing, quantum error correction is widely regarded as an indispensable key technology: it actively suppresses physical errors by encoding logical qubits into more physical qubits and performing syndrome measurements and error correction operations in the redundant space. Meanwhile, quantum state purification—a technique that extracts high-fidelity states by performing local operations on multiple noisy state copies—has been extensively studied in quantum communication and quantum networks, with recent experiments demonstrating entanglement distribution across multinode quantum networks and qubit teleportation between non-neighbouring network nodes [1,2]. From an information-theoretic perspective, as highlighted by recent reviews of rapid advances in quantum error correction and by experimental long-distance entanglement purification over telecom fibre [3,4], purification can be interpreted as an entropy reduction operation: it filters noise entropy out of quantum states and concentrates quantum information into lower-entropy subspaces, thereby functioning as a noise entropy filter ahead of downstream processing. Yet the deep synergy between this entropy suppression process and quantum error correction has not been systematically elucidated to date.
How to integrate purification strategies and error correction code structures within a unified technical framework, enabling the advantages of both to complement each other and break through the performance bottlenecks encountered when each is used independently, is a scientific problem that urgently needs to be resolved in advancing near-term quantum computing toward practical utility.
In recent years, the international academic community has achieved a series of important advances in the interdisciplinary field of quantum state purification and quantum error correction. In purification theory, systematic review work on entanglement purification protocols has comprehensively surveyed the principles and experimental progress of multiple technical approaches, including linear optical purification, cross-Kerr nonlinear purification, and measurement-based deterministic purification [5,6]. In extended applications of purification technology, quantum error mitigation strategies based on purification ideas have been successfully applied to electron pair-correlation simulations on superconducting quantum processors, suppressing errors by one to two orders of magnitude relative to post-selection methods in systems of up to 20 qubits [7]. Error-correction experiments oriented toward fault-tolerant architectures have also continuously set new performance records: the surface code experiment on the superconducting processor Willow achieved a logical error rate of 0.143% per round on a distance-7 code, with the lifetime of the logical qubit exceeding that of the best physical qubit by a factor of 2.4 for the first time [8]. At the level of encoding efficiency in error correction codes, a fault-tolerant quantum memory scheme based on low-density parity-check codes achieved an error threshold of 0.7%, requiring only 288 physical qubits to protect 12 logical qubits through nearly one million syndrome cycles [9]. A comprehensive review of the quantum error mitigation field points out that purity constraint methods, as an important class of mitigation strategies, rely on the prior assumption that the ideal output state is pure, but the scalability of this approach still faces fundamental challenges under correlated noise and high-depth circuits [10].
In distributed quantum computing scenarios, an entanglement purification scheme utilizing quantum low-density parity-check codes to distill GHZ states demonstrated the feasibility of deeply integrating error correction codes with purification protocols [11]. An entanglement purification protocol based on noise-guessing decoding achieved effective high-fidelity distillation with a small collection of only 16 Bell pairs, providing a practical path for resource-constrained quantum networks [12]. Although the aforementioned works have advanced the field from different perspectives, a unified technical framework that systematically embeds purification operations into the error correction process and achieves adaptive optimization at the algorithmic level is still lacking.
Based on the research status and shortcomings above, this paper proposes a purification-assisted quantum error correction framework, with core innovations at three levels. In architectural design, a joint optimization model of purification operations and error correction decoding is constructed, treating purification as an active noise entropy suppression stage within the error correction cycle rather than an independent post-processing step, thereby reducing the effective noise intensity of data qubits before syndrome extraction; this design enables systematic entropy compression of noisy quantum states prior to encoding. In algorithm design, an Iterative Purification-assisted Error Correction (IPEC) algorithm is proposed that dynamically adjusts the ratio of purification rounds to the error correction strategy based on real-time syndrome information, achieving an optimal balance between fidelity gain and resource consumption. In theoretical analysis, analytical expressions for the fidelity improvement bound and fault-tolerance threshold of the purification-assisted error correction scheme are derived, revealing the quantitative contribution of purification operations to the compound exponential suppression of logical error rates under ideal assumptions.

2. Research Progress in Quantum State Purification and Quantum Error Correction

The core objective of quantum error correction is to protect quantum information from physical noise through encoding redundancy. Recent experimental milestones in quantum error correction have demonstrated real-time fault-tolerant error correction using the color code on trapped-ion platforms [13], fault-tolerant universal gate sets on logical qubits encoded in the seven-qubit code [14], and quantum error correction beyond the break-even point with discrete-variable bosonic encodings [15], collectively establishing the practical viability of fault-tolerant quantum computation. The surface code, as the topological quantum error correcting code most deeply studied at present, has a two-dimensional lattice structure that only requires coupling between nearest-neighbor qubits, a characteristic that gives the surface code natural implementation advantages on hardware platforms such as superconducting quantum processors. Experiments based on superconducting platforms have confirmed that under distance-5 encoding, the error rate of logical qubits can be suppressed to a level better than that of a single physical qubit [16]. This work marks the breakthrough of the "break-even point" in the field of quantum error correction and also provides a direct experimental benchmark for extending code distance. The practical feasibility of topological quantum memories has been further validated through the demonstration of real-time quantum error correction beyond the break-even point on superconducting platforms, where model-free reinforcement learning was employed to optimize feedback strategies and achieve a coherence gain exceeding a factor of two over the best physical qubit [17]. 
On neutral atom platforms, the introduction of reconfigurable atomic arrays endows quantum processors with flexible long-range connectivity, and the logical quantum processor thereby implemented has demonstrated multiple encoding operations including surface codes, color codes, and fault-tolerant logical GHZ state preparation on up to 280 physical qubits [18]. This architecture validates the feasibility of logical-level parallel control, while also revealing the profound impact of hardware topology on error correction performance. The graph-theoretic properties underlying such hardware connectivity structures—including network diagnosability under comparison diagnosis models [19], edge connectivity of expanded hypercube-like architectures [20], Hamiltonian path properties in interconnection digraphs [21], and fault-tolerant path embedding in k-ary n-cube networks with faulty nodes—have been systematically studied in interconnection network theory [22], providing mathematical tools that inform the topological design and fault-tolerance analysis of quantum processor architectures.
Encoding efficiency is the key bottleneck constraining resource overhead for large-scale fault-tolerant quantum computing. Quantum low-density parity-check codes, with their high encoding rate and favorable code distance growth characteristics, are considered strong candidates for overcoming the efficiency limitations of surface codes; a comprehensive review of this code family has systematically analyzed their construction, decoding, and fault-tolerance properties [23]. Research has shown that non-local syndrome extraction circuits realized through atom rearrangement operations in reconfigurable atomic arrays can run high-efficiency quantum LDPC codes with constant-scale resource overhead, with fault-tolerant performance surpassing surface codes at scales of only a few hundred physical qubits [24]. The precision of decoding algorithms also has a decisive impact on error correction effectiveness. The neural network decoder AlphaQubit based on a cyclic Transformer architecture achieved performance surpassing traditional matching decoders on real experimental data from the Google Sycamore processor; this decoder can adaptively learn complex noise characteristics such as crosstalk and leakage [25]. The experimental implementation of dynamic surface codes further extends the temporal degrees of freedom of error correcting codes; by alternating the roles of data bits and measurement bits in each round of error correction, this scheme achieves error correction with built-in leakage removal on a hexagonal lattice [26]. Magic state distillation is a key resource preparation step for realizing universal fault-tolerant quantum computing. On superconducting processors, researchers successfully prepared logical magic states with fidelity exceeding the break-even point using error correction encoding, demonstrating that adaptive circuits can improve magic state yield without increasing the number of physical qubits [27]. 
The introduction of fault-tolerant post-selection technology provides a systematic optimization framework for reducing magic state preparation overhead, significantly reducing the resources required for strong logical error suppression [28].
As quantum error correction technology matures, error mitigation technology oriented toward near-term noisy quantum devices constitutes another complementary technical approach. Probabilistic error cancellation methods obtain unbiased expected value estimates by learning and inverting noise channels; experiments based on sparse Pauli-Lindblad noise models have successfully demonstrated effective mitigation of correlated noise on a 20-qubit superconducting processor [29]. Large-scale error mitigation experiments implemented by the IBM team on 127-qubit processors demonstrated the practical value of quantum computing before the arrival of the fault-tolerance threshold [30]. This experiment adopted a combined strategy of zero-noise extrapolation and probabilistic error cancellation, obtaining physical observables superior to classical approximations on a 60-layer deep Ising model simulation circuit. However, the scalability of error mitigation technology faces fundamental theoretical constraints—even when circuit depth only slightly exceeds constant order, the number of sampling instances required in the worst case will exhibit super-polynomial growth [31]. This finding profoundly reveals the intrinsic bottleneck of mitigation strategies in large-scale applications, and from a different angle illustrates the necessity of integrating purification operations with error correction mechanisms. Scalability experiments of zero-noise extrapolation methods on larger-scale quantum circuits further validate the above assertion, demonstrating at 26 qubits and circuit depth 120 layers that the accuracy of mitigated observables notably exceeds expectations [32]. 
It is also worth noting that the challenge of coping with non-stationary dynamics in complex systems extends beyond quantum computing; in fields such as financial time series analysis, spatio-temporal graph attention mechanisms have been developed to capture regime-switching behavior in non-stationary environments [33], and the conceptual need for adaptive strategies that respond to time-varying conditions is equally critical in quantum error correction, where processor noise characteristics are known to drift over time.
Quantum state purification provides a method for noise suppression whose theoretical basis is independent of specific noise models. At the algorithmic level of purification theory, a streaming purification protocol based on SWAP tests extends the purification process to quantum states of arbitrary dimension and proves that recursive SWAP test schemes are asymptotically optimal in sample complexity [34]. Optimality analysis of purification protocols shows that under depolarizing noise, there exists a precise trade-off relationship between purification fidelity and success probability, providing theoretical guidance for purification strategy selection in resource-constrained scenarios [35]. The virtual channel purification protocol extends purification ideas from the quantum state level to the quantum channel level; through the joint design of flag fault-tolerance technology and virtual state purification, this protocol can still provide rigorous error suppression guarantees without knowing the target quantum state or noise model [36]. The joint optimization problem of time and space overhead in fault-tolerant quantum computing also merits attention; a scheme combining non-zero rate quantum LDPC codes with concatenated Steane codes has been proven capable of achieving fault-tolerant computation with constant space overhead and poly-logarithmic time overhead [37]. At the intersection of quantum error correction and mitigation, experiments applying zero-noise extrapolation to error correction circuits reveal that logical errors exhibit polynomial dependence on noise intensity, a characteristic that gives extrapolation methods natural compatibility within the error correction framework [38]. 
Comprehensive experimental validation oriented toward fault-tolerant architectures has achieved milestone progress on neutral atom platforms; this research integrated surface code error correction, transversal gate logical entanglement, and transversal teleportation based on the [[15,1,3]] code in a system of up to 448 atoms, achieving for the first time all core elements of universal fault-tolerant quantum computation within a single experimental system [39]. The theoretical breakthrough of constant-overhead magic state distillation demonstrates that carefully constructed error correction codes can minimize the distillation overhead exponent γ, thereby removing a long-standing resource bottleneck in universal quantum computing [40]. Zero-level distillation schemes complete high-fidelity preparation of magic states at the physical level, reducing the number of physical qubits required for distillation by one to two orders of magnitude [41]. New strategies in quantum error mitigation based on structural encoding and classical error correction codes further confirm the central role of coding redundancy in noise suppression [42].
To clearly position the contributions of this work relative to existing approaches, Table 1 provides a comparative summary of representative purification-related methods and the framework proposed in this paper.
As shown in Table 1, three key distinctions differentiate the proposed framework from existing approaches. First, this work establishes a unified framework that systematically embeds purification as an entropy suppression preprocessing layer into the error correction pipeline, rather than treating purification and error correction as isolated single-point techniques. Second, the IPEC algorithm is not a simple superposition of purification and error correction; its core innovation lies in the adaptive feedback mechanism that dynamically adjusts purification depth based on real-time syndrome information, enabling the system to respond to runtime noise fluctuations—a capability absent from all existing purification schemes. Third, this work provides quantitative characterization of the threshold enhancement (from approximately 1.1% to approximately 2.0% with three copies), whereas prior works either offer only qualitative theoretical analysis or rely solely on experimental observations without closed-form threshold expressions.
Synthesizing the above research advances, it can be seen that experimental validation of quantum error correcting codes has crossed fault-tolerance thresholds on multiple hardware platforms, error mitigation technology has demonstrated notable noise suppression effects on near-term devices, and quantum state purification theory has achieved systematic breakthroughs in algorithmic efficiency and optimality. However, existing work still has notable gaps in four aspects: purification operations have not yet been incorporated as organic components into the encoding and decoding processes of error correction codes; there is a lack of dynamic coordination mechanisms based on real-time feedback information between purification rounds and error correction strategies; the information-theoretic interpretation of how purification reduces the von Neumann entropy of noisy quantum states—and the quantitative relationship between entropy reduction rate and error correction performance—has not been established; and the quantitative contribution of purification to fault-tolerance thresholds and logical error rates has not yet obtained analytical theoretical characterization. These unresolved problems constitute the starting point of the research in this paper.

3. Purification-Based Quantum Error Correction Framework

3.1. System Model and Formal Definition of Noise Channels

The construction of a quantum error correction framework is inseparable from precise characterization of physical noise. The system studied in this paper consists of $n$ physical qubits, each of which is subject to noise channel action during preparation, gate operations, and measurement. To ensure the generality of the subsequent joint analysis of purification and error correction, this section provides rigorous mathematical definitions for the core noise channels involved in the system.
For a single-qubit depolarizing channel, its action can be expressed as applying one of the three Pauli errors to the input state $\rho$, each with probability $p/3$, and leaving it unchanged with probability $1-p$; equivalently, the input is replaced by the completely mixed state $I/2$ with probability $4p/3$. The Kraus operator representation of this channel is:
$$\mathcal{E}_{\text{dep}}(\rho) = (1-p)\,\rho + \frac{p}{3}\left(X\rho X + Y\rho Y + Z\rho Z\right)$$
where $\rho$ denotes the density matrix of the input quantum state; $p \in [0,1]$ is the depolarizing probability, describing the noise intensity; and $X$, $Y$, $Z$ are the three Pauli operators, corresponding to bit-flip, bit–phase-flip, and phase-flip errors, respectively. This channel model is widely adopted in the noise characterization of mainstream quantum computing platforms such as superconducting and ion-trap systems [37]. When $p$ is small, the depolarizing channel can be approximated as independent and identically distributed Pauli noise, a property that simplifies the subsequent syndrome analysis.
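As a concrete sanity check of the Kraus form above, the channel can be applied numerically to a density matrix. The following Python sketch (using NumPy; variable names are illustrative) verifies that at $p = 3/4$ the channel maps $|0\rangle\langle 0|$ exactly to the completely mixed state $I/2$, consistent with the equivalent mixing probability $4p/3 = 1$:

```python
import numpy as np

# Pauli operators
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarize(rho, p):
    """Single-qubit depolarizing channel in its Kraus form:
    E(rho) = (1 - p) rho + (p/3) (X rho X + Y rho Y + Z rho Z)."""
    return (1 - p) * rho + (p / 3) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z)

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)  # |0><0|
out = depolarize(rho0, 0.75)  # p = 3/4 sends any input to I/2
```

The map is trace preserving for every $p \in [0,1]$, in agreement with the completeness condition of its Kraus operators.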
In superconducting qubit systems, the amplitude damping channel characterizes the spontaneous decay of a qubit from the excited state $|1\rangle$ to the ground state $|0\rangle$, with Kraus operators:
$$K_0 = \begin{pmatrix} 1 & 0 \\ 0 & \sqrt{1-\gamma} \end{pmatrix}, \qquad K_1 = \begin{pmatrix} 0 & \sqrt{\gamma} \\ 0 & 0 \end{pmatrix}$$
where $\gamma \in [0,1]$ is the decay probability, satisfying $\gamma = 1 - e^{-t/T_1}$, with $T_1$ the energy relaxation time of the qubit and $t$ its evolution time; $K_0$ and $K_1$ correspond to the two evolution paths in which no decay occurs and decay occurs, respectively. The amplitude damping channel is a non-unitary channel that compresses the quantum state toward the ground state along the $z$-axis, producing asymmetric noise that reduces the efficiency of traditional symmetric purification schemes.
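A minimal numerical sketch of this channel (illustrative Python, assuming the Kraus operators above) confirms both the completeness condition $K_0^{\dagger}K_0 + K_1^{\dagger}K_1 = I$ and the asymmetric decay of the excited-state population:

```python
import numpy as np

def amp_damp_kraus(gamma):
    """Kraus operators of the single-qubit amplitude damping channel."""
    K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
    K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)
    return K0, K1

def amp_damp(rho, gamma):
    """Apply the channel via its operator-sum representation."""
    K0, K1 = amp_damp_kraus(gamma)
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

gamma = 1 - np.exp(-1.0)  # gamma = 1 - exp(-t/T1), evaluated at t = T1
rho_excited = np.array([[0, 0], [0, 1]], dtype=complex)  # |1><1|
out = amp_damp(rho_excited, gamma)  # population relaxes toward |0><0|
```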
In multi-qubit systems, noise often does not act independently on each qubit. The existence of correlated noise means that single-qubit channel models cannot fully capture the decoherence behavior of the system. This framework formalizes the noise channel for $n$ qubits as a completely positive trace-preserving map $\mathcal{N}: \mathcal{B}(\mathcal{H}^{\otimes n}) \to \mathcal{B}(\mathcal{H}^{\otimes n})$, where $\mathcal{H}$ is the single-qubit Hilbert space and $\mathcal{B}(\cdot)$ denotes the space of bounded operators. This map can be characterized through the operator-sum representation $\mathcal{N}(\rho) = \sum_k E_k \rho E_k^{\dagger}$, where $\{E_k\}$ is a set of Kraus operators satisfying the completeness condition $\sum_k E_k^{\dagger} E_k = I$ [38]. As shown in Figure 1, this noise model encompasses multiple types ranging from independent depolarizing noise to spatially correlated Pauli-Lindblad noise, providing a unified mathematical foundation for subsequent purification module design.
Figure 1. Schematic diagram of the noise channel model for multi-qubit systems.
The subsequent theoretical analysis in this paper is established under the following explicit assumptions: (i) the noise acting on each state copy is independent and identically distributed (i.i.d.), meaning that each copy undergoes the same noise channel independently; (ii) the multiple copies used in the purification process are prepared independently, with no inter-copy correlations introduced during the preparation stage; (iii) the noise channels are uncorrelated across different physical qubits unless explicitly stated otherwise. These assumptions are standard in purification theory and provide a tractable analytical framework for deriving closed-form performance bounds [43]. It should be noted, however, that these conditions represent idealized scenarios; extending the proposed framework to realistic correlated or non-Markovian noise environments may require relaxation of these assumptions, which is discussed as a limitation in Section 5.3.

3.2. Overall Architecture of the Purification-Assisted Error Correction Framework

The overall architecture of this framework is shown in Figure 2, comprising four functional modules: the noisy state input module, the purification preprocessing module, the error correction encoding/decoding module, and the adaptive feedback control module. The noisy state input module feeds the quantum states of $n$ physical qubits, after quantum gate operations, into the purification pipeline. The purification preprocessing module is the key innovation of this framework; it receives $m$ ($m \ge 2$) copies of quantum states that have passed through the same noise channel and extracts higher-fidelity quantum states by performing SWAP tests or projection measurements. From an information-theoretic standpoint, the purification preprocessing module functions as a noise entropy filter: it reduces the von Neumann entropy $S(\rho)$ of the input state by projecting out antisymmetric noise components residing in the high-entropy subspace, thereby concentrating the quantum information of the output state into the dominant eigenspace of the density matrix. This entropy suppression mechanism compresses the entropy of noisy quantum states prior to encoding, ensuring that the subsequent error correction module operates on states with substantially lower informational entropy.
The core operation of the purification preprocessing module is based on the symmetric subspace projection principle. For $m$ independently prepared noisy states $\rho^{\otimes m}$, projecting onto the symmetric subspace $\mathrm{Sym}^m(\mathcal{H})$ effectively filters out antisymmetric noise components. The mathematical description of the projection operation is:
$$\rho_{\text{pur}} = \frac{\Pi_{\text{sym}}\, \rho^{\otimes m}\, \Pi_{\text{sym}}}{\operatorname{Tr}\!\left(\Pi_{\text{sym}}\, \rho^{\otimes m}\right)}$$
where $\Pi_{\text{sym}}$ is the projection operator onto the symmetric subspace of $m$ qubits; $\rho_{\text{pur}}$ is the quantum state output by the purification operation; and the denominator $\operatorname{Tr}(\Pi_{\text{sym}} \rho^{\otimes m})$ is the probability of successful projection, ensuring normalization of the output state. The physical implementation of this projection can be accomplished through recursive SWAP test circuits [44], as shown in Figure 3, where each round of testing uses an ancilla qubit to perform a controlled-SWAP gate and then measures the ancilla—when the measurement result is $|0\rangle$ the system state is retained; otherwise it is discarded.
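For the smallest case $m = 2$, the symmetric projector takes the explicit form $\Pi_{\text{sym}} = (I + \mathrm{SWAP})/2$, and the projection above can be checked numerically; by the swap trick, the success probability equals $(1 + \operatorname{Tr}\rho^2)/2$. A short Python sketch (function names are illustrative):

```python
import numpy as np

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
P_sym = (np.eye(4) + SWAP) / 2  # projector onto the two-qubit symmetric subspace

def purify_two_copies(rho):
    """Project rho (x) rho onto the symmetric subspace and renormalize."""
    joint = np.kron(rho, rho)
    projected = P_sym @ joint @ P_sym
    p_succ = np.trace(projected).real  # probability of successful projection
    return projected / p_succ, p_succ

F0 = 0.9
rho = np.diag([F0, 1 - F0]).astype(complex)  # depolarized |0> with fidelity 0.9
rho_pur, p_succ = purify_two_copies(rho)     # p_succ = (1 + Tr rho^2) / 2
```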
The error correction encoding/decoding module receives purified quantum states as input and employs $[[n,k,d]]$ stabilizer codes for encoding protection. The encoding operation maps $k$ logical qubits to the code space of $n$ physical qubits, and the code distance $d$ determines that errors of weight up to $\lfloor (d-1)/2 \rfloor$ are correctable. This framework is compatible with multiple error correcting code structures, including surface codes and quantum LDPC codes. Surface codes have engineering implementation advantages on superconducting platforms because they require only nearest-neighbor coupling [45], while quantum LDPC codes demonstrate superior resource efficiency in medium-to-large-scale systems thanks to a higher encoding rate $k/n$ [46]. Table 2 provides a comparison of key parameters for the two types of error correcting codes considered in this framework.
The adaptive feedback control module is the core mechanism for realizing dynamic optimization in this framework. This module extracts real-time error rate estimates $\hat{p}_{\text{eff}}$ from syndrome measurements and adjusts the resource allocation for the next round of purification accordingly—when the detected effective error rate exceeds the preset threshold $p_{\text{th}}$, the number of purification copies $m$ is increased to strengthen noise suppression; when the effective error rate is sufficiently below the threshold, purification consumption is reduced to conserve qubit resources. This feedback-based adjustment strategy keeps the system near-optimal in its fidelity–resource trade-off under different noise conditions.
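The text specifies this feedback rule only qualitatively. The following Python sketch is one possible instantiation of that description; the function name, the hysteresis factor, and the copy-number bounds (`m_min`, `m_max`) are our own illustrative assumptions, not part of the framework's specification:

```python
def adjust_copies(m, p_eff, p_th, m_min=2, m_max=5, hysteresis=0.5):
    """One possible feedback rule: raise the purification depth when the
    syndrome-estimated error rate exceeds the threshold, and lower it when
    the estimate is well below (hysteresis avoids oscillating around p_th)."""
    if p_eff > p_th:
        return min(m + 1, m_max)   # strengthen noise suppression
    if p_eff < hysteresis * p_th:
        return max(m - 1, m_min)   # conserve qubit resources
    return m                       # keep the current purification depth
```

For example, with $p_{\text{th}} = 1\%$, an estimate of 2% raises $m$ by one, while an estimate of 0.1% lowers it by one.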

3.3. Theoretical Analysis and Performance Bounds of Fidelity Improvement

The fidelity improvement that purification operations confer on quantum states is the core of this framework's performance analysis. Let the initial fidelity of the noisy state $\rho$ with respect to the target pure state $|\psi\rangle$ be $F_0 = \langle\psi|\rho|\psi\rangle$. After symmetric subspace projection of $m$ state copies, the fidelity of the output state can be expressed as [43]:
$$F_{\text{pur}}(m) = \frac{F_0^{\,m}}{\sum_{j=0}^{d-1} \lambda_j^{\,m}}$$
where $d$ is the dimension of the single-qubit Hilbert space (for qubits, $d = 2$); $\lambda_j$ is the $j$-th eigenvalue of the noisy state $\rho$, satisfying $\sum_{j=0}^{d-1} \lambda_j = 1$, with $\lambda_0 = F_0$ the maximum eigenvalue; and $m$ is the number of state copies participating in purification. The expression shows that the purification fidelity converges exponentially to 1 as the number of copies $m$ increases, with the convergence rate governed by the ratio $F_0/\lambda_1$ of the maximum to the second-largest eigenvalue.
Under the depolarizing noise model, the eigenvalues of the noisy state have a highly symmetric structure: $\lambda_0 = F_0$, and $\lambda_j = (1-F_0)/(d-1)$ for $j = 1, \dots, d-1$. Substituting into Equation (4) yields the analytical expression for the purification fidelity under the depolarizing channel:
$$F_{\mathrm{pur}}^{\mathrm{dep}}(m) = \frac{F_0^{\,m}}{F_0^{\,m} + (d-1)\left(\dfrac{1-F_0}{d-1}\right)^{m}} \qquad (5)$$
where each variable has the same meaning as in Equation (4). An immediate corollary is that when $F_0 > 1/d$ (i.e., the initial state is better than the completely mixed state), the purification fidelity increases monotonically with $m$ and approaches 1 at an exponential rate. Figure 4 shows the evolution of fidelity as a function of the number of purification copies under different initial-fidelity conditions.
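For qubits, Equation (5) is straightforward to evaluate numerically. The sketch below is an illustrative check only; the function name and the sample value $F_0 = 0.90$ are our choices, not the paper's. It verifies the monotone convergence toward 1 for $F_0 > 1/d$:

```python
import math

def purification_fidelity_dep(F0: float, m: int, d: int = 2) -> float:
    """Eq. (5): output fidelity after symmetric-subspace projection of m
    copies of a depolarized state with initial fidelity F0 (dimension d)."""
    num = F0 ** m
    den = num + (d - 1) * ((1.0 - F0) / (d - 1)) ** m
    return num / den

# For F0 > 1/d the output fidelity increases monotonically with m.
fids = [purification_fidelity_dep(0.90, m) for m in range(1, 7)]
assert all(a < b for a, b in zip(fids, fids[1:]))
print([round(f, 6) for f in fids])
```

With $m = 1$ the formula reduces to $F_0$ itself, so the sequence starts at the unpurified fidelity and climbs toward 1.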
The fidelity improvement established above admits a deeper interpretation from the perspective of quantum information entropy, which reveals the essential role of purification as an entropy reduction mechanism. For a quantum state $\rho$ with fidelity $F_0$ with respect to the target pure state $|\psi\rangle$ under the depolarizing channel, the von Neumann entropy of the state can be expressed as [47]:
$$S(\rho) = -F_0 \log F_0 - (1 - F_0)\log\frac{1 - F_0}{d - 1} = H(F_0) + (1 - F_0)\log(d - 1) \qquad (6)$$
where $H(F_0) = -F_0 \log F_0 - (1 - F_0)\log(1 - F_0)$ is the binary entropy function and $d$ is the dimension of the single-qubit Hilbert space. For qubits ($d = 2$), Equation (6) simplifies to $S(\rho) = H(F_0)$, establishing a direct monotonically decreasing relationship between fidelity and von Neumann entropy in the regime $F_0 > 1/2$. This monotonic relationship between state purity and informational entropy has been experimentally validated in recent quantum error correction demonstrations, where the suppression of physical error rates through cyclic error correction directly corresponds to entropy reduction of the encoded quantum states [48,49].
After symmetric subspace projection with $m$ copies, the purification process elevates the fidelity from $F_0$ to $F_{\mathrm{pur}}(m)$ as given by Equation (5). Since the von Neumann entropy is a strictly decreasing function of fidelity for $F_0 > 1/d$, the entropy of the purified state satisfies:
$$S(\rho_{\mathrm{pur}}) = H\bigl(F_{\mathrm{pur}}(m)\bigr) + \bigl(1 - F_{\mathrm{pur}}(m)\bigr)\log(d - 1) < S(\rho) \qquad (7)$$
This inequality confirms the core information-theoretic property of the purification process: it is an entropy reduction operation that actively removes noise entropy from quantum states, concentrating the quantum information into a lower-entropy output subspace. The entropy reduction magnitude $\Delta S(m) = S(\rho) - S(\rho_{\mathrm{pur}})$ increases monotonically with the number of copies $m$ and approaches its maximum value $S(\rho)$ as $m \to \infty$, corresponding to the asymptotic elimination of noise entropy from the quantum state.
The rate of entropy compression with respect to the number of copies can be characterized quantitatively. For the depolarizing channel with $d = 2$, since $F_{\mathrm{pur}}^{\mathrm{dep}}(m)$ converges exponentially to 1 at a rate determined by the ratio $F_0/(1 - F_0)$ (as shown by Equation (5)), the entropy reduction $\Delta S(m) = H(F_0) - H\bigl(F_{\mathrm{pur}}^{\mathrm{dep}}(m)\bigr)$ likewise converges exponentially to $H(F_0)$. This exponential entropy compression rate is consistent with the optimality analysis of purification protocols under depolarizing noise [43], which establishes a precise trade-off between purification fidelity and success probability, or equivalently between the entropy reduction rate and resource consumption. Specifically, achieving greater entropy compression requires more state copies, which reduces the success probability of each purification round; this trade-off provides theoretical guidance for selecting the optimal purification depth in resource-constrained scenarios.
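The entropy compression described above can be checked numerically for qubits. In the sketch below (our own illustrative code; entropies are in bits, and $F_0 = 0.90$ is an arbitrary sample value), $\Delta S(m)$ rises toward its ceiling $H(F_0)$ as copies are added:

```python
import math

def binary_entropy(f: float) -> float:
    """H(f) in bits; by convention H(0) = H(1) = 0."""
    if f <= 0.0 or f >= 1.0:
        return 0.0
    return -f * math.log2(f) - (1 - f) * math.log2(1 - f)

def purified_fidelity(F0: float, m: int) -> float:
    # Eq. (5) specialized to qubits (d = 2).
    return F0**m / (F0**m + (1 - F0)**m)

F0 = 0.90
ceiling = binary_entropy(F0)          # S(rho) = H(F0) for d = 2
reductions = [ceiling - binary_entropy(purified_fidelity(F0, m))
              for m in range(1, 9)]
# Delta S(m) is monotone increasing and approaches H(F0).
print([round(x, 4) for x in reductions])
```

Since $m = 1$ leaves the state unchanged, the sequence starts at zero and saturates near $H(F_0)$ within a handful of copies.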
Table 3 presents the quantitative correspondence between purification fidelity and von Neumann entropy reduction under the depolarizing noise model for qubits ($d = 2$) at several representative initial fidelity values, illustrating the entropy suppression effect of the purification process.
As shown in Table 3, the purification process with three copies achieves substantial entropy reduction across all tested initial-fidelity conditions, with the relative reduction ratio increasing from 31.8% at $F_0 = 0.70$ to 84.6% at $F_0 = 0.95$. This demonstrates that purification is particularly effective at concentrating quantum information when the initial state already possesses moderate fidelity, precisely the regime relevant to near-term quantum processors. The information-theoretic interpretation above reveals that the purification process fundamentally operates as a noise entropy filter: by exploiting the permutation symmetry of multiple noisy copies, symmetric subspace projection selectively retains the low-entropy dominant eigenvalue component while suppressing the uniformly distributed high-entropy noise eigenvalues, thereby achieving information concentration before downstream error correction.
The impact of purification operations on the logical error rate of error-correcting codes is another important aspect of the analysis in this section. For a stabilizer code with code distance $d$, the logical error rate $p_L$ approximately satisfies the following relationship with the physical error rate under independent depolarizing noise:
$$p_L \approx A \binom{d}{\lfloor d/2 \rfloor + 1}\, p_{\mathrm{eff}}^{\,\lfloor d/2 \rfloor + 1} \qquad (8)$$
where $p_{\mathrm{eff}}$ is the effective physical error rate after purification, related to the purification fidelity by $p_{\mathrm{eff}} = 1 - F_{\mathrm{pur}}$; $d$ is the code distance of the error-correcting code; and $A$ is a constant prefactor determined by the code structure and the decoding algorithm. Substituting the purification fidelity expression shows that, under the ideal assumptions established in Section 3.1, purification produces a compound exponential suppression of $p_L$ by exponentially reducing $p_{\mathrm{eff}}$: the outer layer comes from the distance scaling $p_{\mathrm{eff}}^{\,\lfloor d/2 \rfloor + 1}$ of the error-correcting code, and the inner layer comes from the exponential compression of $p_{\mathrm{eff}}$ itself by the purification process. This compound exponential suppression mechanism under ideal assumptions is the essential advantage of the purification-assisted error correction framework over using error-correcting codes alone. From the entropy perspective established by Equations (6) and (7), this compound suppression can be interpreted as a two-stage entropy management process: purification first compresses the state entropy through symmetric subspace projection, and the error-correcting code subsequently maintains this low-entropy condition through stabilizer measurements and recovery operations.
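The two layers of suppression can be composed numerically. The sketch below is an illustrative calculation under the ideal noiseless-purification assumption of Section 3.1, using a binomial-prefactor reading of the distance-scaling law; the prefactor $A = 0.1$ is an arbitrary placeholder, not a value from the paper, and the idealization deliberately overstates what noisy hardware would deliver:

```python
import math

def purified_fidelity(F0: float, m: int) -> float:
    # Eq. (5) specialized to qubits (d = 2).
    return F0**m / (F0**m + (1 - F0)**m)

def logical_error_rate(p_eff: float, d: int, A: float = 0.1) -> float:
    """Distance scaling p_L ~ A * C(d, floor(d/2)+1) * p_eff^(floor(d/2)+1).
    A = 0.1 is an arbitrary illustrative prefactor."""
    t = d // 2 + 1                      # weight of the smallest logical fault
    return A * math.comb(d, t) * p_eff ** t

p, d, m = 0.01, 7, 3
p_eff = 1.0 - purified_fidelity(1.0 - p, m)   # inner layer: purification
pL_std = logical_error_rate(p, d)             # outer layer: distance scaling
pL_pur = logical_error_rate(p_eff, d)
print(f"p_eff = {p_eff:.2e}, suppression factor = {pL_std / pL_pur:.1e}")
```

Because the purified error rate enters the distance-scaling exponent, even a modest inner-layer compression is amplified into a very large outer-layer gain.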
The success probability of the purification process is another key factor affecting resource efficiency. The success probability $P_{\mathrm{succ}}$ of symmetric subspace projection decreases as the number of copies $m$ grows, and there is a fundamental trade-off between fidelity gain and success probability [43]. Pursuing higher output fidelity requires more state copies, which reduces the success probability of a single purification round and consumes more quantum resources. In practical designs, the optimal purification depth must therefore be determined from the resource constraints of the physical platform.

3.4. Scalability and Fault-Tolerance Threshold Analysis of the Framework

The scalability of the purification-assisted error correction framework depends on the extent to which purification operations improve the fault-tolerance threshold and on the additional resource overhead they introduce. In traditional error correction schemes, the fault-tolerance threshold $p_{\mathrm{th}}$ is defined as the maximum physical error rate at which the logical error rate still decreases monotonically as the code distance increases. Purification reduces the effective physical error rate from the original value $p$ to $p_{\mathrm{eff}}(m) = 1 - F_{\mathrm{pur}}(m)$, so the corrected fault-tolerance condition becomes $p_{\mathrm{eff}}(m) < p_{\mathrm{th}}$, which imposes a more relaxed constraint on the original error rate. The equivalent fault-tolerance threshold of this framework can be expressed as:
$$p_{\mathrm{th}}^{\mathrm{pur}}(m) = 1 - F_{\mathrm{pur}}^{-1}\bigl(1 - p_{\mathrm{th}};\, m\bigr)$$
where $F_{\mathrm{pur}}^{-1}(\,\cdot\,; m)$ is the inverse of Equation (5) with respect to the initial fidelity $F_0$, and $p_{\mathrm{th}}$ is the original fault-tolerance threshold of the underlying error-correcting code. This expression shows that purification with $m$ state copies elevates the fault-tolerance threshold from $p_{\mathrm{th}}$ to $p_{\mathrm{th}}^{\mathrm{pur}}(m)$, with the magnitude of improvement increasing with $m$. This conclusion is significant at the engineering level: physical qubits need not achieve the quality demanded by the original threshold in order to realize fault-tolerant operation [50].
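Because Equation (5) is monotone in $F_0$, the inverse appearing in the threshold expression can be obtained by simple bisection. The sketch below is our illustrative code for the qubit case under ideal noiseless purification; figures produced by this idealization need not match the circuit-level values quoted elsewhere in the paper:

```python
def purified_fidelity(F0: float, m: int) -> float:
    # Eq. (5) specialized to qubits (d = 2).
    return F0**m / (F0**m + (1 - F0)**m)

def equivalent_threshold(p_th: float, m: int, tol: float = 1e-12) -> float:
    """Invert Eq. (5) by bisection: the largest pre-purification error rate p
    such that the purified error 1 - F_pur(1 - p, m) stays below p_th."""
    lo, hi = p_th, 0.5   # answer lies between p_th and the fully mixed point
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if 1.0 - purified_fidelity(1.0 - mid, m) < p_th:
            lo = mid     # still below threshold: tolerate more raw noise
        else:
            hi = mid
    return lo

print(equivalent_threshold(0.011, m=3))
```

The bisection relies only on the monotonicity of the purified error in the raw error rate, so it converges in a few dozen iterations at the chosen tolerance.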
Resource overhead is another core dimension for evaluating the scalability of the framework. Suppose the underlying error-correcting code protecting $k$ logical qubits requires $n$ physical qubits; purification additionally introduces $m - 1$ copies for each data qubit, so the total number of physical qubits is $n_{\mathrm{total}} = m \cdot n$. The encoding rate is thereby reduced from $k/n$ to $k/(mn)$, and the quantitative relationship between the threshold improvement brought by purification and the resource overhead can be characterized through the following optimization problem:
$$m^{*} = \arg\min_{m \ge 2}\, \bigl\{\, m \cdot d_{\min}(p, m)^{2} \,\bigr\}$$
where $m^{*}$ is the optimal number of purification copies and $d_{\min}(p, m)$ is the minimum code distance required to achieve the target logical error rate given physical error rate $p$ and number of purification copies $m$. The optimization objective is the total number of physical qubits, reflecting the trade-off between adding purification copies (which reduces the required code distance) and the resource consumption per unit of code distance (which grows with the number of copies). As shown in Figure 5, when the physical error rate is in the range of 0.5% to 2.0%, the optimal number of purification copies $m^{*}$ typically lies between 2 and 5, and the total resource overhead can be reduced by 30% to 60% compared to a baseline scheme without purification.
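The optimization can be sketched with a brute-force search. The helper below uses an Eq. (8)-style distance scaling with an arbitrary prefactor ($A = 0.1$) and target logical error rate ($10^{-9}$), both our illustrative assumptions, together with a surface-code-style $d^2$ qubit count per copy:

```python
import math

def purified_fidelity(F0: float, m: int) -> float:
    # Eq. (5) specialized to qubits (d = 2).
    return F0**m / (F0**m + (1 - F0)**m)

def d_min(p: float, m: int, target_pL: float = 1e-9,
          A: float = 0.1, d_max: int = 51) -> int:
    """Smallest odd code distance whose Eq. (8)-style logical rate meets
    target_pL at the post-purification error rate (illustrative model)."""
    p_eff = 1.0 - purified_fidelity(1.0 - p, m)
    for d in range(3, d_max + 1, 2):
        t = d // 2 + 1
        if A * math.comb(d, t) * p_eff ** t <= target_pL:
            return d
    raise ValueError("no distance up to d_max suffices")

def optimal_copies(p: float, m_max: int = 6) -> int:
    # m* = argmin_m m * d_min(p, m)^2, a total-qubit proxy for surface codes.
    return min(range(2, m_max + 1), key=lambda m: m * d_min(p, m) ** 2)

print(optimal_copies(0.01))
```

Under these toy assumptions the search lands in the small-$m$ regime: deeper purification shrinks the required distance, but beyond a point the linear copy cost outweighs the quadratic distance savings.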
When the physical error rate is far below the fault-tolerance threshold, the additional overhead of purification operations cannot be compensated by code distance reduction, and non-purification schemes are superior in resource efficiency. When the physical error rate approaches or slightly exceeds the original threshold, the purification-assisted scheme exhibits notable advantages—it can maintain reliable storage and operation of logical qubits in noise regimes where non-purification schemes have already failed. Table 4 summarizes the performance comparison between the two schemes under different noise regimes.
The behavior of the framework in terms of code-distance scaling also deserves attention. Define the logical error suppression factor $\Lambda$ as the reduction ratio of the logical error rate when the code distance increases from $d$ to $d + 2$; standard surface code experiments have verified sub-threshold scaling behavior with $\Lambda \approx 2.14$ [44]. In the purification-assisted framework, owing to the reduction of the effective physical error rate $p_{\mathrm{eff}}$, the suppression factor is elevated to:
$$\Lambda_{\mathrm{pur}} = \Lambda \cdot \frac{p}{p_{\mathrm{eff}}}$$
where $\Lambda$ is the original suppression factor of the underlying error-correcting code at physical error rate $p$, and $p_{\mathrm{eff}}$ is the effective error rate after purification. When $m = 3$ and $F_0 = 0.99$, $p_{\mathrm{eff}} \approx 0.03\,p$, corresponding to $\Lambda_{\mathrm{pur}} \approx 33\,\Lambda$, which means that the logical error rate improvement obtained per two additional units of code distance is elevated by more than an order of magnitude. Figure 6 shows the scaling behavior of logical error rate versus code distance under the purification-assisted framework, compared with the standard error correction scheme.
Synthesizing the above analysis, it can be seen that the purification-assisted error correction framework achieves an improvement in the equivalent fault-tolerance threshold and accelerated convergence of the logical error rate by inserting an adjustable purification preprocessing layer between the coding layer and the physical layer, without changing the underlying error correcting code structure. From the entropy-theoretic perspective established in Section 3.3, the purification-assisted framework can be viewed as an entropy management mechanism for quantum computation: the purification module actively compresses state entropy before encoding through symmetric subspace projection (Equation (7)), while the error correction module maintains this low-entropy condition during subsequent stabilizer measurements and recovery operations; together, the two modules form a coordinated entropy flow control system in which the purification process reduces entropy at the input stage and the error correcting code prevents entropy re-accumulation during computation. The adaptive feedback control module (described in Section 3.2) further regulates the entropy compression rate in response to real-time noise fluctuations, completing the closed-loop entropy management architecture. This framework exhibits its most significant advantages in practical scenarios where physical error rates are near the threshold, which precisely matches the noise level of current near-term quantum devices. The subsequent Chapter 4 will provide specific designs of the iterative purification error correction algorithm based on this theoretical framework, and verify the above theoretical predictions through numerical simulations.

4. Algorithm Design and Experimental Evaluation

4.1. Design of the Iterative Purification Error Correction Algorithm

Based on the purification-assisted error correction framework established in Chapter 3, this section provides a specific implementation scheme for the Iterative Purification-assisted Error Correction (IPEC) algorithm. The design objective of this algorithm is to dynamically adjust the depth and resource allocation of purification operations in each error correction cycle based on real-time syndrome feedback, thereby achieving an adaptive balance between fidelity gain and qubit consumption. The IPEC algorithm can be characterized as a feedback-driven adaptive control algorithm, conceptually analogous to a closed-loop control system, a paradigm recently demonstrated in continuous quantum error correction with real-time FPGA-based feedback and in deep reinforcement learning agents for real-time quantum feedback control [51,52]. In this analogy, the syndrome measurements serve as sensor output providing real-time observations of the system's noise state, the error rate estimator functions as the controller that processes these observations, and the purification depth parameter $m_t$ constitutes the control variable adjusted to keep the system within the desired operating regime. This closed-loop architecture distinguishes the IPEC algorithm from static purification schemes, enabling it to respond to runtime noise fluctuations in a way that fixed-depth approaches fundamentally cannot.
The execution of the IPEC algorithm consists of three alternating phases: purification, syndrome extraction, and feedback-driven strategy update. In the purification phase, the algorithm performs symmetric subspace projection on $m_t$ noisy state copies according to the current purification depth parameter $m_t$, outputting the purified data state. In the syndrome extraction phase, the purified data state is encoded with a stabilizer code and a round of syndrome measurements is performed; the decoder infers the error pattern from the syndrome sequence $s_t$ and applies the recovery. In the feedback-driven strategy update phase, the algorithm uses statistical information in the syndrome sequence to estimate the current effective error rate $\hat{p}_{\mathrm{eff}}(t)$ and updates the purification depth $m_{t+1}$ for the next round according to a preset threshold criterion. The update rule employs an exponentially weighted moving average to smooth the noise intensity estimate, avoiding frequent jumps in purification depth caused by single-round syndrome fluctuations.
The core criterion for strategy updates is as follows: when the estimated effective error rate $\hat{p}_{\mathrm{eff}}(t)$ exceeds the upper threshold $p_{\mathrm{up}} = 0.8\,p_{\mathrm{th}}$, the purification depth increases by one level ($m_{t+1} = m_t + 1$, capped at $m_{\max}$); when $\hat{p}_{\mathrm{eff}}(t)$ falls below the lower threshold $p_{\mathrm{low}} = 0.4\,p_{\mathrm{th}}$, the purification depth decreases by one level ($m_{t+1} = m_t - 1$, with a lower limit of 2); between the two thresholds, the current depth is maintained. Table 5 lists the key hyperparameters of the IPEC algorithm and their values in the simulation experiments of this paper.
The threshold-based update rule approximates the optimal resource allocation policy in the following sense. When the effective error rate exceeds the upper threshold, increasing the purification depth yields the maximum marginal fidelity gain per additional copy, because the purification fidelity in Equation (5) is concave in $m$: the marginal improvement from adding one copy is largest when the current fidelity is relatively low, making the deepening decision near-optimal in fidelity gain per unit resource. Conversely, when the effective error rate falls below the lower threshold, the error-correcting code alone provides sufficient suppression in that regime (as indicated by the compound exponential suppression analysis in Section 3.3), and reducing the purification depth releases qubit resources without compromising the logical error rate. The dead zone between the two thresholds provides hysteresis that prevents oscillatory behavior, a standard technique in feedback control design that ensures stability of the purification depth trajectory. This adaptive mechanism enables the system to automatically deepen purification to maintain error correction capability when noise increases and to automatically reduce purification to release qubit resources when noise decreases, a capability that static schemes fundamentally lack.
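The update rule above can be captured in a few lines. The sketch below is our illustrative rendering of the Section 4.1 controller, not the paper's implementation; in particular the EWMA weight `alpha = 0.2` is an assumed value that the text does not specify:

```python
class DepthController:
    """Threshold-with-hysteresis purification-depth update (Section 4.1):
    EWMA-smoothed error estimate, upper/lower thresholds at 0.8/0.4 p_th."""

    def __init__(self, p_th: float, m0: int = 2, m_max: int = 6,
                 alpha: float = 0.2):
        self.p_up, self.p_low = 0.8 * p_th, 0.4 * p_th
        self.m, self.m_max = m0, m_max
        self.alpha = alpha          # EWMA weight (assumed value)
        self.p_hat = None           # smoothed effective-error estimate

    def update(self, p_obs: float) -> int:
        # Exponentially weighted moving average of the syndrome-based estimate.
        self.p_hat = p_obs if self.p_hat is None else (
            self.alpha * p_obs + (1 - self.alpha) * self.p_hat)
        if self.p_hat > self.p_up:
            self.m = min(self.m + 1, self.m_max)   # deepen purification
        elif self.p_hat < self.p_low:
            self.m = max(self.m - 1, 2)            # release qubit resources
        # otherwise: dead zone, hold the current depth
        return self.m

ctrl = DepthController(p_th=0.011, m0=2)
for p_obs in [0.012, 0.012, 0.012]:   # noise above p_up: depth ramps up
    ctrl.update(p_obs)
print(ctrl.m)
```

The EWMA makes the depth react to sustained noise shifts rather than single-round fluctuations, while the dead zone between the two thresholds suppresses oscillation.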

4.2. Algorithm Complexity and Convergence Analysis

The computational overhead of each IPEC iteration consists of three parts: the quantum gate complexity of the purification operations, $O(m_t n)$; the measurement complexity of syndrome extraction, $O(n)$; and the time complexity of the classical decoder. For surface codes, the matching decoder runs in $O(n \log n)$ time; for quantum LDPC codes, a single belief propagation iteration costs $O(n)$, typically converging within 20 to 50 iterations. The depth of the controlled-SWAP gates in the purification operations grows linearly with $m_t$; under the setting $m_{\max} = 6$, the quantum gate depth of a single purification round does not exceed 6 times the number of physical qubits. Notably, the computational overhead of purification scales linearly with the purification depth $m_t$, while the fidelity gain scales exponentially with $m_t$ as established by Equation (5). This exponential-versus-linear trade-off between resource cost and performance benefit is the fundamental reason why the adaptive depth control in the IPEC algorithm achieves better resource efficiency than fixed-depth schemes: the algorithm dynamically selects the operating point on the fidelity-resource curve that maximizes the marginal return per additional qubit copy, avoiding both the under-utilization regime (where additional copies would yield substantial gains) and the over-consumption regime (where diminishing returns make further copies wasteful).
The convergence of the algorithm is verified through numerical simulation under the depolarizing noise model. As shown in Figure 7, at a physical error rate of $p = 1.0\%$, the effective error rate $\hat{p}_{\mathrm{eff}}(t)$ of the IPEC algorithm converges stably to around 0.12% after approximately 5 to 8 iterations, far below the 1.1% fault-tolerance threshold of surface codes. The convergence speed is closely tied to the choice of initial purification depth $m_0$: with $m_0 = 2$, approximately 12 rounds are needed to reach steady state, whereas with $m_0 = 4$ only 4 rounds are needed, though the latter consumes more qubit resources in the first few rounds. Table 6 records the convergence characteristics of the algorithm under different initial purification depths.

4.3. Numerical Simulation Setup and Baseline Comparison Schemes

The numerical simulation experiments in this paper are run on a classical computing cluster, using a stabilizer simulator to perform Monte Carlo sampling of quantum error correction circuits. The simulations cover two noise models: the independent depolarizing noise model (each physical qubit undergoes a uniform Pauli error with probability $p$) and the circuit-level noise model (depolarizing noise is inserted after each quantum gate operation, with single-qubit gate error rate $p/10$, two-qubit gate error rate $p$, and measurement error rate $p$). A total of $10^6$ Monte Carlo samples are executed for each parameter configuration to ensure statistical significance, with the 95% confidence interval of the logical error rate controlled within $\pm 5\%$ of the reported value.
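The quoted confidence-interval control can be checked with the standard binomial formula. The sketch below is our illustrative helper; the normal approximation with $z = 1.96$ is a standard choice, and the example failure count is hypothetical. It shows that $10^6$ shots resolve a rate near $2 \times 10^{-3}$ to within the stated $\pm 5\%$:

```python
import math

def logical_rate_ci(failures: int, shots: int, z: float = 1.96):
    """Normal-approximation 95% confidence interval for a logical error
    rate estimated from Bernoulli Monte Carlo samples."""
    p_hat = failures / shots
    half = z * math.sqrt(p_hat * (1 - p_hat) / shots)
    return p_hat, half

p_hat, half = logical_rate_ci(failures=2100, shots=10**6)
print(f"{p_hat:.2e} +/- {half:.1e} ({100 * half / p_hat:.1f}% relative)")
```

Much rarer logical failures would need correspondingly more shots (or rare-event techniques) to keep the relative interval this tight.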
The simulation parameters adopted in this study are aligned with the performance characteristics of current mainstream quantum hardware platforms. Specifically, the physical error rate range of 0.3% to 1.5% corresponds to the typical two-qubit gate error rates reported on state-of-the-art superconducting quantum processors: the Google Willow processor achieved physical error rates on the order of 0.3% to 0.5% in its surface code experiments [8], while earlier-generation processors such as Google Sycamore and IBM Eagle operate in the 0.5% to 1.5% range [16,30]. The circuit-level noise model with single-qubit gate error rate $p/10$ and measurement error rate $p$ reflects the empirically observed noise hierarchy on superconducting platforms, where single-qubit gates typically exhibit error rates roughly one order of magnitude lower than two-qubit gates [45]. These parameter ranges are chosen to cover the operating regime where the purification-assisted framework provides the most notable practical benefit, namely the near-threshold regime ($p \in [0.5\%, 1.5\%]$) where standard error correction alone begins to lose its suppressive power and where the resource efficiency metric $\eta$ of the IPEC scheme reaches its peak, as analyzed in Section 3.4.
Three baseline comparison schemes are included in the simulation: (a) the standard surface code error correction scheme (without purification), with code distances $d = 3, 5, 7, 9, 11$; (b) the standard quantum LDPC code error correction scheme (without purification), using the $[[144, 12, 12]]$ bivariate bicycle code; and (c) a fixed purification depth scheme ($m = 3$, without adaptive feedback). Table 7 lists the detailed configuration parameters for each scheme.

4.4. Experimental Results and Performance Evaluation

4.4.1. Logical Error Rate Under Independent Depolarizing Noise

As shown in Figure 8, under the independent depolarizing noise model, the IPEC algorithm combined with surface codes achieves a lower logical error rate than the standard error correction scheme across the entire tested range of physical error rates. Taking code distance $d = 7$ as an example, at physical error rate $p = 0.5\%$ the logical error rate of the standard surface code is $8.3 \times 10^{-5}$ per round, which the IPEC scheme reduces to $1.7 \times 10^{-6}$ per round, an improvement of approximately 49-fold. When the physical error rate rises to $p = 1.0\%$ (approaching the surface code threshold), the logical error rate of the standard scheme increases to $2.1 \times 10^{-3}$ per round, while the IPEC scheme still maintains a level of $4.6 \times 10^{-5}$ per round. Table 8 summarizes the logical error rate data for each scheme across combinations of physical error rate and code distance.

4.4.2. Performance Verification Under Circuit-Level Noise

The circuit-level noise model more closely approximates the operating environment of actual quantum processors, because it accounts not only for errors on data qubits but also for noise sources such as ancilla preparation errors, two-qubit gate crosstalk, and measurement readout errors. Under this model, the performance gain of the IPEC algorithm decreases relative to the independent noise model but remains significant. As shown in Figure 9, under circuit-level noise with $p = 0.5\%$ and code distance $d = 7$, the logical error rate of the standard surface code is $3.7 \times 10^{-4}$ per round, which the IPEC scheme reduces to $2.4 \times 10^{-5}$ per round, an improvement factor of approximately 15. The gain decreases because measurement errors pollute the syndrome sequence, increasing the variance of the error rate estimate $\hat{p}_{\mathrm{eff}}(t)$ and thereby degrading the precision of the adaptive strategy.

4.4.3. Resource Efficiency and Overhead Analysis

Purification-assisted schemes trade additional qubit consumption for a reduction in logical error rate; resource overhead must therefore be taken into account when evaluating their practical value. This paper introduces a resource efficiency metric $\eta$ to measure the logical error rate suppression obtained per physical qubit, defined as $\eta = \log\bigl(p_L^{\mathrm{std}} / p_L^{\mathrm{IPEC}}\bigr) / \bigl(n_{\mathrm{IPEC}} / n_{\mathrm{std}}\bigr)$, where $p_L^{\mathrm{std}}$ and $p_L^{\mathrm{IPEC}}$ are the logical error rates of the standard scheme and the IPEC scheme, respectively, and $n_{\mathrm{std}}$ and $n_{\mathrm{IPEC}}$ are the numbers of physical qubits each consumes. A value $\eta > 1$ indicates that the IPEC scheme is more resource-efficient than simply increasing the code distance. Table 9 shows resource efficiency comparison data under different configurations.
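The metric is a one-line computation. The sketch below applies the definition to the $d = 7$, $p = 1.0\%$ data point reported in Section 4.4.1; the natural-log base and the threefold qubit overhead ($m = 3$) are our assumptions, since the text fixes neither:

```python
import math

def resource_efficiency(pL_std: float, pL_ipec: float,
                        n_std: float, n_ipec: float) -> float:
    """eta = log(pL_std / pL_ipec) / (n_ipec / n_std); the log base is not
    fixed by the text, so natural log is assumed in this sketch."""
    return math.log(pL_std / pL_ipec) / (n_ipec / n_std)

# Section 4.4.1 data point, with an assumed 3x qubit overhead (m = 3).
eta = resource_efficiency(2.1e-3, 4.6e-5, n_std=1, n_ipec=3)
print(round(eta, 2))
```

Under these assumptions the example lands above 1, i.e., the purification overhead buys more suppression per qubit than it costs; a base-10 logarithm would shift the scale of $\eta$ but not the comparison between configurations.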
As shown in Figure 10, the resource efficiency $\eta$ reaches its peak ($\eta > 2.0$) when the physical error rate lies between 0.6% and 1.2%, which corresponds precisely to the typical noise level of current mainstream superconducting quantum processors. In the low-noise regime $p < 0.3\%$, $\eta$ drops below 1, indicating that when qubit quality is sufficiently high, directly increasing the code distance is the more economical choice.

4.4.4. Comparison of Adaptive Strategy and Fixed Purification Scheme

The practical effect of the adaptive feedback mechanism in the IPEC algorithm is verified through comparison with the fixed purification depth scheme. As shown in Figure 11, in non-stationary scenarios where noise intensity undergoes sudden changes (simulating quantum processor calibration drift), the IPEC algorithm can adjust the purification depth to adapt to the new noise level within approximately 3 to 5 rounds, while the fixed scheme experiences sharp rises in logical error rate when noise increases and wastes excessive qubit resources when noise decreases. Table 10 quantifies the performance differences between the two types of schemes under stationary and non-stationary noise scenarios.

4.4.5. Validation on Quantum LDPC Codes

To verify the code-structure independence of the framework, this paper reproduces the IPEC experiment on the $[[144, 12, 12]]$ bivariate bicycle code. As shown in Figure 12, under circuit-level noise with $p = 0.5\%$, the IPEC scheme reduces the average logical error rate of the 12 logical qubits from $5.8 \times 10^{-4}$ per round in the standard LDPC scheme to $7.3 \times 10^{-5}$ per round, an improvement of approximately 8-fold, and the error rate variance among the 12 logical qubits is also significantly reduced. This result confirms the general enhancement effect of purification preprocessing across code structures: the purification layer does not depend on the stabilizer structure of a specific error-correcting code, but uniformly reduces the noise intensity fed into the encoder at the physical level.
The above experimental results collectively demonstrate that the IPEC algorithm can achieve stable logical error rate improvement under multiple noise models, different code distances, and different code structures. The improvement magnitude is most notable when the physical error rate approaches the fault-tolerance threshold (reaching one to two orders of magnitude), and the resource efficiency metric η also reaches its peak in that range. The adaptive feedback mechanism endows the algorithm with robustness in non-stationary noise environments, enabling it to maintain near-optimal performance even when facing quantum processor noise drift. These numerical results maintain qualitative consistency with the theoretical predictions derived in Chapter 3—the exponential compression of effective error rate by the purification process is further amplified through the distance scaling of error correcting codes, forming the cascaded gain of compound exponential suppression under the ideal assumptions established in Section 3.1.
Furthermore, the feasibility of implementing the purification module on real quantum hardware is supported by two observations. First, on current superconducting quantum processors, two-qubit gate fidelity has commonly reached levels above 99.5% [8], and a controlled-SWAP gate can be decomposed into a short sequence of two-qubit gates with a cumulative error rate of approximately 1.5%; although this falls short of ideal noiseless purification, it still produces a net positive fidelity gain in the purification depth range $m = 2$ to $m = 3$, and the circuit-level noise simulations presented in this chapter fully account for this factor. Second, on neutral atom platforms, reconfigurable atomic arrays natively support parallel preparation of quantum state copies and flexible long-range connections [18,23], making the multi-copy preparation and controlled-SWAP execution required for purification more convenient in engineering terms. These hardware considerations confirm that the simulation parameters and performance gains reported in this chapter are grounded in demonstrated capabilities of current quantum hardware, rather than chosen under unrealistic assumptions.
Figure 12. Violin plot comparison of error rates of 12 logical qubits on LDPC codes.

5. Discussion and Conclusion

5.1. Main Findings and Theoretical Significance

The purification-assisted quantum error correction framework proposed in this paper reveals, at the theoretical level, a compound exponential suppression mechanism under ideal assumptions that had not previously been systematically characterized. Purification exponentially compresses the effective physical error rate from the original value $p$ to $p_{\mathrm{eff}}$ through symmetric subspace projection, while the error-correcting code further suppresses the logical error rate through the power-law relationship $p_{\mathrm{eff}}^{\,\lfloor d/2 \rfloor + 1}$; the cascade of these two factors causes the logical error rate to decay with respect to the original physical error rate faster than either technique can achieve alone. The numerical simulation results in Chapter 4 provide quantitative validation of this prediction: in the typical configuration of code distance $d = 7$ and physical error rate $p = 1.0\%$, the IPEC algorithm reduces the logical error rate from $2.1 \times 10^{-3}$ per round for the standard surface code scheme to $4.6 \times 10^{-5}$ per round, an improvement of approximately 46-fold, in close agreement with the order of magnitude predicted by Equation (8). The improvement of the equivalent fault-tolerance threshold is another finding of theoretical significance: purification extends the operating range of the surface code from below the original threshold $p_{\mathrm{th}} \approx 1.1\%$ to below $p_{\mathrm{th}}^{\mathrm{pur}} \approx 2.0\%$ (when $m = 3$), a magnitude of improvement that substantially relaxes the manufacturing yield requirements for physical qubits.
At the algorithm design level, the adaptive purification-depth adjustment mechanism driven by real-time syndrome feedback in the IPEC algorithm introduces a new dynamic resource allocation paradigm to quantum error correction. In traditional error correction schemes, once the encoding structure is fixed, resource consumption is fixed as well and cannot respond to runtime noise fluctuations; the IPEC algorithm, by contrast, estimates the effective error rate online through an exponentially weighted moving average and adjusts resource allocation in real time within a purification depth range of 2 to 6, allowing the system to automatically deepen purification to maintain error correction capability when noise increases and to reduce purification and release qubit resources when noise subsides. The experimental data in Table 10 of Section 4 show that in the non-stationary scenario where the noise rate jumps abruptly from 0.5% to 1.5%, the logical error rate of the IPEC scheme is only about 30% of that of the fixed purification scheme, while after the noise recovers, its average qubit consumption is about 39% lower than that of the fixed scheme; this combination of robustness and resource efficiency has clear practical value for deployment on actual quantum processors.
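The EWMA estimation and threshold-triggered depth adjustment described above can be sketched as follows. The smoothing factor, trigger coefficients, and depth range are taken from Table 5; the one-step increment/decrement policy and the simulated noise trace are simplifying assumptions, not the exact IPEC decision rule.

```python
P_TH = 0.011                          # surface-code threshold (≈1.1%)
ALPHA = 0.3                           # EWMA smoothing factor (Table 5)
P_UP, P_LOW = 0.8 * P_TH, 0.4 * P_TH  # trigger thresholds (Table 5)
M_MIN, M_MAX = 2, 6                   # purification depth range (Table 5)

def update(p_hat, p_obs, m):
    """One control step: EWMA error-rate estimate, then depth adjustment."""
    p_hat = ALPHA * p_obs + (1 - ALPHA) * p_hat  # exponentially weighted moving average
    if p_hat > P_UP and m < M_MAX:
        m += 1      # noise rising: deepen purification
    elif p_hat < P_LOW and m > M_MIN:
        m -= 1      # noise low: shallow purification, release qubits
    return p_hat, m

# Simulated trace: quiet period, a noise burst, then a quiet recovery.
p_hat, m = 0.005, 3
depths = []
for p_obs in [0.005] * 5 + [0.015] * 10 + [0.003] * 15:
    p_hat, m = update(p_hat, p_obs, m)
    depths.append(m)
print(max(depths), depths[-1])
```

The dead band between the upper and lower thresholds provides hysteresis, so the depth does not oscillate on small estimate fluctuations.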
From an entropy-theoretic perspective, the purification-assisted error correction framework constitutes an entropy management mechanism for quantum computation. As established by the information-theoretic analysis in Section 3.3, the purification module actively reduces the von Neumann entropy of noisy quantum states by projecting out high-entropy noise components through symmetric subspace projection, satisfying S(ρ_pur) < S(ρ) (Equation (7)), with the entropy reduction ratio reaching 84.6% at initial fidelity F0 = 0.95 with three copies (Table 3). The error correction module subsequently maintains this low-entropy condition during stabilizer measurements and recovery operations, preventing entropy re-accumulation introduced by residual physical noise during computation. The synergistic operation of these two modules can be interpreted as a closed-loop entropy flow control system: the purification process compresses entropy at the input stage, the error-correcting code preserves the low-entropy state during computation, and the adaptive feedback mechanism in the IPEC algorithm dynamically regulates the entropy compression rate in response to environmental fluctuations by adjusting the purification depth m_t based on real-time syndrome information. This entropy flow perspective provides a unified information-theoretic understanding of the framework: the compound exponential suppression of logical error rates under ideal assumptions, as derived in Section 3.3, corresponds to a two-stage entropy management process in which the purification process first achieves entropy compression and the error-correcting code subsequently maintains the compressed entropy level through its distance-dependent suppression capability.
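For a single qubit under depolarizing noise with fidelity F to the target state, the density matrix has eigenvalues F and 1 − F, so S(ρ) reduces to the binary entropy H2(F). The sketch below reproduces the S(ρ) column of Table 3; the purified-state entropies in Table 3 depend on the exact purification output state and are not recomputed here.

```python
import math

def binary_entropy(f):
    """Binary Shannon entropy H2(f) in bits."""
    if f in (0.0, 1.0):
        return 0.0
    return -f * math.log2(f) - (1 - f) * math.log2(1 - f)

# Eigenvalues of a depolarized qubit with fidelity F are {F, 1 - F},
# so S(rho) = H2(F); compare with the S(rho) column of Table 3.
for f0 in (0.7, 0.8, 0.9, 0.95):
    print(f0, round(binary_entropy(f0), 3))
# → 0.7 0.881 / 0.8 0.722 / 0.9 0.469 / 0.95 0.286
```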
The framework thus offers a conceptual paradigm that connects purification fidelity, von Neumann entropy, and logical error rate within a coherent information-theoretic picture.

5.2. Practical Application Scenarios and Applicability Discussion

From the perspective of hardware platform adaptation, this framework places two core requirements on the implementation of purification operations: first, the quantum processor must be able to prepare multiple copies of the same quantum state; second, controlled-SWAP gate fidelity must exceed a certain threshold so that the purification operations themselves do not introduce excessive additional noise. On current superconducting quantum processors, two-qubit gate fidelities commonly exceed 99.5% [8,16], and a controlled-SWAP gate can be decomposed into three CNOT gates with a cumulative error rate of approximately 1.5%; although this falls short of ideal noiseless purification, it still yields a net positive fidelity gain over the purification depth range m = 2 to m = 3. The circuit-level noise simulations in Section 4 fully account for this factor, and the IPEC scheme still achieves a 15-fold logical error rate improvement under circuit-level noise at p = 0.5% (from 3.7 × 10^-4 to 2.4 × 10^-5). On neutral-atom platforms, reconfigurable atomic arrays natively support parallel preparation of quantum state copies and flexible long-range connectivity [18,23], making the multi-copy preparation and controlled-SWAP execution required by purification considerably easier to engineer; the prospects for adapting this framework to that platform are therefore promising. The experimental parameters adopted in this study are thus grounded in the demonstrated capabilities of current quantum hardware rather than chosen ad hoc.
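A back-of-envelope check of this break-even claim can be made by subtracting the quoted cumulative controlled-SWAP error from the ideal fidelity gain listed in Table 3. The additive error model and the assumption of two controlled-SWAP comparisons per m = 3 round are simplifications for illustration, not the paper's circuit-level model.

```python
# Ideal purified fidelities for m = 3 copies, from Table 3.
F_PUR_IDEAL = {0.7: 0.845, 0.8: 0.927, 0.9: 0.966, 0.95: 0.993}

CSWAP_ERROR = 0.015  # cumulative error of one decomposed controlled-SWAP (text)

def net_gain(f0, n_cswap=2):
    """First-order net fidelity gain of noisy purification.

    Assumes gate errors subtract additively from the ideal gain and
    that an m = 3 round uses two controlled-SWAP comparisons; both
    are simplifying assumptions.
    """
    return (F_PUR_IDEAL[f0] - f0) - n_cswap * CSWAP_ERROR

for f0 in (0.7, 0.8, 0.9, 0.95):
    print(f0, round(net_gain(f0), 3))
```

Under this crude model the net gain stays positive for every initial fidelity in Table 3, consistent with the text's claim that purification at m = 2 to m = 3 remains profitable despite the ~1.5% gate overhead.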
In terms of application scenarios, the purification-assisted error correction framework is particularly well suited to medium-depth circuit tasks such as quantum chemistry simulation and variational quantum algorithms, because such tasks typically require logical error rates on the order of 10^-6 to guarantee chemical accuracy of the final results, and the physical error rates of current quantum processors lie precisely in the range where the resource efficiency metric η of this framework peaks (p ∈ [0.6%, 1.2%]). The data in Table 9 of Section 4 show that under the configuration p = 1.0%, d = 7, the resource efficiency of the IPEC scheme is η ≈ 2.51, meaning that, compared with simply increasing the code distance without purification, the IPEC scheme achieves a lower logical error rate with fewer total physical qubits. The framework also offers a viable technical path for improving entanglement distribution fidelity in quantum key distribution networks: the purification preprocessing layer can raise the fidelity of entangled pairs before they enter error correction encoding, reducing the burden on subsequent error correction stages and shortening key generation latency. The feasibility of connecting error-corrected qubits across separate modules through noisy quantum links has recently been demonstrated, providing further support for the applicability of purification-assisted approaches in distributed quantum computing architectures [53]. It should be noted, however, that on high-quality qubit platforms where the physical error rate is already below 0.3%, the additional resource overhead of this framework exceeds its logical error rate benefit (η < 1), in which case directly extending the code distance remains the more economical choice.

5.3. Research Limitations and Future Work

At the theoretical level, the compound exponential suppression result derived in this work holds under the ideal assumptions established in Section 3.1—namely, i.i.d. noise acting independently on each state copy, independent copy preparation without inter-copy correlations, and uncorrelated noise across qubits. Relaxing these assumptions to accommodate correlated or non-Markovian noise environments may alter the quantitative conclusions regarding threshold enhancement and logical error rate suppression factors, and the extent of such alterations remains to be investigated.
The research in this paper also has limitations in several additional respects. In terms of noise model completeness, the numerical simulations are based mainly on two models, depolarizing noise and standard circuit-level noise, and do not yet cover noise types ubiquitous in real quantum processors, such as leakage errors, burst errors induced by cosmic-ray events, and 1/f noise with long-range temporal correlations; these non-Markovian noise characteristics may alter the effective gain of purification operations and, in extreme cases, may even degrade the fidelity improvement delivered by symmetric subspace projection. In terms of the algorithm's online computational overhead, the strategy update phase of the IPEC algorithm must complete error rate estimation and purification depth decisions after each error correction cycle; although this involves only simple sliding-window statistics on a classical computer, it may conflict with the real-time requirements of decoders, especially on superconducting platforms where the real-time decoding latency of surface codes already approaches the microsecond ceiling, and any additional classical overhead may degrade the throughput of the decoding pipeline. Furthermore, the theoretical analysis uses fidelity as an average-sense performance metric for purification, which cannot fully characterize the round-by-round fluctuations of the purification output states; under finite copy numbers, these fluctuations may have non-negligible effects on the tail distribution of logical error rates.
Future research can proceed along three dimensions. For purification strategy optimization, introducing reinforcement learning into the joint decision-making of purification depth and error correction strategy is expected to surpass the performance of the fixed-threshold heuristic rules adopted in this paper, enabling the system to learn optimal resource allocation strategies autonomously in more complex non-stationary noise environments. For theoretical extension of the framework, extending purification from the quantum state level to virtual channel purification at the quantum channel level deserves in-depth exploration; if channel-level purification can be combined with the logical gate operations of error-correcting codes, purification enhancement could be achieved for the computation process itself rather than only for storage, greatly expanding the framework's scope of application. For experimental validation, the next urgent step is to implement small-scale prototype experiments on real quantum processors, to verify whether the actual gain of purification in real noise environments matches simulation predictions and to identify hardware-specific factors not adequately captured in simulation.
In summary, by embedding an adjustable purification preprocessing module between the encoding layer and the physical layer, without changing the underlying error-correcting code structure, the purification-assisted quantum error correction framework and IPEC algorithm proposed in this paper raise the equivalent fault-tolerance threshold from approximately 1.1% to approximately 2.0% and improve logical error rates by one to two orders of magnitude. From the entropy-theoretic perspective, the framework realizes a two-stage entropy management process: the purification module compresses the von Neumann entropy of noisy quantum states through symmetric subspace projection, the error correction module maintains the low-entropy condition during subsequent computation, and the adaptive feedback mechanism dynamically regulates the entropy compression rate, together forming a closed-loop entropy flow control system for quantum computation. Numerical simulations have validated the effectiveness of the framework and the robustness of the adaptive feedback mechanism under two code structures (surface codes and quantum LDPC codes) and two noise models (independent noise and circuit-level noise), providing a technical path that balances performance and resource efficiency for reliable quantum information processing on near-term devices whose physical error rates have not yet fallen sufficiently below the fault-tolerance threshold.

Author Contributions

Jiaqi Tang and Zhaohui Liao: Conceptualization, Methodology, Formal analysis, Writing—original draft; Mu-Jiang-Shan Wang: Methodology, Supervision, Writing—original draft, Writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Pompili, M.; Hermans, S.L.N.; Baier, S.; et al. Realization of a multinode quantum network of remote solid-state qubits. Science. 2021, 372(6539), 259–264. [CrossRef]
  2. Hermans, S.L.N.; Pompili, M.; Beukers, H.K.C.; et al. Qubit teleportation between non-neighbouring nodes in a quantum network. Nature. 2022, 605, 663–668. [CrossRef]
  3. Campbell, E. A series of fast-paced advances in quantum error correction. Nature Reviews Physics. 2024, 6, 160–161. [CrossRef]
  4. Van Leent, T.; Bock, M.; Fertig, F.; et al. Entangling single atoms over 33 km telecom fibre. Nature. 2022, 607, 69–73. [CrossRef]
  5. Yan, P.S.; Zhou, L.; Zhong, W.; et al. Advances in quantum entanglement purification. Science China Physics, Mechanics & Astronomy. 2023, 66, 250301. [CrossRef]
  6. Zhao, L.; Wang, M.-J.-S.; Zhang, X.; Lin, Y.; Wang, S. An algorithm for the orientation of complete bipartite graphs. In Proceedings of the 2017 International Conference on Applied Mathematics, Modelling and Statistics Application; Atlantis Press. 2017, 361–364. [CrossRef]
  7. O'Brien, T.E.; Anselmetti, G.; Gkritsits, F.; et al. Purification-based quantum error mitigation of pair-correlated electron simulations. Nature Physics. 2023, 19(12), 1787–1792. [CrossRef]
  8. Google Quantum AI. Quantum error correction below the surface code threshold. Nature. 2025, 638(8052), 920–926. [CrossRef]
  9. Bravyi, S.; Cross, A.W.; Gambetta, J.M.; et al. High-threshold and low-overhead fault-tolerant quantum memory. Nature. 2024, 627(8005), 778–782. [CrossRef]
  10. Cai, Z.; Babbush, R.; Benjamin, S.C.; et al. Quantum error mitigation. Reviews of Modern Physics. 2023, 95(4), 045005. [CrossRef]
  11. Rengaswamy, N.; Raina, A.; Seshadri, S.; et al. Entanglement purification with quantum LDPC codes and iterative decoding. Quantum. 2024, 8, 1233.
  12. Monteiro, R.A.; Coutinho, B.C.; Roque, A.; et al. Efficient entanglement purification based on noise guessing decoding. Quantum. 2024, 8, 1476.
  13. Ryan-Anderson, C.; Bohnet, J.G.; Lee, K.; et al. Realization of real-time fault-tolerant quantum error correction. Physical Review X. 2021, 11(4), 041058. [CrossRef]
  14. Postler, L.; Heußen, S.; Pogorelov, I.; et al. Demonstration of fault-tolerant universal quantum gate operations. Nature. 2022, 605(7911), 675–680. [CrossRef]
  15. Ni, Z.; Li, S.; Deng, X.; et al. Beating the break-even point with a discrete-variable-encoded logical qubit. Nature. 2023, 616, 56–60. [CrossRef]
  16. Google Quantum AI. Suppressing quantum errors by scaling a surface code logical qubit. Nature. 2023, 614(7949), 676–681. [CrossRef]
  17. Sivak, V.V.; Eickbusch, A.; Royer, B.; et al. Real-time quantum error correction beyond break-even. Nature. 2023, 616(7955), 50–55. [CrossRef]
  18. Bluvstein, D.; Evered, S.J.; Geim, A.A.; et al. Logical quantum processor based on reconfigurable atom arrays. Nature. 2024, 626(7997), 58–65. [CrossRef]
  19. Wang, M.-J.-S.; Wang, S. Diagnosability of Cayley graph networks generated by transposition trees under the comparison diagnosis model. Annals of Applied Mathematics. 2016, 32(2), 166–173.
  20. Wang, S.Y.; Wang, M.J.S. The edge connectivity of expanded k-ary n-cubes. Discrete Dynamics in Nature and Society. 2018, 2018, 1–7. [CrossRef]
  21. Wang, M.-J.-S.; Yuan, J.; Lin, S.W.; et al. Ordered and Hamilton digraphs. Chinese Quarterly Journal of Mathematics. 2010, 25(3), 317–326.
  22. Wang, S.; Wang, M.-J.-S. The edge connectivity of expanded k-ary n-cubes. Discrete Dynamics in Nature and Society. 2018, 7867342. [CrossRef]
  23. Xu, Q.; Bonilla Ataides, J.P.; Pattison, C.A.; et al. Constant-overhead fault-tolerant quantum computation with reconfigurable atom arrays. Nature Physics. 2024, 20(7), 1084–1090. [CrossRef]
  24. Breuckmann, N.P.; Eberhardt, J.N. Quantum low-density parity-check codes. PRX Quantum. 2021, 2(4), 040101.
  25. Bausch, J.; Senior, A.W.; Heras, F.J.H.; et al. Learning high-accuracy error decoding for quantum processors. Nature. 2024, 635(8040), 834–840. [CrossRef]
  26. Eickbusch, A.; McEwen, M.; Sivak, V.; et al. Demonstration of dynamic surface codes. Nature Physics. 2025, 21, 1994–2001. [CrossRef]
  27. Gupta, R.S.; Sundaresan, N.; Alexander, T.; et al. Encoding a magic state with beyond break-even fidelity. Nature. 2024, 625(7994), 259–263. [CrossRef]
  28. Bombín, H.; Pant, M.; Roberts, S.; et al. Fault-tolerant postselection for low-overhead magic state preparation. PRX Quantum. 2024, 5(1), 010302. [CrossRef]
  29. Van den Berg, E.; Minev, Z.K.; Kandala, A.; et al. Probabilistic error cancellation with sparse Pauli-Lindblad models on noisy quantum processors. Nature Physics. 2023, 19(8), 1116–1121. [CrossRef]
  30. Kim, Y.; Eddins, A.; Anand, S.; et al. Evidence for the utility of quantum computing before fault tolerance. Nature. 2023, 618(7965), 500–505. [CrossRef]
  31. Quek, Y.; Stilck França, D.; Khatri, S.; et al. Exponentially tighter bounds on limitations of quantum error mitigation. Nature Physics. 2024, 20(10), 1648–1658. [CrossRef]
  32. Kim, Y.; Wood, C.J.; Yoder, T.J.; et al. Scalable error mitigation for noisy quantum circuits produces competitive expectation values. Nature Physics. 2023, 19(5), 752–759. [CrossRef]
  33. Wei, Z.L.; An, H.Y.; Yao, Y.; Su, W.C.; Li, G.; Saifullah; Sun, B.F.; Wang, M.-J.-S. FSTGAT: Financial spatio-temporal graph attention network for non-stationary financial systems and its application in stock price prediction. Symmetry. 2025, 17(8), 1344. [CrossRef]
  34. Childs, A.M.; Fu, H.; Leung, D.; et al. Streaming quantum state purification. Quantum. 2025, 9, 1603. [CrossRef]
  35. Yao, H.; Chen, Y.-A.; Huang, E.; et al. Protocols and trade-offs of quantum state purification. Quantum Science and Technology. 2025, 10(3), 035020. [CrossRef]
  36. Liu, Z.; Zhang, X.; Fei, Y.-Y.; et al. Virtual channel purification. PRX Quantum. 2025, 6(2), 020325. [CrossRef]
  37. Yamasaki, H.; Koashi, M. Time-efficient constant-space-overhead fault-tolerant quantum computation. Nature Physics. 2024, 20(2), 247–253. [CrossRef]
  38. Zhang, A.; Xie, H.; Gao, Y.; et al. Demonstrating quantum error mitigation on logical qubits. Nature Communications. 2026, 17, 1021. [CrossRef]
  39. Bluvstein, D.; Geim, A.A.; Li, S.H.; et al. A fault-tolerant neutral-atom architecture for universal quantum computation. Nature. 2026, 649, 39–46. [CrossRef]
  40. Wills, A.; Kang, M.-H.; Stage, H. Constant-overhead magic state distillation. Nature Physics. 2025, 21, 1637–1643.
  41. Itogawa, T.; Takada, Y.; Hirano, Y.; et al. Even more efficient magic state distillation by zero-level distillation. PRX Quantum. 2025, 6(2), 020356. [CrossRef]
  42. Sohn, I.; Lee, C.; Song, W.; et al. Quantum error mitigation via structural encoding with classical error correction codes. EPJ Quantum Technology. 2026, 13, 474.
  43. Cirac, J.I.; Ekert, A.K.; Macchiavello, C. Optimal purification of single qubits. Physical Review Letters. 1999, 82(21), 4344–4347. [CrossRef]
  44. Barenco, A.; Berthiaume, A.; Deutsch, D.; et al. Stabilization of quantum computations by symmetrization. SIAM Journal on Computing. 1997, 26(5), 1541–1557.
  45. Krinner, S.; Lacroix, N.; Remm, A.; et al. Realizing repeated quantum error correction in a distance-three surface code. Nature. 2022, 605(7911), 669–674. [CrossRef]
  46. Wang, S.; Wang, Z.; Wang, M.-J.-S.; Han, W. g-Good-neighbor conditional diagnosability of star graph networks under PMC model and MM* model. Frontiers of Mathematics in China. 2017, 12(5), 1221–1234. [CrossRef]
  47. Nielsen, M.A.; Chuang, I.L. Quantum Computation and Quantum Information, 10th Anniversary Edition; Cambridge University Press: Cambridge, UK, 2010.
  48. Zhao, Y.; Ye, Y.; Huang, H.; et al. Realization of an error-correcting surface code with superconducting qubits. Physical Review Letters. 2022, 129(3), 030501. [CrossRef]
  49. Google Quantum AI. Exponential suppression of bit or phase errors with cyclic error correction. Nature. 2021, 595, 383–387. [CrossRef]
  50. Terhal, B.M. Quantum error correction for quantum memories. Reviews of Modern Physics. 2015, 87(2), 307–346. [CrossRef]
  51. Fowler, A.G.; Mariantoni, M.; Martinis, J.M.; et al. Surface codes: towards practical large-scale quantum computation. Physical Review A. 2012, 86(3), 032324. [CrossRef]
  52. Livingston, W.P.; Blok, M.S.; Flurin, E.; et al. Experimental demonstration of continuous quantum error correction. Nature Communications. 2022, 13, 2307. [CrossRef]
  53. Reuer, K.; Landgraf, J.; Fösel, T.; et al. Realizing a deep reinforcement learning agent for real-time quantum feedback. Nature Communications. 2023, 14, 7138. [CrossRef]
Figure 2. Overall architecture diagram of the purification-assisted quantum error correction framework.
Figure 3. Purification circuit structure diagram based on recursive SWAP tests.
Figure 4. Curves of purification fidelity versus number of copies m under depolarizing noise.
Figure 5. Comparison diagram of resource overhead between purification-assisted schemes and non-purification schemes under different physical error rates.
Figure 6. Comparison diagram of logical error rate scaling with code distance between the purification-assisted framework and standard error correction schemes.
Figure 7. Convergence behavior of effective error rate of the IPEC algorithm with iteration rounds (heatmap superimposed with convergence curves).
Figure 8. Comparison of logical error rates of each scheme as a function of physical error rate (log-log contour plot).
Figure 9. Scaling behavior of logical error rate of the IPEC scheme under circuit-level noise model (faceted box plots).
Figure 10. Radar-area composite plot of resource efficiency metric η as a function of physical error rate.
Figure 11. Time series tracking diagram of adaptive response of the IPEC algorithm under non-stationary noise (dual-axis linked panel).
Table 1. Comparison of representative purification-related methods and the proposed framework.

| Method | Purification | Adaptive Mechanism | Threshold Enhancement | Resource Efficiency |
|---|---|---|---|---|
| Rengaswamy et al. [11] | Yes (entanglement-level) | No | Limited | High overhead |
| Liu et al. [36] | Channel-level | No | Theoretical analysis only | Not quantified |
| Childs et al. [34] | Yes (streaming) | No | Not addressed | Asymptotically optimal samples |
| This work | Yes (state-level, integrated with QEC) | Yes (syndrome feedback-driven) | Explicit (≈1.1% → ≈2.0%) | Optimizable via adaptive depth |
Table 2. Comparison of key parameters for the two types of error correcting codes compatible with this framework.

| Parameter | Surface Code | Quantum LDPC Code |
|---|---|---|
| Encoding rate k/n | O(1/d²) | Θ(1) (constant order) |
| Fault-tolerance threshold (depolarizing noise) | ~1.1% | ~0.7% |
| Connectivity topology requirements | Nearest-neighbor two-dimensional lattice | Non-local long-range connections |
| Physical overhead per logical qubit (d = 7) | O(n) physical qubits | ~24 physical qubits |
| Decoding algorithm complexity | O(n) near-linear | O(n log n) quasi-linear |
Table 3. Correspondence between purification fidelity and von Neumann entropy under depolarizing noise (d = 2).

| Initial Fidelity F0 | S(ρ) (bits) | F_pur (m = 3) | S(ρ_pur) (bits) | Entropy Reduction ΔS (bits) | Reduction Ratio ΔS/S(ρ) |
|---|---|---|---|---|---|
| 0.70 | 0.881 | 0.845 | 0.601 | 0.280 | 31.8% |
| 0.80 | 0.722 | 0.927 | 0.353 | 0.369 | 51.1% |
| 0.90 | 0.469 | 0.966 | 0.175 | 0.294 | 62.7% |
| 0.95 | 0.286 | 0.993 | 0.044 | 0.242 | 84.6% |
Table 4. Performance comparison between purification-assisted schemes and standard error correction schemes under different noise regimes.

| Noise Regime | Standard Error Correction Scheme | Purification-Assisted Scheme (m = 3) | Resource Change |
|---|---|---|---|
| p < 0.5 p_th (low noise) | Logical error rate exponentially suppressed, resource optimal | Logical error rate further reduced, but additional 3× overhead | Resource increase ~200% |
| 0.5 p_th ≤ p < p_th (near threshold) | Logical error rate suppression factor Λ decreases | Exponential suppression restored, Λ significantly improved | Total resource reduced 30%–60% |
| p ≥ p_th (super-threshold) | Error correction fails, logical error rate increases with code distance | Equivalent error rate reduced below threshold, error correction becomes effective again | Makes infeasible scheme feasible |
Table 5. Key hyperparameter settings of the IPEC algorithm.

| Hyperparameter | Symbol | Value | Description |
|---|---|---|---|
| Initial purification depth | m_0 | 3 | Balances initial fidelity gain and resource consumption |
| Maximum purification depth | m_max | 6 | Constrained by total qubit budget |
| Upper threshold coefficient | p_up/p_th | 0.8 | Sensitivity for triggering purification deepening |
| Lower threshold coefficient | p_low/p_th | 0.4 | Sensitivity for triggering purification shallowing |
| Smoothing factor | α | 0.3 | Decay weight of exponential moving average |
| Syndrome window length | W | 10 rounds | Number of historical syndrome rounds used for error rate estimation |
Table 6. Convergence characteristics of the IPEC algorithm under different initial purification depths (p = 1.0%, surface code d = 5).

| Initial Depth m_0 | Convergence Rounds | Steady-State Effective Error Rate | Steady-State Purification Depth m | Average Qubit Consumption per Round |
|---|---|---|---|---|
| 2 | 12 | 0.18% | 3 | 147 |
| 3 | 7 | 0.12% | 3 | 153 |
| 4 | 4 | 0.11% | 3 | 196 (first 4 rounds) → 153 |
| 5 | 3 | 0.11% | 3 | 245 (first 3 rounds) → 153 |
Table 7. Configuration parameters of each baseline scheme in numerical simulations.

| Scheme | Error Correcting Code Type | Code Distance d | Purification Depth m | Adaptive Feedback | Number of Physical Qubits |
|---|---|---|---|---|---|
| Baseline A | Surface code | 3, 5, 7, 9, 11 | None | No | 17–221 |
| Baseline B | LDPC code [[144, 12, 12]] | 12 | None | No | 144 |
| Baseline C | Surface code | 3, 5, 7 | 3 (fixed) | No | 51–147 |
| IPEC | Surface code | 3, 5, 7 | 2–6 (dynamic) | Yes | 34–147 (mean) |
| IPEC-LDPC | LDPC code [[144, 12, 12]] | 12 | 2–6 (dynamic) | Yes | 288–864 (mean) |
Table 8. Logical error rates (per round) of each scheme under independent depolarizing noise.

| Physical Error Rate p | Code Distance d | Standard Surface Code | Fixed Purification (m = 3) | IPEC Scheme | IPEC Improvement Factor |
|---|---|---|---|---|---|
| 0.30% | 5 | 4.2 × 10^-4 | 8.5 × 10^-5 | 6.1 × 10^-5 | 6.9× |
| 0.30% | 7 | 1.6 × 10^-5 | 2.8 × 10^-6 | 1.9 × 10^-6 | 8.4× |
| 0.50% | 5 | 1.9 × 10^-3 | 3.1 × 10^-4 | 2.2 × 10^-4 | 8.6× |
| 0.50% | 7 | 8.3 × 10^-5 | 3.4 × 10^-6 | 1.7 × 10^-6 | 49× |
| 1.00% | 5 | 1.4 × 10^-2 | 1.8 × 10^-3 | 1.1 × 10^-3 | 12.7× |
| 1.00% | 7 | 2.1 × 10^-3 | 8.7 × 10^-5 | 4.6 × 10^-5 | 45.7× |
| 1.50% | 7 | 1.8 × 10^-2 | 6.2 × 10^-4 | 3.5 × 10^-4 | 51.4× |
Table 9. Resource efficiency comparison between the IPEC scheme and standard schemes.

| Configuration | Standard Scheme (n_std, p_L^std) | IPEC Scheme (n_IPEC, p_L^IPEC) | Resource Efficiency η |
|---|---|---|---|
| p = 0.5%, d = 5 | 49 qubits, 1.9 × 10^-3 | 147 qubits, 2.2 × 10^-4 | 1.03 |
| p = 0.5%, d = 7 | 97 qubits, 8.3 × 10^-5 | 147 qubits, 1.7 × 10^-6 | 2.57 |
| p = 1.0%, d = 5 | 49 qubits, 1.4 × 10^-2 | 147 qubits, 1.1 × 10^-3 | 1.18 |
| p = 1.0%, d = 7 | 97 qubits, 2.1 × 10^-3 | 147 qubits, 4.6 × 10^-5 | 2.51 |
| p = 1.5%, d = 7 | 97 qubits, 1.8 × 10^-2 | 219 qubits, 3.5 × 10^-4 | 1.70 |
Table 10. Performance comparison between the IPEC scheme and fixed purification scheme under stationary and non-stationary noise scenarios (d = 7).

| Noise Scenario | Fixed Scheme p_L (m = 3) | IPEC Scheme p_L | Fixed Scheme Average Qubit Consumption | IPEC Average Qubit Consumption |
|---|---|---|---|---|
| Stationary p = 0.5% | 3.4 × 10^-6 | 1.7 × 10^-6 | 291 | 204 |
| Stationary p = 1.0% | 8.7 × 10^-5 | 4.6 × 10^-5 | 291 | 257 |
| Non-stationary p: 0.5% → 1.5% | 9.3 × 10^-4 | 2.8 × 10^-4 | 291 | 312 |
| Non-stationary p: 1.5% → 0.5% | 3.4 × 10^-6 | 2.1 × 10^-6 | 291 | 178 |
| Periodic fluctuation p ∈ [0.4%, 1.2%] | 5.1 × 10^-5 | 2.7 × 10^-5 | 291 | 231 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.
