1. Introduction
Quantum computing systems are anticipated to surpass their classical counterparts by exploiting quantum mechanical principles such as superposition, entanglement, and interference [1,2]. Driven by this potential, quantum computing has been attracting the attention of researchers and investors alike. Funding for the development of quantum technologies, from both public and private investors, increased 54% between 2023 and 2024, reaching US$2.0 billion worldwide, and is expected to reach around US$16.4 billion by 2027 [3,4]. This trend is also reflected in the rapid growth of quantum-focused companies such as D-Wave, IonQ, and Quantinuum, which have increased their market value by more than 2,530%, 800%, and 812%, respectively, over the past 12 months, reaching a combined market capitalization of approximately US$50 billion [5,6,7]. The massive investments in this sector have enabled significant advancements in multidisciplinary fields, such as finance [8,9], materials science [10,11], chemistry [12], pharmacology and health sciences [13,14], and machine learning [15,16].
Another major area impacted by quantum technology is cryptography [17,18,19]. In 1994, Shor proposed an algorithm that uses quantum information processing for efficient integer factoring [20], proving the feasibility of a quantum-based factoring attack. By combining modular exponentiation with the Quantum Fourier Transform (QFT), previously conceived by Coppersmith [21], Shor's algorithm extracts the period of a function from an initial superposition state. By leveraging these components, the algorithm runs in polynomial time, in contrast to the exponential time required by the classical procedures underlying RSA integer factorization [22,23]. The advent of this algorithm created a demand for novel ways to address quantum-based threats to cryptography [24].
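To make the role of the extracted period concrete, the sketch below illustrates the classical post-processing that turns a period into factors. It is purely illustrative: in Shor's algorithm the period r is obtained by the quantum subroutine, whereas here it is found by brute force, and the function and variable names are ours.

```python
from math import gcd

def order_brute_force(a, n):
    """Multiplicative order of a mod n, found classically for illustration only.
    In Shor's algorithm this period is what the quantum circuit extracts."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_from_period(n, a):
    """Classical post-processing: derive nontrivial factors of n from the period r."""
    r = order_brute_force(a, n)
    if r % 2 == 1:
        return None            # odd period: retry with another base a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None            # trivial square root: retry with another base a
    p, q = gcd(y - 1, n), gcd(y + 1, n)
    return (p, q) if p * q == n else None

# The order of 7 mod 15 is 4, giving the factors (3, 5).
print(factor_from_period(15, 7))  # (3, 5)
```

For example, with n = 15 and a = 7 the period is r = 4, so y = 7^2 mod 15 = 4 and the factors 3 and 5 emerge from gcd(3, 15) and gcd(5, 15).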
Building upon this knowledge, several subsequent studies sought to improve Shor's algorithm. In [25], the authors focused on speeding up the arithmetic operations by using improved adder designs that allow the parallel execution of quantum gates, while also optimizing the overall circuit structure. By doing so, they improved modular exponentiation performance by up to 700 times compared with existing approaches. Building upon the work of Van Meter and Itoh [25], the authors in [26] proposed a new reversible circuit for modular exponentiation using linear-size circuits, working at the register-transfer level instead of the commonly used qubit-transfer level. Their methodology showed better scalability than previously available methods, requiring four times fewer qubits.
The work by Ekerå [27] proposed changes to both the QFT stage and the post-processing stage. The author introduced a variant of Shor's discrete logarithm algorithm that is optimized when the discrete logarithm is significantly smaller than the group order. The algorithm uses smaller QFTs, whose reduced sizes lower the total qubit requirements in this setting. Furthermore, instead of the classical continued-fractions method used in Shor's post-processing, the author employed lattice-based techniques to recover the discrete logarithm from the quantum measurement outcomes. This work was further expanded by Ekerå and Håstad [28], who showed that RSA factorization can be formulated as a short discrete logarithm problem, thus reducing the burden on quantum computers.
Chevignard et al. [29] optimized the number of logical qubits required by Shor's algorithm by providing an alternative algorithm for the modular exponentiation part. By combining Ekerå–Håstad's algorithm, compression techniques, and residue arithmetic, the authors reduced the number of logical qubits required for RSA integer factoring. However, this simplification introduces a trade-off between the total number of qubits and the number of gates necessary for its implementation. Gidney [30] reviews the progress toward factoring RSA-2048 using optimized, fault-tolerant variants of Shor's algorithm. The author focused on minimizing logical qubit counts and circuit depth by combining algorithmic refinements, advanced quantum arithmetic, and lattice-based post-processing techniques, while explicitly accounting for surface-code error correction. Under optimistic physical error rates, the work estimates that RSA-2048 factorization would require on the order of hundreds of thousands of logical qubits, corresponding to fewer than one million noisy physical qubits, and would take up to days to execute.
Other methodologies investigated different combinations of classical and quantum subroutines to factor composite numbers, i.e., integers formed as the product of two prime numbers. This strategy has been reported in several studies [31,32,33]. By combining classical and quantum frameworks, these works achieved a significant reduction in the computational burden of Shor's factorization, proving the feasibility and utility of such an approach.
As the literature shows, there are many efforts targeting different components of Shor's algorithm, ranging from modular exponentiation to post-processing. However, a common feature of these works is their reliance on the QFT circuit or its variants for period extraction. Therefore, it remains necessary to investigate alternatives to the QFT structure itself, since alternative transforms may be more practical on quantum hardware for specific applications. One strong contender for this task is the Number Theoretic Transform (NTT), a specialized variant of the DFT that operates over finite fields via modular arithmetic rather than complex numbers [34,35]. Classically, the NTT runs in O(n log n) time and avoids floating-point precision issues, making it well-suited to cryptographic computations [36]. A quantum version of the NTT could, in principle, serve as a substitute for the QFT in algorithms involving integer or polynomial structures. Such an implementation might offer advantages in precision and potentially simpler gate constructions, since modular addition can be easier to realize than arbitrary quantum rotations, especially on current noisy quantum computer architectures.
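To illustrate the exactness property mentioned above, the following minimal classical NTT sketch operates over the small prime field Z_17 with ω = 4, a primitive 4th root of unity (these parameters are chosen by us for illustration; any prime p with n | p − 1 works). The direct O(n²) summation is used for clarity rather than the O(n log n) butterfly form.

```python
def ntt(x, omega, p):
    """Number Theoretic Transform: a DFT over Z_p that uses a primitive
    n-th root of unity omega in place of exp(-2*pi*i/n)."""
    n = len(x)
    return [sum(x[j] * pow(omega, j * k, p) for j in range(n)) % p
            for k in range(n)]

def intt(X, omega, p):
    """Inverse NTT: uses omega^{-1} and the modular inverse of n."""
    n = len(X)
    inv_n = pow(n, -1, p)          # modular inverse (Python 3.8+)
    inv_omega = pow(omega, -1, p)
    return [(inv_n * sum(X[k] * pow(inv_omega, j * k, p) for k in range(n))) % p
            for j in range(n)]

p, omega = 17, 4                   # 4 has order 4 in Z_17: 4**4 % 17 == 1
x = [1, 2, 3, 4]
X = ntt(x, omega, p)
print(X)                           # [10, 7, 15, 6]
assert intt(X, omega, p) == x      # exact round trip, no floating-point error
```

Because every operation is integer arithmetic modulo p, the round trip is exact, which is precisely the precision advantage the NTT offers over the complex-valued DFT.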
In this context, incorporating a quantum version of the NTT within a quantum framework could lead to a modular design of QFT circuits. By breaking down the QFT into smaller components and selectively substituting them with specialized transforms, the overall circuit can be simplified and made more efficient for NISQ hardware. Such modularization not only streamlines the implementation of fundamental quantum algorithms but also improves their adaptability and performance [37]. Modular QFT architectures support more efficient Quantum Phase Estimation (QPE), a key subroutine in many quantum applications such as quantum chemistry, Hamiltonian learning, and the Variational Quantum Eigensolver (VQE) [38]. Decomposing the QFT into optimally connected building blocks helps to overcome limitations in qubit connectivity, mitigate gate errors, and reduce decoherence, thereby enabling algorithms to tackle larger instances and deeper circuits [39]. Additionally, this strategy enables dynamic error mitigation and adaptive allocation of quantum resources, thereby enhancing the reliability and scalability of computations on available quantum devices.
Additionally, to the best of the authors' knowledge, a purely classical computation of the phase-estimation inputs of the factorization algorithm has yet to be addressed in the specialized literature. This is a crucial gap, since quantum modular exponentiation is the most computationally demanding part of the factorization algorithm as proposed by Shor [40]. By conveying part of the quantum algorithm into a classical framework, it should be possible not only to enable faster quantum computations but also to factorize larger composite numbers on currently available quantum machines.
The reviewed literature indicates that the qubit requirements for factoring RSA-2048 remain on the order of one million under commonly assumed architectures and error-correction models, leaving a substantial gap between current resource estimates and near-term practical feasibility. Reducing this requirement to the low-thousands-qubit regime therefore remains an important open research objective. To bridge this gap, we introduce a novel hybrid classical–quantum factorization algorithm, named the Jesse–Victor–Gharabaghi (JVG) algorithm, which separates the portions of the processing that can be effectively executed on classical computers from the complex period finding that is more effectively handled by quantum circuits. The novel architecture thus incorporates a purely classical modular exponentiation subroutine, followed by a Quantum Number Theoretic Transform as an alternative to the usual QFT circuit. It is important to emphasize that JVG's novelty lies in advancing number-theoretic period finding by extracting the period over a finite ring rather than the complex field.
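The classical half of such a hybrid pipeline can be sketched in a few lines (the function and variable names below are ours, not taken from the JVG implementation): the modular-exponentiation values that Shor's algorithm computes with a costly quantum circuit can instead be tabulated classically with fast exponentiation, leaving only the spectral step to the quantum device.

```python
def modexp_values(a, N, n_qubits):
    """Classically tabulate f(x) = a^x mod N for every x in the input register.
    Python's three-argument pow uses fast square-and-multiply modular
    exponentiation, so each entry costs O(log x) multiplications."""
    return [pow(a, x, N) for x in range(2 ** n_qubits)]

# Illustrative run: the period r = 4 of 7^x mod 15 is already visible
# in the classically generated sequence.
values = modexp_values(a=7, N=15, n_qubits=3)
print(values)  # [1, 7, 4, 13, 1, 7, 4, 13]
```

In a hybrid workflow of this kind, a table like the one above would be encoded into a quantum register, after which a transform-and-measurement stage extracts the period.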
The present work constitutes the first empirical validation of a hybrid QNTT-based approach through comprehensive benchmarking on both simulated and real quantum backends, including resource-scaling projections under realistic NISQ constraints. This methodology offers a distinct and measurable contribution beyond previous QNTT formulations, which were neither integrated into nor tested within a complete quantum factoring framework.
At this stage of the study, the comparison is limited to the proposed QNTT-based circuit and the standard Shor's algorithm using the QFT, as provided by IBM, which serves as our gold standard. This provides a clear baseline for evaluating performance and scalability. By establishing a direct comparison, it becomes possible to quantify the resource savings and noise resilience provided by the QNTT circuit while maintaining the same algorithmic framework.
To keep the comparison interpretable, we use two baselines: (i) IBM’s reference Shor implementation (including controlled modular exponentiation and IQFT) and (ii) a matched hybrid baseline in which modular-exponentiation values are generated classically and embedded, while the transform block is IQNTT. This lets us isolate the effect of choosing IQFT vs. IQNTT within the same hybrid workflow, while still reporting end-to-end results against IBM’s full Shor pipeline.
3. Results
To evaluate the performance of the proposed methodology, we conducted experiments using two different approaches. The first used the Qiskit SDK to simulate the quantum circuit on a classical device. Table 1 lists the hardware used for the quantum simulations, which ran under the Windows 11 operating system.
The first part consisted of simulating both Shor's algorithm and JVG on the Qiskit Aer simulator. To this end, we selected composite numbers ranging from 2 to 75 digits, which is necessary to assess the algorithms' performance on composites of different sizes. For each of these values, the total number of qubits used in the circuit also changes. The implemented Shor's algorithm could not factor numbers larger than 5 digits: we experimented with larger composite numbers, but, because simulating quantum operations on classical devices is still very demanding [59,60], Shor's algorithm failed to run due to hardware limitations. However, JVG's hybrid configuration reduced the computational burden and used fewer resources during simulation, which allowed us to investigate the algorithm's performance for larger composite numbers of up to 75 digits. This upper bound was selected to show the proposed algorithm's performance at a circuit size comparable to the largest instance achieved by Shor's, allowing a more direct and meaningful comparison between the two methodologies.
The second part involved implementing the same algorithms on an IBM quantum computer using the same numbers. At this stage, it is relevant to note that in both parts JVG was compared against the Shor's algorithm implementation provided by IBM [61], which is considered the gold standard for these evaluations.
While the present results do not target cryptographically relevant factorizations such as RSA-2048, they establish a clear trajectory of improvement, indicating that the same principles could translate into significant resource savings in the NISQ era.
Table 2 reports the number of digits of each composite number and whether it was factored by each algorithm. Henceforth, whenever two-digit numbers are presented in the following tables, they are listed in ascending order.
To maintain clarity between simulated and experimental outcomes, all performance metrics in this work are reported within their native execution contexts. The simulation-based results use Qiskit’s gate model with the CX and U primitives, along with wall-clock execution time and RAM (memory) consumption as system-level metrics. These measurements capture algorithmic behavior independent of hardware-specific constraints. Conversely, the quantum computer evaluations are based on IBM Q backends that utilize a different native gate basis, primarily SX, CZ, and RZ, and we also report the Qiskit Runtime (QR) wall-clock time returned by the runtime service as a practical indicator of end-to-end execution demand. Because QR can include system-level overheads beyond the circuit itself (e.g., service/runtime orchestration and compilation effects), we use it primarily for within-setting comparisons and trend analysis, while native gate counts remain the primary hardware-independent resource indicators. This ensures that simulator results reflect algorithmic scaling properties, while hardware results capture physical implementation behavior under realistic noise and transpilation conditions.
3.1. Results for Quantum Simulation
To account for stochastic effects in the simulation, each experiment was repeated 10 times. The mean and standard deviation were computed for every configuration across these runs. The average coefficient of variation is reported in Table 3 for both approaches.
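For reference, the coefficient of variation used here is the standard deviation expressed as a fraction of the mean. A minimal sketch over hypothetical run times (the sample values are illustrative, not our measured data):

```python
from statistics import mean, stdev

def coefficient_of_variation(samples):
    """CV = sample standard deviation / mean, often reported as a percentage."""
    return stdev(samples) / mean(samples)

# Hypothetical wall-clock times (seconds) for 10 repeated simulator runs.
runs = [75.8, 76.1, 76.4, 75.9, 76.2, 76.0, 76.3, 75.7, 76.1, 76.0]
print(f"CV = {coefficient_of_variation(runs):.2%}")
```

A CV well below a few percent, as in this sketch, indicates stable behavior across repetitions.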
The results in Table 3 reveal a substantial performance advantage of the JVG algorithm over Shor's implementation. Most notably, the proposed methodology was able to factor significantly larger numbers, containing up to 75 digits. This reflects directly on the total number of qubits used in each circuit. For Shor's algorithm, the qubit count increased as the composite number increased; this relationship does not necessarily hold for JVG. For example, Table 4 shows that JVG's circuit size did not change when factoring 15, 21, 143, and 1363, all of which needed only a 6-qubit circuit. Furthermore, considering the largest composite number factored by both algorithms, there is a significant difference in qubit count: while Shor's algorithm required 70 qubits to factor a 5-digit composite number, JVG required only 6 qubits for the same number, a reduction of over 90%.
As a direct implication of circuit size, the simulation time also differs significantly. The data in Table 3 further reveal that JVG uses significantly fewer resources than the original Shor's algorithm.
Figure 7 and Figure 8 plot the time and memory (RAM) metrics on a log10 scale. Note that the x-axis uses the number of digits of each composite, not the composite number itself. These plots should be read as the resources required by each pipeline to factor the tested instances, not as a matched-qubit comparison.
Figure 7 and Figure 8 reveal a clear divergence in scaling behavior between the two approaches. Over the measured range, the JVG pipeline exhibits an approximately linear growth trend in the log-log plot with respect to the number of digits, as indicated by the best-fit regression, while Shor's implementation displays a markedly steeper growth consistent with exponential-like behavior under the same fitting procedure. As a result, the JVG model required around 76 s to finish the factorization of a 75-digit composite number, compared to 170 s for Shor's algorithm to factor a 5-digit instance, 55% less time per simulation run. For a 5-digit composite number, the JVG alternative also shows improved RAM usage, requiring on average 267 MB of memory compared to 12.5 GB for the Shor quantum circuit, a 98% reduction. For the largest composite number handled by JVG, RAM usage was around 345 MB, which remains substantially lower than the values reported for Shor's algorithm in the simulated outcomes.
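The fitting procedure referred to above can be reproduced with a least-squares fit in log-log space, where a power law y = c·x^b becomes a straight line of slope b. The sketch below uses synthetic data, not our measured values, to show how the slope classifies the growth trend:

```python
import math

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) versus log(x).
    For y = c * x**b the slope equals b, so a straight line in log-log
    space indicates power-law growth (slope ~1 means linear growth)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    return (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
            / sum((a - mx) ** 2 for a in lx))

# Synthetic illustration: runtime growing linearly with digit count.
digits = [2, 5, 10, 25, 50, 75]
runtime = [1.5 * d for d in digits]          # y = c * x**1
print(round(loglog_slope(digits, runtime), 3))  # 1.0
```

Exponential-like data, by contrast, does not produce a straight line in log-log space, which is how the two regimes are distinguished under the same fitting procedure.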
The gate counts also show that number factorization with the JVG circuit scales better than the traditional Shor's circuit. Still considering the 5-digit case, JVG reduced the number of CX and U gates by 99% compared to traditional Shor's. Furthermore, direct assessment of the data in Table 3 confirms the better scalability of the JVG methodology: its total computational resources increase, but at a slower rate than Shor's algorithm.
Considering the variation in resource usage from the smallest to the largest composite number successfully factored by both algorithms, it is possible to observe how each approach scales as circuit size increases. Over this interval, Shor's circuit exhibited a 172-fold increase in runtime and a nearly 29-fold increase in RAM consumption. JVG's runtime, in turn, increased by a factor of 1.5, while its memory usage varied by only 2%, remaining stable. For gate counts, the most significant difference is observed for the U gate, which increased by over 175 times for Shor's algorithm versus nearly 3 times for JVG.
As previously stated, Shor's algorithm, under the same experimental and hardware constraints, could not be executed for composite numbers larger than 5 digits due to prohibitive computational and memory requirements. JVG, on the other hand, was able to factor the corresponding composite numbers. This distinction is significant, as it allows the algorithm's performance to be verified beyond Shor's feasibility limit, giving access to information on performance, resource scaling, and noise sensitivity. Consequently, these results demonstrate that the proposed JVG algorithm has room for further improvements that may lead to successful runs at scales currently unattainable by fully quantum algorithms.
3.2. Results for Implementation on a Real Quantum Hardware
We also repeated the circuits ten times when implementing them on a quantum device. Unlike in the simulation methodology, we could not run Shor's circuits for 5-digit numbers. This was due to the current constraints of NISQ devices, which still offer limited coherence times and qubit connectivity, preventing deeper and more complex quantum circuits from being implemented. Given Shor's configuration with a computationally expensive quantum modular exponentiation circuit, larger configurations proved unfeasible. Nevertheless, we managed to implement the JVG algorithm for the same composite values presented previously, which offers a better view of the proposed methodology.
At this phase, the quantum device used in our methodology was “ibm_torino”, which uses the Heron R1 quantum processor and has 133 qubits. We refer the reader to reference [62] for further information on this device and its processor.
Additionally, for every configuration, the mean and standard deviation were calculated across these runs, as previously described. Table 4 contains the results of implementing the algorithms.
In Table 4, both methodologies exhibited stable behavior across repeated runs, with relative standard deviations remaining within a few percentage points of the mean. The inclusion of add/subtract blocks in the QNTT circuit, together with its hybrid architecture, correlates with reduced SX, CZ, and RZ gate counts; the same behavior was observed in the simulated results in Table 3. The JVG algorithm showed more stable runs with respect to QR variation, with a standard deviation near zero across the assessed circuits, a significant improvement over the 3.80% reported for Shor's algorithm. The JVG implementation maintained comparable variability for the primary quantum gate metrics, reaching 0.73% for SX, 0.82% for CZ, and 0.39% for RZ gates; the counterpart reached 0.26%, 0.29%, and 0.33%, respectively.
Table 4 confirms the simulated results previously presented: the proposed JVG algorithm used fewer resources than the traditional Shor's algorithm.
Figure 9 presents plots comparing QR versus the total number of qubits in the circuit for each composite number. Note that these plots use the number of digits of each composite number, not the composite number itself.
In Figure 9, the QR consumption associated with Shor's algorithm follows a different empirical trend from that observed in Figure 7: here, the data are better described by a power-law fit rather than the exponential-like growth observed previously. By contrast, the JVG algorithm preserves an approximately linear growth profile in the log-log plot across both methodologies and experimental settings. Consequently, the increase in QR consumption for JVG remains close to linear with circuit size, whereas Shor's implementation exhibits substantially steeper growth that is better captured by a power-law relationship in this regime. Across all evaluated composite numbers, JVG maintained a QR time well below 10 s. In contrast, Shor's algorithm presented a significant variation in its QR-reported runtime, growing from 4 s to around 68 s between its initial and final configurations. A similar disparity is observed in quantum gate usage: relative to the full Shor factoring pipeline for the same tested instances, the total quantum gate count is reduced by more than 99% in the JVG approach.
Moreover, Shor's algorithm exhibited an approximately 16-fold increase in QR from its initial to its final configuration, whereas the QR associated with JVG increased by only a factor of three over a substantially larger problem-size range. Circuit size followed a similar trend. As the composite number increased from two to four digits, Shor's implementation expanded markedly from 18 to 46 qubits, leading to a rapid escalation in resource consumption, as shown in Table 4. By contrast, the JVG architecture displayed considerably more stable behavior, particularly for smaller composite numbers, and was able to execute circuits corresponding to a 75-digit composite number using a 70-qubit configuration. This reaffirms the intense resource consumption of Shor's algorithm compared to JVG, especially in terms of QR and qubit usage.
3.3. Projections for RSA Relevant Scenario
The goal of this extrapolation is to compare measured resource-growth behavior between the two pipelines, and not to claim a change in the known asymptotic complexity. These projections should therefore be interpreted as indicative scaling trends rather than precise forecasts, because extrapolation beyond the measured range is sensitive to error accumulation, compilation choices, and hardware-specific constraints that can change at larger qubit counts.
For clarity, we focus on a single cryptographically relevant reference point, namely RSA-2048. Importantly, the projected “QR time” reported here represents an estimate of the quantum execution cost for a single order-finding attempt under the same experimental compilation/runtime context, rather than a guaranteed end-to-end wall-clock “time-to-break RSA” under all conditions. The total time-to-factor depends on the number of attempts required (e.g., repetitions over shots and/or co-prime choices of the base) to obtain a usable order and nontrivial factors.
Assuming the same circuit size to factor RSA-2048 for both methodologies, we observe a substantial divergence relative to Shor's baseline in the projected quantum execution cost per order-finding attempt. On a cryptographically relevant quantum computer, Shor's methodology is projected to demand years to factor an RSA-2048 modulus, while JVG is estimated to require around minutes for the same task. Converting per-attempt values into an end-to-end “time-to-factor” requires accounting for the expected number of repetitions needed to recover a valid order and nontrivial factors; for example, one thousand attempts would scale the JVG estimates to roughly 11 hours, while preserving the large gap relative to the Shor baseline.
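The repetition arithmetic behind this kind of end-to-end estimate can be made explicit. The per-attempt time below (about 40 s) is an assumption we introduce purely for illustration, consistent with the minutes-scale regime quoted, not a measured value:

```python
def time_to_factor_hours(per_attempt_seconds, attempts):
    """End-to-end estimate: per-attempt quantum execution cost multiplied
    by the expected number of order-finding attempts, converted to hours."""
    return per_attempt_seconds * attempts / 3600.0

# Assumed ~40 s per attempt (illustrative) over 1000 attempts.
print(round(time_to_factor_hours(40.0, 1000), 1))  # 11.1
```

The projection simply scales linearly in the repetition count, which is why the relative gap between the two pipelines is unaffected by the choice of repetition factor.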
4. Discussion
Before interpreting these findings, it is important to clarify the conceptual scope of our contribution. The novelty of the JVG algorithm lies in its hybrid architecture, which combines classical modular exponentiation with a QNTT step, resulting in a compact quantum circuit. By offloading the most resource-intensive component of the factoring pipeline, the quantum modular exponentiation circuit, to classical computation, the quantum portion of the algorithm is reduced to a minimal, well-structured configuration that performs spectral analysis more efficiently than standard Shor's, as observed in both simulated and experimental results.
4.1. Implications of Simulated Results for JVG’s Algorithm
The overall performance of the proposed JVG algorithm showed remarkable improvement over the standard Shor's methodology, as observed in Table 3. JVG's faster quantum-circuit run time is essential for feasibility testing of the algorithm before moving to a real quantum device. Its lower memory usage indicates better efficiency in terms of classical hardware use, which is of particular interest given that the simulation of quantum operations on classical devices is currently heavily memory-constrained [59,60]. Thus, more efficient algorithms offer a superior option for modelling and simulating more complex circuits.
For number factorization with Shor's algorithm, the high gate count makes implementation on current noisy machines difficult [58]. The proposed JVG methodology, combining a classical modular exponentiation subroutine with a QNTT structure, demonstrated improved scaling of quantum gate usage. The proposed factorization methodology thus opens the way for a more NISQ-friendly alternative, in which scalability is essential [63]. Additionally, the proposed JVG algorithm offers more efficient simulations on classical backends, which remain an essential step for validating quantum algorithms before moving to a real quantum machine [64,65].
Therefore, the improvements achieved with JVG provide more tractable simulations of larger problem instances and a more realistic foundation for implementing integer factorization, while enabling significant per-run savings in simulation cost.
4.2. Implications of Experimental Quantum Results for the JVG Algorithm
Considering the experimental data in Table 4, the JVG methodology once again achieved better metric indicators. For the QR metric, the proposed methodology requires less processing time and executes more efficiently, demonstrating that the JVG algorithm can better manage resource growth when implemented on real quantum devices. This is important for current NISQ machines, which still lack long coherence times, preventing long computations [58,63]. In this context, it becomes possible to implement factorization algorithms more efficiently for more complex circuits [28,66].
On real quantum hardware, such as IBM Quantum backends, operational cost is dominated by backend occupancy time, circuit depth, and the number of repetitions required to obtain a valid result, rather than by directly observable energy consumption. In this context, higher throughput and shorter quantum runtimes translate into lower effective cost per successful execution. Under identical experimental configurations, Shor's algorithm needed 68 s to factor a 4-digit composite number, while JVG needed only 2 s, a 97% reduction in effective backend occupancy time per run. This highlights the more efficient use of quantum hardware resources achieved by the JVG approach.
The reduced number of quantum gates in JVG's configuration is particularly relevant in the current NISQ context. In particular, two-qubit controlled gates such as CZ are significant sources of error and reduced entanglement fidelity on quantum hardware when compared with single-qubit operations [67,68]. By mitigating their use, it is possible to obtain circuits that are more reliable in terms of both stability and noise resistance, which are highly desirable properties. When combined with reductions in single-qubit gates such as SX and RZ, these improvements enable shallower and more error-resistant implementations of number factorization algorithms.
The values presented in Table 4 should be regarded as indicative of performance trends rather than absolute benchmarks, as the experimental setup remains constrained by the current limitations of NISQ hardware. Likewise, percentage-based improvements on small baselines may exaggerate significance. Thus, this discussion focuses on comparative growth rates to assess true scalability and robustness.
Nevertheless, despite these hardware constraints, the experimental findings reinforce the robustness of the JVG methodology when implemented on real quantum devices, validating the trends projected from the experimental setup and further supporting the reliability and practical significance of the proposed approach for NISQ computing applications.
4.3. Implications of the Projected Values for RSA Relevant Scenario
The RSA projections are entirely data-driven, derived from the experimentally observed relationships between gate count and qubit number. They provide an evidence-based estimate of how both methodologies might behave under more complex circuit configurations at cryptographically relevant scales, given the experimental setup. The projected results indicate that the JVG algorithm sustains a markedly slower resource growth rate across all gate types and circuit-depth metrics compared to Shor's framework. Specifically, they suggest that the JVG architecture maintains its scalability advantage even for RSA-size problems, reinforcing its potential as a more hardware-efficient, noise-resilient solution for future quantum cryptographic implementations. This behavior is already evident in the measured results in Tables 3 and 4, where JVG's resource consumption exhibits an approximately linear increase with circuit size (and digit count), in contrast to the growth observed for Shor's algorithm. Therefore, JVG exhibits markedly slower empirical growth in run time and quantum gate count.
Interpreting the projected QR values as per-attempt quantum execution cost, the RSA-scale estimates still indicate an extreme separation between the pipelines: the Shor baseline grows into multi-year projected QR-time, while JVG remains in the seconds-to-minutes regime per attempt for the same key sizes. The end-to-end “time-to-factor” then follows by multiplying these per-attempt values by the expected number of attempts required to recover a valid order and nontrivial factors. Although that repetition factor depends on implementation details and success probability, it does not alter the central observation that JVG exhibits a substantially slower empirical growth trend under the measured compilation and hardware constraints.
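The “time-to-factor” composition described above can be sketched in a few lines. The per-attempt time and success probability below are hypothetical placeholders chosen for illustration, not the paper’s projected values:

```python
# Hypothetical illustration (not the paper's measured or projected values):
# end-to-end "time-to-factor" = per-attempt QR-time multiplied by the
# expected number of attempts needed to recover a valid order.
def time_to_factor(per_attempt_seconds: float, success_prob: float) -> float:
    """Expected total quantum runtime, modelling attempts as geometric trials."""
    expected_attempts = 1.0 / success_prob
    return per_attempt_seconds * expected_attempts

# e.g. a 30 s per-attempt circuit with a 25% per-attempt success probability
total = time_to_factor(30.0, 0.25)
print(total)  # 120.0 seconds expected
```

Since the repetition factor only rescales both pipelines multiplicatively, it leaves the relative separation between the two growth trends unchanged.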
Nevertheless, the projected run time for RSA-relevant scenarios discloses an important difference between the two methodologies. We must reinforce that the projected time values are calculated directly from a power-fit regression equation. This means the projections extrapolate, from a small-scale and noisy machine, values for a quantum circuit that is still not physically feasible within current NISQ machine limitations. As such, the projected values fall within an area of uncertainty and should not be understood as a realistic resource estimate for breaking RSA-sized keys, but rather as indicative measures of the relative scale difference between Shor’s and JVG’s algorithms under the current methodology, and of how adopting the hybrid configuration could benefit this specific task. In fact, specialized literature has reported that near-future quantum devices could be able to tackle RSA encryption keys, but would require many hours to factor them completely [
30,
69].
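The power-fit regression mentioned above can be reproduced with a stdlib-only least-squares fit in log-log space. The benchmark points below are hypothetical placeholders, not the paper’s measured data, and the RSA-scale register size is likewise illustrative:

```python
import math

# Sketch of the power-fit extrapolation: fit gates ~ a * qubits**b on
# small measured points (hypothetical values here), then extrapolate to
# an RSA-scale qubit count. The fit is linear in log-log coordinates.
def power_fit(xs, ys):
    """Least-squares fit of y = a * x**b via a linear fit in log-log space."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

# hypothetical small-scale benchmark points (qubits, total gate count)
qubits = [5, 10, 15, 20]
gates = [120, 480, 1080, 1920]       # grows roughly as qubits**2 here
a, b = power_fit(qubits, gates)
projected = a * 4096 ** b            # extrapolated to an RSA-scale register
```

As the surrounding text stresses, such extrapolations carry the uncertainty of the small, noisy regime they were fitted on; they compare relative scaling, not absolute cost.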
JVG offers both a more efficient framework for validating algorithms on classical backends and a more suitable alternative for implementing integer factorization on current devices, where reducing computational cost is paramount [
30,
66].
4.4. Statistical Consistency and Implications for NISQ Hardware Devices
The statistical analysis of both simulated and experimental benchmarks provides insight into the stability and reliability of the obtained results. The consistently low coefficients of variation observed in both environments indicate that the JVG architecture exhibits error robustness and predictability comparable to Shor’s within the same benchmarking framework, and suggest that the JVG gate structure interacts favorably with the noise characteristics of NISQ devices. Beyond validating the repeatability of the experiments, the statistical analysis provides strong evidence that the JVG framework constitutes a more noise-resilient and hardware-efficient approach to quantum integer factorization.
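The dispersion metric used here, the coefficient of variation, can be computed directly with the standard library; the run times below are hypothetical examples, not the paper’s measurements:

```python
import statistics

# Minimal sketch of the coefficient of variation (CV = sample std / mean)
# over repeated runs of one circuit. A small CV indicates stable,
# repeatable executions on noisy hardware.
def coefficient_of_variation(samples):
    return statistics.stdev(samples) / statistics.fmean(samples)

runs = [1.02, 0.98, 1.01, 0.99, 1.00]   # e.g. hypothetical QR-times (seconds)
cv = coefficient_of_variation(runs)      # ~0.016, i.e. ~1.6% relative spread
```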
4.5. Impact of Classical Modular Exponentiation and QNTT-Based Period Finding
As previously stated, the most significant bottleneck in Shor’s pipeline is the quantum modular exponentiation subroutine [
40,
70]. By replacing quantum modular exponentiation with a classical computation followed by quantum-state encoding, JVG’s algorithm achieved significant reductions in both execution time and overall computational resource requirements for the tested composite integers when compared with Shor’s approach. This resulted in successful factorization of a 75-digit-long number in both simulated and experimental frameworks.
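The classical side of this hybrid step can be sketched as follows. Python’s three-argument `pow` performs modular exponentiation efficiently; `N = 15`, `a = 7` are small illustrative choices, not the paper’s test integers, and the encoding of the resulting table into quantum states is left abstract here:

```python
# Sketch of the hybrid step: the modular exponentiation f(x) = a**x mod N
# is evaluated classically, and only the resulting value table would then
# be encoded into quantum states for the period-finding stage.
def modexp_table(a: int, N: int, n_bits: int):
    """Classically tabulate f(x) = a^x mod N over an n_bits input register."""
    return [pow(a, x, N) for x in range(2 ** n_bits)]

def classical_period(a: int, N: int) -> int:
    """Multiplicative order of a mod N, i.e. the period Shor's algorithm seeks."""
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1
    return r

table = modexp_table(7, 15, 4)  # values repeat with period r = 4: 1, 7, 4, 13, 1, ...
r = classical_period(7, 15)     # r = 4; gcd(7**(r//2) - 1, 15) = 3, gcd(7**(r//2) + 1, 15) = 5
```

Removing this subroutine from the quantum circuit is what eliminates the dominant depth contribution identified above.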
Beyond the hybrid decomposition itself, the integration of the QNTT circuit within the order-finding stage further contributed to the superior performance of the JVG methodology. The QNTT offers a novel route to number factorization by transplanting QFT period finding from the complex field to a finite ring. Thus, rather than sampling phase estimates with complex roots of unity as in the QFT, the QNTT retrieves the period for number factorization using roots of unity under modular arithmetic. This ring-based alternative broadens the number-theoretic toolkit for factorization. Empirically, this resulted in the JVG circuit’s slower growth in time and gate count and its low variability on hardware, indicating that the periodicity of ring structures can be directly leveraged to improve quantum integer factorization on NISQ devices. As shown in Tables 3 and 4 and Figures 7 to 9, the approximately linear behavior of QR-time for the proposed methodology is noticeable when compared to Shor’s.
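A classical analogue illustrates the ring-based period finding described above. The parameters below are illustrative choices, not the paper’s: a length-8 number-theoretic transform modulo the prime 17, where 9 is a root of unity of order 8 and plays the role the complex root of unity plays in the QFT:

```python
# Illustrative classical number-theoretic transform (NTT) mod p = 17 with
# root of unity w = 9 (order 8 mod 17). For a periodic input of period r,
# the transform is nonzero only at multiples of n/r, exposing the period.
P, N_LEN, W = 17, 8, 9   # 9**8 = 1 (mod 17), and no smaller power is 1

def ntt(x):
    """X[k] = sum_j x[j] * W**(j*k) mod P -- the mod-p counterpart of a DFT."""
    return [sum(xj * pow(W, j * k, P) for j, xj in enumerate(x)) % P
            for k in range(N_LEN)]

signal = [1, 0, 1, 0, 1, 0, 1, 0]   # period r = 2
spectrum = ntt(signal)               # nonzero only at k = 0 and k = 4
peaks = [k for k, v in enumerate(spectrum) if v != 0]
# peak spacing N_LEN / r = 4, so the recovered period is r = N_LEN // 4 = 2
```

Because every operation is exact modular arithmetic, there are no fine-angle phase rotations to approximate, which is the structural property the text credits for the NISQ-friendliness of the QNTT.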
While QFT relies on rotational gates (
Figure 1 and
Figure 2), the JVG is less dependent on these operations. These gates require long-lasting qubit interactions, which not only amplify errors on NISQ machines but also scale poorly, as reported. Conversely, the inclusion of the QNTT structure proved to be NISQ-friendly: JVG replaces many fine-angle controlled rotations with regular add-and-subtract structures implemented with CNOT and Toffoli-style primitives [
53,
55]. This architecture enables a more regular, localized gate topology, reducing errors from entanglement and other qubit interactions, as evidenced by the reported benchmark results and projected values.
On real backends, this yields shallower depth and lower run-to-run variability, consistent with the observed small coefficients of variation. Additionally, although modular exponentiation dominates the asymptotic cost in Shor’s algorithm, the proposed approach effectively reduces both the depth and the number of two-qubit operations across the full pipeline, improving scalability within fixed hardware budgets.
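The CNOT/Toffoli add-and-subtract primitives mentioned above can be sketched on classical basis states (sufficient here, since these gates permute computational basis states). This is an illustrative half adder, not the paper’s adder circuit:

```python
# Sketch (on classical basis states) of rotation-free arithmetic primitives:
# a half adder built from one Toffoli (carry) and one CNOT (sum), with no
# fine-angle rotation gates involved.
def toffoli(a: int, b: int, t: int):
    """CCX on classical bits: flip the target iff both controls are 1."""
    return a, b, t ^ (a & b)

def cnot(c: int, t: int):
    """CX on classical bits: flip the target iff the control is 1."""
    return c, t ^ c

def half_adder(a: int, b: int):
    a, b, carry = toffoli(a, b, 0)   # carry = a AND b
    a, total = cnot(a, b)            # sum   = a XOR b
    return total, carry

print(half_adder(1, 1))  # (0, 1): 1 + 1 -> sum 0, carry 1
```

Circuits assembled from such primitives have a regular, localized gate topology, which is the property the benchmarks above associate with shallower depth and lower run-to-run variability.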
5. Conclusion
This study tested the hypothesis that a hybrid classical-quantum algorithm for number factorization can outperform a fully quantum one, such as Shor’s algorithm. It leverages properties of number theory, translated into a quantum framework, by combining a classical modular exponentiation subroutine with the QNTT circuit inside Shor’s pipeline to achieve a more efficient alternative for number factorization, while also expanding the number-theoretic basis for factorization. To verify the feasibility and performance of the proposed methodology, we investigated several composite numbers across quantum circuits of varying qubit sizes.
In the simulation results, the JVG approach was consistently more scalable in runtime, memory, and gate count, and its performance gains became more pronounced as circuit size grew, leading to improved handling of larger problem sizes. In both simulated and experimental setups, the proposed methodology reached faster computational times while using fewer quantum gates, yielding a shallower circuit. These results are relevant because they provide evidence of a better alternative to Shor-based algorithms, reducing resource overhead and mitigating noise-prone quantum-gate operations, including in RSA-scale scenarios. These characteristics are desirable in circuits for near-term quantum machines.
Future research should investigate ways to improve the proposed methodology. One possible direction is to examine other mapping methodologies for the hybrid circuit; another is to revise the post-processing strategy, which could further benefit the proposed methodology.
Overall, this study establishes JVG’s hybrid classical-quantum circuit as a promising alternative for practical quantum integer factorization. The consistent improvements in runtime, memory usage, and gate count demonstrated in both simulated and experimental phases make the proposed methodology a more efficient and error-resilient approach for NISQ devices. These gains were achieved by delegating exponentiation to the classical computer and replacing the computationally intensive QFT with a more NISQ-friendly quantum period-finding circuit, which in this validation employed the QNTT algorithm. Because it reduces the quantum resources needed for number factorization, this work directly affects fields such as cryptography.
The validation findings suggest that JVG is a promising and scalable alternative for quantum integer factorization, reaching toward RSA-scale security within the NISQ era. As quantum hardware advances, the JVG framework is expected to undergo refinements and adaptations similar to those experienced by Shor’s algorithm. Such developments could ultimately yield more practical and resource-efficient quantum factoring methods, enabling efficient cryptographic applications on near-term quantum devices.