Submitted: 03 April 2025
Posted: 03 April 2025
Abstract
Artificial Intelligence (AI) has achieved remarkable successes, but it continues to face critical challenges—including inefficiency, limited interpretability, lack of robustness, alignment issues, and high energy consumption. Quantum computing offers a promising path to fundamentally accelerate and enhance AI by leveraging quantum parallelism and entanglement. This paper proposes QHM—Quantum Hybrid Multimodal, a unified framework that integrates superconducting and topological quantum computing into multimodal AI architectures. We establish a theoretical foundation for embedding quantum subroutines—such as a Quantum Self-Attention Neural Network (QSANN) and a Quantum-Enhanced Quantum Approximate Optimization Algorithm (QQAOA)—within classical deep learning models. We survey key quantum algorithms, including Grover’s search, the HHL algorithm for solving linear systems, QAOA, and variational quantum circuits, evaluating their computational complexity and suitability for AI workloads. We also analyze cutting-edge quantum hardware platforms: superconducting qubit systems like Google’s 105-qubit Willow, IBM’s 1,121-qubit Condor, Amazon’s bosonic Ocelot, and Microsoft’s topological Majorana-1, discussing their potential for accelerating AI. The paper explores how quantum resources can enhance large language models, Transformers, mixture-of-experts architectures, and cross-modal learning via quantum-accelerated similarity search, attention mechanisms, and optimization techniques. We also examine practical engineering challenges, including cryogenic cooling, control electronics, qubit noise, quantum error correction, and data encoding overhead, offering a cost-benefit analysis. An implementation roadmap is outlined, progressing from classical simulations to hybrid quantum-classical prototypes, and ultimately to fully integrated systems. We propose benchmarking strategies to evaluate quantum-AI performance relative to classical baselines. Compared to conventional approaches, the QHM hybrid framework promises improved computational scaling and novel capabilities—such as faster search and more efficient training—while acknowledging current limitations in noise and infrastructure. We conclude by outlining future directions for developing quantum-enhanced AI systems that are more efficient, interpretable, and aligned with human values, and we discuss broader implications for AI safety and sustainability.
Keywords:
1. Introduction
1.1. Motivation for a Quantum Hybrid AI Framework
- Efficiency and Scalability: Training large models demands enormous computational power and energy. For example, the GPT-3 family required petaflop/s-years of compute and significant energy resources [1,2]. In deployment, serving millions of queries can consume power on the order of a small country’s electricity usage [3].
- Interpretability: Deep neural networks often operate as black boxes, making it difficult to explain their decisions [4]. This opaqueness raises concerns in high-stakes applications.
- Robustness: AI models can be brittle, vulnerable to adversarial examples or distribution shifts, which undermines reliability [5].
- Energy Constraints: The environmental footprint of AI is significant; the energy consumption and carbon emissions of data centers running large AI workloads are rising rapidly [3].
1.2. Relationship to Theoretical Foundations
2. Theoretical Framework
2.1. Quantum Self-Attention Neural Network (QSANN)
2.2. Quantum QAOA (QQAOA)
2.3. Computational Complexity Considerations
2.4. Neurosymbolic AI and Quantum Integration
- By efficiently exploring large discrete state spaces for reasoning tasks using quantum search or optimization [10]. For instance, searching a logical rule space or performing logical inference might be sped up by Grover’s algorithm or quantum backtracking algorithms (see the sketch after this list).
- By enabling novel representational capacities – a quantum state can, for example, represent a superposition of many logic propositions, and quantum operations can implement certain logical operations in parallel. We might imagine a quantum circuit that encodes a truth table of a learned rule and verifies properties against it much faster than exhaustive classical checks [12].
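As a concrete illustration of the first point, the following is a minimal, library-free sketch that simulates Grover search over a toy "rule space" with NumPy statevectors. The marked index stands in for a rule satisfying some verifiable property; the function names are ours, illustrative only, and not part of any existing QHM codebase.

```python
import numpy as np

def grover_search(n_qubits, marked, iterations=None):
    """Find a marked index among N = 2**n_qubits candidates in O(sqrt(N)) steps."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))            # uniform superposition
    if iterations is None:
        iterations = int(np.pi / 4 * np.sqrt(N))  # near-optimal iteration count
    for _ in range(iterations):
        state[marked] *= -1                       # oracle: phase-flip the marked item
        state = 2 * state.mean() - state          # diffusion: reflect about the mean
    return int(np.argmax(np.abs(state) ** 2))     # measurement (most likely outcome)

# 64 candidate rules, found in ~6 oracle calls vs ~32 expected classical checks.
assert grover_search(n_qubits=6, marked=42) == 42
```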
2.5. Comparison of Classical and Quantum Paradigms
3. Quantum Computing for AI
3.1. Superconducting Quantum Processors for AI
- Quantum Kernels and Feature Maps: A quantum circuit can act as a feature transformer, mapping input data to a quantum state that is implicitly a high-dimensional feature vector. Measuring overlaps between states computes a kernel [22]. This has been explored for small-scale classification, where quantum kernel methods showed some advantages on synthetic data. In a multimodal setting, one might have a quantum feature map for images and another for text, then use a classical model to fuse them; a toy kernel sketch follows this list.
- Quantum Optimizers: Variational quantum algorithms like QAOA [26] or VQE can serve as optimizers for specific layers. For example, one could encode the loss function of a neural network layer into a cost Hamiltonian and use QAOA to find optimal weights (this is theoretical at this stage). Alternatively, a quantum circuit could solve a lower-level optimization (like allocating resources in a scheduling problem that an AI agent needs to solve as part of its tasks).
- Quantum Sampling and Generative Models: Generative AI might benefit from quantum circuits that natively produce probability amplitudes corresponding to complex distributions [34]. A quantum circuit can in principle generate samples from distributions that are hard to even approximate classically (quantum supremacy circuits are one example). There is active research on quantum Boltzmann machines and quantum circuit Born machines as generative models. If one could train a quantum circuit to generate images or text representations, it might require far fewer parameters than an equivalent classical model due to the exponential state space (e.g., n qubits can represent 2^n basis states). On current hardware, limited depth and noise make this challenging, but small demonstrations (with 4–8 qubits) have been done.
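As referenced in the first bullet above, the following is a minimal sketch of a fidelity kernel under amplitude encoding, in the spirit of [22], simulated classically with NumPy; on hardware the same overlap would be estimated with a swap or inversion test. All helper names are illustrative assumptions.

```python
import numpy as np

def amplitude_encode(x):
    """Pad to a power-of-two dimension and L2-normalize: classical x -> state |x>."""
    dim = 1 << int(np.ceil(np.log2(len(x))))
    state = np.zeros(dim)
    state[: len(x)] = x
    return state / np.linalg.norm(state)

def fidelity_kernel(x, y):
    """k(x, y) = |<x|y>|^2, the overlap a quantum device would estimate."""
    return float(np.abs(amplitude_encode(x) @ amplitude_encode(y)) ** 2)

# Separate feature maps per modality, fused by a classical model downstream.
rng = np.random.default_rng(0)
image_feats, text_feats = rng.normal(size=(4, 6)), rng.normal(size=(4, 10))
K_image = [[fidelity_kernel(a, b) for b in image_feats] for a in image_feats]
K_text = [[fidelity_kernel(a, b) for b in text_feats] for a in text_feats]
```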
3.2. Topological Quantum Computing for AI
- Quantum Accelerated Training: One could imagine using such a machine to implement quantum linear algebra routines that speed up training of very large neural networks [38]. For example, performing large matrix multiplications via quantum means or solving huge systems of equations that appear in second-order optimization methods (a swap-test sketch of the underlying inner-product primitive follows this list).
- Full-scale Quantum Neural Networks: With that many qubits, one could encode an entire dataset or very large vectors as quantum states [39]. A quantum neural network (QNN) could be a sequence of unitary operations and measurements that process these states. If the QNN is built to mimic the structure of a classical network (like convolution or attention), it could achieve the same tasks with potentially fewer steps. There is ongoing research on whether QNNs can be more efficient in terms of number of parameters or better at generalizing – the theory is not fully settled. But the capacity of 2^n basis states is vastly larger than any classical network’s representational size.
- Quantum Simulation for Model Understanding: Topological qubits would also allow quantum simulation of complex systems – like simulating the physics of a neuromorphic chip or brain tissue – which might inform new AI model designs (though its relevance to AI is more indirect).
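The swap test is the standard primitive behind the quantum inner-product and overlap estimates mentioned above [47]. The sketch below simulates its measurement statistics exactly with NumPy rather than running on (still nascent) topological hardware; the function names are illustrative assumptions.

```python
import numpy as np

def swap_test_p0(psi, phi):
    """Ancilla-zero probability: P(0) = 1/2 + |<psi|phi>|^2 / 2 for normalized states."""
    return 0.5 + 0.5 * np.abs(np.vdot(psi, phi)) ** 2

def estimate_overlap_sq(psi, phi, shots=10_000, seed=1):
    """Estimate |<psi|phi>|^2 from simulated single-shot ancilla measurements."""
    rng = np.random.default_rng(seed)
    zeros = rng.binomial(shots, swap_test_p0(psi, phi))
    return max(0.0, 2 * zeros / shots - 1)

v = np.array([1, 0, 1, 0]) / np.sqrt(2)
w = np.array([1, 1, 0, 0]) / np.sqrt(2)
print(estimate_overlap_sq(v, w))  # converges to the true overlap 0.25 as shots grow
```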
3.3. Hybrid Quantum-Classical Models and Feasibility
3.3.1. Current Feasibility
- Gate-based devices with on the order of 100–1,000 qubits can be used, but at that scale only relatively shallow circuits can be run before noise dominates [9]. This means any near-term hybrid algorithm must use shallow circuits. Variational algorithms fit this bill, since they often intentionally limit depth.
- Error mitigation can improve result fidelity, but at the cost of more runs [19]. For inference tasks one can afford many circuit executions (since they run offline or in batch); for real-time tasks, repeated runs are expensive time-wise. So quantum resources will likely be used offline at first (e.g., pre-computing some component of a model rather than sitting in a live application loop).
- Cloud integration: All major quantum hardware is accessible via cloud APIs. One could connect a PyTorch or TensorFlow pipeline to call a quantum circuit execution on IBM Q or AWS Braket. This adds significant network latency (tens of milliseconds at least), which is too slow for each forward pass of a large model. So, for now, quantum may be better used asynchronously during a training phase (e.g., to help initialize parameters or refine them periodically). As integrated systems such as AWS’s local control or on-premise installations become available, this latency can shrink. A minimal hybrid-layer sketch follows this list.
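As a concrete example of the pipeline integration described above, the sketch below uses PennyLane’s TorchLayer to drop a small variational circuit into a PyTorch model. Swapping `default.qubit` for a cloud backend (e.g., via the PennyLane-Braket plugin) would run the same code on hardware, subject to the latency caveats above; the layer sizes and circuit template here are arbitrary choices for illustration.

```python
import pennylane as qml
import torch

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)  # swap for a cloud/hardware device

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))          # encode classical features
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))   # trainable entangling layers
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

qlayer = qml.qnn.TorchLayer(circuit, weight_shapes={"weights": (2, n_qubits)})
model = torch.nn.Sequential(
    torch.nn.Linear(8, n_qubits),  # classical encoder
    qlayer,                        # quantum layer (gradients flow through it)
    torch.nn.Linear(n_qubits, 2),  # classical head
)
out = model(torch.randn(5, 8))     # standard PyTorch forward/backward applies
```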
4. Multimodal AI and Quantum Enhancement
4.1. Quantum-Accelerated Similarity Search
4.2. Attention and Optimization
4.3. Multi-modal Fusion and Representation Learning
4.4. Potential Applications in Multimodal AI
5. Real-World Constraints and Engineering Challenges
5.1. Cryogenic Cooling and Infrastructure
5.2. Quantum Control Electronics
5.3. Qubit Fabrication and Yield
5.4. Noise and Error Correction
5.5. Data Encoding Overheads
- Preparing basis states that represent an index (for search or lookup) is relatively easy via bit flips if the index is available in binary.
- Loading amplitude-encoded vectors can be complex [20]. If we have a vector x in R^N, preparing |x⟩ = (1/‖x‖) Σ_i x_i |i⟩ generally requires O(N) gates. If N is large (the dimension or the number of data points), this is a serious cost. Some specialized techniques use quantum random access memory (qRAM) hardware to load data in O(log N) time by routing down a binary tree of address qubits, but building qRAM is challenging (it requires quantum coherence across the memory structure). Alternatively, if the data has structure (e.g., it is the output of a simple function), one can prepare it faster [27]. In an AI pipeline, data may not be arbitrary: input embeddings come from an earlier layer, which in a fully quantum pipeline might itself be produced by a circuit, so no "loading" is needed; the data is already a quantum state if the previous layer was quantum. However, at the boundary (classical input such as pixel values), one must encode, and if that boundary is a bottleneck, the advantage could be lost. One strategy is to compress the data classically first (reduce its dimension) and then let the quantum processor act on the smaller representation. A sketch of the O(N) state-preparation cost follows this list.
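To make the O(N) preparation cost concrete, the sketch below computes the rotation angles of the standard binary-tree state-preparation construction (cf. [27]): one angle per internal tree node, i.e., N − 1 angles for an N-dimensional vector. Sign and phase handling are omitted, and the helper name is an illustrative assumption.

```python
import numpy as np

def state_prep_angles(x):
    """Ry angles, level by level (root level first), that prepare |x> from |0...0>."""
    probs = (x / np.linalg.norm(x)) ** 2      # squared amplitudes; signs omitted here
    levels = []
    while len(probs) > 1:
        left, right = probs[0::2], probs[1::2]
        parent = left + right
        # Each node's angle splits its probability mass between its two children.
        ratio = np.where(parent > 0, left / np.maximum(parent, 1e-30), 1.0)
        levels.append(2 * np.arccos(np.sqrt(ratio)))
        probs = parent
    return levels[::-1]

x = np.random.default_rng(0).normal(size=16)
angles = state_prep_angles(x)
assert sum(len(a) for a in angles) == len(x) - 1  # N - 1 = O(N) rotation parameters
```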
5.6. Measurement and Readout
5.7. Scalability vs. Classical Partitioning
5.8. Cost-Benefit Analysis
5.9. Reliability and Verification
6. Implementation Roadmap
6.1. Phase 1: Classical Simulation of Quantum Algorithms (Present to 1 Year)
6.1.1. Benchmarks
6.2. Phase 2: Early Hybrid Quantum-Classical (1–3 Years)
6.2.1. Benchmarks
6.3. Phase 3: Broadening Quantum Role – Hybrid Quantum (3–7 Years)
6.3.1. Benchmarks
6.4. Phase 4: Fault-Tolerant Quantum Integration (7–15 Years)
6.4.1. Benchmarks
6.5. Phase 5: Full Integration and Ubiquity (15+ Years)
6.5.1. Industry adoption in later phases
6.6. Evaluation and Benchmarking Strategy
- We will measure not just raw performance but also cost, reliability, and development effort.
- A key milestone would be demonstrating a provable quantum advantage for an AI task: i.e., show that the hybrid approach scales better with problem size than the best-known classical approach (this might come in Phase 4 when error-corrected systems can run algorithms that are intractable classically) [12].
- Another evaluation criterion is the accuracy-versus-resource trade-off: quantum might allow using less data or fewer parameters to achieve the same accuracy, indicating more efficient learning (some theoretical work suggests quantum models may generalize from less data for certain function classes) [42]. A minimal benchmarking-harness sketch follows this list.
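A minimal sketch of such a harness, assuming scikit-learn: the same data and metrics are applied to a classical baseline and to any candidate hybrid kernel (e.g., a fidelity kernel like the one sketched in Section 3.1), recording accuracy and wall-clock cost side by side. The RBF baseline stands in for "best classical" purely for illustration; the function names are ours.

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def benchmark(name, kernel, X_tr, X_te, y_tr, y_te):
    """Train an SVM with the given kernel; report accuracy and wall-clock seconds."""
    start = time.perf_counter()
    clf = SVC(kernel=kernel).fit(X_tr, y_tr)
    return {"model": name, "accuracy": clf.score(X_te, y_te),
            "seconds": round(time.perf_counter() - start, 4)}

def gram(kernel_fn):
    """Wrap a pairwise (possibly quantum-estimated) kernel for sklearn's SVC."""
    return lambda A, B: np.array([[kernel_fn(a, b) for b in B] for a in A])

X, y = make_classification(n_samples=200, n_features=8, random_state=0)
splits = train_test_split(X, y, random_state=0)
print(benchmark("classical-RBF", "rbf", *splits))
# print(benchmark("hybrid-fidelity", gram(fidelity_kernel), *splits))  # hypothetical
```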
7. Comparisons with Existing Approaches
7.1. Benchmarking against Classical AI Models
7.2. Advantages and Trade-offs of Hybrid Approaches
- Pros: Potential speedups for specific subroutines (as enumerated above: search, linear algebra) [10,11,12], the ability to explore multiple possibilities in parallel (which might avoid getting stuck in local minima during optimization, since a quantum state can explore many configurations at once), and possibly better scaling with problem complexity (e.g., solving exactly certain high-dimensional problems that classical methods would only approximate).
7.3. Industry Applications and Commercialization Potential
- In drug discovery, classical simulations for molecular interactions are too slow for complex molecules; a hybrid AI that uses quantum simulation for molecular scoring could give pharmaceutical companies an edge in screening drugs [49]. If a particular quantum device can simulate certain chemical systems exactly, the AI integrating that will outperform any classical predictive model that must approximate chemistry.
- In finance, arbitrage or portfolio optimization problems are combinatorially large; quantum optimizers might find better solutions or faster convergence, giving better returns or risk management [51]. If a bank can show a quantum-assisted trading algorithm that yields even a small percentage improvement, that’s commercially significant.
- In logistics, routing and scheduling (UPS, airlines, etc.) are often solved with heuristic AI. A hybrid with quantum annealing or QAOA might find schedules that save a percent of cost – huge in absolute terms. Companies like Volkswagen have already experimented with quantum for traffic flow optimization [53].
- For AI services (like Google, Microsoft providing AI APIs), adding a "quantum boost" option might become a product: e.g., a cloud AI service that for a premium price will do a more thorough job using quantum in the backend. They would do this if it offers a quality or speed advantage for certain tasks like huge dataset analysis or cryptographic pattern detection, etc.
7.4. Comparison with Other Non-Classical Approaches
- Neuromorphic Computing (analog, brain-inspired): Projects like Intel Loihi or IBM TrueNorth attempt to run spiking neural networks with very low energy, potentially achieving great efficiency for specific tasks like sensory processing [54]. Neuromorphic hardware can do certain things (like sparse, event-driven computation) far more efficiently than traditional CPUs. However, they don’t typically offer speedups for the big linear algebra tasks in deep learning; they excel in scenarios more akin to how brains work (which might align with some AI like continuous learning, edge AI).
- Analog optical computing: using photonics for matrix multiplications could massively speed up neural network inference and training, because light can compute many operations in parallel with low latency [55]. Optical matrix multipliers might do in one clock what takes many GPU cycles. So in a sense, optical computing could address some of the same bottlenecks (like big matrix multiplies in transformers) that we looked at quantum for. The difference is optical computing is still ultimately classical (though analog), and doesn’t give exponential algorithmic speedups, just a constant or polynomial improvement by doing things in parallel or faster hardware. But those improvements might be enough for a long time.
- ASICs and advanced classical hardware: AI chips are evolving (more memory near compute, smarter architectures). By the time quantum is ready to help, classical chips may also have advanced significantly (3D stacking, cryo-CMOS, etc.), making classical AI fast enough that the quantum-advantage window narrows. There is a race between quantum algorithms and continued Moore’s-Law-style progress (Moore’s Law is slowing, but alternative improvements such as algorithmic advances and hardware specialization continue).
7.5. Case Studies and Known Results
7.6. Interpretability and Safety Considerations
7.7. Potential for Commercialization
7.8. Summary of Comparison
- Performance: Hybrid expected to win in asymptotic limit on certain tasks [12]; classical wins in near-term practical tasks and overall maturity.
- Resource usage: Hybrid aims to reduce required classical compute or data [52]; classical can use brute force if resources are available.
- Robustness: Classical stable, quantum prone to noise (currently) [9].
- Flexibility: Classical can handle any logic; quantum integration must be tailored to algorithms that fit quantum paradigms.
- Ecosystem: Classical AI has huge ecosystem, quantum is budding. Over time the gap will close.
8. Conclusion and Future Directions
8.1. Key Findings
- Quantum algorithms can theoretically provide polynomial (and in some cases exponential) speedups for subroutines common in AI, such as search (Grover [10]), linear algebra (HHL [11]), and optimization (QAOA [26]). When these are the bottleneck, a hybrid quantum-classical approach could outperform purely classical methods as problem sizes grow.
- The hybrid approach can be designed in a modular way: one can replace certain layers or components of an AI model with quantum equivalents without changing the overall model’s functionality. This modularity makes incremental adoption feasible.
- Current quantum hardware is not yet at the scale or error rate to outperform classical AI on real-world problems, but progress is steady. Google’s 105-qubit Willow [13] demonstrated a huge leap in computational power on a narrow task, suggesting that as coherence and qubit counts improve, practical AI-relevant tasks will enter the quantum-advantage regime.
- There are promising early results in quantum machine learning (QML) indicating that quantum models can achieve at least comparable performance to classical ones on small datasets [22], and in some synthetic cases, even provably require fewer resources [23]. This supports the notion that QHM is not just hype but has concrete merit.
- We identified that the major hurdles are engineering-related (noise, interfacing, etc.), not a fundamental lack of quantum algorithms for AI. In fact, we have a toolkit of quantum techniques ready; it’s about deploying them effectively. Noise mitigation and hybrid error correction strategies can likely bridge the gap in the medium term.
8.2. Research Gaps and Next Steps
- Algorithm design: We need more quantum algorithms tailored to AI tasks. For example, quantum versions of convolution operations or graph neural network message passing. Research should also explore hybrid algorithms that use quantum in novel ways (not just speeding up what classical does, but perhaps doing things differently because quantum allows it).
- Theory of quantum learning: A deeper theoretical understanding of why and when quantum models generalize better or learn faster would guide us to the right applications. This involves quantum statistical learning theory, perhaps extending PAC learning to quantum settings.
- Benchmark tasks: The community would benefit from establishing standard benchmark tasks for quantum-enhanced AI (analogous to how MNIST, ImageNet, etc., are for classical). These could be tasks believed to have some quantum structure or just challenging computationally. Having benchmarks will track progress objectively.
- Integration techniques: Research on software and compilers to integrate quantum and classical code seamlessly is needed (some efforts are in progress in projects like Qiskit Machine Learning, PennyLane, etc.). This includes differentiable programming across quantum and classical boundaries – currently one can compute gradients of quantum circuits, but combining that with classical autograd is non-trivial (see the parameter-shift sketch after this list).
- Hybrid architectures for alignment: On the AI safety side, one interesting direction is using quantum computers to help interpret or verify AI decisions. This might involve quantum-enhanced model checking or more efficient probabilistic reasoning about model behavior. It’s speculative, but worth exploring as part of neurosymbolic alignment efforts.
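As a minimal illustration of differentiable quantum-classical programming, the sketch below implements the parameter-shift rule [24] for a single-qubit circuit, f(θ) = ⟨Z⟩ after Ry(θ) on |0⟩, simulated in NumPy. The gradient comes from two extra circuit evaluations and is exact, which is what lets quantum layers plug into classical autograd; the function names are illustrative assumptions.

```python
import numpy as np

def expval_z(theta):
    """Simulated <Z> on |0> after Ry(theta); analytically this equals cos(theta)."""
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return float(state[0] ** 2 - state[1] ** 2)

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    """Exact gradient from two shifted evaluations (no finite-difference bias)."""
    return (f(theta + shift) - f(theta - shift)) / 2

theta = 0.7
print(parameter_shift_grad(expval_z, theta))  # ~ -0.6442, matching -sin(0.7)
```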
8.3. Broader Implications
8.4. Sustainability
8.5. Human-AI Interaction
8.6. Long-Term Vision
Appendix A. Mathematical Proofs
Appendix A.1. Complexity Analysis of QSANN
Appendix B. Future Work Details
Appendix B.1. Quantum Error Mitigation for AI
- Error-aware training: Incorporating knowledge of quantum noise profiles into the training process
- Noise-resilient quantum encodings: Designing quantum feature maps that are inherently robust to common noise channels
- Hybrid error mitigation: Combining classical post-processing with quantum error detection to improve output fidelity (a zero-noise-extrapolation sketch follows this list)
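A minimal sketch of the hybrid idea in the last bullet, using zero-noise extrapolation: run the same circuit at deliberately amplified noise levels, then extrapolate the expectation value back to the zero-noise limit classically. The exponential-decay noise model here is a toy assumption purely for illustration.

```python
import numpy as np

def noisy_expectation(scale, ideal=1.0, decay=0.15):
    """Toy noise model: the measured expectation shrinks with the noise scale."""
    return ideal * np.exp(-decay * scale)

scales = np.array([1.0, 2.0, 3.0])          # e.g., via gate folding on real hardware
values = noisy_expectation(scales)           # one batch of circuit runs per level
coeffs = np.polyfit(scales, values, deg=2)
mitigated = float(np.polyval(coeffs, 0.0))   # Richardson-style extrapolation to scale 0
print(f"raw: {values[0]:.3f}  mitigated: {mitigated:.3f}  ideal: 1.000")
```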
Appendix B.2. Resource-Efficient Quantum Transfers for AI
- Compressed quantum encoding: Methods to encode only the most relevant features of high-dimensional data (see the sketch after this list)
- Quantum-inspired classical preprocessing: Using classical algorithms that mirror quantum principles to prepare data efficiently for quantum processing
- Direct quantum sensing: For certain sensor data, bypassing classical conversion by connecting quantum sensors directly to quantum processors
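A sketch combining the first two bullets above, assuming scikit-learn: classical PCA shrinks an N-dimensional input to k dimensions, so amplitude encoding needs only ceil(log2 k) qubits and O(k) rather than O(N) loading operations. The helper name is an illustrative assumption.

```python
import numpy as np
from sklearn.decomposition import PCA

def compress_for_encoding(X, k=8):
    """Project samples to k dims and L2-normalize rows into loadable quantum states."""
    Z = PCA(n_components=k).fit_transform(X)
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)

X = np.random.default_rng(0).normal(size=(100, 512))  # raw 512-dim features
states = compress_for_encoding(X, k=8)  # 8 amplitudes per sample -> only 3 qubits
```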
References
- Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., et al. (2020). Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems, 33, 1877–1901.
- Patterson, D., Gonzalez, J., Le, Q., Liang, C., Munguia, L. M., Rothchild, D., et al. (2021). Carbon Emissions and Large Neural Network Training. arXiv preprint arXiv:2104.10350.
- Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and Policy Considerations for Deep Learning in NLP. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3645–3650.
- Rudin, C. (2019). Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nature Machine Intelligence, 1(5), 206–215.
- Goodfellow, I. J., Shlens, J., & Szegedy, C. (2014). Explaining and Harnessing Adversarial Examples. arXiv preprint arXiv:1412.6572.
- Christian, B. (2020). The Alignment Problem: Machine Learning and Human Values. W. W. Norton & Company.
- Garcez, A. D., & Lamb, L. C. (2020). Neurosymbolic AI: The 3rd Wave. arXiv preprint arXiv:2012.05876.
- Nielsen, M. A., & Chuang, I. L. (2010). Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge University Press.
- Preskill, J. (2018). Quantum Computing in the NISQ Era and Beyond. Quantum, 2, 79.
- Grover, L. K. (1996). A Fast Quantum Mechanical Algorithm for Database Search. Proceedings of the Twenty-eighth Annual ACM Symposium on Theory of Computing, 212–219.
- Harrow, A. W., Hassidim, A., & Lloyd, S. (2009). Quantum Algorithm for Linear Systems of Equations. Physical Review Letters, 103(15), 150502.
- Biamonte, J., Wittek, P., Pancotti, N., Rebentrost, P., Wiebe, N., & Lloyd, S. (2017). Quantum Machine Learning. Nature, 549(7671), 195–202.
- Neven, H., Babbush, R., Barends, R., et al. (2024). Google Quantum AI announces Willow processor: A 105-qubit superconducting quantum computer. Google AI Blog. Available online: https://ai.googleblog.com/2024/12/announcing-willow-processor-105-qubit.html.
- Gambetta, J. M., & Temme, K. (2023). Extending the Computational Reach of a Noisy Superconducting Quantum Processor. IBM Journal of Research and Development, 67(1), 1–12.
- Brandão, F., Painter, O., et al. (2025). Amazon announces Ocelot quantum chip. Amazon Science. Available online: https://www.amazon.science/blog/amazon-announces-ocelot-quantum-chip.
- Nayak, C., Alam, Z., Ali, R., Andrzejczuk, M., Antipov, A., Aghaee, M., et al. (2025). Microsoft unveils Majorana 1, the world’s first quantum processor powered by topological qubits. Microsoft News Center. Available online: https://news.microsoft.com/source/features/innovation/microsofts-majorana-1-chip-carves-new-path-for-quantum-computing/.
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention Is All You Need. Advances in Neural Information Processing Systems, 30, 5998–6008.
- Yang, J. J., Yoo, S., Kim, H., & Choi, M. (2020). Cryogenic Control Architecture for Large-Scale Quantum Computing. IEEE International Conference on Quantum Computing and Engineering (QCE), 1–9.
- Adams, R., Chen, S., et al. (2024). Advancements in quantum error correction technology outperform leading quantum computing systems. Phys.org. Retrieved from https://phys.org/tags/superconducting+qubits/.
- Giovannetti, V., Lloyd, S., & Maccone, L. (2008). Quantum Random Access Memory. Physical Review Letters, 100(16), 160501.
- Benedetti, M., Lloyd, E., Sack, S., & Fiorentini, M. (2019). Parameterized Quantum Circuits as Machine Learning Models. Quantum Science and Technology, 4(4), 043001.
- Havlíček, V., Córcoles, A. D., Temme, K., Harrow, A. W., Kandala, A., Chow, J. M., & Gambetta, J. M. (2019). Supervised Learning with Quantum-Enhanced Feature Spaces. Nature, 567(7747), 209–212.
- Rebentrost, P., Mohseni, M., & Lloyd, S. (2014). Quantum Support Vector Machine for Big Data Classification. Physical Review Letters, 113(13), 130503.
- Schuld, M., Bergholm, V., Gogolin, C., Izaac, J., & Killoran, N. (2019). Evaluating Analytic Gradients on Quantum Hardware. Physical Review A, 99(3), 032331.
- Lloyd, S., Schuld, M., Ijaz, A., Izaac, J., & Killoran, N. (2020). Quantum Embeddings for Machine Learning. arXiv preprint arXiv:2001.03622.
- Farhi, E., Goldstone, J., & Gutmann, S. (2014). A Quantum Approximate Optimization Algorithm. arXiv preprint arXiv:1411.4028.
- Grover, L. K. (2002). Creating Superpositions That Correspond to Efficiently Integrable Probability Distributions. arXiv preprint quant-ph/0201097.
- Harrigan, M. P., Sung, K. J., Neeley, M., Satzinger, K. J., Arute, F., Arya, K., et al. (2021). Quantum Approximate Optimization of Non-Planar Graph Problems on a Planar Superconducting Processor. Nature Physics, 17(3), 332–336.
- Du, Y., Hsieh, M. H., Liu, T., Tao, D., & Khamis, N. (2020). Quantum Neural Architecture Search. arXiv preprint arXiv:2010.10217.
- Mao, J., Gan, C., Kohli, P., Tenenbaum, J. B., & Wu, J. (2019). The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences from Natural Supervision. International Conference on Learning Representations.
- Reichardt, B. W., Unger, F., & Vazirani, U. (2013). Classical Command of Quantum Systems. Nature, 496(7446), 456–460.
- Arute, F., Arya, K., Babbush, R., Bacon, D., Bardin, J. C., Barends, R., et al. (2019). Quantum Supremacy Using a Programmable Superconducting Processor. Nature, 574(7779), 505–510.
- Putterman, F., Wu, Y., Morton, J. J. L., & Kandala, A. (2023). Quantum Error Correction with Bosonic Cat Qubits. Science, 380(6645), 718–722.
- Amin, M. H., Andriyash, E., Rolfe, J., Kulchytskyy, B., & Melko, R. (2018). Quantum Boltzmann Machine. Physical Review X, 8(2), 021050.
- Lutchyn, R. M., Bakkers, E. P. A. M., Kouwenhoven, L. P., Krogstrup, P., Marcus, C. M., & Oreg, Y. (2018). Majorana Zero Modes in Superconductor–Semiconductor Heterostructures. Nature Reviews Materials, 3(5), 52–68.
- Sarma, S. D., Freedman, M., & Nayak, C. (2015). Majorana Zero Modes and Topological Quantum Computation. npj Quantum Information, 1(1), 1–13.
- Aasen, D., Aghaee, M., Alam, Z., Andrzejczuk, M., Antipov, A., Astafev, M., et al. (2025). Roadmap to fault tolerant quantum computation using topological qubit arrays. arXiv preprint arXiv:2502.12252. Available online: https://arxiv.org/abs/2502.12252.
- Lloyd, S., Mohseni, M., & Rebentrost, P. (2014). Quantum Principal Component Analysis. Nature Physics, 10(9), 631–633.
- Tiwari, G., Thulasiram, R., & Thulasiraman, P. (2020). Quantum Deep Neural Networks: A Quantum Deep Learning Framework. IEEE Access, 8, 219819–219843.
- Chen, S. Y. C., Yoo, S., & Yoon, Y. (2022). Quantum Transformers for Natural Language Processing. IEEE Transactions on Neural Networks and Learning Systems, 1–12.
- Tang, E. (2019). A Quantum-Inspired Classical Algorithm for Recommendation Systems. Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, 217–228.
- Caro, M. C., Huang, H. Y., Cerezo, M., Sharma, K., Sornborger, A., Cincio, L., & Coles, P. J. (2022). Generalization in Quantum Machine Learning from Few Training Data. Nature Communications, 13(1), 1–11.
- Baltrusaitis, T., Ahuja, C., & Morency, L. P. (2019). Multimodal Machine Learning: A Survey and Taxonomy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(2), 423–443.
- Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., et al. (2021). Learning Transferable Visual Models from Natural Language Supervision. Proceedings of the 38th International Conference on Machine Learning, 8748–8763.
- Wang, W., Dang, L., Feng, Y., Yang, Q., & Wang, H. (2020). What Makes a Good Multimodal Fusion Technique? A Survey. Information Fusion, 65, 149–169.
- Hoffmann, J., Borgeaud, S., Mensch, A., Buchatskaya, E., Cai, T., Rutherford, E., et al. (2022). Training Compute-Optimal Large Language Models. arXiv preprint arXiv:2203.15556.
- Buhrman, H., Cleve, R., Watrous, J., & De Wolf, R. (2001). Quantum Fingerprinting. Physical Review Letters, 87(16), 167902.
- Du, Y., Hsieh, M. H., Liu, T., & Tao, D. (2021). Efficient Quantum Reinforcement Learning via Quantum Advantage. npj Quantum Information, 7(1), 1–8.
- Cao, Y., Romero, J., Olson, J. P., Degroote, M., Johnson, P. D., Kieferová, M., et al. (2018). Potential of Quantum Computing for Drug Discovery. Journal of Chemical Theory and Computation, 15(3), 1501–1521.
- Hann, C. T., Noh, K., Putterman, H., Matheny, M. H., Iverson, J. K., Fang, M. T., Chamberland, C., Painter, O., & Brandão, F. G. S. L. (2024). Hybrid cat-transmon architecture for scalable, hardware-efficient quantum error correction. arXiv preprint arXiv:2410.23363. Available online: https://arxiv.org/abs/2410.23363.
- Egger, D. J., Gambella, C., Marecek, J., McFaddin, S., Mevissen, M., Raymond, R., et al. (2021). Quantum Computing for Finance: State-of-the-Art and Future Prospects. IEEE Transactions on Quantum Engineering, 2, 1–24.
- Huang, H. Y., Broughton, M., Mohseni, M., Babbush, R., Boixo, S., Neven, H., & McClean, J. R. (2021). Power of Data in Quantum Machine Learning. Nature Communications, 12(1), 2631.
- Neukart, F., Compostella, G., Seidel, C., Von Dollen, D., Yarkoni, S., & Parney, B. (2017). Traffic Flow Optimization Using a Quantum Annealer. Frontiers in ICT, 4, 29.
- Davies, M., Srinivasa, N., Lin, T. H., Chinya, G., Cao, Y., Choday, S. H., et al. (2018). Loihi: A Neuromorphic Manycore Processor with On-Chip Learning. IEEE Micro, 38(1), 82–99.
- Shen, Y., Harris, N. C., Skirlo, S., Prabhu, M., Baehr-Jones, T., Hochberg, M., et al. (2017). Deep Learning with Coherent Nanophotonic Circuits. Nature Photonics, 11(7), 441–446.
- Montanaro, A. (2015). Quantum Speedup of Monte Carlo Methods. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 471(2181), 20150301.
| Operation | Classical Complexity / Features | Quantum Complexity / Features |
|---|---|---|
| Unstructured Search | O(N) linear search | O(√N) Grover’s algorithm (quadratic speedup) |
| Structured Search (e.g., BST) | O(log N) for balanced tree | No known speedup if structure can be exploited classically |
| Dense Matrix-Vector Multiply | O(N²) for an N×N matrix | O(poly log N) with quantum amplitude encoding (exponential state space), but O(N) data loading |
| Solving Linear System (s-sparse) | O(N·s·κ) with iterative methods | O(s²κ² log(N)/ε) (HHL algorithm, exponential speedup in N) |
| Distance / Inner Product | O(N) for vectors in R^N | O(log N) with quantum state overlap (assuming states prepared) |
| Fourier Transform | O(N log N) (FFT) | O(log² N) (quantum Fourier transform, exponential speedup in signal length) |
| Search in database of size N | O(log N) if sorted (binary search) / O(N) if unsorted | O(√N) (Grover) unsorted; no gain if sorted (requires classical sort) |
| Sampling from distribution | Depends on distribution (often O(N)) | Quantum sampling can leverage superposition to sample in one measurement from a prepared distribution |
| Optimization (generic) | No general polynomial-time method for NP-hard problems (often exponential) | Quadratic speedup in exhaustive search (Grover); QAOA heuristic (problem-dependent, no guaranteed worst-case speedup) |
| Data Encoding Overhead | Not applicable (data is in memory directly) | May require O(N) operations to load N amplitudes (quantum RAM can reduce this if available) |
| Robustness to Noise | Classical bits stable (error-correcting codes rarely needed at hardware level) | Qubits fragile, require quantum error correction (overhead in qubit count and operations) |
