1. Introduction
There's a significant gap in interventional cardiology education. On one hand, automated lesion assessment achieves an intraclass correlation coefficient of 0.998 with expert consensus for stent sizing decisions [1]. On the other hand, 40% of graduating fellows can't identify calcium that requires modification, and 35% missize stents by more than 0.5 mm [2]. What's missing is any evidence on whether the first capability can help address the second failure.
This gap has significant clinical implications. Using intravascular ultrasound (IVUS) to guide percutaneous coronary intervention (PCI) reduces target lesion failure by 35% and stent thrombosis by 50% compared with angiography alone [10]. Fellows who complete training without mastering IVUS interpretation cannot deliver these benefits to their patients. Meanwhile, FDA-cleared systems that could potentially bridge this competency gap are already in clinical use [1]. The question that remains is whether these tools can teach effectively, not just perform.
Our review of the literature found no research on AI-enhanced IVUS education [1-10]. Given validated tools on one side and documented training failures on the other, this gap in the literature defines the central argument of this paper: studying AI-enhanced IVUS education is not just advisable but essential.
2. The Current Evidence Landscape
2.1. Technological Validation Without Educational Translation
AI's technical capabilities in IVUS interpretation are well established. An automated lesion assessment system, tested on 234 patients with 8,076 cross-sectional images, shows a high level of agreement with expert judgment [1]. Segmentation algorithms produce correlations above 0.99 with manual measurements [8]. End-diastolic frame detection ensures consistent measurement timing [7]. The technology is also compatible across major imaging platforms [9].
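Since the agreement statistic cited above is an intraclass correlation coefficient, a brief illustration of the metric may be useful. The following is a minimal sketch, in Python, of a Shrout-Fleiss ICC(2,1) computed on simulated paired stent-diameter measurements; the two-rater layout (AI versus expert consensus), the helper function, and the data are illustrative assumptions, not the analysis pipeline of the cited study [1].

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is an (n_lesions, n_raters) matrix; for illustration, column 0
    holds a hypothetical AI-derived stent diameter for each lesion and
    column 1 the expert consensus value.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-lesion means
    col_means = ratings.mean(axis=0)   # per-rater means

    # Two-way ANOVA mean squares (Shrout & Fleiss, 1979)
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
    residual = ratings - row_means[:, None] - col_means[None, :] + grand
    ms_error = np.sum(residual ** 2) / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Simulated data only: expert diameters in mm plus a small AI measurement error.
rng = np.random.default_rng(0)
expert = rng.uniform(2.5, 4.5, size=100)
ai = expert + rng.normal(0.0, 0.05, size=100)
print(f"ICC(2,1) = {icc_2_1(np.column_stack([ai, expert])):.3f}")
```

An ICC near 1.0 means the two sets of measurements are nearly interchangeable, which is what gives the educational question its force.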
These accomplishments map directly onto the documented educational gaps. The 40% of fellows who can't reliably identify when calcium needs modification could, in principle, learn from an AI that performs that task at expert level [2]. Likewise, the 35% of fellows who missize stents might benefit from working with a system whose sizing decisions correlate at 0.998 with expert opinion [1,2]. However, this logical connection remains purely theoretical; no evidence exists to support or refute it.
2.2. Educational Failures Despite Established Frameworks
Competency gaps in IVUS interpretation aren't just anecdotal; they're systematically documented through validated assessment frameworks. Yet despite structured curricula and clear performance standards, these gaps persist. Current training, built on apprenticeship models, produces uneven results and leaves a substantial share of graduates unprepared for independent practice.
This educational crisis is unfolding against a backdrop of clear clinical benefit. IVUS guidance delivers a 35% reduction in target lesion failure and a 50% reduction in stent thrombosis, but only when the images are interpreted accurately [10]. Yet 40% of new operators lack the basic skills to realize these benefits for their patients.
3. Cautionary Evidence from Related Domains
Although there's currently no educational data specific to IVUS, studies of optical coherence tomography (OCT) offer insights that complicate the idea of technology as the sole solution. AI assistance in OCT interpretation reduced interpretation time by 23%, indicating potential efficiency gains [3]. The improvement was short-lived, however: when AI support was removed, performance quickly returned to baseline. That pattern suggests dependency on AI rather than learning, and efficiency without education.
More troubling, research shows that AI systems sometimes make incorrect recommendations, and users, especially those new to the field, are surprisingly vulnerable to automation bias. A digital feedback course can improve theoretical knowledge without that knowledge translating into more accurate measurements [6]. These findings don't mean AI-enhanced education is bound to fail, but they highlight the risk of thoughtless implementation: operators unable to work independently would be a solution worse than the problem.
4. The Urgency of Now
Three factors converge to make this research urgent rather than merely important. First, the technological infrastructure is in place and growing; waiting for "better" AI overlooks the fact that current systems already match expert performance on the tasks most relevant to education [1,7,8,9]. Second, every fellowship cycle that passes without tackling the 40% failure rate keeps patient care subpar [2]. Third, other specialties are moving ahead: radiology and pathology are incorporating AI into their educational programs while interventional cardiology is still debating the question.
The cost of inaction compounds every year. Fellowship programs graduate operators without the necessary skills, patients receive less-than-optimal care, and proven technologies sit unused in education. Meanwhile, decisions about AI in medical education will be made with or without solid evidence: industry will market its products, institutions will adopt them, and fellows will train with them, guided either by research or by assumption.
5. A Research Agenda, Not an Implementation Plan
Our analysis doesn't endorse the use of AI in IVUS education. Instead, it argues for the research needed to inform that decision. Three lines of investigation could fill the current gaps in evidence.
The first priority is to determine whether AI enhances or hinders learning. Does interacting with AI systems accelerate skill acquisition, or does it create dependency? Do skills learned with AI assistance transfer to unassisted practice? Can training methods counteract the automation bias observed in other domains? These basic questions call for controlled educational studies comparing AI-assisted training with traditional methods.
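To give a sense of the scale such controlled studies would require, below is a minimal sample-size sketch in Python using a conventional two-proportion comparison. The assumed reduction of the documented 40% competency-failure rate to a hypothetical 20% with AI-assisted training is an illustrative effect size chosen for the example, not a figure drawn from the cited literature.

```python
import math
from scipy.stats import norm

def n_per_arm(p_control: float, p_intervention: float,
              alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-arm sample size for a two-sided comparison of two proportions."""
    z_alpha = norm.ppf(1 - alpha / 2)           # critical value for the test
    z_beta = norm.ppf(power)                    # critical value for power
    p_bar = (p_control + p_intervention) / 2    # pooled proportion
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_control * (1 - p_control)
                                      + p_intervention * (1 - p_intervention))) ** 2
    return math.ceil(numerator / (p_control - p_intervention) ** 2)

# Hypothetical scenario: traditional training with the documented 40%
# competency-failure rate versus an assumed 20% failure rate with
# AI-assisted training (80% power, two-sided alpha of 0.05).
print(n_per_arm(0.40, 0.20))   # roughly 82 fellows per arm
```

Even under this optimistic assumption, roughly 80 fellows per arm would be needed, which points toward multi-institutional collaboration from the outset rather than single-program pilots.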
Once benefits become clear, the next phase would focus on fine-tuning the implementation. The key questions are how to integrate AI into existing curricula, what safeguards are needed to prevent over-reliance while still reaping efficiency gains, and how to balance independent interpretation skills with AI proficiency. To answer these questions, we need to conduct ongoing research within real-world fellowship programs.
True validation requires long-term follow-up. Do operators trained with AI achieve better clinical outcomes? Does any educational benefit persist into independent practice? Those answers come from carefully tracking graduates into their early careers.
Most importantly, we need to specify failure criteria ahead of time. If AI doesn't meaningfully accelerate skill acquisition, if automation bias proves too hard to overcome, or if competence collapses without technological support, then alternative approaches should be pursued. The objective is to gather evidence, not to validate existing assumptions.
5.1. Addressing Predictable Objections
Several objections to this research agenda deserve to be addressed ahead of time. The argument that "technology doesn't belong in medical education" overlooks the fact that current methods lead to a 40% failure rate [2]. The real question is not whether change is necessary, but which changes to prioritize; AI is one option that deserves a thorough, evidence-based evaluation alongside others.
Concerns that AI will create dependent operators are backed by the OCT data showing performance reversion once support is withdrawn [3]. This legitimate worry calls for careful research on educational use, not for abandoning the investigation. Dependency is prevented by studying it, not by shying away from it.
Proposing to wait for further clinical validation misses the difference between clinical and educational validation. Today's AI systems already match expert clinical performance [1,7,8,9]. What's missing is proof of educational value, a distinct question that calls for different methods.
Claiming that implementation complexity prevents investigation conflates difficulty with impossibility. Research will show whether challenges can be overcome or are insurmountable. Presuming failure without evidence is as flawed as presuming success.
6. The Path Forward
Interventional cardiologists face a clear choice. We can acknowledge that no evidence links validated AI tools to educational outcomes and work to generate that evidence, or we can stick with training methods that fail 40% of fellows while a potentially useful technology remains unexplored in education.
Implementing this choice demands collaboration from multiple stakeholders. Industry partners that have already invested in clinical validation should also support educational research. Academic institutions should dedicate time and resources to educational innovation. Professional organizations should make educational research a priority alongside clinical guidelines. Regulatory agencies should establish clear pathways for assessing educational AI, separate from clinical approval.
7. Conclusions
Current evidence paints a clear picture: AI tools are clinically validated, educational systems are still failing, and no research links the two. This isn't a complex issue demanding elaborate analysis or a contentious claim inviting heated debate. It's a simple knowledge gap that calls for straightforward research.
At the intersection of technological capability and educational need, we find a gap in evidence to guide their integration. On one hand, AI systems are achieving expert-level performance. On the other, fellows are showing significant competency gaps. We still don't know if one can help address the other.
The tools are already available: AI systems with proven clinical accuracy. The need is clear: around 40% of fellows struggle with fundamental competencies. The research methods are well established: controlled educational trials with predefined endpoints. What's missing is the commitment to bring these elements together through rigorous investigation.
This is not a push to implement AI in IVUS education, but a demand for the evidence that would support or counter such a move. Until we have that evidence, both enthusiasm and skepticism are equally baseless, and our fellows keep graduating with a 40% chance of being underprepared for the procedures they'll perform.
References
1. Galo, J., et al. (2025). Machine Learning in Intravascular Ultrasound: Validating Agreement of an Artificial Intelligence Algorithm with Expert Clinicians on Stent Sizing from Intravascular Ultrasound: The Automated Lesion Assessment (ALA) Study. Published online May 20, 2025. PMID: 39981660.
2. Di Mario, C., Di Sciascio, G., Dubois-Randé, J.L., Michels, R., & Mills, P. Curriculum and Syllabus for Interventional Cardiology Subspecialty Training in Europe. European Society of Cardiology Working Group 10.
3. Sibbald, M.G., et al. (2025). Impact of Artificial Intelligence-Enhanced Optical Coherence Tomography on Operator Performance and Clinical Decision-Making. JACC: Cardiovascular Interventions. Published January 6, 2025. PMCID: PMC11993895.
4. Cioffi, G.M., Pinilla-Echeverri, N., Sheth, T., & Sibbald, M.G. (2023). Does artificial intelligence enhance physician interpretation of optical coherence tomography: insights from eye tracking. Frontiers in Cardiovascular Medicine, 10, December 8, 2023.
5. Qin, H., et al. (2021). Automatic Coregistration Between Coronary Angiography and Intravascular Optical Coherence Tomography: Feasibility and Accuracy. JACC: Asia, August 31, 2021.
6. Kassis, N., et al. (2022). Immersive educational curriculum on intracoronary optical coherence tomography image interpretation among naive readers. Cardiovascular Revascularization Medicine, October 11, 2022. PMID: 36224563.
7. Bajaj, R., et al. (2021). A deep learning methodology for the automated detection of end-diastolic frames in intravascular ultrasound images. The International Journal of Cardiovascular Imaging, February 14, 2021.
8. Dong, L., et al. (2023). Comparison of deep learning-based image segmentation algorithms for lumen and external elastic membrane in intravascular ultrasound images. The International Journal of Cardiovascular Imaging, 2023. PMID: 38017463.
9. Hsiao, C.H., Chen, K., Peng, T.Y., & Huang, W.C. (2021). Federated Learning for Coronary Artery Plaque Detection in Atherosclerosis Using IVUS Imaging: A Multi-Hospital Collaboration. arXiv preprint, December 2021.
10. Li, X., et al. (2024). Intravascular ultrasound-guided versus angiography-guided percutaneous coronary intervention in patients with acute coronary syndrome: a multicentre, open-label, randomised controlled trial. The Lancet, May 10, 2024.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).