1. Introduction
Artificial intelligence (AI) has rapidly gained traction across medicine, demonstrating its capacity to improve diagnostic accuracy, accelerate decision-making, and optimize patient outcomes. Applications range from radiology and oncology to orthopaedics and surgical specialties, where precision and reproducibility are critical. In this context, hand surgery and microsurgery represent unique areas in which AI has the potential to become transformative.
Hand and wrist conditions are highly prevalent and often demand rapid and accurate diagnosis, as in cases of fractures, ligamentous injuries, and tendon pathologies. In parallel, microsurgery requires a level of technical precision and intraoperative decision-making that can benefit from AI-enhanced planning, real-time monitoring, and postoperative follow-up. The integration of AI in these domains therefore addresses some of the most pressing challenges faced by hand surgeons: diagnostic complexity, surgical delicacy, and the need for continuous training and rehabilitation assessment.
The body of literature on AI in hand surgery and microsurgery has expanded substantially in recent years. Early reports highlighted the promise of deep learning models in fracture detection, while more recent studies have explored applications in flap monitoring, robotic-assisted procedures, and remote functional evaluation. Despite encouraging findings, the evidence remains heterogeneous, with important gaps related to validation, clinical integration, and ethical considerations such as bias and cost-effectiveness.
Given this background, the aim of this review is to provide a comprehensive and critical overview of current AI applications in hand surgery and microsurgery. By focusing on clinically relevant studies, we summarize advances in diagnosis, microsurgical monitoring, surgical robotics, technical training, and rehabilitation, while discussing future directions and challenges for implementation in daily practice.
2. Methods
We performed a narrative review using PubMed and Scopus, covering studies published between 2019 and 2025. Search terms included combinations of “hand surgery”, “wrist”, “microsurgery”, “scaphoid”, “distal radius fracture”, “computer vision”, “deep learning”, “surgical robotics”, “instrument tracking”, and “rehabilitation”. Only English-language studies involving humans were considered.
From 71 articles initially retrieved, 26 were selected based on clinical and methodological relevance. Eligible studies included original research, systematic reviews, and meta-analyses addressing AI applications in hand surgery and microsurgery. For synthesis, the findings were grouped into five thematic axes: general reviews, microsurgery, autonomous robotics, training and tracking, and computer vision with functional assessment.
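For reproducibility, the PubMed arm of this search can be approximated programmatically. The sketch below is a minimal illustration rather than the exact strategy used for this review: it shows how such a query could be issued with Biopython’s Entrez utilities, and the query string, date filter, and contact e-mail are placeholders.

```python
# Illustrative sketch only: one way to script the PubMed portion of the search.
# The query below approximates the terms listed above; it is not the exact
# strategy used for this review.
from Bio import Entrez  # Biopython

Entrez.email = "reviewer@example.org"  # placeholder; NCBI requires a contact address

query = (
    '("hand surgery" OR wrist OR microsurgery OR scaphoid OR "distal radius fracture") '
    'AND ("deep learning" OR "computer vision" OR "surgical robotics" '
    'OR "instrument tracking" OR rehabilitation) '
    'AND ("2019"[PDAT] : "2025"[PDAT]) AND English[Language] AND humans[MeSH Terms]'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found")
print(record["IdList"][:10])  # first ten PMIDs, to be screened manually
```

Retrieved records would then be screened manually against the eligibility criteria described above.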
3. Results
A total of 26 studies met the inclusion criteria and were categorized into five thematic axes: (1) general reviews on AI in hand surgery, (2) applications in microsurgery, (3) autonomous and robot-assisted surgery, (4) surgical training and tracking, and (5) computer vision and functional assessment. A summary of representative studies is presented in Table 1.
3.1. General Reviews and Fracture Diagnosis
3.1.1. General Reviews
Narrative and systematic reviews highlighted the promise of AI in hand surgery. These works emphasized improvements in diagnostic accuracy and training, while also underscoring the lack of large-scale, multicenter validation. Most reviews agreed that AI should be regarded as an adjunct to clinical judgment rather than a replacement.
3.1.2. Fracture Diagnosis
Fracture diagnosis emerged as the most widely studied area. Deep learning algorithms achieved AUCs >0.93 in detecting and classifying distal radius fractures. Specific models predicted surgical success in scaphoid nonunion with 93.6% accuracy. Meta-analyses confirmed that AI can match or outperform expert radiologists, with notable benefits for residents and pediatric patients.
3.2. Microsurgery
Applications in microsurgery centered on flap monitoring, perioperative planning, and risk stratification. AI-based monitoring systems reached nearly 98% accuracy in early detection of vascular compromise. Reviews supported the use of AI for optimizing flap survival and postoperative outcomes, although most studies remain limited to single-center experiences.
3.3. Autonomous and Robot-Assisted Surgery
Robotics integrated with AI has provided enhanced precision, tremor filtration, and improved ergonomics for microsurgeons. Current systems demonstrate submillimetric accuracy, but true autonomy remains experimental. Early data suggest potential applications in complex microsurgical procedures, though ethical and cost considerations represent barriers to widespread adoption.
3.4. Surgical Training and Tracking
Several studies validated AI-based platforms for resident education. These tools improved fracture recognition accuracy, enhanced carpal bone identification during arthroscopy, and provided real-time performance metrics. Such approaches have accelerated the learning curve while allowing for objective assessment of surgical skills.
3.5. Computer Vision and Functional Assessment
Beyond the operating room, AI-driven smartphone applications and wearable sensors allowed remote monitoring of hand function and prediction of surgical outcomes. Systematic reviews of more than 40 studies demonstrated feasibility for patient-tailored rehabilitation and long-term outcome assessment, particularly valuable for telemedicine contexts.
4. Discussion
This review highlights that artificial intelligence (AI) is no longer a distant concept but an evolving partner in hand surgery and microsurgery. Across diagnosis, surgical monitoring, robotics, training, and rehabilitation, AI applications have demonstrated promising accuracy and clinical relevance. However, the integration of these technologies into daily practice remains limited by validation gaps, workflow barriers, and ethical challenges.
4.1. General Reviews
Recent reviews have provided broad overviews of the role of AI in hand surgery, highlighting both its promise and its current limitations [1]. These analyses consistently emphasize that the greatest strength of AI lies in augmenting diagnostic capacity and supporting education, while also underscoring that current evidence is fragmented and heterogeneous. The majority of reviews stress that the technology should not be considered a replacement for clinical expertise, but rather a complementary instrument capable of enhancing decision-making, particularly in settings with limited subspecialty coverage.
One recurring theme is the disparity between technical performance and clinical translation. While deep learning models have reached high accuracy in image-based tasks across musculoskeletal radiology, few studies have progressed to multicenter validation or regulatory approval [1]. This gap reflects a broader challenge in surgical fields: the translation of promising proof-of-concept studies into tools that integrate seamlessly with electronic medical records, surgical planning software, and intraoperative workflows. Without this integration, even highly accurate systems risk remaining research curiosities rather than practice-changing innovations.
Another insight from these reviews is the differential maturity of AI applications across domains. Diagnostic imaging is consistently identified as the most advanced area, while microsurgery and robotics remain at early stages [1]. Training, rehabilitation, and computer vision–based assessment occupy an intermediate position, showing feasibility but lacking large-scale outcome data. This uneven distribution of evidence reflects the underlying challenges of each domain: while image analysis benefits from large annotated datasets, microsurgical applications require more complex multimodal data and continuous monitoring, which are harder to standardize.
Looking forward, reviews argue that international collaboration will be crucial for building representative datasets and establishing standards for evaluation [1]. Single-center studies, although important, cannot capture the diversity of imaging protocols, patient populations, and healthcare systems worldwide. Collaborative registries, combined with transparent reporting standards, are therefore seen as necessary steps toward the safe and equitable adoption of AI in hand surgery.
Finally, general reviews also highlight ethical and systemic concerns. Questions of liability in case of AI-assisted errors, risks of algorithmic bias, and issues of cost-effectiveness are cited as barriers that may delay adoption. These concerns reinforce the need for surgeons not only to understand the capabilities of AI but also to actively participate in shaping its development and regulation. The consensus is that the success of AI in hand surgery will not be determined solely by technical performance, but by its ability to earn clinicians’ trust and demonstrate tangible improvements in patient care [1].
4.2. Fracture Diagnosis
Fracture diagnosis represents the most extensively studied and clinically validated application of AI in hand surgery. Several algorithms trained on large radiographic datasets have consistently achieved diagnostic performance comparable to or exceeding that of expert radiologists [2,17]. For distal radius fractures, deep learning models have reached AUCs greater than 0.93, demonstrating robust accuracy across different image sets [17]. These findings are particularly relevant for emergency care, where rapid recognition of fractures can shorten time to immobilization, referral, or surgical intervention.
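For readers less familiar with this metric, the reported AUC values are derived from the model’s per-image fracture probabilities compared against reference-standard reads. The sketch below is a minimal, hypothetical evaluation using scikit-learn; the labels and scores are invented and do not reproduce any cited study’s pipeline.

```python
# Minimal, hypothetical AUC evaluation for a binary fracture classifier.
# The ground-truth labels and predicted probabilities below are placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# y_true: 1 = fracture present, 0 = no fracture (reference-standard reads)
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
# y_score: model-predicted probability of fracture for each radiograph
y_score = np.array([0.91, 0.12, 0.78, 0.66, 0.30, 0.05, 0.84, 0.41])

auc = roc_auc_score(y_true, y_score)
print(f"AUC = {auc:.2f}")  # values >0.93 correspond to the performance reported above

# An operating threshold is then chosen from the ROC curve to balance
# sensitivity against specificity for the intended clinical setting.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
```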
Beyond distal radius fractures, AI has also been applied to the diagnosis and prognostication of scaphoid injuries, which are notoriously difficult to evaluate. A deep learning model achieved more than 93% accuracy in predicting the likelihood of surgical success in scaphoid nonunion [18]. This type of predictive modeling may have a direct impact on preoperative counseling and surgical decision-making, allowing surgeons to stratify patients according to their probability of favorable outcomes.
Systematic reviews and meta-analyses confirm that AI performs at least as well as human experts in fracture recognition [2,4]. Importantly, these studies suggest that AI support is particularly beneficial for less experienced clinicians, such as residents, whose diagnostic accuracy improved significantly when aided by AI [14]. This effect is especially notable in pediatric populations, where fracture assessment is often more challenging due to growth plate variability [3]. Such results imply that AI could serve as an equalizer in diagnostic capacity, mitigating disparities between institutions with and without dedicated hand surgery expertise.
Despite these encouraging findings, several limitations must be addressed before AI-based fracture diagnosis can be widely implemented. A critical issue is the lack of external validation across diverse healthcare settings. Most algorithms have been trained and tested on datasets from single centers, raising concerns about generalizability [2]. Differences in radiographic technique, patient demographics, and injury patterns may significantly affect performance. Another limitation is the narrow focus on accuracy metrics; few studies assess whether AI use actually reduces clinical errors, prevents complications such as malunion, or decreases the need for secondary procedures.
Looking ahead, the most promising avenue is the integration of fracture detection algorithms directly into Picture Archiving and Communication Systems (PACS). By automatically flagging suspicious images, AI could alert clinicians in real time, reducing delays in diagnosis. Such integration would be particularly valuable in community hospitals or low-resource environments, where specialist radiologists are not always available. Prospective multicenter studies, combined with cost-effectiveness analyses, will be essential to determine whether AI-assisted fracture diagnosis translates into measurable improvements in patient outcomes [2,5].
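As a rough illustration of what such PACS integration might look like, the sketch below reads an incoming radiograph with pydicom, scores it with a placeholder model, and flags high-probability studies for priority review. The model, threshold, and flagging mechanism are assumptions for demonstration; real deployments would hook into the vendor’s worklist and reporting interfaces.

```python
# Conceptual sketch of a PACS-side triage hook. The model below is a trivial
# stand-in; a real deployment would load a validated fracture-detection network
# and integrate with the PACS vendor's worklist API.
import numpy as np
import pydicom

class DummyFractureModel:
    """Placeholder for a trained classifier; always returns a fixed probability."""
    def predict_probability(self, image: np.ndarray) -> float:
        return 0.97

def triage_study(dicom_path: str, model, threshold: float = 0.8) -> bool:
    ds = pydicom.dcmread(dicom_path)
    image = ds.pixel_array.astype(np.float32)
    image = (image - image.min()) / (image.max() - image.min() + 1e-8)  # scale to [0, 1]
    probability = model.predict_probability(image)
    if probability >= threshold:
        # In practice this would push a flag to the radiology worklist.
        print(f"Flag study {ds.SOPInstanceUID}: suspected fracture (p={probability:.2f})")
        return True
    return False

# Usage (hypothetical file path):
# triage_study("wrist_pa_view.dcm", DummyFractureModel())
```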
4.3. Microsurgery
Microsurgery is one of the most delicate domains in hand surgery, where small technical errors can compromise flap viability or functional outcomes. Within this context, AI-based systems for perioperative monitoring have emerged as particularly impactful. Several studies have demonstrated that machine learning algorithms can detect vascular compromise with sensitivity and specificity nearing 98% [9]. Such systems, often integrated with sensors or imaging devices, have the potential to revolutionize flap monitoring by enabling early recognition of ischemia and timely re-exploration, thus improving salvage rates.
In addition to monitoring, AI has been applied to preoperative planning and risk stratification. By analyzing patient comorbidities, surgical characteristics, and perioperative variables, predictive models can identify individuals at higher risk of flap failure [6,7]. This capacity for risk stratification aligns with the broader movement toward precision medicine, offering surgeons data-driven support in selecting surgical strategies and counseling patients. Importantly, these tools could be particularly valuable in complex reconstructions or in centers with limited microsurgical expertise, where nuanced risk assessment may otherwise be difficult.
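To make the idea of perioperative risk stratification concrete, the sketch below fits a simple logistic regression on synthetic perioperative features and returns a predicted probability of flap failure for a hypothetical patient. The feature set, data, and coefficients are invented for illustration and do not correspond to any published model.

```python
# Illustrative risk-stratification sketch on synthetic perioperative data.
# Feature names, data, and outcome generation are invented for demonstration.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "age": rng.normal(52, 14, n),
    "smoker": rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),
    "ischemia_time_min": rng.normal(75, 20, n),
    "anastomosis_count": rng.integers(1, 4, n),
})
# Synthetic outcome: failure risk loosely tied to smoking, diabetes, and ischemia time
logit = -4 + 1.2 * X["smoker"] + 0.8 * X["diabetes"] + 0.02 * X["ischemia_time_min"]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# Predicted probability of flap failure for a hypothetical patient
patient = pd.DataFrame([{"age": 60, "smoker": 1, "diabetes": 0,
                         "ischemia_time_min": 95, "anastomosis_count": 2}])
print(f"Predicted failure risk: {model.predict_proba(patient)[0, 1]:.1%}")
```

In a clinical tool, the output would be presented alongside calibration and uncertainty information rather than as a single point estimate.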
Beyond static prediction, real-time intraoperative applications are beginning to be explored. Pilot studies suggest that AI could assist in intraoperative perfusion assessment, guiding flap inset and anastomotic decisions [6]. By integrating near-infrared spectroscopy or Doppler imaging with machine learning, these systems could provide objective measures of vascular integrity that complement the surgeon’s visual and tactile evaluation. Although promising, these approaches remain largely experimental and require further validation before they can be considered reliable adjuncts in live surgery.
Despite encouraging results, the literature in this field remains limited by methodological constraints. Most studies are small, single-center cohorts with retrospective designs [7]. Heterogeneity in monitoring methods (clinical observation, Doppler, implantable sensors, or imaging-based tools) complicates direct comparisons and synthesis. Furthermore, cost-effectiveness analyses are scarce. While advanced monitoring may be feasible in high-income academic centers, its implementation in resource-limited hospitals is less certain. Without evidence of economic sustainability, widespread adoption will remain a challenge.
Looking ahead, the convergence of AI with wearable and implantable biosensors represents one of the most exciting prospects for microsurgery. Continuous, automated monitoring could allow for real-time alerts, even via smartphone notifications, democratizing access to high-quality postoperative surveillance. Such technology could not only improve patient safety but also reduce the workload of surgical teams. The key priorities for future research are prospective multicenter validation, integration with existing perioperative protocols, and assessment of long-term outcomes such as flap survival, patient satisfaction, and healthcare costs [6,7,8,9].
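A minimal sketch of how continuous monitoring could trigger such alerts is shown below: each new perfusion reading is compared against a rolling baseline, and a sustained drop raises a notification. The signal, thresholds, and alert rule are illustrative assumptions, not a description of any cited device.

```python
# Toy illustration of continuous flap surveillance: compare each new perfusion
# reading against a rolling baseline and raise an alert on a sustained drop.
# The signal, window, and thresholds are invented for illustration only.
from collections import deque
import statistics

def monitor(readings, window=30, drop_fraction=0.3, consecutive=5):
    """Yield sample indices where perfusion falls more than `drop_fraction` below
    the rolling baseline for `consecutive` samples in a row."""
    baseline = deque(maxlen=window)
    below = 0
    for i, value in enumerate(readings):
        if len(baseline) == window:
            if value < (1 - drop_fraction) * statistics.mean(baseline):
                below += 1
                if below >= consecutive:
                    yield i  # would trigger a notification to the on-call team
            else:
                below = 0
        baseline.append(value)

# Simulated perfusion index: stable plateau followed by a progressive fall
signal = [1.0] * 60 + [0.9, 0.8, 0.65, 0.6, 0.55, 0.5, 0.5, 0.5]
print("Alert at sample(s):", list(monitor(signal)))
```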
4.4. Autonomous and Robot-Assisted Surgery
Robotics represents one of the most futuristic, yet also most debated, applications of AI in microsurgery. Current robotic platforms already contribute to surgical precision by offering tremor filtration, motion scaling, and improved ergonomics [11]. These features are particularly advantageous in procedures where hand stability is critical, such as nerve repairs or delicate vessel anastomoses. Early experimental models have shown that AI integration can further enhance these systems, supporting image-guided navigation, automated suturing, and fine-tuned motion control [12].
Despite these promising developments, true autonomy in robotic microsurgery remains aspirational. Feasibility studies to date have largely been restricted to simulation and preclinical environments [12]. In clinical practice, robots function strictly as supervised adjuncts, with surgeons retaining full control. This reflects not only the technological immaturity of autonomous systems but also unresolved ethical and legal concerns regarding responsibility if an AI-driven action were to contribute to surgical error.
The economic barrier is also substantial. Robotic platforms involve high acquisition and maintenance costs, alongside steep learning curves for surgeons [11]. Training requires mastery not only of microsurgical skills but also of system-specific workflows, increasing the time and resources necessary for adoption. These limitations make widespread use unlikely unless clinical superiority over conventional techniques is conclusively demonstrated.
Nevertheless, lessons from other surgical fields are instructive. In urology, for instance, robotic-assisted prostatectomy has evolved from novelty to standard of care within two decades [11]. For hand surgery and microsurgery, a plausible short-term future involves hybrid workflows in which AI supports robotic functions, such as tremor dampening or vessel alignment, while surgeons maintain oversight and decision-making authority. Such a model could reduce fatigue in long operations, standardize technical performance, and broaden access to complex microsurgery in settings where subspecialty expertise is scarce.
Moving forward, prospective trials and cost-effectiveness analyses will be essential. Key outcomes should include not only technical feasibility but also operative time, complication rates, functional recovery, and training efficiency [12]. Until such evidence is available, the role of AI-assisted robotics in microsurgery will remain that of an experimental adjunct rather than a mainstream clinical tool.
4.5. Surgical Training and Tracking
Artificial intelligence has demonstrated significant potential in reshaping surgical education by providing objective assessment, real-time feedback, and standardized training experiences. In wrist arthroscopy, AI models trained to identify carpal bone structures achieved accuracy rates close to 89%, improving recognition skills among residents and validating the feasibility of computer vision in surgical simulation [13]. Such applications not only accelerate the learning curve but also reduce reliance on subjective instructor feedback, introducing a level of reproducibility not previously possible in surgical training.
The benefits of AI support have also been observed in fracture diagnosis, a common training challenge for junior physicians. In a study evaluating residents’ performance, the addition of AI assistance significantly improved diagnostic accuracy for pediatric and young adult fractures [14]. These findings are particularly relevant for institutions with limited subspecialty expertise, where AI can serve as a diagnostic safety net while simultaneously acting as an educational tool.
Beyond diagnostic support, AI-driven platforms are being developed to provide competency-based evaluation of technical performance. Reviews highlight that machine learning algorithms can objectively assess metrics such as motion efficiency, instrument handling, and error frequency during simulated procedures [15]. This objective assessment could democratize training opportunities, particularly in low-resource environments, where faculty availability is limited. However, long-term evidence linking simulation-based improvements to actual patient outcomes is still lacking.
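As an example of the kind of kinematic metrics such platforms compute, the sketch below derives path length, mean velocity, and a jerk-based smoothness measure from a tracked instrument trajectory. The trajectory is synthetic, and the metric set is a generic illustration rather than the scoring used by any specific simulator.

```python
# Illustrative computation of motion-economy metrics from a tracked instrument
# trajectory (timestamps plus 3D tip positions). Data and metric choices are
# generic examples, not the scoring system of any cited platform.
import numpy as np

def motion_metrics(t, xyz):
    """t: (N,) timestamps in seconds; xyz: (N, 3) instrument tip positions in mm."""
    d = np.diff(xyz, axis=0)                      # displacement between samples
    step = np.linalg.norm(d, axis=1)              # mm per sample
    dt = np.diff(t)
    path_length = step.sum()                      # total distance travelled (mm)
    velocity = step / dt                          # instantaneous speed (mm/s)
    accel = np.diff(velocity) / dt[1:]            # mm/s^2
    jerk = np.diff(accel) / dt[2:]                # mm/s^3; lower = smoother motion
    return {
        "path_length_mm": float(path_length),
        "mean_velocity_mm_s": float(velocity.mean()),
        "rms_jerk": float(np.sqrt(np.mean(jerk ** 2))),
    }

# Synthetic 5-second trajectory sampled at 50 Hz (a smooth sweeping motion)
t = np.linspace(0, 5, 250)
xyz = np.column_stack([np.sin(t) * 10, np.cos(t) * 10, t])
print(motion_metrics(t, xyz))
```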
Virtual reality (VR) and AI-enhanced simulation represent another promising frontier. A recent systematic review reported that these combined technologies not only improve technical proficiency but also increase trainee confidence, suggesting that immersive environments augmented with AI feedback may soon complement or even replace portions of traditional apprenticeship models [16]. The implications extend beyond training efficiency: such systems may help standardize surgical curricula globally, ensuring more uniform skill acquisition across institutions.
Despite their promise, AI-based training tools face challenges. High development costs, limited access to advanced simulators, and the need for large annotated datasets hinder widespread implementation. Moreover, questions remain regarding the generalizability of skills learned in simulation to real surgical environments. Addressing these gaps requires multicenter studies that correlate AI-driven performance metrics with intraoperative outcomes, thereby confirming their true clinical value.
4.6. Computer Vision and Functional Assessment
Artificial intelligence has increasingly been applied to the functional evaluation of the hand and wrist, extending its impact beyond diagnosis and surgery into rehabilitation and long-term follow-up. Systematic reviews have demonstrated the feasibility of smartphone-based applications to assess grip strength, range of motion, and dexterity [19]. These tools provide objective, quantifiable metrics of function that can be tracked remotely, supporting the development of personalized rehabilitation programs.
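A simple example of how such metrics can be extracted is shown below: given three hand landmarks from an off-the-shelf pose estimator (for example, MediaPipe Hands), a joint angle, and hence a range-of-motion proxy, can be computed with basic vector geometry. The landmark coordinates are placeholders, not real tracking output.

```python
# Minimal sketch of deriving a joint angle from hand landmarks produced by an
# off-the-shelf pose estimator. The three points below are placeholder
# coordinates, not real tracking output.
import numpy as np

def joint_angle(proximal, joint, distal):
    """Angle (degrees) at `joint` between the segments joint->proximal and joint->distal."""
    u = np.asarray(proximal, dtype=float) - np.asarray(joint, dtype=float)
    v = np.asarray(distal, dtype=float) - np.asarray(joint, dtype=float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

# Placeholder 3D landmarks along the index finger (normalized image coordinates)
mcp = (0.42, 0.61, 0.00)   # metacarpophalangeal joint
pip = (0.44, 0.48, -0.02)  # proximal interphalangeal joint
dip = (0.47, 0.38, -0.03)  # distal interphalangeal joint

print(f"PIP flexion proxy: {180 - joint_angle(mcp, pip, dip):.1f} degrees")
# Tracking this value across therapy sessions gives an objective range-of-motion trend.
```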
Wearable devices further expand this potential. Sensors integrated into gloves, wristbands, or exoskeletons generate continuous streams of movement data that can be analyzed by AI algorithms to monitor recovery and predict outcomes [23]. Such systems enable clinicians to detect subtle deficits in motion or coordination that might be overlooked in conventional assessments, thereby facilitating earlier intervention. A systematic review highlighted the role of these technologies in upper limb rehabilitation, concluding that AI-assisted wearables are reliable and scalable for clinical monitoring [23].
Several studies have also explored AI in tele-rehabilitation. Machine learning models embedded into digital platforms can adapt therapy intensity to patient performance, creating a dynamic feedback loop [24]. This approach not only improves adherence but also reduces the burden of in-person visits, a benefit particularly relevant in rural or underserved areas. In addition, predictive models have been developed to estimate functional outcomes after hand and wrist surgery [25], offering valuable tools for surgical planning and patient counseling.
Despite their promise, challenges remain. Device variability, patient adherence, and data security represent significant barriers to clinical adoption [19,24]. Integration with electronic health records is still limited, making it difficult to seamlessly incorporate AI-derived functional metrics into routine care. Moreover, most studies are proof-of-concept trials with relatively short follow-up, and long-term validation is still lacking.
Looking ahead, the integration of computer vision with multimodal data sources (combining kinematics, imaging, and patient-reported outcomes) may allow for a holistic assessment of hand function [21,22]. Such models could stratify patients into recovery trajectories, predict complications earlier, and guide resource allocation for rehabilitation services. Ultimately, the expansion of AI into functional assessment reflects a paradigm shift: outcomes are no longer measured exclusively in the clinic but can be continuously monitored in patients’ daily lives, enabling a truly personalized approach to hand surgery and rehabilitation.
4.7. Limitations of the Evidence and Future Directions
Although enthusiasm for AI in hand surgery and microsurgery is growing, the current body of evidence has important limitations. Most studies remain small, retrospective, and single-center, raising questions about external validity and reproducibility [1,2,7]. Algorithms trained on homogeneous datasets often fail to generalize across diverse populations, particularly when differences in imaging protocols, patient demographics, or surgical practices are considered [2,5]. This raises the risk of algorithmic bias, which could inadvertently reinforce disparities in care rather than reduce them.
Another major limitation lies in the predominance of technical accuracy metrics, such as sensitivity, specificity, or AUC, as primary outcomes. While these provide valuable proof of feasibility, they rarely address whether AI integration translates into meaningful clinical benefits [2,14]. Evidence linking AI-assisted diagnosis or monitoring to improved patient outcomes, reduced complication rates, or decreased healthcare costs remains scarce. Without such evidence, the clinical relevance of many AI applications remains uncertain.
Regulatory and ethical issues also remain unresolved. Questions about liability in cases of error, patient data privacy, and the transparency of AI decision-making (the “black box” problem) complicate adoption [1,11]. In microsurgery and robotics, these challenges are magnified by high costs and steep training requirements [11,12]. Implementation in low- and middle-income countries poses further concerns, as advanced technologies risk widening the gap in access to high-quality care [7].
Looking forward, several priorities emerge. First, large-scale, prospective, multicenter trials are needed to validate AI applications across heterogeneous patient populations and healthcare systems [3,5]. Second, integration with electronic health records, PACS, and surgical navigation platforms should be pursued to ensure that AI tools can be seamlessly incorporated into clinical workflows. Third, cost-effectiveness analyses must accompany technical validation, particularly in resource-constrained environments.
Finally, the most transformative advances will likely emerge from multimodal approaches that combine imaging, biosensor data, and patient-reported outcomes. Such integrative models could support not only diagnosis but also continuous rehabilitation monitoring and long-term prognostication. The future of AI in hand surgery will therefore depend not only on innovation but also on careful validation, ethical regulation, and collaborative adoption across diverse clinical contexts. If these challenges are addressed, AI has the potential to redefine standards of precision, safety, and personalization in hand surgery and microsurgery.
5. Conclusions
Artificial intelligence is no longer a distant prospect but an active partner in hand surgery and microsurgery. Evidence supports its capacity to enhance diagnostic accuracy in fractures, improve flap monitoring in microsurgery, augment surgical training, and expand rehabilitation through remote functional assessment. Robotic integration, although still experimental, points toward a future of greater precision and technical support.
Despite these advances, widespread adoption remains limited by methodological gaps, regulatory concerns, and issues of cost-effectiveness. Rigorous multicenter validation, ethical oversight, and integration into clinical workflows are essential steps for translation from research to practice.
Rather than replacing surgical expertise, AI should be embraced as a strategic collaborator—an enabler of precision, innovation, and patient-centered care. By aligning technological development with clinical needs, hand surgeons and microsurgeons can lead the safe and effective incorporation of AI into daily practice, shaping a new era of surgical excellence.
Author Contributions
All authors contributed equally to the conceptualization, methodology, writing, and critical revision of the manuscript. All authors have read and agreed to the published version of the manuscript.
Conflicts of Interest
The authors declare no conflict of interest.
AI Tools Usage Statement
AI-assisted writing tools (ChatGPT, OpenAI, San Francisco, CA, USA) were used to support language editing and structuring. All content, analysis, and final interpretations were the sole responsibility of the authors.
References
- Ryhänen J, Wong GC, Anttila T, Chung KC. Overview of artificial intelligence in hand surgery. J Hand Surg Eur Vol. 2025;50(6):738-751. PMID: 40035151.
- Wong CR, Zhu A, Baltzer HL. The accuracy of artificial intelligence models in hand/wrist fracture and dislocation diagnosis: a systematic review and meta-analysis. JBJS Rev. 2024;12(9):e24.00106. PMID: 39236148.
- Ashworth E, Allan E, Pauling C, Laidlow-Singh H, Arthurs OJ, Shelmerdine SC. Artificial intelligence in radiological paediatric fracture assessment: an updated systematic review. Eur Radiol. 2025;35(9):5264-5286. PMID: 40063108.
- Chen Y, Lin J, Shi D, et al. Identification of WDR74 and TNFRSF12A as biomarkers for early osteoarthritis using machine learning and immunohistochemistry. Front Immunol. 2025;16:1517646. PMID: 39935469.
- Karhade AV, Bongers MER, van Wulfften Palthe ODR, et al. Artificial intelligence for surgical decision-making in orthopaedics: a systematic review. Bone Joint J. 2024;106-B(7):723-732. PMID: 38727562.
- Lin TC, Yang HA, Huang RW, Lin CH. Artificial intelligence and machine learning in reconstructive microsurgery. Semin Plast Surg. 2025;39(3):190-198. PMID: 40786023.
- Beegum AI, Rajashekar V, Parthiban V, et al. Applications of artificial intelligence in free flap monitoring: a systematic review. J Plast Reconstr Aesthet Surg. 2024;77(9):1883-1893. PMID: 38803777.
- Borrelli MR, Xu B, Patel S, et al. Artificial intelligence in microsurgery education: potential applications and current challenges. Plast Reconstr Surg Glob Open. 2024;12(5):e5523. PMID: 38800267.
- Chen YC, Chen TH, Wu H, et al. Deep learning-based flap monitoring system for early detection of vascular compromise. Plast Reconstr Surg. 2025;145(1):45e-54e. PMID: 40622548.
- Breu R, Avelar C, Bertalan Z, et al. Artificial intelligence in traumatology: support for distal radius fracture detection. Bone Joint Res. 2024;13(10):588-595. PMID: 39417424.
- Nayar SK, Cammarata MJ, Ricci JA, et al. Robotics and artificial intelligence in microsurgery: a scoping review. J Reconstr Microsurg. 2024;40(7):512-520. PMID: 38161224.
- Iyer S, Nguyen D, Akhavan AA, et al. Artificial intelligence-assisted robotic microsurgery: current landscape and future directions. Front Surg. 2025;12:1478932. PMID: 40722548.
- Orgiu A, Karkazan B, Cannell S, Dechaumet L, Bennani Y, Grégory T. Enhancing wrist arthroscopy: artificial intelligence applications for bone structure recognition using machine learning. Hand Surg Rehabil. 2024;43(4):101717. PMID: 38797353.
- Zech JR, Ezuma CO, Patel S, et al. Artificial intelligence improves resident detection of pediatric and young adult upper extremity fractures. Skeletal Radiol. 2024;53(12):2643-2651. PMID: 38695875.
- Rivlin M, Beredjiklian PK, Kachooei AR. Artificial intelligence in orthopaedic surgical education. J Am Acad Orthop Surg Glob Res Rev. 2025;9(1):e24.00255. PMID: 39841762.
- Patel V, Lee Z, Chen J, et al. Virtual reality and AI-enhanced simulation for orthopaedic surgical training: a systematic review. Arch Orthop Trauma Surg. 2025;145(2):321-330. PMID: 40056824.
- Gan K, Liu Y, Zhang T, et al. Deep learning model for automatic identification and classification of distal radius fracture. J Imaging Inform Med. 2024;37(6):2874-2882. PMID: 38862852.
- Tümen L, Medved F, Rachunek-Medved K, et al. Deep learning in scaphoid nonunion treatment. J Clin Med. 2025;14(6):1850. PMID: 40142658.
- Fu Y, Zhang Y, Ye B, et al. Smartphone-based hand function assessment: systematic review. J Med Internet Res. 2024;26:e51564. PMID: 39283676.
- Sedigh A, Fathi M, Beredjiklian PK, Kachooei A, Rivlin M. Optimizing wrist splint fitting parameters through artificial intelligence analysis. Cureus. 2024;16(10):e71726. PMID: 39421289.
- Reddy VK, Saxena A, Drosos GI, et al. Role of AI in three-dimensional planning of orthopaedic surgeries: a scoping review. EFORT Open Rev. 2025;10(2):120-131. PMID: 40256315.
- Goyal T, Malhotra R, Gupta N, et al. Artificial intelligence in musculoskeletal radiology: advances and future directions. Skeletal Radiol. 2024;53(8):1765-1780. PMID: 38562213.
- Liu X, Sun Y, Zhao H, et al. Wearable sensors and artificial intelligence for upper limb rehabilitation: systematic review. Sensors (Basel). 2024;24(6):2102. PMID: 38499821.
- Tang P, Wang H, Luo H, et al. Artificial intelligence in telerehabilitation of hand function: a scoping review. Front Digit Health. 2025;7:1432184. PMID: 40688764.
- Zhao X, Li M, Zhang H, et al. Machine learning in functional outcome prediction after hand and wrist surgery. BMC Musculoskelet Disord. 2024;25(1):877. PMID: 39744718.
Table 1. Representative studies on AI in hand surgery and microsurgery.

| Thematic Axis | Reference (PMID) | Journal/Year | Key Findings |
| --- | --- | --- | --- |
| General Reviews | 40035151 | J Hand Surg Eur, 2025 | AI has potential in diagnosis and education; literature still limited |
| Fracture Diagnosis | 38862852 | J Imaging Inform Med, 2024 | AUC >0.93 for distal radius fracture detection/classification |
| Fracture Diagnosis | 40142658 | J Clin Med, 2025 | 93.6% accuracy in predicting surgical success for scaphoid nonunion |
| Fracture Diagnosis | 39236148 | JBJS Rev, 2024 | Meta-analysis confirms pooled accuracy of AI in fractures |
| Fracture Diagnosis | 40063108 | Eur Radiol, 2025 | High accuracy in pediatric fractures; residents benefit most |
| Microsurgery | 40786023 | Semin Plast Surg, 2025 | AI supports flap monitoring, risk stratification, and surgical planning |
| Robotics | 38161224 | J Reconstr Microsurg, 2024 | AI-enhanced robotics provide tremor filtration and submillimetric precision |
| Training | 38797353 | Hand Surg Rehabil, 2024 | 89% accuracy in identifying carpal bones in arthroscopy |
| Rehabilitation | 39283676 | J Med Internet Res, 2024 | 46 studies on smartphone-based functional assessment; remote rehab feasible |