Preprint
Article

This version is not peer-reviewed.

Human-Supervised AI-Driven Smart Actuator System for Minimally Invasive Surgical Robotics

Submitted: 24 September 2025
Posted: 26 September 2025


Abstract
Background: AI has shown the potential to positively influence minimally invasive surgical robotics. Incorporation of AI can improve perception, planning, decision-making, and execution in this field. Existing systems include, but are not limited to, the STAR for supervised autonomous suturing, MAKO for orthopedic arthroplasty, CyberKnife for radiosurgery, and PROST for prostate interventions. Such systems have shown advancement in precision, image guidance, task automation, haptic feedback, and flexible safety management. However, unlike domains such as autonomous driving, surgical robotics has progressed more cautiously. Current platforms have been found to sporadically lack transparent supervision contracts, adequate surgeon-centric safety guarantees, standardized pathways for adaptive autonomy, embedded safeguards within their modes of operation, and validated metrics for assessing performance. Objective: This paper presents a conceptual framework for a human-supervised AI-driven smart actuator system focused on minimally invasive surgery. The goal of this paper is to propose a forward-looking proof of concept that formalizes surgeon authority, integrates AI-enabled perception and control, enforces provable safety constraints, enables adaptive assistance, and ensures continuous, patient-safe force regulation. Conceptual Design: This architecture incorporates compact backdrivable actuators. It also includes multimodal sensing that encompasses force and torque, pose, endoscopic vision, and tissue impedance, as well as an AI stack based on machine learning and reinforcement learning. The model delineates three operational modes: teleoperation enhanced by AI-based overlays; shared control that incorporates tremor suppression, virtual fixtures, and force regulation; and supervised autonomy, where specific subtasks are carried out under surgeon pedal-hold and confidence gating. Safety is ensured using control barrier functions and model predictive safety filters, which block unsafe actions proposed by reinforcement learning policies. Aside from this, human-factor elements feature confidence-aware visualization, multimodal anomaly detection, and options for immediate overrides. Contribution: This study outlines a research roadmap. Our contributions include a formalized supervised-autonomy contract, a layered safety design combining reinforcement learning with provable constraint enforcement, a surgeon-centered framework with immediate veto and transparency features, and a translational agenda spanning simulation, phantom, ex-vivo, and cadaveric validation. Conclusion: This paper aims to position AI as a cooperative assistant rather than an autonomous decision-maker in the field of robotic and precision surgery. The conceptual framework endeavors to address challenges in surgical robotics, including precision, safety, transparency, and supervisory oversight. It synthesizes lessons from current exemplars and articulates a pathway toward adaptive, auditable, transparently supervised, resilient, and patient-centered surgical systems.
Subject: Engineering - Other

1. Introduction

AI-driven surgical robotics plays a key role in modernizing the current healthcare paradigm. It ushers in noteworthy improvements in patient care and operational efficiency. Such advanced systems utilize AI and robotic arms to perform precise, minimally invasive surgeries that reduce blood loss, pain, and recovery times [1,2,3,4]. Studies show that AI assistance can increase surgical precision by 40% and reduce complications by 30% [5]. Applying such techniques further results in shorter hospital stays and faster patient recovery. Smart surgical robots are beneficial for complex procedures such as tumor removal, cardiac surgery, neurosurgery, and joint replacement. They improve patient outcomes and overall satisfaction [6].
AI analyzes real-time data to help surgeons make better decisions and coordinate tasks, cutting down on surgery times and improving resource management. While the initial investment in this technology is substantial, the long-term savings from fewer complications and increased efficiency make it a worthwhile consideration for many hospitals [7].
Implementing AI robotics also presents several challenges. The high cost of the equipment and ongoing maintenance can be a barrier, especially for smaller and rural hospitals [8,9,10]. This fuels disparities in access to high-quality care [11]. A steep learning curve requires extensive training for surgical teams. Such training frequently involves simulations and augmented reality. As AI and robotics continue to advance, future developments like remote surgery and digital twins could further transform intelligent surgeries [5]. This creates hope for a future of more personalized and accessible care, especially in surgery.

1.1. Robotics and Surgery

The convergence of robotics and surgery represents one of the most profound transformations in modern medicine. Here, the precision of engineered machines harmonizes with the intuition and judgment of skilled surgeons. At its essence, surgical robotics integrates mechanical systems, computer vision, machine learning, and ergonomically designed interfaces to augment and, at times, partially or fully automate surgical tasks [12]. This fusion is not designed to displace surgeons but rather to enhance their capability, control, and accuracy. The result is a shift from purely manual interventions toward technologically mediated procedures that offer both enhanced precision and reduced risk.
The idea of surgical robotics emerged in the late 1980s and 1990s. Early work and funding were driven by agencies such as the U.S. Defense Advanced Research Projects Agency (DARPA) and NASA, which sought to develop technology for remote surgery in combat zones and in space [13,14,15,16].
Early systems such as the PUMA 560 demonstrated the potential of robotic precision in neurosurgical biopsies. This was followed by ROBODOC in the field of orthopedics and AESOP, a voice-controlled laparoscopic camera-holding arm. Landmark progress came with the development of the ZEUS system in 1998 by Computer Motion, Inc. of Goleta, California, which enabled the first transatlantic telesurgery in 2001, known as Operation Lindbergh. The da Vinci platform, developed by Intuitive Surgical, Inc., demonstrated how robotic arms equipped with miniaturized instruments could replicate, and in many cases exceed, the surgical capacity of human hands [17]. It popularized teleoperated, multi–degree-of-freedom minimally invasive surgery with stereovision and wristed instruments. These systems established robotics as a transformative force in complex surgeries such as mitral valve repair, prostatectomy, and hysterectomy. These innovations made complex surgeries more controlled and less invasive, while preserving the surgeon’s oversight.
Contemporary innovations broaden these foundations into highly specialized and ergonomic systems. Platforms like the da Vinci SP feature single-port access [18]. Systems like CyberKnife and PROST have shown effectiveness in radiosurgery and prostate biopsy [19,20]. Versius focuses on surgeon comfort [21]. Orthopedic solutions such as TiRobot employ optical tracking and planning tools [22]. The CARLO laser osteotome, EndoQuest’s flexible endoluminal device, and Levita’s magnet-assisted system diversify surgical robotic applications [23,24,25].
Parallel advances in sensing and navigation, such as Proprio’s light-field 3D guidance, showcase the integration of intelligence into robotic platforms [26,27]. This trajectory parallels the history of artificial intelligence, which has evolved from symbolic reasoning to today’s multimodal deep learning systems capable of perception, language, and spatial reasoning. Together, these histories chart a convergence where AI augments robotic dexterity with adaptive guidance and decision support. This defines the incipient era of intelligent surgical robotics.
The canonical surgical robotics architecture can be described by three interacting modules: perception-navigation, surgical planning, and control-feedback [12,28]. Perception-navigation aggregates multi-source data, such as optical and magnetic navigation, pre- and intraoperative imaging, endoscopic video, and sensor feedback (e.g., force, temperature), to expand the surgical field of view and quantify the operative context. Yet, despite high-fidelity sensing, traditional systems struggle to interpret complex imagery, to fuse modalities, and to generalize in unstructured environments. Surgical planning translates this information into executable plans. Despite that, current workflows ordinarily rely on subjective human decisions that are hard to quantify and standardize. Control-feedback strategies execute plans through mechanical actuation. These strategies maintain safety in dynamic but constrained environments and support intuitive human-robot interaction. The control-feedback component is singularly challenging in unstructured surgical settings, where the interplay of human intent, robot motion, and tissue behavior is highly variable and time-critical [29].
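To make this three-module decomposition concrete, the following minimal Python sketch expresses the modules as abstract interfaces. All class names, method names, and fields are hypothetical illustrations introduced here, not components of any existing platform.

from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class SceneState:
    tool_pose: tuple       # estimated tool position and orientation
    tissue_map: object     # fused imaging/segmentation output
    contact_force: float   # sensed tool-tissue force (N)

class PerceptionNavigation(ABC):
    @abstractmethod
    def fuse(self, imaging, video, sensors) -> SceneState:
        """Aggregate multi-source data into a quantified operative context."""

class SurgicalPlanner(ABC):
    @abstractmethod
    def plan(self, state: SceneState) -> list:
        """Translate the perceived context into an executable motion plan."""

class ControlFeedback(ABC):
    @abstractmethod
    def execute(self, plan: list, state: SceneState) -> None:
        """Actuate the plan while maintaining safety and interaction constraints."""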
Robotic systems in surgery can be delineated across several categories. These comprise supervisory-controlled systems, tele-surgical or telesurgery systems, and shared-control systems [30]. A supervisory-controlled system involves a surgeon pre-programming a robot to execute a procedure autonomously. The surgeon superintends and can intervene in its operation. Such a system is often deployed in orthopedics and neurosurgery.
A tele-surgical system acts as an extension of the surgeon’s hands and eyes. It translates hand movements into scaled-down, tremor-free instrument actions. Within this system, a surgeon manually controls robotic arms from a separate, distant location [31]. The surgeon actively manipulates the robot’s instruments and performs the procedure in real-time. A tele-surgical system depends on real-time image feedback and a reliable, high-speed communication link for accurate control of the surgical manipulator. It permits an expert surgeon to consult and operate on patients in remote or underserved areas [32].
A shared-control system enables real-time collaboration between a surgeon and a robot during a surgical procedure. Both the surgeon and the robot are involved in handling the surgical instruments. The surgeon directly controls the robot’s movements but receives support and guidance from the system, which provides motion constraints and haptic feedback. The machine gives assistance to the surgeon and directs the surgeon’s actions along a desired path. The surgeon’s movements are stabilized and enhanced by the technology to improve precision and steadiness during complex surgeries [33].
The benefits of robotic assistance are perhaps best illustrated in procedures that demand extreme delicacy. For example, in ophthalmic microsurgery, robots are used to stabilize instruments beyond human capacity. Robots in such surgeries, where high precision is required to prevent any mishap, mitigate tremor during procedures such as retinal vein cannulation [34]. In orthopedics, robotic platforms such as Smith+Nephew’s CORI surgical system and THINK Surgical’s TSolution One (the successor to ROBODOC) empower surgeons to plan and execute joint replacement procedures with sub-millimeter accuracy [35,36]. This reduces the probability of implant misalignment. In neurosurgery, image-guided robotic systems guide precise electrode placement for deep brain stimulation to lower variability [37]. Hence, robotics in surgery brings consistency, reproducibility, and accuracy to fields where error margins are exceedingly narrow. This improves therapeutic accuracy and patient outcomes.
Robotic surgery finds significant applications in minimally invasive surgery (MIS). MIS is an approach that prioritizes smaller incisions and less tissue trauma for reduced pain and quicker recovery [38]. Robotics in minimally invasive surgery can be defined as the deployment of robotic technologies to optimize the efficacy, safety, and ergonomics of surgical interventions performed through small incisions. Robotics has furthered MIS by overcoming its inherent limitations such as restricted range of motion, limited visibility, and surgeon fatigue due to complexity and long operative times. In robotic-assisted MIS, slender mechanical arms maneuver through tiny ports. The surgeon guides them from a console that provides magnified, high-definition, three-dimensional visualization. The robotic instruments possess an enhanced range of articulation that exceeds that of the human wrist. This capability enables complex suturing and dissection through miniature incisions [39].
Microsurgery refers to surgical procedures performed on very small structures such as blood vessels, lymphatic channels, and nerves. It normally requires magnification under an operating microscope. For error-free surgeries and improved chances of survival and quality of life, such operations demand extreme precision, tremor-free motion, and delicate handling of tissues that may be less than a millimeter in diameter [40]. Another type of surgery is stereotactic surgery. It is a minimally invasive technique that relies on a three-dimensional coordinate system to locate small targets within the body, such as brain neoplastic lesions. In stereotactic surgeries, instruments and probes are guided within the body with high accuracy. Stereotactic procedures are pre-planned, image-guided, and involve limited instrument mobility once the trajectory is defined [41].
The challenges of developing robotic systems for microsurgery arise from the need to replicate and even exceed the fine motor control of a human hand at microscopic scales. Robots must provide dexterity, precision, motion scaling, real-time adaptation, and force feedback sensitive enough to handle fragile tissues without causing damage [42]. Unlike stereotactic procedures, where the path to the target can be mathematically calculated and executed with relatively rigid, predefined movements, microsurgery requires continuous adjustment, adroit manipulation, and real-time decision-making. During microvascular anastomosis in a reconstructive or replantation procedure or a neurosurgical operation, a microsurgical robot must insert a micro-needle into a vein thinner than a human hair without puncturing adjacent tissues. This is far more demanding than positioning a rigid probe along a planned stereotactic trajectory [43].
The design of robotic devices for microsurgery must account for the surgeon’s need for enhanced deftness and visual magnification in highly constrained operative fields. Instruments must be precisely miniaturized yet flexible and capable of mimicking complex wrist-like motions within tight anatomical spaces. The integration of advanced imaging systems, such as optical coherence tomography, with robotic platforms becomes essential to provide real-time visual guidance at microscopic scales. In stereotactic surgery, robotic devices primarily serve to increase targeting accuracy and stability. Once aligned, the instrument path is relatively fixed, reducing the complexity of required manipulations.

1.2. AI in Surgical Robotics

Artificial intelligence–assisted or AI-assisted surgical robotics is a burgeoning field. AI is poised to transform surgical robotics from precision assistants into more adaptive, perceptive, and semi-autonomous collaborators under surgeon supervision. The strengths of human surgeons include sound judgment, dexterity gained from training and experience, and keen situational adaptability. In contrast, the strengths of robotic systems derive from their mechanical advantage, comprising stability, precision, and integrative sensing. The present bottlenecks of robotic systems involve limited capability to interpret complex, dynamic surgical situations and to make granular and subtle decisions. AI is the bridge between the complementary strengths of surgeons and robotic systems. Machine learning, including deep learning and computer vision, together with reinforcement learning, can improve perception, planning, and control in robot-assisted surgeries. This leads to more consistent, safe, and efficient surgeries [44].
AI approaches were introduced into surgical robotics not only due to the recent rapid advances and growing accessibility of AI models, but also to overcome the limitations of classical methods [45,46,47,48]. In perception-navigation, classical approaches to medical image analysis fall into four broad groups: (1) preprocessing, such as noise reduction (e.g., Gaussian blur and median filtering), contrast enhancement (e.g., histogram equalization), and color space conversion (e.g., RGB to grayscale or HSV); (2) image segmentation and object localization, such as region-based segmentation, thresholding (e.g., Otsu’s method), and edge detection (e.g., the Canny and Sobel operators); (3) feature extraction and description, such as shape and geometric features, texture analysis (e.g., Gabor filters or Local Binary Patterns), and interest point detectors (e.g., the speeded-up robust features (SURF) and scale-invariant feature transform (SIFT) algorithms); and (4) mathematical morphology, such as erosion, dilation, and the top-hat and black-hat transforms. These methods cannot capture the richness and variability of clinical imagery, nor do they scale to real-time, context-aware guidance (Table 1).
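For illustration, a classical pipeline of this kind could be assembled with OpenCV as in the sketch below. This is a sketch only; the parameter values are arbitrary placeholders rather than settings tuned for surgical imagery.

import cv2
import numpy as np

def classical_pipeline(frame_bgr: np.ndarray) -> np.ndarray:
    # Preprocessing: color conversion, denoising, contrast enhancement
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    denoised = cv2.medianBlur(gray, 5)
    equalized = cv2.equalizeHist(denoised)

    # Segmentation: global thresholding with Otsu's method
    _, mask = cv2.threshold(equalized, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Edge detection: Canny operator on the equalized image
    edges = cv2.Canny(equalized, 50, 150)

    # Mathematical morphology: opening (erosion then dilation) to clean the mask
    kernel = np.ones((3, 3), np.uint8)
    opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Combine mask and edges as a crude object-localization output
    return cv2.bitwise_and(opened, cv2.bitwise_not(edges))

Such a pipeline makes the limitation plain: every threshold and kernel is hand-chosen, so it cannot adapt to the variability of real intraoperative scenes.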
AI’s shift from handcrafted rules to data-driven learning allows it to overcome the limitations of classical image analysis in surgical robotics. Instead of relying on a human to define features through specific algorithmic approaches, deep learning models like CNNs automatically learn them from vast datasets of annotated surgical images [49,50,51,52]. This makes them robust to structural and environmental variation such as tissue anomalies, anatomical aberrations, and lighting. It enables them to perform semantic segmentation, which classifies every pixel to create a context-aware understanding of the surgical scene. Once trained, AI models can process live video streams in real-time. This provides surgeons with immediate, intelligent guidance essential for complex and critical procedures [53].
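A minimal sketch of such data-driven segmentation follows, assuming a hypothetical checkpoint fine-tuned on annotated surgical images; the checkpoint path and four-class set (background, tissue, vessel, tool) are illustrative assumptions.

import torch
import torchvision

# Standard segmentation backbone; weights would come from fine-tuning on
# annotated surgical data (checkpoint path is hypothetical).
model = torchvision.models.segmentation.deeplabv3_resnet50(num_classes=4)
model.load_state_dict(torch.load("surgical_seg_checkpoint.pt"))
model.eval()

@torch.no_grad()
def segment(frame_rgb: torch.Tensor) -> torch.Tensor:
    """frame_rgb: float tensor (3, H, W), values in [0, 1].
    Returns a per-pixel class map (H, W)."""
    logits = model(frame_rgb.unsqueeze(0))["out"]  # (1, num_classes, H, W)
    return logits.argmax(dim=1).squeeze(0)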
There are other issues regarding the application of traditional methods in surgical robotics. Conventionally, intraoperative monitoring relied on manually set thresholds and subjective observation. Multimodal fusion, such as registering intraoperative X-rays to preoperative CT, demands cognitive skills that are difficult to encode. Navigation spans two broad paradigms: CAD/CAM systems that follow preset plans under surgeon supervision, common in orthopedics and neurosurgery, where optical or magnetic trackers are applied; and master–slave assistants, where surgeons operate tools via endoscopic views. Both benefit from AI’s ability to interpret and predict autonomously. This reduces reliance on operator experience and shortens learning curves [12]. Nevertheless, time savings early in adoption can be mixed, reflecting the complexity of AI models and possible latency between input and output.
Traditionally, surgical planning involves computer-assisted manual planning. Examples include screw trajectories in spine surgery and reduction strategies in trauma cases. This improves quantification and reproducibility [54]. Nonetheless, it is subjective and labor-intensive. Planning for assistant-style robots essentially occurs implicitly as the surgeon operates. However, this complicates standardization and post hoc analysis. Embedding AI into planning can turn tacit expertise into explicit, learnable patterns. This, in turn, automates elements such as puncture path selection, reduction trajectory design, implant sizing, and tissue tracking. This further improves consistency, reduces radiation exposure in radiotherapy, and preserves surgeon time for higher-level judgment [55,56].
In the realm of control-feedback, human–robot interaction (HRI) and robot–environment interaction (REI) assume importance. HRI considerations are aimed towards more natural ways for surgeons to express intent, improved situational awareness, and reliable oversight and override mechanisms. In laparoscopic settings, for example, camera guidance is often delegated to a human assistant. This potentially results in inefficiencies and communication challenges. Training and skill assessment are essential. However, they are frequently subjective and resource-intensive. REI considerations focus on the robot’s safe and effective engagement with tissue in environments characterized by limited field of view, randomly moving targets, and deformable anatomy such as dermis. Suturing exemplifies challenges such as the lack of native haptics. In many master–slave systems, this paucity complicates force regulation. It risks suture breakage and tissue damage. Emerging haptic-enabled systems promise to mitigate this. Yet, comprehensive solutions need better sensing, predictive control, and adaptive policies.
In perception/navigation, deep networks, such as encoder–decoder models and transformers, show the potential to improve segmentation of organs and tools across modalities. Transformer-based and multiscale fusion networks can boost instrument tracking under occlusions and motion. Unsupervised and weakly supervised methods such as clustering reduce annotation burdens. For pathology classification, AI excels in pattern recognition from imaging such as spectroscopy. In complex signal processing, deep models can enable fast, precise intraoperative 3D reconstruction and registration, distal force estimation in tendon–sheath mechanisms without added sensors, and monocular depth estimation via domain-adapted adversarial networks and deep convolutional neural network-conditional random field (CNN-CRF) models [12,57]. The supervised adversarial network was possibly used for domain adaptation, which involves making synthetic data look more like real data. This was likely done because of class imbalance or lack of overall data, which might have led to inefficient training. CNN-CRFs were used to perform the actual depth estimation. For multi-source fusion, learning-based registration frameworks can align 2D fluoroscopy with 3D CT or estimate organ pose from endoscopic streams, improving guidance accuracy and speed.
AI is shifting surgical robotics from computer-assisted manual planning to surgeon-assisted computer planning. Systems learn safe trajectories, detect anatomy, automate implant placement, adapt needle paths under deformation, enable autonomous endoscope navigation and workflow parsing, and improve control stability, camera tracking, and tremor compensation. They also support scalable skill assessment [58]. This includes NLP analysis and adaptation of operating room communication logs. REI advances involve instances like safe autonomous intracardiac navigation, reinforcement learning–optimized needle insertion and autonomous suturing. Clinical validation is early with limited human trials; larger studies are needed. Ethically and legally, surgeon primacy, accountability for higher autonomy, hybrid rule/learning designs, robust consent, rigorous regulation, and privacy-compliant data governance with clinician oversight must be emphasized.
The FDA’s 510(k) pathway in the USA is a premarket submission that allows medical device manufacturers to demonstrate that their new device is substantially equivalent in safety and effectiveness to a legally marketed predicate device already on the market [59]. Furthermore, the European Union (EU) regulates surgical robotic devices through Conformité Européenne (CE) Mark certification, which ensures compliance with the EU’s safety, health, and environmental protection standards, within the ambit of the European Medical Devices Directive [60].
Several technical dimensions must be addressed to achieve reliable, safe, and clinically meaningful integration of AI and surgical robotics. Development of models should emphasize explainable and deterministic approaches that enhance interpretability and reproducibility. These should simultaneously improve real-time performance through model compression, hardware-optimized deployment, and mechanisms for graceful degradation in the presence of uncertainty [61]. Limited data availability must be overcome. This requires multi-institutional collaboration, standardized annotation frameworks, dataset amalgamation, and privacy-preserving sharing strategies such as federated learning. These efforts can be supplemented with synthetic data derived from high-fidelity surgical simulation and generative models. This enables both pretraining and data augmentation for downstream tasks. Human–robot coordination is important. This depends on the design of intuitive surgeon interfaces and advanced feedback modalities. Richer haptic feedback has the potential to objectify traditionally subjective judgments, quantify ambiguous intraoperative indicators, and potentially automate repetitive actions. Building calibrated trust is central to clinical adoption of such technologies. This requires the establishment of measurable safety improvements, transparent oversight mechanisms, robust logging for auditability, and clearly defined autonomy boundaries. Reliable pathways for manual handover must also be guaranteed, ensuring that the balance between trust and autonomy evolves alongside the system’s safeguards (Table 2).

1.3. Actuators in Minimally Invasive Surgery

The intersection of actuators and minimally invasive surgery (MIS) brings together mechanical ingenuity and clinical precision to enhance patient outcomes. MIS is defined as the performance of surgical procedures through small incisions with specialized instruments and cameras. Compared to conventional open surgery, MIS reduces trauma, accelerates recovery, and minimizes scarring [62]. At the heart of these procedures lie actuators. Actuators convert energy into precise motion while balancing precision, force, size, and safety. These are devices that translate human effort and other forms of energy into controlled mechanical movement. They serve as the fundamental drivers of surgical tools. They enable delicate manipulations within constrained anatomical spaces. Without actuators, the dexterity and precision required for MIS would remain unattainable. This makes them indispensable to the development of advanced surgical systems and highlights how actuators transform abstract surgical commands into precise physical actions, bridging the gap between surgeon intent and patient care [63].
The delineation of actuator design in MIS also reveals the unique challenges associated with the field. These devices must operate reliably in a compact form factor. While doing so, they must maintain sterility, compatibility with delicate tissues, and resistance to fatigue over repeated use. Engineers must, therefore, balance force, precision, and miniaturization when designing and constructing these devices. For instance, in robotic-assisted laparoscopic surgery, actuators must allow multi-degree-of-freedom motion while remaining small enough to fit through trocars less than a centimeter in diameter. A practical illustration can be found in flexible robotic catheters. They rely on miniature Shape Memory Alloy (SMA) actuators to navigate tortuous vascular pathways without damaging vessel walls [64]. These innovations show how actuator engineering must be fine-tuned to the specific biomechanical and clinical constraints of MIS.
There are various types of actuators in MIS [65]. Electromechanical motors (as in systems like da Vinci) are mature, accurate, and easy to control, but their bulk limits extreme miniaturization. Piezoelectric actuators are characterized by fast, compact, high-precision motion. They are frequently used for ultrasonic tools and microsurgery. Yet, they suffer from short stroke and high-frequency drive requirements. Pneumatic actuators enable soft, compliant, tissue-safe interaction. However, they sacrifice precision and linearity and require an external air supply. Hydraulics deliver smooth, high forces useful for orthopedic tools, but fluid lines and leakage risks constrain their use in delicate settings.
SMA actuators are silent, miniaturized devices used in applications such as steerable catheters. However, they exhibit slow response, hysteresis, and fatigue under cycling. Magnetic actuation provides wireless manipulation, as in magnet-assisted tools and capsules. It reduces invasiveness and is thus well-suited for MIS. However, force drops with distance into the body, and field control is challenging. Electrostatic actuators perform excellently in microsurgery that applies micro-electro-mechanical systems, owing to their precise and scalable motion. However, they offer low force and are environmentally sensitive. Hybrid systems combine various actuating mechanisms, such as pneumatic–hydraulic and piezo–electromagnetic designs. They balance compliance, force, and precision, albeit at the cost of higher design and control complexity. In sum, no single actuator fits all MIS needs. Progress is trending toward integrated hybrid approaches coupled with advances in materials and control algorithms to deliver instruments that are simultaneously efficient, precise, compact, cost-effective, and safe (Table 3).
The integration of AI into actuators in minimally invasive surgery marks the next transformative step in this trajectory. Traditionally, actuators respond to direct surgeon inputs. AI-enabled actuators, by contrast, can interpret complex surgical contexts, optimize their own responses, and even anticipate the surgeon’s needs. These actuators embed machine learning algorithms into their control systems [66]. Such surgical robots can adaptively adjust force, speed, and trajectory in real time. For example, an AI-driven actuator could prevent inadvertent tissue damage by automatically limiting applied force when resistance patterns suggest fragile anatomy. In microsurgical procedures, AI-enhanced piezoelectric actuators could achieve sub-millimeter accuracy by compensating for tremors or predicting motion patterns [67]. This convergence of AI and actuation introduces not only greater precision but also an element of autonomous decision support. It points toward semi-autonomous surgical systems that amplify rather than replace the surgeon’s expertise.
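As a toy illustration of this force-limiting idea, the sketch below attenuates a commanded force when an online stiffness estimate suggests fragile tissue. The thresholds and the stiffness heuristic are hypothetical placeholders, not clinically validated values.

def limit_force(commanded_force_n: float,
                estimated_stiffness_n_per_mm: float,
                fragile_stiffness_threshold: float = 0.5,   # placeholder
                fragile_force_cap_n: float = 1.0,           # placeholder
                default_force_cap_n: float = 5.0) -> float: # placeholder
    """Clamp the commanded force based on an online tissue-stiffness estimate.

    A low stiffness estimate is treated here as a proxy for fragile tissue,
    triggering a tighter force cap."""
    cap = (fragile_force_cap_n
           if estimated_stiffness_n_per_mm < fragile_stiffness_threshold
           else default_force_cap_n)
    return max(-cap, min(commanded_force_n, cap))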

2. Background

The past two decades have witnessed notable developments in intelligent surgical systems. These advances have been driven by progress in robotics, augmented reality, haptic technology, predictive analytics, computer vision, large language models (LLMs), and multimodal AI. Robotic platforms such as the da Vinci Surgical System have transformed urological, gynecological, thoracic, colorectal, bariatric, and general surgical procedures. The da Vinci Surgical System, introduced by Intuitive Surgical, Inc., saw its first clinical use in 1999 and received FDA approval in 2000 [68].
Systems such as the da Vinci system are harbingers of minimally invasive techniques in surgery. These techniques offer improved precision, tremor filtration, precise implant placement, unbiased soft-tissue assistance, magnified visualization, optimized resections, reduced patient pain and blood loss, faster recovery, and greater surgeon control, confidence, and comfort. More recently, orthopedic navigation systems such as ROSA Knee and Mako SmartRobotics have integrated AI-based planning and intraoperative feedback to augment implant positioning [69]. These developments demonstrate that technology has reached a level of sophistication that was barely imaginable a decade ago.
However, despite these advancements, the role of the human surgeon in robotic surgeries remains central, as intelligent surgery requires continuous surgeon-centered monitoring to ensure safety, adaptability, and ethical responsibility. One of the strongest arguments for human oversight lies in the unpredictability of surgical practice. Smart systems excel at repetitive tasks and structured decision-making but are inherently limited by their training data and algorithmic scope. For example, it has been noted that during robotic-assisted cardiac procedures, unexpected or rare tissue variations such as calcification and fragility require improvisation beyond the programmed parameters of robotic systems [70].
Current intelligent technologies have limited capability in recognizing abnormal bleeding during surgeries. Some surgical technologies can trigger compensatory suction in the form of a remedial action. However, such technologies still lack the competence to devise a novel surgical pathway in real time. Only the surgeon’s expertise, developed through effective clinical training and sustained professional experience, can adapt strategies under such unpredictable conditions [71,72].
Another vital dimension that arises in case of smart robotic surgeries is that of ethical decision-making and patient-centered judgment. Intelligent algorithms may suggest technically optimal interventions but cannot balance them against broader patient values [73]. For instance, in neurosurgical oncology, AI-assisted resection planning can recommend wide excision margins to maximize tumor clearance. This can increase chances of saving the life of the patient and reduce the probability of cancer recurrence or neoplastic metastasis, thus potentially improving their lifespan. Yet, a surgeon may decide on a more conservative approach to preserve speech or motor function, accepting a slightly higher risk of recurrence in favor of quality of life. This form of moral reasoning and context-sensitive decision-making is beyond the capacity of autonomous systems. This makes human oversight indispensable in intelligent robotic surgeries.
The risks inherent to technology itself also underscore the necessity of human monitoring. Between 2000 and 2013, the FDA’s MAUDE database documented over 10,000 reports related to robotic surgery adverse events, including more than 8,000 device malfunctions encompassing system errors, instrument failures, and software faults. These malfunctions led to incidents such as instrument fragments falling into patients, electrical arcing, unintended instrument operation, and video or imaging errors. A comprehensive study found 144 deaths, 1,391 patient injuries, and 8,061 device malfunctions reported during this period [74].
Although improvements in design have reduced such incidents since the study was published, hardware and software failures remain ineluctable in complex systems. Without a surgeon monitoring in real time, such failures could escalate into life-threatening complications. As with aviation, where autopilot functionality requires pilot supervision, robot-assisted surgery demands human oversight as a fail-safe mechanism against technological vulnerabilities.
The human connection in surgery is nonpareil. Trust in medicine has been considered a technical contract between patient and physician. However, it is really an interpersonal relationship between them. In studies of patient perceptions of robotic surgery, individuals consistently report greater comfort when assured that their surgeon maintains active control throughout the procedure. A 2024 review highlighted that the public perceives robotic surgery as riskier and shows reluctance unless reassured that a skilled surgeon is ultimately in charge and supervising the robotic system [75]. This finding illustrates that patient confidence depends on the presence of a responsible human operator rather than blind reliance on automation. Surgeon-centered monitoring, therefore, is not just a technical necessity but a cornerstone of patient trust and therapeutic alliance.
Intelligent surgical systems have reformed medical operative practice by improving precision, minimizing invasiveness, integrating AI-based decision support, fast-tracking routine surgeries, and reducing surgical costs for patients, doctors, and medical service providers. Such systems, however, must continue to be classified as smart tools rather than autonomous agents. The human surgeon is indispensable in providing adaptability to unforeseen complications. He or she exercises ethical judgment and mitigates technological risks. These are key to maintaining patient trust. Intelligent surgery must therefore be guided by surgeon-centered monitoring. Surgical technology must serve as augmentation rather than substitution of human expertise. Future innovation should aim at strengthening this collaboration. We must ensure that intelligent systems amplify the surgeon’s overseeing role in patient care.

3. Methods

The development of surgical robotics has entered a phase where the question is no longer whether machines can support physicians in complex procedures but how to integrate them without undermining human authority. This proof of concept proposes an intelligent actuator system for minimally invasive surgery that augments, rather than replaces, the expertise of the surgeon. The predominant goal of suggesting this device is to enhance stability, precision, and efficacy by embedding artificial intelligence paradigms such as machine learning and reinforcement learning into the actuation pipeline for automation. Despite that, the defining principle of the design is that the surgeon remains in command at all times. The robot is framed as a cooperative assistant, never a substitute decision-maker. Every design choice is oriented around provable safety, transparent behavior, and full auditability. A simple flowchart of the system is provided in Figure 1. The same is provided in the numbered pseudocode format in the form of Algorithm 1.
Algorithm 1: Human-Supervised Intelligent Surgical Actuator System
Input:
• Selected mode ∈ {Teleoperation, Shared Control, Supervised Autonomy}
• Surgeon commands
• Sensor feedback (force, position, safety signals)
• System confidence estimate (τ threshold)
• Dead-man switch state
Output:
• Safe execution of motion commands through actuators
• Possible reversion to Shared Control or system stop on anomaly
1: function SurgicalControl(Mode, SurgeonCommands, Sensors)
2: switch Mode do
3: case Teleoperation:
4: Commands ← SurgeonCommands
5: Commands ← SafetyFilter(Commands)
6: Actuators ← LowLevelControl(Commands)
7: Execute(Actuators)
8:
9: case Shared Control:
10: Assist ← Assistance(SurgeonCommands)
11: Commands ← CommandMix(SurgeonCommands, Assist)
12: Commands ← SafetyFilter(Commands)
13: Actuators ← LowLevelControl(Commands)
14: Execute(Actuators)
15:
16: case Supervised Autonomy:
17: if not PreconditionsMet(Sensors) then
18: return SurgicalControl(Shared Control, SurgeonCommands, Sensors)
19: end if
20: Commands ← ExecutePolicy(Sensors)
21: Commands ← SafetyFilter(Commands)
22: Actuators ← LowLevelControl(Commands)
23: Execute(Actuators)
24: end switch
25:
26: while TaskNotComplete do
27: if AnomalyDetected(Sensors) or OverrideDetected() then
28: StopMotion(<100 ms)
29: DisableTorque()
30: return SurgicalControl(Shared Control, SurgeonCommands, Sensors)
31: end if
32: end while
33: end function
34:
35: function EStop()
36: StopMotion(<100 ms)
37: DisableTorque()
38: return SurgicalControl(Shared Control, SurgeonCommands, Sensors)
39: end function
40: function Assistance(SurgeonCommands)
41: // Apply tremor suppression, virtual fixtures, and force limits
42: return AssistedCommands
43: end function
44:
45: function CommandMix(SurgeonCommands, Assist)
46: // Combine raw surgeon input with assistive corrections
47: return MixedCommands
48: end function
49:
50: function SafetyFilter(Commands)
51: // Enforce safety constraints (e.g., hard limits, CBF, MPSF)
52: return SafeCommands
53: end function
54:
55: function LowLevelControl(Commands)
56: // Convert commands into actuator-level signals
57: return ActuatorSignals
58: end function
59:
60: function Execute(Actuators)
61: // Send actuator signals for motion execution
62: end function
63:
64: function PreconditionsMet(Sensors)
65: if DeadManPressed(Sensors) = false then return false
66: if Confidence(Sensors) < Threshold then return false
67: if not InGreenZone(Sensors) then return false
68: if not SensorsNominal(Sensors) then return false
69: return true
70: end function
71:
72: function ExecutePolicy(Sensors)
73: // Choose bounded primitive or learned RL policy
74: return PolicyCommands
75: end function
76:
77: function AnomalyDetected(Sensors)
78: // Check for anomaly, confidence drop, or zone exit
79: return Boolean
80: end function
81:
82: function OverrideDetected()
83: // Detect explicit surgeon override
84: return Boolean
85: end function
Clinical procedures such as endoscopic submucosal dissection in gastroenterology and microsuturing in urology and gynecology serve as ideal use cases for this design. These tasks are constrained and repetitive. Nevertheless, they demand extraordinary delicacy. They present a setting where shared autonomy can meaningfully reduce human workload without lessening the surgeon’s control. The envisioned system provides three operational layers. The baseline is pure teleoperation, where the robot acts only as a stable intermediary and AI modules annotate the field. The default mode is shared control, as shown in Algorithm 1 as well as the flowchart (Figure 1). Here, the surgeon specifies goals and the system stabilizes hand motion, suppresses tremor, enforces virtual fixtures, and modulates force and velocity. The most advanced mode is supervised autonomy. It is designed for brief subtasks such as following a defined cut path or maintaining a safe force threshold. Even here, autonomy is permitted only under strict gating conditions. The surgeon must maintain active engagement through a dead-man switch, safety constraints must remain unviolated, and system confidence must exceed a calibrated threshold. Autonomy halts instantly with a pedal release, manual override, or if an anomaly is detected (Table 4).
The technical foundation of AI-powered surgical actuation rests on robust instrumentation. Miniaturized brushless motors and piezoelectric stacks should deliver precise and backdrivable motion. Integrated brakes must ensure safe holds. Multimodal sensing captures six-axis forces, motor currents, tissue impedance, and thermal feedback during cautery. Stereovision capability and electromagnetic tracking provide tool localization. These inputs support the construction of anatomy-aware virtual fixtures. These are software guard rails that constrain motion to safe paths.
On the intelligence side, perception modules are based on fine-tuned foundation models that segment tissue layers and vessels in real time. The models report uncertainty to both the surgeon and the safety layer. Control modules employ reinforcement learning wrapped in safety filters such as control barrier functions or predictive safety shields. These filters aim to guarantee that any proposed action violating force, velocity, or workspace constraints is automatically rejected. Adaptive impedance controllers learn tissue stiffness online to optimize safe interaction. Libraries of learned movement primitives execute bounded maneuvers such as knot pulling or fine cutting. Anomaly detection combines vision and force data to flag slips, bleeding, or tissue delamination, triggering immediate slow-downs, haptic cues, and surgeon confirmation pipelines.
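A minimal one-dimensional sketch of such a safety filter, based on a control barrier function (CBF) condition, is given below. Given the tool's distance to a forbidden zone, any commanded approach velocity is clipped so that the barrier h = d - d_min satisfies h_dot >= -alpha * h and can never be driven negative. The gain and distance limits are placeholder assumptions.

def cbf_safety_filter(commanded_velocity_mm_s: float,
                      distance_to_boundary_mm: float,
                      d_min_mm: float = 2.0,   # placeholder standoff distance
                      alpha: float = 5.0) -> float:  # placeholder CBF gain
    """Positive velocity means motion toward the forbidden zone."""
    h = distance_to_boundary_mm - d_min_mm  # barrier value; safe while h >= 0
    max_approach = alpha * max(h, 0.0)      # fastest admissible approach speed
    return min(commanded_velocity_mm_s, max_approach)

By construction, the closer the tool gets to the boundary, the smaller the admissible approach velocity, and motion toward the zone is blocked entirely once the standoff distance is reached; a full system would apply the same condition jointly over force, velocity, and workspace constraints.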
The human-machine interface is designed with transparency and trust in mind. Surgeons receive confidence-aware overlays that visually fade when algorithmic certainty declines. A three-line status display conveys operating mode, safety state, and system confidence. Haptic channels deliver tremor suppression, force reflection, and gentle repulsion near prohibited zones. Surgeons can disengage autonomy instantly through multiple redundant affordances including foot pedal, clutch, and voice command, while continuous force regulation ensures that tissue loading remains within safe limits in all modes. Each of these is designed to stop motion within milliseconds (Table 5).
Safety is codified at several layers: hard interlocks constrain tip speed and force, software shields filter all learned policies, and mode guarding prevents autonomy outside verified green zones. High explainability is maintained. On-demand cards summarize path plans, segmented structures, and active constraints; post-hoc counterfactuals show what would have occurred without safety filtering. A failure-protected device, similar to a flight’s black-box recorder, logs all data streams, surgeon inputs, and software versions. This device supports audit, quality assurance, and incident investigation. Development practices are aligned with international standards for medical devices, risk management, usability, and cybersecurity. The requirement of continuous human supervision is formalized in hazard analyses and design controls (Table 6).
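The flight-recorder concept could be prototyped as an append-only, hash-chained event log, as in the sketch below; the field names and in-memory storage are hypothetical simplifications of a tamper-evident implementation.

import hashlib, json, time

class BlackBoxRecorder:
    def __init__(self, software_version: str):
        self._prev_hash = "0" * 64
        self._entries = []
        self.log("boot", {"software_version": software_version})

    def log(self, event: str, payload: dict) -> None:
        # Each entry chains to the previous entry's hash, so any later
        # tampering with the log history is detectable.
        entry = {"t": time.time(), "event": event, "payload": payload,
                 "prev": self._prev_hash}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self._prev_hash = digest
        self._entries.append(entry)  # in practice: flushed to durable storage

# usage: recorder.log("override", {"source": "foot_pedal", "mode": "autonomy"})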
Validation should stress realism and rigor. Perception modules are pretrained on large surgical video corpora and fine-tuned for organ-specific subtasks. Reinforcement learning models are trained in high-fidelity simulation using photo-realistic digital twins and finite-element tissue models. Bench testing begins with synthetic phantoms. It measures accuracy, path error, and tissue interaction forces. It progresses to ex vivo tissue models that evaluate cut quality and hemostasis. User studies compare novice and expert surgeons in different modes of operation. Metrics including workload, error rates, override frequency, and tissue damage scores are employed for evaluation. Strict stopping rules ensure that safety interventions and unacknowledged anomalies automatically pause autonomous features (Table 7).
Milestones progress from baseline teleoperation with virtual fixtures, through shared control and supervised autonomy, to cadaveric studies under institutional review. They culminate in risk-refined prototypes suitable for regulatory pre-submission (Table 8).
The research agenda that unfolds from this concept raises questions that are both technical and human-centered. It asks whether reinforcement learning under safety filters compares with classical impedance control in terms of constraint adherence and surgeon workload. It asks whether confidence-aware visualization adequately reduces unnecessary interruptions and improves tissue handling. It asks whether override patterns can serve as predictors of near-miss events, and whether the interface proactively cues surgeon attention. Addressing such questions should not only sharpen the technology but also refine the principles of supervised autonomy in surgery.
The system should deliver not just a prototype robot but a framework. The project provides open-source safety modules. It also provides formal proofs demonstrating adherence to operational constraints. Curated multimodal datasets for perception and control will also be provided, supporting reproducibility and enabling further research. Human-factors studies will supply evidence that surgeon workload can be reduced without compromising authority over the procedure. At its core, the work articulates a contract of supervised autonomy. The robot must declare its intent, disclose its confidence, and accept immediate human veto. When such an ethos is embedded, the project positions itself as a blueprint for the future of intelligent surgical robotics.

4. Discussion

The proposed human-in-the-loop surgical actuator blueprint demonstrates design and technical feasibility. However, several limitations warrant careful consideration. The actuators, although miniaturized and compact, may still face constraints in extreme microsurgical spaces [76,77]. This may limit their applicability in procedures with ultra-confined anatomy such as capillaries in eyes [78]. Teleoperation, even when enhanced with AI-based overlays, relies heavily on the surgeon’s ability to interpret augmented visual cues [80]. Misalignment between AI annotations and actual tissue states could create subtle cognitive burdens [81]. Supervised autonomy, while gated by dead-man switches and confidence thresholds, may not be able to entirely eliminate risks associated with unmodeled tissue behavior and unexpected intraoperative events. The formalized provable constraint enforcement offers a layer of protection. Despite that, its assurances are bound to model accuracy and sensor fidelity. This may leave substantial residual uncertainty in dynamic, patient-specific conditions.
Human factors, in spite of safeguards such as confidence-aware visualization, multimodal anomaly detection, and immediate override options, present additional challenges. Surgeons must remain continuously attentive [82,83]. The cognitive load of monitoring confidence overlays and status indicators may partially offset workload reduction [84,85]. Thresholds for autonomy gating, such as confidence and force limits, require careful standardization [86]. Overly conservative settings may degrade efficiency. On the other hand, permissive thresholds may increase safety risks. The supervised-autonomy contract is conceptually robust [87]. But it depends on strict adherence to operational protocols. It may also be difficult to enforce consistently in high-stress surgical environments.
Validation protocols, including simulation, phantoms, ex-vivo tissue, and cadaver studies, are limited by their fidelity to real clinical scenarios. Tissue properties, bleeding dynamics, and unexpected anatomical variation are further difficult to replicate fully [88,89]. This can inflate performance estimates in controlled environments. The integration of AI paradigms such as ML and reinforcement learning into surgical control systems introduces software complexity, time lag, and potential for emergent failures [90,91]. This requires rigorous long-term monitoring and iterative refinement [92,93,94]. The system is designed to be auditable, transparent, and patient-centered. However, possible limitations accentuate the fact that AI-assisted surgical actuation cannot replace the nuanced judgment of the human surgeon [95,96].

5. Conclusions

Our work advances a surgeon-supervised paradigm for intelligent actuation in minimally invasive surgery. It addresses the absence of explicit supervision contracts, the need for provable safety wrapped around learning-based control, and practical pathways for adaptive assistance that preserve surgeon authority. We introduce a conceptual architecture that couples compact, backdrivable actuation and multimodal sensing with an AI stack for perception and control. It is governed by a formalized supervised-autonomy contract. The system operates across three modes: teleoperation with AI overlays; shared control with tremor suppression and anatomy-aware virtual fixtures; and tightly bounded supervised autonomy. Each of these is gated by surgeon engagement, confidence thresholds, and real-time safety checks.
The crux of the design is a layered safety framework that pairs reinforcement learning and learned skill primitives with constraint-enforcing filters, enforced workspace limits, immediate human override through redundant affordances, continuous force regulation to protect tissue, and confidence-aware AI guidance for responsible decision support. Confidence-aware visualization, multimodal anomaly detection, logging, and predictive intent modeling provide transparency, auditability, surgeon-centric situational awareness, anticipatory decision support, and adaptive safety assurance.
Nevertheless, there are possible drawbacks to look out for in such a proof of concept. Safety guarantees are bounded by model fidelity, sensing quality, sim-to-real transfer, unmodeled tissue variability, and latency in human-robot interaction. Extreme microsurgical workspaces and unmodeled events have the potential to challenge the performance of the robotic system. Vigilance demands may shift cognitive load rather than eliminate it. To overcome the limitations, early results must be interpreted watchfully and rigorously confirmed in larger, diverse studies.
The constraints underscore our framing of AI as an augmentative assistant rather than an autonomous decision-maker in surgery. We believe that future work should focus on robust clinical validation. Stronger formal guarantees for contact-rich interaction, such as passivity layers with verifiable reinforcement learning, should also be considered and incorporated. Data infrastructure, such as privacy-preserving multi-institutional amalgamated super-datasets, standardized annotations, open benchmarks, interoperable data formats, and reproducible evaluation pipelines, can prove essential. Regulatory strategies that support controlled model updates will also help.

References

  1. Wah, J.N.K. Revolutionizing Surgery: AI and Robotics for Precision, Risk Reduction, and Innovation. J. Robot. Surg. 2025, 19, 47.
  2. Fairag, M.; Almahdi, R.H.; Siddiqi, A.A.; Alharthi, F.K.; Alqurashi, B.S.; Alzahrani, N.G.; Alsulami, A.; Alshehri, R. Robotic Revolution in Surgery: Diverse Applications Across Specialties and Future Prospects. Cureus 2024, 16, e52148.
  3. Saxena, R.; Khan, A. Assessing the Practicality of Designing a Comprehensive Intelligent Conversation Agent to Assist in Dementia Care. In Kim, J. (Ed.), Proceedings of the 17th International Joint Conference on Biomedical Engineering Systems and Technologies (BIOSTEC 2025), Volume 2: HEALTHINF, Rome, Italy, 16–18 February 2025; SCITEPRESS: Setúbal, Portugal, 2025; pp. 655–663.
  4. Saxena, R.; Khan, A. Machine Learning-Based Clinical Decision Support Systems in Dementia Care. In Kim, J. (Ed.), Proceedings of the 17th International Joint Conference on Biomedical Engineering Systems and Technologies (BIOSTEC 2025), Volume 2: HEALTHINF, Rome, Italy, 16–18 February 2025; SCITEPRESS: Setúbal, Portugal, 2025; pp. 664–671.
  5. Wah, J.N.K. The rise of robotics and AI-assisted surgery in modern healthcare. J. Robot. Surg. 2025, 19, 311.
  6. Reddy, K.; Gharde, P.; Tayade, H.; Patil, M.; Reddy, L.S.; Surya, D. Advancements in robotic surgery: A comprehensive overview of current utilizations and upcoming frontiers. Cureus 2023, 15, e50415.
  7. Ergenç, M. Artificial intelligence in surgical practice: Truth beyond fancy covering. Turk. J. Surg. 2025, 41, 118–120.
  8. Ahmed, M.I.; Spooner, B.; Isherwood, J.; Lane, M.; Orrock, E.; Dennison, A. A Systematic Review of the Barriers to the Implementation of Artificial Intelligence in Healthcare. Cureus 2023, 15, e46454.
  9. Saxena, R.R.; Khan, A. Modernizing Medicine Through a Proof of Concept that Studies the Intersection of Robotic Exoskeletons, Computational Capacities and Dementia Care. In Alsadoon, A.; Shenavarmasouleh, F.; Amirian, S.; Ghareh Mohammadi, F.; Arabnia, H.R.; Deligiannidis, L. (Eds.), Health Informatics and Medical Systems and Biomedical Engineering. CSCE 2024. Communications in Computer and Information Science, vol. 2259; Springer: Cham, Switzerland, 2025; pp. 379–390.
  10. Deo, N.; Anjankar, A. Artificial Intelligence With Robotics in Healthcare: A Narrative Review of Its Viability in India. Cureus 2023, 15, e39416.
  11. Maleki Varnosfaderani, S.; Forouzanfar, M. The Role of AI in Hospitals and Clinics: Transforming Healthcare in the 21st Century. Bioengineering 2024, 11, 337.
  12. Liu, Y.; Wu, X.; Sang, Y.; Zhao, C.; Wang, Y.; Shi, B.; Fan, Y. Evolution of surgical robot systems enhanced by artificial intelligence: A review. Adv. Intell. Syst. 2024, 6, 2300268.
  13. George, E.I.; Brand, T.C.; LaPorta, A.; Marescaux, J.; Satava, R.M. Origins of robotic surgery: From skepticism to standard of care. JSLS 2018, 22, e2018.00039.
  14. Zirafa, C.C.; Romano, G.; Key, T.H.; Davini, F.; Melfi, F. The evolution of robotic thoracic surgery. Ann. Cardiothorac. Surg. 2019, 8, 210–217.
  15. Basic Medical Key. The History of Robotic Surgery. Available online: https://basicmedicalkey.com/the-history-of-robotic-surgery/ (accessed on 26 August 2025).
  16. Smithsonian Magazine. The Past, Present, and Future of Robotic Surgery. Available online: https://www.smithsonianmag.com/innovation/the-past-present-and-future-of-robotic-surgery-180980763/ (accessed on 26 August 2025).
  17. Pugin, F.; Bucher, P.; Morel, P. History of robotic surgery: From AESOP® and ZEUS® to da Vinci®. J. Visc. Surg. 2011, 148, e3–e8.
  18. Miyamura, H.; Mizuno, Y.; Ohwaki, A.; Ito, M.; Nishio, E.; Nishizawa, H. Comparison of single-port robotic surgery using the Da Vinci SP surgical system and single-port laparoscopic surgery for benign indications. Gynecol. Minim. Invasive Ther. 2025, 14, 229–233.
  19. Manabe, Y.; Murai, T.; Ogino, H.; Tamura, T.; Iwabuchi, M.; Mori, Y.; Iwata, H.; Suzuki, H.; Shibamoto, Y. CyberKnife Stereotactic Radiosurgery and Hypofractionated Stereotactic Radiotherapy as First-Line Treatments for Imaging-Diagnosed Intracranial Meningiomas. Neurol. Med.-Chir. 2017, 57, 627–633.
  20. Maris, B.; Tenga, C.; Vicario, R.; Palladino, L.; Murr, N.; De Piccoli, M.; Calanca, A.; Puliatti, S.; Micali, S.; Tafuri, A.; Fiorini, P. Toward Autonomous Robotic Prostate Biopsy: A Pilot Study. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 1393–1401.
  21. Faulkner, J.; Arora, A.; McCulloch, P.; Robertson, S.; Rovira, A.; Ourselin, S.; Jeannon, J.P. Prospective development study of the Versius Surgical System for use in transoral robotic surgery: An IDEAL stage 1/2a first in human and initial case series experience. Eur. Arch. Otorhinolaryngol. 2024, 281, 2667–2678.
  22. Li, T.; Badre, A.; Alambeigi, F.; Tavakoli, M. Robotic Systems and Navigation Techniques in Orthopedics: A Historical Review. Appl. Sci. 2023, 13, 9768.
  23. Ureel, M.; Augello, M.; Holzinger, D.; Wilken, T.; Berg, B.I.; Zeilhofer, H.F.; Millesi, G.; Juergens, P.; Mueller, A.A. Cold ablation robot-guided laser osteotome (CARLO®): From bench to bedside. J. Clin. Med. 2021, 10, 450.
  24. Arezzo, A. Endoluminal robotics. Surgery 2024, 176, 1542–1546.
  25. Fulla, J.; Small, A.; Kaplan-Marans, E.; Palese, M. Magnetic-assisted robotic and laparoscopic renal surgery: Initial clinical experience with the Levita Magnetic Surgical System. J. Endourol. 2020, 34, 1242–1246.
  26. Shen, Y.; Wang, S.; Shen, Y.; Hu, J. The Application of Augmented Reality Technology in Perioperative Visual Guidance: Technological Advances and Innovation Challenges. Sensors 2024, 24, 7363.
  27. Wu, Z.; Dai, Y.; Zeng, Y. Intelligent robot-assisted fracture reduction system for the treatment of unstable pelvic fractures. J. Orthop. Surg. Res. 2024, 19, 271.
  28. Li, C.; Zhang, G.; Zhao, B.; Xie, D.; Du, H.; Duan, X.; Hu, Y.; Zhang, L. Advances of surgical robotics: Image-guided classification and application. Natl. Sci. Rev. 2024, 11, nwae186.
  28. Li, C.; Zhang, G.; Zhao, B.; Xie, D.; Du, H.; Duan, X.; Hu, Y.; Zhang, L. Advances of surgical robotics: Image-guided classification and application. Natl. Sci. Rev. 2024, 11, nwae186. [Google Scholar] [CrossRef]
  29. Jian, X.; Song, Y.; Liu, D.; Wang, Y.; Guo, X.; Wu, B.; Zhang, N. Motion planning and control of active robot in orthopedic surgery by CDMP-based imitation learning and constrained optimization. IEEE Trans. Autom. Sci. Eng. 2025, 1. [Google Scholar] [CrossRef]
  30. Mohammad, S. Robotic surgery. J. Oral Biol. Craniofac. Res. 2013, 3, 2. [Google Scholar] [CrossRef]
  31. Barba, P.; Stramiello, J.; Funk, E.K.; Richter, F.; Yip, M.C.; Orosco, R.K. Remote telesurgery in humans: A systematic review. Surg. Endosc. 2022, 36, 2771–2777. [Google Scholar] [CrossRef]
  32. Das, R.; Baishya, N.J.; Bhattacharya, B. A review on tele-manipulators for remote diagnostic procedures and surgery. CSI Trans. ICT 2023, 11, 31–37. [Google Scholar] [CrossRef]
  33. Fracczak, L.; Szaniewski, M.; Podsedkowski, L. Share control of surgery robot master manipulator guiding tool along the standard path. Int. J. Med. Robot. Comput. Assist. Surg. 2019, 15, e1984. [Google Scholar] [CrossRef] [PubMed]
  34. Pandey, S.K.; Sharma, V. Robotics and ophthalmology: Are we there yet? Indian J. Ophthalmol. 2019, 67, 988–994. [Google Scholar] [CrossRef] [PubMed]
  35. Walgrave, S.; Oussedik, S. Comparative assessment of current robotic-assisted systems in primary total knee arthroplasty. Bone Jt. Open 2023, 4, 13–18. [Google Scholar] [CrossRef]
  36. Liow, M.H.L.; Chin, P.L.; Pang, H.N.; Tay, D.K.; Yeo, S.J. THINK surgical TSolution-One® (Robodoc) total knee arthroplasty. SICOT J. 2017, 3, 63. [Google Scholar] [CrossRef]
  37. Ma, F.Z.; Liu, D.F.; Yang, A.C.; Zhang, K.; Meng, F.G.; Zhang, J.G.; Liu, H.G. Application of the robot-assisted implantation in deep brain stimulation. Front. Neurorobot. 2022, 16, 996685. [Google Scholar] [CrossRef]
  38. Jeganathan, J.R.; Jegasothy, R.; Sia, W.T. Minimally invasive surgery: A historical and legal perspective on technological transformation. J. Robot. Surg. 2025, 19, 408. [Google Scholar] [CrossRef]
  39. Biswas, P.; Sikander, S.; Kulkarni, P. Recent advances in robot-assisted surgical systems. Biomed. Eng. Adv. 2023, 6, 100109. [Google Scholar] [CrossRef]
  40. Thamm, O.C.; Eschborn, J.; Schäfer, R.C.; Schmidt, J. Advances in modern microsurgery. J. Clin. Med. 2024, 13, 5284. [Google Scholar] [CrossRef] [PubMed]
  41. Kelly, P.J. Stereotactic surgery: What is past is prologue. Neurosurgery 2000, 46, 16–27. [Google Scholar] [CrossRef] [PubMed]
  42. Wang, T.; Li, H.; Pu, T.; Yang, L. Microsurgery robots: Applications, design, and development. Sensors 2023, 23, 8503. [Google Scholar] [CrossRef]
  43. Malzone, G.; Menichini, G.; Innocenti, M.; Ballestín, A. Microsurgical robotic system enables the performance of microvascular anastomoses: A randomized in vivo preclinical trial. Sci. Rep. 2023, 13, 14003. [Google Scholar] [CrossRef] [PubMed]
  44. Iftikhar, M.; Saqib, M.; Zareen, M.; Mumtaz, H. Artificial intelligence: Revolutionizing robotic surgery: Review. Ann. Med. Surg. 2024, 86, 5401–5409. [Google Scholar] [CrossRef] [PubMed]
  45. Singh, V.; Vasisht, S.; Hashimoto, D.A. Artificial Intelligence in Surgery: What Is Needed for Ongoing Innovation. Surg. (Oxford) 2025, 43, 129–134. [Google Scholar] [CrossRef]
  46. Saxena, R.R.; Saxena, R. Applying Graph Neural Networks in Pharmacology. TechRxiv, 25 June 2024. [CrossRef]
  47. Schmidgall, S.; Opfermann, J.D.; Kim, J.W.; Krieger, A. Will Your Next Surgeon Be a Robot? Autonomy and AI in Robotic Surgery. Sci. Robot. 2025, 10, eadt0187. [Google Scholar] [CrossRef]
  48. Knudsen, J.E.; Ghaffar, U.; Ma, R.; Hung, A.J. Clinical applications of artificial intelligence in robotic surgery. J. Robot. Surg. 2024, 18, 102. [Google Scholar] [CrossRef]
  49. Habuza, T.; Navaz, A.N.; Hashim, F.; Alnajjar, F.; Zaki, N.; Serhani, M.A.; Statsenko, Y. AI applications in robotics, diagnostic image analysis and precision medicine: Current limitations, future trends, guidelines on CAD systems for medicine. Inform. Med. Unlocked 2021, 24, 100596. [Google Scholar] [CrossRef]
  50. Saxena, R.R.; Nieters, E.; Mamudu, I. Pokémondium: A Machine Learning Approach to Detecting Images of Pokémon. TechRxiv, 27 June 2025. [CrossRef]
  51. Saxena, R.R. AI-Driven Forensic Image Enhancement. TechRxiv, 08 May. [CrossRef]
  52. Morris, M.X.; Fiocco, D.; Caneva, T.; Yiapanis, P.; Orgill, D.P. Current and Future Applications of Artificial Intelligence in Surgery: Implications for Clinical Practice and Research. Front. Surg. 2024, 11, 1393898. [Google Scholar] [CrossRef] [PubMed]
  53. Chen, X.; Liu, J.; Liang, H.; et al. Digitalization of surgical features improves surgical accuracy via surgeon guidance and robotization. NPJ Digit. Med. 2025, 8, 497. [Google Scholar] [CrossRef]
  54. Knez, D.; Nahle, I.S.; Vrtovec, T.; Parent, S.; Kadoury, S. Computer-assisted pedicle screw trajectory planning using CT-inferred bone density: A demonstration against surgical outcomes. Med. Phys. 2019, 46, 3543–3554. [Google Scholar] [CrossRef] [PubMed]
  55. Shadid, O.; Seth, I.; Cuomo, R.; Rozen, W.M.; Marcaccini, G. Artificial Intelligence in Microsurgical Planning: A Five-Year Leap in Clinical Translation. J. Clin. Med. 2025, 14, 4574. [Google Scholar] [CrossRef]
  56. Hu, D.; Gong, Y.; Hannaford, B.; Seibel, E.J. Path planning for semi-automated simulated robotic neurosurgery. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 2639–2645. [Google Scholar] [CrossRef]
  57. Mahmood, F.; Durr, N.J. Deep learning and conditional random fields-based depth estimation and topographical reconstruction from conventional endoscopy. Med. Image Anal. 2018, 48, 230–243. [Google Scholar] [CrossRef]
  58. Oettl, F.C.; Zsidai, B.; Oeding, J.F.; Samuelsson, K. Artificial intelligence and musculoskeletal surgical applications. HSS J. [CrossRef]
  59. U.S. Food and Drug Administration. Premarket Notification 510(k). Available online: https://www.fda.gov/medical-devices/premarket-submissions-selecting-and-preparing-correct-submission/premarket-notification-510k (accessed on 29 August 2025).
  60. European Commission. CE marking – obtaining the certificate, EU requirements. Your Europe. Available online: https://europa.eu/youreurope/business/product-requirements/labels-markings/ce-marking/index_en.htm (accessed on 29 August 2025).
  61. Górriz, J.M.; Álvarez-Illán, I.; Álvarez-Marquina, A.; Arco, J.E.; Atzmueller, M.; Ballarini, F.; Barakova, E.; Bologna, G.; Bonomini, P.; Castellanos-Dominguez, G.; et al. Computational Approaches to Explainable Artificial Intelligence: Advances in Theory, Applications and Trends. Inf. Fus. 2023, 100, 101945. [Google Scholar] [CrossRef]
  62. Patil, M.; Gharde, P.; Reddy, K.; Nayak, K. Comparative Analysis of Laparoscopic Versus Open Procedures in Specific General Surgical Interventions. Cureus 2024, 16, e54433. [Google Scholar] [CrossRef]
  63. Cepolina, F.; Razzoli, R. Review of Robotic Surgery Platforms and End Effectors. J. Robot. Surg. 2024, 18, 74. [Google Scholar] [CrossRef] [PubMed]
  64. Lu, Y.H.; Mani, K.; Panigrahi, B.; Hajari, S.; Chen, C.Y. A Shape Memory Alloy-Based Miniaturized Actuator for Catheter Interventions. Cardiovasc. Eng. Technol. 2018, 9, 405–413. [Google Scholar] [CrossRef]
  65. Wang, K.; Sun, L.; Liu, Z.; Sun, N.; Zhang, J. Actuators and Variable Stiffness of Flexible Surgical Actuators: A Review. Sens. Actuators A Phys. 2025, 390, 116588. [Google Scholar] [CrossRef]
  66. Deivayanai, V.C.; Swaminaathanan, P.; Vickram, A.S.; Saravanan, A.; Bibi, S.; Aggarwal, N.; Kumar, V.; Alhadrami, A.H.; Mohammedsaleh, Z.M.; Altalhi, R.; et al. Transforming Healthcare: The Impact of Artificial Intelligence on Diagnostics, Pharmaceuticals, and Ethical Considerations – A Comprehensive Review. Int. J. Surg. 2025, 111, 4666–4693. [Google Scholar] [CrossRef]
  67. Yuan, Z.; Zhou, S.; Hong, C.; Xiao, Z.; Zhang, Z.; Chen, X.; Zeng, L.; Wu, J.; Wang, Y.; Li, X. Piezo-Actuated Smart Mechatronic Systems for Extreme Scenarios. Int. J. Extrem. Manuf. 2025, 7, 022003. [Google Scholar] [CrossRef]
  68. Leung, T.; Vyas, D. Robotic Surgery: Applications. Am. J. Robot. Surg. 2014, 1, 1–64. [Google Scholar] [CrossRef]
  69. Batailler, C.; Shatrov, J.; Sappey-Marinier, E.; Servien, E.; Parratte, S.; Lustig, S. Artificial Intelligence in Knee Arthroplasty: Current Concept of the Available Clinical Applications. Arthroplasty 2022, 4, 17. [Google Scholar] [CrossRef]
  70. Wah, J.N.K. The Robotic Revolution in Cardiac Surgery. J. Robot. Surg. 2025, 19, 386. [Google Scholar] [CrossRef] [PubMed]
  71. Brzeski, A.; Blokus, A.; Cychnerski, J. An Overview of Image Analysis Techniques in Endoscopic Bleeding Detection. Int. J. Innov. Res. Comput. Commun. Eng. 2013, 1, 1350–1357. [Google Scholar]
  72. Rahbar, M.D.; Reisner, L.; Ying, H.; Pandya, A. An Entropy-Based Approach to Detect and Localize Intraoperative Bleeding During Minimally Invasive Surgery. Int. J. Med. Robot. Comput. Assist. Surg. 2020, 16, 1–9. [Google Scholar] [CrossRef]
  73. Bianchi, V.; Misuriello, F.; Piras, E.; Nesci, C.; Chiarello, M.M.; Brisinda, G. Ethical Considerations on the Role of Artificial Intelligence in Defining Futility in Emergency Surgery. Int. J. Surg. 2025, 111, 3178–3184. [Google Scholar] [CrossRef]
  74. Alemzadeh, H.; Raman, J.; Leveson, N.; Kalbarczyk, Z.; Iyer, R.K. Adverse Events in Robotic Surgery: A Retrospective Study of 14 Years of FDA Data. PLoS ONE 2016, 11, e0151470. [Google Scholar] [CrossRef]
  75. McDonnell, C.; Devine, M.; Kavanagh, D. The General Public’s Perception of Robotic Surgery – A Scoping Review. Surgeon 2025, 23, e49–e62. [Google Scholar] [CrossRef]
  76. Ida, Y.; Sugita, N.; Ueta, T.; et al. Microsurgical Robotic System for Vitreoretinal Surgery. Int. J. Comput. Assist. Radiol. Surg. 2012, 7, 27–34. [Google Scholar] [CrossRef]
  77. Parekattil, S.J.; Gudeloglu, A. Robotic Assisted Andrological Surgery. Asian J. Androl. 2013, 15, 67–74. [Google Scholar] [CrossRef] [PubMed]
  78. Tilvawala, G.; Wen, J.; Santiago-Dieppa, D.; Yan, B.; Pannell, J.; Khalessi, A.; Norbash, A.; Friend, J. Soft Robotic Steerable Microcatheter for the Endovascular Treatment of Cerebral Disorders. Sci. Robot. 2021, 6, eabf0601. [Google Scholar] [CrossRef]
  79. Reza, T.; Bokhari, S.F.H. Partnering With Technology: Advancing Laparoscopy With Artificial Intelligence and Machine Learning. Cureus 2024, 16, e56076. [Google Scholar] [CrossRef] [PubMed]
  80. Shang, Z.; Chauhan, V.; Devi, K.; Patil, S. Artificial Intelligence, the Digital Surgeon: Unravelling Its Emerging Footprint in Healthcare – The Narrative Review. J. Multidiscip. Healthc. 2024, 17, 4011–4022. [Google Scholar] [CrossRef]
  81. Feizi, N.; Tavakoli, M.; Patel, R.V.; Atashzar, S.F. Robotics and AI for Teleoperation, Tele-Assessment, and Tele-Training for Surgery in the Era of COVID-19: Existing Challenges, and Future Vision. Front. Robot. AI 2021, 8, 610677. [Google Scholar] [CrossRef]
  82. Hussain, A.K.; Kakakhel, M.M.; Ashraf, M.F.; Shahab, M.; Ahmad, F.; Luqman, F.; Ahmad, M.; Mohammed Nour, A.; Varrassi, G.; Kinger, S. Innovative Approaches to Safe Surgery: A Narrative Synthesis of Best Practices. Cureus 2023, 15, e49723. [Google Scholar] [CrossRef] [PubMed]
  83. Elendu, C.; Amaechi, D.C.; Elendu, T.C.; Jingwa, K.A.; Okoye, O.K.; Okah, M.J.; Ladele, J.A.; Farah, A.H.; Alimi, H.A. Ethical Implications of AI and Robotics in Healthcare: A Review. Med. (Baltimore) 2023, 102, e36671. [Google Scholar] [CrossRef]
  84. Zhang, Z.S.; Wu, Y.; Zheng, B. A Review of Cognitive Support Systems in the Operating Room. Surg. Innov. 2024, 31, 111–122. [Google Scholar] [CrossRef]
  85. Pasquer, A.; Ducarroz, S.; Lifante, J.C.; Skinner, S.; Poncet, G.; Duclos, A. Operating Room Organization and Surgical Performance: A Systematic Review. Patient Saf. Surg. 2024, 18, 5. [Google Scholar] [CrossRef]
  86. Heider, S.; Schoenfelder, J.; Koperna, T.; Brunner, J.O. Balancing Control and Autonomy in Master Surgery Scheduling: Benefits of ICU Quotas for Recovery Units. Health Care Manag. Sci. 2022, 25, 311–332. [Google Scholar] [CrossRef]
  87. Department of Health, Education, and Welfare; National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. J. Am. Coll. Dent. 2014, 81, 4–13. [Google Scholar]
  88. Henderson, E.R.; Halter, R.; Paulsen, K.D.; Pogue, B.W.; Elliott, J.; LaRochelle, E.; Ruiz, A.; Jiang, S.; Streeter, S.S.; Samkoe, K.S.; et al. Onward to Better Surgery – The Critical Need for Improved Ex Vivo Testing and Training Methods. In Proceedings of SPIE—The International Society for Optical Engineering, San Francisco, CA, USA, 3–7 February 2024; SPIE, 2024; Vol. 12825, 1282506. [Google Scholar] [CrossRef]
  89. Jiang, Y.; Kyeremeh, J.; Luo, X.; Wang, Z.; Zhang, K.; Cao, F.; Asciak, L.; Kazakidi, A.; Stewart, G.D.; Shu, W. A Numerical Simulation Study of Soft Tissue Resection for Low-Damage Precision Cancer Surgery. Comput. Methods Programs Biomed. 2025, 270, 108937. [Google Scholar] [CrossRef] [PubMed]
  90. Kumar, A. Reinforcement Learning for Robotic-Assisted Surgeries: Optimizing Procedural Outcomes and Minimizing Post-Operative Complications. Int. J. Res. Publ. Rev. 2025, 6, 5669–5684. [Google Scholar] [CrossRef]
  91. Saxena, R.R. Applications of Natural Language Processing in the Domain of Mental Health. TechRxiv, 28 October. [CrossRef]
  92. Javed, H.; El-Sappagh, S.; Abuhmed, T. Robustness in Deep Learning Models for Medical Diagnostics: Security and Adversarial Challenges Towards Robust AI Applications. Artif. Intell. Rev. 2025, 58, 12. [Google Scholar] [CrossRef]
  93. Saxena, R.R. Beyond Flashcards: Designing an Intelligent Assistant for USMLE Mastery and Virtual Tutoring in Medical Education (A Study on Harnessing Chatbot Technology for Personalized Step 1 Prep). arXiv 2024, arXiv:2409.10540. Available online: https://arxiv.org/abs/2409.10540. [Google Scholar]
  94. Saxena, R.R. Intelligent Approaches to Predictive Analytics in Occupational Health and Safety in India. arXiv 2024, arXiv:2412.16038. Available online: https://arxiv.org/abs/2412.16038. [Google Scholar]
  95. Amin, A.; Cardoso, S.A.; Suyambu, J.; Abdus Saboor, H.; Cardoso, R.P.; Husnain, A.; Isaac, N.V.; Backing, H.; Mehmood, D.; Mehmood, M.; et al. Future of Artificial Intelligence in Surgery: A Narrative Review. Cureus 2024, 16, e51631. [Google Scholar] [CrossRef] [PubMed]
  96. Lindegger, D.J.; Wawrzynski, J.; Saleh, G.M. Evolution and Applications of Artificial Intelligence to Cataract Surgery. Ophthalmol. Sci. 2022, 2, 100164. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Flowchart of the Proposed Human-Supervised Intelligent Actuator for Minimally Invasive Surgery.
Table 1. Limitations of Classical Approaches in Perception–Navigation for Medical Image Analysis.

| Category | Techniques | Examples | Limitations |
|---|---|---|---|
| Preprocessing | Noise reduction | Gaussian blur, median filtering | Limited adaptability; cannot handle diverse noise patterns in clinical imagery |
| Preprocessing | Contrast enhancement | Histogram equalization | Global adjustment lacks contextual awareness |
| Preprocessing | Color space conversion | RGB converted to grayscale or HSV | Reduces dimensional richness; may lose clinically relevant details |
| Segmentation & Localization | Region-based segmentation | Region growing, watershed methods | Sensitive to noise and initialization; poor generalization |
| Segmentation & Localization | Thresholding | Otsu’s method | Performance drops with non-uniform illumination |
| Segmentation & Localization | Edge detection | Canny operator, Sobel operator | Prone to spurious edges; limited robustness in complex anatomy |
| Feature Extraction & Description | Shape and geometric features | Contours, boundary descriptors | Not invariant to scale, rotation, or deformation |
| Feature Extraction & Description | Texture analysis | Gabor filters, Local Binary Patterns (LBP) | Sensitive to noise; limited capture of multi-scale texture |
| Feature Extraction & Description | Interest point detectors | SIFT, SURF | Computationally intensive; not scalable for real-time guidance |
| Mathematical Morphology | Basic operations | Erosion, dilation | Over-simplifies structures; loses context |
| Mathematical Morphology | Advanced transforms | Top-hat, black-hat | Effective only in constrained scenarios; poor adaptability to variability |
| General Limitation | — | — | Cannot capture richness and variability of clinical imagery; poor scalability for real-time, context-aware guidance |
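
To ground the techniques listed in Table 1, the following minimal sketch chains several of them into a classical preprocessing and segmentation pass using OpenCV. It is illustrative only: the kernel sizes and Canny/Otsu parameters are placeholder values, and the function is a hypothetical example rather than part of the proposed system.

```python
# Minimal sketch of the classical perception pipeline summarized in Table 1,
# using OpenCV (cv2). Parameters are illustrative placeholders, not tuned
# clinical settings.
import cv2
import numpy as np

def classical_pipeline(bgr_frame: np.ndarray) -> dict:
    # Preprocessing: color space conversion, noise reduction, contrast
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    denoised = cv2.GaussianBlur(gray, (5, 5), 0)      # Gaussian blur
    equalized = cv2.equalizeHist(denoised)            # histogram equalization

    # Segmentation: global Otsu threshold (degrades under uneven illumination)
    _, mask = cv2.threshold(equalized, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Edge detection: Canny (prone to spurious edges in complex anatomy)
    edges = cv2.Canny(equalized, 50, 150)

    # Mathematical morphology: erosion then dilation to clean the mask
    kernel = np.ones((3, 3), np.uint8)
    cleaned = cv2.dilate(cv2.erode(mask, kernel), kernel)

    return {"mask": cleaned, "edges": edges}
```

Even this short chain exposes the table’s general limitation: every stage depends on fixed parameters that cannot adapt to the variability of clinical imagery, which motivates the learned perception stack proposed here.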
Table 2. AI and Surgical Robotics Integration Requirements.

| Dimension | Key Components | Implementation Strategies |
|---|---|---|
| Model Development | Explainability and Determinism | Enhance interpretability; improve reproducibility; use transparent algorithms |
| Model Development | Real-time Performance | Model compression; hardware-optimized deployment; graceful degradation mechanisms for uncertainty |
| Data Availability | Multi-institutional Collaboration | Standardized annotation frameworks; dataset amalgamation; cross-institution partnerships |
| Data Availability | Privacy & Sharing | Federated learning; privacy-preserving strategies; secure data protocols |
| Data Availability | Synthetic Data | High-fidelity surgical simulation; generative models; pretraining capabilities; data augmentation for downstream tasks |
| Human-Robot Coordination | Interface Design | Intuitive surgeon interfaces; advanced feedback modalities |
| Human-Robot Coordination | Haptic Feedback | Objectify subjective judgments; quantify ambiguous intraoperative indicators; automate repetitive actions |
| Trust & Clinical Adoption | Safety Establishment | Measurable safety improvements; transparent oversight mechanisms |
| Trust & Clinical Adoption | Accountability | Robust logging for auditability; clearly defined autonomy boundaries |
| Trust & Clinical Adoption | Control Mechanisms | Reliable manual handover pathways; balance between trust and autonomy; evolving safeguards |
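
As a concrete illustration of the “Privacy & Sharing” row in Table 2, the sketch below outlines one round of federated averaging (FedAvg), in which participating institutions exchange model weights rather than raw surgical data. Representing each site’s model as a flat NumPy weight vector, and the local training functions themselves, are simplifying assumptions made for exposition.

```python
# Minimal sketch of one federated-averaging (FedAvg) round, assuming each
# site's model is a flat NumPy weight vector. The local training callables
# are hypothetical stand-ins for per-institution training pipelines.
import numpy as np

def federated_round(global_w: np.ndarray, local_train_fns, sample_counts):
    """Aggregate locally trained weights, weighted by local dataset size.
    Raw surgical video never leaves an institution; only weights do."""
    local_ws = [train(global_w.copy()) for train in local_train_fns]
    total = float(sum(sample_counts))
    return sum((n / total) * w for w, n in zip(local_ws, sample_counts))
```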
Table 3. Types of Actuators in Minimally Invasive Surgery.

| Actuator Type | Mechanism | Examples in MIS | Advantages | Limitations |
|---|---|---|---|---|
| Electromechanical Actuators | Electric motors (DC, stepper, servo) convert electrical energy into precise rotary/linear motion | Motor-driven robotic arms (e.g., da Vinci system) | High precision, controllability, reliable integration with control algorithms | Bulky compared to other actuators; limited miniaturization in very small instruments |
| Piezoelectric Actuators | Piezoelectric crystals deform under an electric field to generate motion | Ultrasonic scalpels, micro-manipulators for ophthalmic and neurosurgery | Very high precision, fast response, compact size | Limited stroke length; requires high-frequency driving voltage |
| Pneumatic Actuators | Compressed air generates pressure to drive linear or rotary motion | Soft robotic grippers, inflatable balloons for dilation | Lightweight, compliant, safe for tissue interaction | Less precise; nonlinear behavior; dependence on external air supply |
| Hydraulic Actuators | Pressurized fluid drives pistons or chambers for motion | High-force surgical tools, orthopedic robots | High force density, smooth motion | Requires fluid lines; potential risk of leakage inside patient environment |
| SMA Actuators | Metals (e.g., NiTi alloys) change shape when heated and return when cooled | Steerable catheters, flexible endoscopic tools | Miniaturization potential, silent operation, compact integration | Slow response time; hysteresis; limited durability under cycling |
| Magnetic Actuators | External magnetic fields manipulate embedded magnets in instruments | Levita’s MARS (magnet-assisted surgical system), capsule endoscopy | Wireless control, minimally invasive manipulation, reduced mechanical linkages | Limited force at depth; requires careful control of magnetic fields |
| Electrostatic Actuators | Electric field generates force between charged plates/elements | Micro-electro-mechanical systems (MEMS) for microsurgery | High precision, scalable to micro-scale | Very low force output; sensitive to environmental conditions |
| Hybrid Actuation Systems | Combine two or more actuation methods for optimized performance | Pneumatic–hydraulic soft robots, piezoelectric–electromagnetic micromanipulators | Balance of precision, compliance, and force | Complexity in design and control integration |
Table 4. Operational Layers in the Proposed Human-in-the-loop AI Surgical Actuation System.

| Layer | Description | AI Function | Human Oversight |
|---|---|---|---|
| Teleoperation | Surgeon drives the robot manually | Annotation and measurement only | Full human control; no autonomy |
| Shared Control | Surgeon specifies goals; system assists in realizing them | Stabilizes motion, suppresses tremor, enforces virtual fixtures, modulates force/velocity | Surgeon remains decision-maker; real-time assistance |
| Supervised Autonomy | Short, bounded subtasks, such as following a cut path or maintaining safe force | Executes subtasks under confidence and safety gating | Surgeon holds the dead-man switch; instant reversion on pedal release, override, or detection of an anomaly |
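
The reversion behavior in Table 4 can be stated compactly as an arbitration rule: supervised autonomy is reachable only while the pedal is held, model confidence clears a gate, and no anomaly is pending. The sketch below is a conceptual rendering under assumed signal names (pedal_held, confidence, anomaly) and a placeholder threshold, not the system’s actual controller.

```python
# Conceptual mode-arbitration sketch for Table 4. Signal names and the
# confidence threshold are illustrative assumptions.
from enum import Enum

class Mode(Enum):
    TELEOPERATION = 0
    SHARED_CONTROL = 1
    SUPERVISED_AUTONOMY = 2

CONF_GATE = 0.85  # assumed surgeon-tunable confidence threshold

def arbitrate(requested: Mode, pedal_held: bool, confidence: float,
              anomaly: bool) -> Mode:
    # Supervised autonomy requires an active pedal-hold, confidence above
    # the gate, and no pending anomaly; otherwise revert to shared control.
    if requested is Mode.SUPERVISED_AUTONOMY:
        if pedal_held and confidence >= CONF_GATE and not anomaly:
            return Mode.SUPERVISED_AUTONOMY
        return Mode.SHARED_CONTROL
    return requested
```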
Table 5. System Architecture, including Instrumentation, AI/ML Inclusion, & Human Factors.

| Component | Subsystem or Function | Description |
|---|---|---|
| Instrumentation & Actuation | Actuators | Miniature BLDC or piezo stacks with high-reduction, backdrivable stages; integrated brakes for safe hold |
| Instrumentation & Actuation | Sensing | 6-axis force/torque at the wrist, motor currents, tip pose from stereo/endoscopic vision plus EM tracker, temperature (cautery), and tissue impedance |
| Instrumentation & Actuation | Virtual Fixtures | Software “guard rails” that constrain tool motion to safe corridors or planes (anatomy-aware) |
| AI/ML Stack (Assistive) | Perception | Foundation vision model fine-tuned on endoscopic video to segment tools, tissue layers, and vessels; uncertainty quantification (MC-Dropout/Deep Ensembles) surfaces confidence to the UI and safety layer |
| AI/ML Stack (Assistive) | Control | Safety-filtered RL with CBFs and model predictive safety filters rejecting unsafe actions; adaptive impedance control learning tissue stiffness online; learned skill primitives for short, bounded maneuvers (knot-pull, micro-cut) |
| AI/ML Stack (Assistive) | Anomaly Detection | Multimodal change-point detection on force + vision to flag slip, bleeding, or delamination; triggers slow-down, haptic cue, and visual alert; requires human confirmation |
| Human Factors & UI | Visualization | Confidence-aware overlays: segmentation masks and planned trajectories fade with lower confidence; threshold surgeon-tunable; three-line status display (Mode, Safety, Confidence) |
| Human Factors & UI | Haptics | Tremor suppression, force reflection, gentle repulsion near no-go zones |
| Human Factors & UI | Takeover Affordances | Foot pedal, clutch button, and voice “Hold” command; immediate AI disengage (<10 ms torque disable, <100 ms motion stop) |
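
The disengage budget in Table 5’s “Takeover Affordances” row (<10 ms torque disable, <100 ms motion stop) implies a watchdog that orders and times the shutdown steps. The sketch below assumes a hypothetical actuator-driver API (disable_torque, engage_brakes, is_moving, fault); in a real device, deadlines of this magnitude would be enforced in firmware rather than Python.

```python
# Conceptual takeover watchdog for the disengage budget in Table 5. The
# driver API is a hypothetical placeholder, not a real actuator library.
import time

TORQUE_DISABLE_BUDGET_S = 0.010   # <10 ms torque disable
MOTION_STOP_BUDGET_S = 0.100      # <100 ms motion stop

def on_takeover(driver) -> None:
    t0 = time.monotonic()
    driver.disable_torque()                    # cut motor torque first
    if time.monotonic() - t0 > TORQUE_DISABLE_BUDGET_S:
        driver.fault("torque disable deadline missed")
    driver.engage_brakes()                     # brakes hold the pose safely
    while driver.is_moving():                  # confirm full stop in budget
        if time.monotonic() - t0 > MOTION_STOP_BUDGET_S:
            driver.fault("motion stop deadline missed")
            break
        time.sleep(0.001)
```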
Table 6. Safety, Explainability, and Compliance Features.

| Category | Description |
|---|---|
| Safety Envelope | Hard limits on tip speed, force, and workspace enforced via control barrier functions (CBFs); software cannot override hardware interlocks |
| Action Shields | All reinforcement learning outputs pass through a safety supervisor that enforces constraints and rate limits |
| Mode Guarding | Autonomy permitted only in labeled “green zones” with verified anatomy; exiting a zone forces immediate reversion to shared control |
| Explainability | On-demand “Why now?” cards display the planned path, top segmented structures, confidence, and active constraints; post-hoc counterfactuals show what the controller would have done without safety filters |
| Audit & Traceability | Black-box recorder logs sensor data, commands, model versions, and surgeon inputs to support quality assurance and root-cause analysis |
| Standards-aligned Development | Compliance with ISO 14971 (risk management), IEC 62366 (usability engineering), IEC 60601 (electrical/EMC), IEC 62304 (software lifecycle), and FDA cybersecurity guidance; human supervision is formally required in hazard analysis and design inputs, and autonomy cannot be enabled without active human engagement |
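
To illustrate how the “Safety Envelope” and “Action Shields” rows of Table 6 compose, consider a one-dimensional control barrier function on tool position. Assuming single-integrator tip dynamics ẋ = u, the barrier h(x) = x_max − x remains nonnegative provided every velocity command satisfies u ≤ αh(x); the filter below clips arbitrary RL or teleoperation commands to that bound. The gain and workspace limit are assumed placeholder values, not validated settings.

```python
# One-dimensional sketch of a CBF "action shield" (Table 6), assuming
# single-integrator tip dynamics x_dot = u. Enforcing u <= ALPHA * h(x)
# keeps the barrier h(x) = X_MAX - x nonnegative for all time.
ALPHA = 5.0      # assumed barrier gain [1/s]
X_MAX = 0.02     # assumed workspace bound [m]

def cbf_filter(x: float, u_cmd: float) -> float:
    """Pass an RL/teleop command through the safety supervisor: clip it to
    the largest value satisfying the CBF condition h_dot >= -ALPHA * h."""
    h = X_MAX - x                # distance to the workspace boundary
    u_safe_max = ALPHA * h       # since h_dot = -u, require u <= ALPHA * h
    return min(u_cmd, u_safe_max)
```

A practical shield would stack analogous constraints on force, tip speed, and the opposite workspace boundary, typically solved together as a small quadratic program at each control step.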
Table 7. Data, Training, and Validation for the Proposed Actuator System.

| Category | Description | Metrics / Evaluation |
|---|---|---|
| Datasets | Curated endoscopic/laparoscopic images and videos with pixel-wise labels (tissue layers, vessels); microscopic biopsy images may also be used; synchronized force/position logs and adverse-event tags for training | Supports perception training and RL supervision; enables anomaly detection |
| Training Protocols | Pretrain perception on large surgical video corpora; fine-tune per organ/site; train RL in digital-twin simulation (photo-realistic endoscopy plus finite element method (FEM) tissue models) with domain randomization; deploy with safety filter | Segmentation accuracy, RL adherence to force/velocity constraints, confidence calibration |
| Bench Tests | Assess accuracy, peak force, and path error on synthetic phantoms (artificial models that simulate human tissues, organs, or anatomical structures); evaluate cut quality and hemostasis on ex-vivo tissue | Path error, peak and mean force, tissue damage, constraint violations |
| User Studies | Novice and expert surgeons perform tasks across all modes; human-in-the-loop testing on synthetic phantoms with adjustable system parameters and AI-augmented tools provides realistic, reproducible comparisons | Task time, path error, max/mean force, tissue damage score, constraint violations, override frequency, NASA-TLX workload |
| Stopping Rules | Autonomy pauses if safety-filter interventions exceed a per-minute threshold or if any anomaly goes unacknowledged | Ensures safe human oversight; triggers session pause |
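
The stopping rules in Table 7 reduce to a sliding-window counter plus an acknowledgment flag, as in the sketch below. The per-minute threshold is an assumed placeholder, since the framework leaves the exact value to be set during validation.

```python
# Conceptual monitor for the stopping rules in Table 7: a 60 s sliding
# window of safety-filter interventions plus an unacknowledged-anomaly
# flag. The threshold is an assumed placeholder value.
from collections import deque

INTERVENTIONS_PER_MIN_MAX = 3   # assumed threshold, to be validated

class StoppingRuleMonitor:
    def __init__(self) -> None:
        self.events = deque()        # timestamps of safety-filter interventions
        self.unacked_anomaly = False

    def record_intervention(self, t: float) -> None:
        self.events.append(t)

    def should_pause_autonomy(self, now: float) -> bool:
        # Drop interventions older than the 60 s window, then test both rules.
        while self.events and now - self.events[0] > 60.0:
            self.events.popleft()
        return (len(self.events) > INTERVENTIONS_PER_MIN_MAX
                or self.unacked_anomaly)
```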
Table 8. Proposed Milestones for the Development of a Human-in-the-loop Intelligent Actuator for Minimally Invasive Surgery.

| Proposed Milestone (M) | Description | Evaluation / Deliverable |
|---|---|---|
| M1: Teleoperation | Baseline teleoperation with full sensing and virtual fixtures on a bench phantom | Verify accurate motion, force limits, and path following; initial usability feedback |
| M2: Shared Control | Tremor suppression, force limits, and anatomy-aware virtual fixtures | Measure path error, force adherence, and surgeon workload reduction; refine UI overlays |
| M3: Supervised Autonomy Primitives | Short micro-task automation under pedal-hold and confidence gating | Evaluate task execution accuracy, safety-filter performance, and override response |
| M4: Ex-vivo Evaluation | Complete system tested on ex-vivo tissue models | Assess cut quality, hemostasis, constraint violations, and human-factors outcomes |
| M5: Cadaver Lab & Regulatory Prep | IRB-approved cadaver studies; refine risk controls; pre-submission to regulators | Document compliance with safety standards; produce human-factors report and prepare submission package |