Synergizing Brain-Computer Interfaces and AI-Driven Image Segmentation for Precision Neurosurgery

Preprint (Review). This version is not peer-reviewed; a peer-reviewed article of this preprint also exists.

Submitted: 30 March 2025. Posted: 31 March 2025.

Abstract
Brain-computer interfaces (BCIs) and AI-driven image segmentation are revolutionizing precision neurosurgery by enhancing surgical accuracy, reducing human error, and improving patient outcomes. This review explores the integration of AI techniques, particularly deep learning (DL) and convolutional neural networks (CNNs), with neuroimaging modalities for automated brain mapping and tissue classification. We analyze existing approaches for real-time neural signal processing, automated segmentation, and surgical robotics, highlighting their strengths, limitations, and clinical applications. The integration of hybrid BCI models with AI enhances neurorehabilitation by providing adaptive feedback for motor recovery and cognitive therapy. However, challenges such as signal reliability, computational latency, and ethical concerns regarding patient autonomy and data privacy persist. Furthermore, we discuss the role of AI in improving decision-making, intraoperative guidance, and post-surgical assessments. By synthesizing recent advancements in medical image processing, BCI technology, and AI-driven neurosurgical interventions, this paper provides a comprehensive overview of current trends, challenges, and future research directions in this rapidly evolving field.

1. Introduction

1.1. Background and Motivation

Brain-Computer Interfaces (BCIs) have emerged as a transformative technology, enabling direct communication between the brain and external devices without reliance on traditional neuromuscular pathways [1]. Initially conceptualized by Vidal as a means of direct brain-computer communication [2], BCIs have since evolved into a sophisticated interdisciplinary domain encompassing neuroscience, artificial intelligence (AI), and bioengineering [3]. The fundamental goal of BCI systems is to translate neural activity into actionable commands, thus facilitating applications in healthcare, neurorehabilitation, and human-computer interaction.
BCI technology has witnessed significant advancements, particularly in signal acquisition, processing, and real-time decoding of cognitive states. One of the earliest practical implementations utilized steady-state visual evoked potentials (SSVEP) to achieve high information transfer rates (ITR) in real-world settings [4]. These developments highlight the potential of BCIs in not only assisting individuals with neuromuscular impairments but also enhancing cognitive augmentation in healthy users.
Despite these advancements, challenges persist in the commercialization and widespread adoption of BCIs. Ethical concerns regarding user privacy, data security, and the potential for cognitive manipulation remain critical barriers [3,5,6]. Additionally, variability in user adaptability and signal reliability necessitates further research into robust machine learning algorithms for improved accuracy and usability.
Given these considerations, this review explores the intersection of BCIs and AI-driven image segmentation for precision neurosurgery. By integrating state-of-the-art deep learning (DL) techniques with neural decoding methodologies, the aim is to enhance surgical precision, optimize real-time decision-making, and mitigate intraoperative risks [7]. This synergy represents a crucial step toward the future of intelligent neuro-interventions, where BCIs play a pivotal role in augmenting surgical outcomes and patient safety.

1.2. Advances in AI-Driven Medical Image Segmentation

Medical image segmentation has witnessed remarkable advancements with the integration of AI, particularly through DL techniques. These AI-driven methods have significantly enhanced the accuracy, efficiency, and automation of analyzing complex medical images. This section explores recent developments in AI-based segmentation across various imaging modalities, including electroencephalography (EEG), functional near-infrared spectroscopy (fNIRS), magnetoencephalography (MEG), electromyography (EMG), computed tomography (CT), electrocorticography (ECoG), magnetic resonance imaging (MRI), functional-MRI (fMRI), fluorescence-guided surgery (FGS), diffusion tensor imaging (DTI) and high-density electrode arrays.
Convolutional neural networks (CNNs), recurrent neural networks (RNNs) and their variants have become the cornerstone of medical image segmentation. Notable architectures include:
  • U-Net and Variants: Initially introduced for biomedical image segmentation, U-Net has been widely adopted due to its encoder-decoder structure, which effectively captures spatial and contextual information. Variants like Attention U-Net and 3D U-Net have further improved segmentation accuracy for volumetric imaging [8].
  • Transformers in Segmentation: Vision Transformers (ViTs) and Swin Transformers have recently demonstrated superior performance in segmenting medical images by leveraging self-attention mechanisms to capture long-range dependencies [9].
  • Generative Adversarial Networks (GANs): GAN-based segmentation models enhance the precision of medical image delineation by generating realistic synthetic data and refining segmentation boundaries [10].
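The encoder-decoder idea behind U-Net can be illustrated shape-wise without any DL framework: the encoder halves spatial resolution at each stage, and the decoder upsamples and concatenates the matching encoder output as a skip connection. The sketch below is a NumPy toy (pooling and nearest-neighbour upsampling only, no learned convolutions), not an implementation of U-Net itself:

```python
import numpy as np

def downsample(x):
    """2x2 max pooling: halves spatial dimensions (encoder step)."""
    h, w, c = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def upsample(x):
    """Nearest-neighbour upsampling: doubles spatial dimensions (decoder step)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

# Toy feature map: a 64x64 image with 8 channels.
features = np.random.rand(64, 64, 8)

# Encoder path: keep each resolution for the skip connections.
enc1 = features            # 64x64
enc2 = downsample(enc1)    # 32x32
enc3 = downsample(enc2)    # 16x16 (bottleneck)

# Decoder path: upsample and concatenate the matching encoder output.
dec2 = np.concatenate([upsample(enc3), enc2], axis=-1)  # 32x32, channels doubled
# A real decoder would reduce channels with a learned convolution; here we slice.
dec1 = np.concatenate([upsample(dec2[..., :8]), enc1], axis=-1)  # 64x64

print(dec1.shape)  # spatial size restored to the input resolution
```

The skip connections are what let U-Net combine coarse context (from the bottleneck) with fine spatial detail (from the early encoder stages).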
AI-driven segmentation methods have been applied to various imaging techniques, each posing unique challenges and requiring specialized approaches:
  • EEG and MEG: While traditionally used for functional brain mapping, AI-assisted segmentation techniques now improve spatial resolution by segmenting source-localized brain activity. DL enhances artifact removal and signal interpretation [11].
  • fNIRS: AI models segment hemodynamic responses from fNIRS data, distinguishing oxygenated and deoxygenated hemoglobin concentrations to map cortical activity with higher precision.
  • EMG: AI-driven segmentation aids in the precise identification of muscle activity patterns, improving applications in neuromuscular disorder diagnosis and prosthetic control.
  • CT and MRI: CNNs and Transformers play a crucial role in segmenting anatomical structures, tumors, and lesions from CT and MRI scans. Multi-modal approaches integrating PET, CT, and MRI enhance diagnostic accuracy [12,13].
  • ECoG and High-Density Arrays: AI models segment cortical activity recorded from ECoG and high-density electrode arrays, enabling more refined brain mapping for epilepsy monitoring and BCI applications [14].

1.3. Challenges and Future Directions

Despite the rapid progress, AI-driven medical image segmentation faces several challenges, including data scarcity and quality, model interpretability, and computational complexity.

The limited availability of annotated medical datasets hinders model generalization. Data augmentation and self-supervised learning offer potential solutions: deep generative models have been proposed to generate realistic, diverse data that conform to the true distribution of medical images, addressing the scarcity of annotated datasets [15,16], and self-supervised learning approaches have been explored to improve model performance in few-shot medical image segmentation scenarios [17].

The black-box nature of DL models raises concerns in clinical applications. Explainable AI (XAI) methods are being explored to improve transparency: recent studies have focused on developing interpretable DL models for medical image analysis to enhance trust and facilitate clinical adoption [18], and human-centered design guidelines have been proposed to create explainable medical imaging AI systems that align with user needs [19].

Finally, advanced deep learning architectures require substantial computational resources, highlighting the need for optimization strategies and efficient hardware implementation. Although traditional segmentation methods offer computational efficiency and interpretability, their performance often deteriorates on complex, noisy, or highly variable medical imaging data [20]. Hybrid approaches that integrate traditional techniques with deep learning aim to strike a balance between computational efficiency and segmentation accuracy. Future research should prioritize enhancing generalizability across diverse datasets, incorporating multi-modal imaging modalities, and developing robust, clinically interpretable AI models to advance precision neurosurgery and other medical applications.
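As a concrete illustration of data augmentation for scarce annotated data, the sketch below generates label-preserving variants of a 2D scan using random flips and mild additive intensity noise; the specific transforms and parameters are illustrative choices, not taken from the cited works:

```python
import numpy as np

def augment(volume, rng):
    """Generate a simple label-preserving variant of a 2D scan:
    random flips plus a small additive intensity perturbation."""
    out = volume.copy()
    if rng.random() < 0.5:
        out = np.flip(out, axis=0)   # vertical flip
    if rng.random() < 0.5:
        out = np.flip(out, axis=1)   # horizontal flip
    out = out + rng.normal(0.0, 0.01, size=out.shape)  # mild intensity noise
    return out

rng = np.random.default_rng(0)
scan = np.random.rand(128, 128)                      # stand-in for one MRI slice
augmented = [augment(scan, rng) for _ in range(4)]   # four extra training samples
print(len(augmented), augmented[0].shape)
```

In practice the same geometric transform must be applied to the segmentation mask so that image and label stay aligned.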

1.4. Importance of Precision Neurosurgery

Precision neurosurgery represents a paradigm shift in the field of neurosurgical interventions, leveraging advanced imaging techniques, AI, and BCI to enhance surgical accuracy, minimize risks, and improve patient outcomes. The evolution of precision neurosurgery has been driven by the need for targeted, minimally invasive procedures that preserve neurological function while effectively treating complex conditions such as brain tumors, epilepsy, Parkinson’s disease, and neurovascular disorders [21].

2. Advanced Neuroimaging Modalities for Precision Neurosurgery

2.1. Role of Neuroimaging in Precision Neurosurgery

State-of-the-art neuroimaging modalities provide critical preoperative and intraoperative insights, allowing for accurate localization of pathological regions and functional mapping of the brain. These imaging techniques include:
  • Magnetic Resonance Imaging (MRI) and Computed Tomography (CT): MRI and CT scans serve as foundational tools for visualizing anatomical structures, aiding in tumor resection, and identifying vascular abnormalities [22].
  • Functional MRI (fMRI) and Diffusion Tensor Imaging (DTI): These modalities provide functional and connectivity-based insights, crucial for preserving eloquent brain regions during surgery [23,24].
  • Electrocorticography (ECoG) and Magnetoencephalography (MEG): These electrophysiological imaging techniques assist in preoperative planning by identifying seizure foci and functionally significant cortical areas [25,26,27].
  • Fluorescence-Guided Surgery (FGS): The use of fluorescence agents such as 5-ALA enhances real-time intraoperative tumor visualization, thereby improving the accuracy of surgical resection [28].
AI-powered tools and robotic-assisted systems have revolutionized neurosurgical precision by augmenting clinical decision-making, minimizing human error, and enhancing surgical dexterity. Machine learning models analyze multimodal neuroimaging data to predict optimal surgical pathways and assess risks, supporting AI-driven surgical planning [29]. Robotic systems, such as the ROSA Brain and NeuroMate, enhance stereotactic procedures by providing millimeter-level precision in electrode placement and tumor excision [30]. AI-driven segmentation models, integrated with augmented reality (AR) and virtual reality (VR), facilitate enhanced intraoperative visualization [31]. BCIs are playing an increasingly vital role in precision neurosurgery by offering real-time feedback on neural activity and enabling direct brain-machine interactions: they assist in real-time electrophysiological monitoring, ensuring critical functional regions are preserved during resection [32], while implantable BCIs, such as Utah arrays and intracortical microelectrodes, restore motor function in patients with spinal cord injuries or stroke [33].
Despite remarkable advancements, precision neurosurgery faces several challenges. Differences in brain anatomy and pathology demand highly personalized surgical approaches [34]. Also, the integration of AI and BCIs raises concerns regarding patient privacy, informed consent, and surgical liability [35]. The high cost of AI-driven surgical technologies limits widespread adoption, particularly in resource-constrained settings [36]. Precision neurosurgery, driven by advancements in neuroimaging, AI, robotics, and BCIs, is redefining the landscape of neurosurgical interventions. While significant challenges remain, ongoing research and technological innovations continue to enhance surgical accuracy, patient safety, and post-operative outcomes.

2.2. Clinical Relevance in Neurosurgical Practice

In neurosurgical practice, the integration of AI into medical imaging has significantly improved the precision and efficiency of clinical interventions. Neurosurgeons frequently rely on complex imaging modalities such as MRI, CT-scan, and functional imaging techniques like fMRI and DTI for preoperative planning and intraoperative guidance [37]. However, manual segmentation of these images is time-consuming and prone to inter-operator variability, which can impact surgical decision-making and patient outcomes [38]. AI-based segmentation models, particularly DL-driven approaches, have demonstrated superior accuracy in delineating pathological and functional regions while minimizing human intervention [39].
  • Applications of AI in Neurosurgery:
  • Brain Tumor Resection: AI-enhanced segmentation assists in accurately distinguishing tumor margins from healthy tissue, thereby reducing the risk of postoperative neurological deficits [40]. Studies have demonstrated that DL models such as CNNs and transformers outperform traditional segmentation methods in identifying tumor boundaries, leading to improved surgical planning [41].
Figure 1 demonstrates how deep learning models, such as CNNs with attention mechanisms, process different MRI sequences to achieve precise tumor boundary delineation. The Z-score normalization technique is used to enhance contrast and improve segmentation accuracy, as shown in the comparison across multiple MRI channels.
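The Z-score normalization mentioned above is straightforward to sketch: each MRI channel is shifted to zero mean and scaled to unit variance, optionally restricting the statistics to a brain mask so that background voxels do not skew them. A minimal NumPy version (the mask option and epsilon are illustrative details):

```python
import numpy as np

def zscore_normalize(channel, mask=None):
    """Z-score normalize one MRI channel: zero mean, unit variance.
    Optionally compute statistics only inside a brain mask."""
    voxels = channel[mask] if mask is not None else channel
    mu, sigma = voxels.mean(), voxels.std()
    return (channel - mu) / (sigma + 1e-8)  # epsilon avoids division by zero

mri = np.random.rand(64, 64) * 1000.0  # arbitrary scanner intensity units
norm = zscore_normalize(mri)
print(round(norm.mean(), 6), round(norm.std(), 6))  # ~0.0 and ~1.0
```

Applying the same normalization per sequence (T1, T2, FLAIR, etc.) puts all channels on a comparable intensity scale before segmentation.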
  • Deep Brain Stimulation (DBS) Planning: Accurate segmentation of subcortical structures is crucial for optimal electrode placement in DBS procedures used to treat movement disorders such as Parkinson's disease [43]. AI-based volumetric segmentation has been shown to enhance the precision of target selection in DBS, thereby improving therapeutic outcomes [44].
Figure 2 depicts directional stimulation mapping using the distal row of segmented electrodes in anterior (A–A′′), lateral (B–B′′), and posterior (C–C′′) orientations. The figure illustrates the current thresholds (in mA) required to elicit transient or sustained sensory side effects in the face (via the ventral posteromedial nucleus, VPM), hand (via the ventral posterolateral nucleus, VPL), and speech-related functions (via the internal capsule), alongside the stimulation levels needed for effective tremor reduction. Software-based modeling visualizes the anatomical distribution of facial (A–C), hand (A′–C′), and capsular (A′′–C′′) responses.
  • Epilepsy Surgery: AI-based identification of seizure foci enhances the precision of both resective and neuromodulatory treatments for epilepsy [46]. Machine learning algorithms, particularly support vector machines (SVMs) and recurrent neural networks (RNNs), have been employed to analyze intracranial EEG (iEEG) signals and detect epileptogenic zones with high accuracy [47].
Figure 3 presents segmentation outputs from multiple deep learning models, overlaid on postoperative T1-weighted MPRAGE scans for different types of epilepsy surgeries. The resection types include: (1) right anterior temporal lobectomy, (2) right temporal polectomy with encephalocoele disconnection, (3) left frontal corticectomy, and (4) left frontal lesionectomy. Dice Similarity Coefficients (DSCs) are provided for each model, demonstrating the accuracy of segmentation across various resection types. Green-highlighted regions indicate high segmentation accuracy, while lower DSC scores reflect areas where automated methods struggle with precision. This visualization underscores the potential of AI-assisted techniques in enhancing post-surgical evaluation and guiding future interventions.
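The Dice Similarity Coefficient reported above measures voxel overlap between a predicted and a reference mask, 2|A∩B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical). A minimal NumPy version with a toy resection mask:

```python
import numpy as np

def dice(pred, truth):
    """Dice Similarity Coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

truth = np.zeros((10, 10), dtype=bool)
truth[2:8, 2:8] = True        # 36-voxel "resection cavity"
pred = np.zeros_like(truth)
pred[3:8, 2:8] = True         # the model misses one row (30 voxels)
print(dice(pred, truth))      # 2*30 / (30 + 36) ≈ 0.909
```

The `denom == 0` branch (both masks empty) is a convention choice; reported DSC values depend on such edge-case handling.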
  • Challenges and Considerations:
Despite these advancements, several challenges must be addressed before AI can be fully integrated into neurosurgical workflows:
  • Interpretability: The "black box" nature of many AI models remains a significant barrier to clinical adoption. To improve transparency, XAI approaches such as attention mechanisms and saliency maps are being explored to provide visual interpretability of AI-generated segmentations [49]. These techniques enhance clinician trust and facilitate regulatory approval [50].
  • Regulatory Approvals: AI-driven medical imaging tools require rigorous validation and approval from regulatory bodies, such as clearance by the U.S. Food and Drug Administration (FDA) and Conformité Européenne (CE) marking in Europe, before they can be deployed in clinical settings [51]. Regulatory frameworks are continually evolving to address concerns related to data privacy, bias, and reliability.
  • Intraoperative Validation: Real-time validation of AI-generated segmentations during surgery remains a challenge. AI must seamlessly integrate with intraoperative imaging systems, such as neuronavigation platforms, to ensure reliable guidance during neurosurgical procedures [52,53]. Additionally, AR and AI-assisted robotics are emerging as potential solutions for improving intraoperative accuracy [54].
The collaboration between medical practitioners and AI researchers is essential for refining these technologies and ensuring their practical deployment in neurosurgical practice. By bridging the gap between computational advancements and real-world clinical applications, AI-driven segmentation and BCI-based neurosurgical interventions have the potential to significantly improve patient outcomes [55].

3. Brain-Computer Interfaces: Principles and Applications

3.1. Fundamentals of BCIs

BCIs are systems that enable direct communication between the brain and external devices by bypassing conventional neuromuscular pathways. BCIs operate by detecting, processing, and translating neural signals into commands that can control computers, prosthetic limbs, communication devices, and other assistive technologies [56]. The fundamental components of a BCI system include:
  • Signal Acquisition
Signal acquisition in BCIs relies on a range of neuroimaging and electrophysiological methods to accurately capture brain activity and translate it into actionable commands. EEG remains the most widely used non-invasive approach due to its high temporal resolution, portability, and affordability, making it suitable for real-time BCI applications [57]. However, EEG suffers from limited spatial resolution due to signal attenuation caused by scalp and skull interference. MEG overcomes some of these limitations by measuring the magnetic fields produced by neuronal activity, providing better spatial resolution than EEG, although its high cost and sensitivity to environmental noise restrict its practical implementation [58]. fNIRS, a non-invasive optical imaging technique, detects cerebral hemodynamic responses and has proven useful for monitoring cognitive states in BCI applications, particularly in scenarios where traditional electrophysiological methods are impractical [59]. For applications requiring higher spatial resolution, ECoG presents a viable semi-invasive alternative, wherein electrodes are placed directly on the cortical surface, providing a balance between high-resolution recordings and reduced signal attenuation compared to non-invasive approaches [60]. For the highest level of precision and direct neuronal activity capture, invasive methods such as single-unit and multi-unit recordings involve implanting microelectrodes to detect action potentials from individual neurons. These approaches offer unparalleled accuracy and control, making them ideal for high-performance BCIs, but they also pose significant surgical risks and biocompatibility challenges [61]. The continuous evolution of these signal acquisition technologies is essential for improving the efficiency and reliability of BCIs in clinical and assistive applications.
  • Signal Processing and Feature Extraction
Signal processing and feature extraction play a crucial role in BCIs by refining raw neural signals and isolating meaningful patterns from noise and redundant data. The initial stage, preprocessing, involves noise filtering, artifact removal, and baseline correction to enhance the clarity and reliability of neural signals. Common techniques include adaptive filtering, independent component analysis (ICA), and wavelet transforms, which help mitigate interference from muscle movements, eye blinks, and environmental electrical noise [62]. Once the signals are cleaned, feature extraction methods identify key neural patterns such as event-related potentials (ERPs), spectral power changes, and phase synchronization, which serve as distinguishing characteristics for BCI applications. These features provide insight into cognitive states and motor intentions, enabling accurate interpretation of brain activity [63]. Finally, classification and machine learning algorithms transform extracted features into actionable commands. Advances in artificial intelligence, particularly DL models, have significantly improved the accuracy and adaptability of BCI systems by leveraging CNNs, RNNs, and hybrid architectures that dynamically adapt to individual users. These methods enhance the decoding of neural signals in real time, improving system performance across various applications, from neurorehabilitation to assistive technologies [64]. The integration of robust signal processing pipelines with advanced AI models continues to drive the evolution of BCIs, enhancing their usability, reliability, and responsiveness.
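As a minimal example of the feature-extraction step, spectral (band) power is one of the most common EEG features; the sketch below estimates it from the magnitude-squared FFT of a synthetic channel (the sampling rate, band limits, and signal model are illustrative):

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Mean spectral power of `signal` within [low, high] Hz,
    estimated from the magnitude-squared FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= low) & (freqs <= high)
    return spectrum[band].mean()

fs = 250  # Hz, a common EEG sampling rate
t = np.arange(0, 2, 1.0 / fs)
# Synthetic channel: a 10 Hz mu rhythm plus broadband noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * np.random.default_rng(1).normal(size=t.size)

mu = band_power(eeg, fs, 8, 12)      # sensorimotor mu band
gamma = band_power(eeg, fs, 30, 45)  # higher-frequency band for comparison
print(mu > gamma)  # the 10 Hz component dominates
```

Real pipelines typically use windowed estimators such as Welch's method rather than a single raw FFT, but the feature itself is the same.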
  • Control and Feedback Mechanisms
BCI systems rely on decoded neural signals to control external devices, creating a closed-loop interaction between the user and the system. This control loop consists of three key components: translation algorithms, output devices, and feedback mechanisms.
The first step in this process is the application of translation algorithms, which map extracted neural features into specific commands. These algorithms employ machine learning techniques to classify neural patterns and associate them with intended actions. Various classification methods, such as linear classifiers, SVMs, and DL approaches, are used to optimize translation accuracy and minimize false activations [65]. Once the neural intent is decoded, output devices execute the corresponding commands. These devices range from robotic arms and exoskeletons to VR environments and communication interfaces. For instance, AR-integrated BCIs enable users to interact with virtual objects through brain signals, paving the way for enhanced human-computer interaction [66,67]. Similarly, P300-based spellers have been developed to facilitate communication for individuals with severe motor disabilities by allowing them to select letters using brain activity alone [68]. Other implementations include drone control via BCI systems, demonstrating the adaptability of neural signal translation across multiple domains.
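A translation algorithm in its simplest form maps an extracted feature vector to the command whose training-class centroid is closest; the sketch below uses synthetic two-dimensional features for two imagined movements (the class means, spread, and labels are illustrative, not from the cited systems):

```python
import numpy as np

# Hypothetical training features (e.g., two band powers) for two MI classes.
rng = np.random.default_rng(0)
left = rng.normal([1.0, 3.0], 0.2, size=(20, 2))   # "left hand" class
right = rng.normal([3.0, 1.0], 0.2, size=(20, 2))  # "right hand" class

# A minimal translation algorithm: nearest class centroid.
centroids = {"LEFT": left.mean(axis=0), "RIGHT": right.mean(axis=0)}

def translate(features):
    """Map an extracted feature vector to a device command."""
    return min(centroids, key=lambda c: np.linalg.norm(features - centroids[c]))

print(translate(np.array([0.9, 2.8])))  # near the "left hand" centroid
```

Linear discriminant analysis, SVMs, and DL classifiers replace this distance rule in practice, but the input-output contract (features in, discrete command out) is the same.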
A crucial aspect of BCIs is the incorporation of feedback mechanisms, which provide real-time sensory feedback to users. This feedback enhances system accuracy and adaptability by leveraging neuroplasticity-driven learning. In EEG-based BCIs, visual, auditory, or haptic feedback is commonly used to guide users in adjusting their neural activity for improved control [69]. For example, when users receive real-time feedback on their brain signal modulation, they can refine their cognitive strategies to enhance performance. This iterative learning process is particularly valuable in neurorehabilitation applications, where BCI systems aid motor recovery by reinforcing brain-muscle coordination [70].

3.2. BCI Paradigms

BCIs operate through various paradigms that define how users generate neural signals to control external devices. These paradigms leverage distinct neural responses, enabling BCI applications in communication, rehabilitation, and assistive technologies.
  • Motor Imagery (MI)
Motor imagery (MI)-based BCIs rely on the user's ability to imagine specific movements, such as hand or foot movements, without executing them physically. This mental rehearsal induces characteristic changes in brain activity, particularly sensorimotor rhythm desynchronization in the primary motor and somatosensory cortices [71]. When a user imagines moving their right hand, for instance, neural activity in the contralateral (left) sensorimotor area exhibits a reduction in oscillatory power, known as event-related desynchronization (ERD), while ipsilateral and surrounding areas may show an increase in oscillatory activity, termed event-related synchronization (ERS). These distinct patterns allow BCIs to classify MI tasks and translate them into control commands for prosthetic limbs, wheelchairs, or virtual avatars. Advances in wearable EEG technology have further enhanced MI-based BCIs, improving their usability and accessibility in real-world applications [72].
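ERD is typically quantified as the percentage power change in a frequency band relative to a pre-cue baseline, with negative values indicating desynchronization. A minimal sketch with illustrative power values:

```python
def erd_percent(baseline_power, task_power):
    """Event-related (de)synchronization as percentage power change
    relative to a pre-cue baseline: negative values indicate ERD,
    positive values indicate ERS."""
    return 100.0 * (task_power - baseline_power) / baseline_power

# Illustrative mu-band power values (arbitrary units).
baseline = 12.0   # rest period before the imagery cue
during_mi = 7.2   # contralateral sensorimotor cortex during imagery
print(erd_percent(baseline, during_mi))  # -40.0: a 40% power drop (ERD)
```

An MI classifier then compares such band-power changes across hemispheres to decide which movement was imagined.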
  • P300 Event-Related Potential (ERP)
The P300-based BCI paradigm capitalizes on the brain's involuntary response to salient stimuli. When a user perceives a rare or meaningful stimulus within a series of non-target stimuli, a positive deflection in EEG activity occurs approximately 300 milliseconds after stimulus onset. This response, known as the P300 component, is widely used in speller interfaces, where users focus on desired letters while the system detects P300 responses to identify their intended selections [73]. P300-based BCIs are particularly beneficial for individuals with severe motor impairments, such as amyotrophic lateral sclerosis (ALS), providing them with an alternative means of communication through brain activity alone.
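Because a single-trial P300 is buried in ongoing EEG, spellers average many stimulus-locked epochs so that uncorrelated noise cancels while the time-locked deflection remains. A synthetic-data sketch (the waveform shape, noise level, and epoch count are illustrative):

```python
import numpy as np

fs = 250                      # Hz
n_samples = int(0.8 * fs)     # 800 ms epochs after each stimulus
t = np.arange(n_samples) / fs
rng = np.random.default_rng(2)

# Simulated target epochs: a positive deflection near 300 ms buried in noise.
p300 = 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
target_epochs = p300 + rng.normal(0, 1.0, size=(30, n_samples))
nontarget_epochs = rng.normal(0, 1.0, size=(30, n_samples))

# Averaging across epochs suppresses uncorrelated noise (by ~1/sqrt(N))
# and reveals the event-related potential.
target_avg = target_epochs.mean(axis=0)
nontarget_avg = nontarget_epochs.mean(axis=0)

peak_time = t[np.argmax(target_avg)]
print(round(peak_time, 2))  # close to 0.30 s
```

A speller compares averaged responses across rows and columns of the stimulus matrix and selects the cell whose epochs contain the strongest P300.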
  • Steady-State Visual Evoked Potentials (SSVEPs)
SSVEP-based BCIs exploit the brain’s response to periodic visual stimulation. When a user fixates on a flickering light source with a specific frequency, their occipital lobe generates EEG signals that oscillate at the same frequency or its harmonics. This frequency-locked neural response allows for highly efficient BCI control, as the system can rapidly identify the frequency the user is attending to and infer their intended command [74]. Due to their high signal-to-noise ratio and minimal training requirements, SSVEP-based BCIs are widely used for high-speed communication, gaming, and environmental control systems.

Each BCI paradigm offers unique advantages and is suited to different applications, from assistive communication devices to neurorehabilitation and augmented reality control. The selection of a specific paradigm depends on factors such as signal reliability, user comfort, and system robustness in real-world environments.
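The frequency-locked SSVEP response makes decoding simple in principle: compare spectral power at each candidate stimulation frequency and pick the strongest. The NumPy sketch below uses only the fundamental frequency; real systems also exploit harmonics and methods such as canonical correlation analysis:

```python
import numpy as np

def detect_ssvep(signal, fs, candidate_freqs):
    """Pick the candidate stimulation frequency with the strongest
    spectral power in the occipital signal (fundamental only)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(powers))]

fs = 250
t = np.arange(0, 4, 1.0 / fs)  # 4 s window -> 0.25 Hz frequency resolution
# The user attends the 12 Hz flicker; 8, 10, 12, and 15 Hz targets are shown.
eeg = np.sin(2 * np.pi * 12 * t) + 0.3 * np.random.default_rng(3).normal(size=t.size)
print(detect_ssvep(eeg, fs, [8.0, 10.0, 12.0, 15.0]))  # 12.0
```

Choosing a window length whose frequency resolution places every candidate on an exact FFT bin (as here) avoids spectral leakage between neighbouring targets.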

3.3. Neuroimaging Modalities for BCI

BCIs rely on diverse neuroimaging techniques to capture neural activity and translate it into actionable commands. These modalities vary in invasiveness, spatial and temporal resolution, signal type, and application scope [75,76]. This section outlines electrophysiological, hemodynamic, metabolic, and hybrid neuroimaging techniques used in BCI systems.
  • Electrophysiological Modalities
Electrophysiological modalities measure electrical brain activity with high temporal precision, making them essential for real-time BCI applications. These techniques facilitate rapid detection of neural signals, enabling efficient communication and control systems.
  • Electroencephalography (EEG)
EEG is one of the most widely used electrophysiological modalities in BCI research due to its non-invasiveness, affordability, and high temporal resolution. It operates by recording voltage fluctuations on the scalp that result from synchronized neuronal activity, enabling real-time neural signal acquisition. EEG offers a temporal resolution in the millisecond range, making it highly suitable for applications requiring rapid signal processing, such as MI-based BCIs, P300 spellers, and SSVEP-driven systems [77]. Despite these advantages, EEG suffers from certain limitations, particularly its poor spatial resolution, which ranges from 10 to 50 mm, making precise source localization challenging. Moreover, EEG signals are highly susceptible to noise and artifacts caused by muscle activity, eye movements, and external electrical interference, necessitating advanced denoising and artifact rejection techniques. In the context of BCI applications, EEG-based MI systems leverage neural desynchronization patterns to facilitate control of prosthetic devices and communication interfaces [78], while P300-based BCIs exploit event-related potentials elicited in response to target stimuli, commonly used in speller systems [79]. Additionally, SSVEP-based BCIs rely on periodic visual stimulation to generate robust and high-speed control commands. Despite these challenges, EEG remains a fundamental tool in BCI research, with ongoing efforts focused on improving signal acquisition methods, refining feature extraction algorithms, and integrating multimodal approaches to enhance its reliability and usability in both clinical and non-clinical settings.
  • Electrocorticography (ECoG)
ECoG is an invasive electrophysiological technique that records electrical activity directly from the cortical surface using subdural electrode grids. Unlike EEG, which measures neural activity through the scalp, ECoG electrodes are implanted beneath the dura mater, providing significantly improved spatial resolution, often reaching approximately 1 mm [80]. This enhanced spatial precision allows for more accurate localization of neural signals, leading to higher signal stability and reduced susceptibility to artifacts caused by muscle movement or environmental interference. Due to these advantages, ECoG has been extensively explored for high-performance neuroprosthetic control, enabling individuals with motor impairments to execute precise motor commands through direct cortical signal processing. Furthermore, its application in speech BCIs has shown promising results in assisting individuals with locked-in syndrome by decoding cortical activity associated with speech production and translating it into real-time communication outputs [81]. Despite these benefits, the necessity for surgical implantation presents a significant drawback, as it inherently carries risks of infection, inflammation, and long-term biocompatibility concerns. Nevertheless, ongoing research aims to refine ECoG-based BCIs by developing minimally invasive electrode implantation techniques and optimizing signal processing algorithms to enhance both safety and efficacy in clinical and assistive applications.
  • Local Field Potentials (LFPs)
LFPs are electrophysiological signals that measure synaptic activity from neuronal populations using intracortical electrodes. Unlike EEG, which records electrical activity from the scalp, or ECoG, which captures signals from the cortical surface, LFPs are obtained directly from within the brain, allowing for the detection of lower-frequency oscillations generated by local neuronal assemblies [80]. Due to their proximity to neural sources, LFPs offer significantly higher signal fidelity and decoding accuracy compared to non-invasive and subdural methods, making them highly suitable for applications requiring fine-grained neural control. This high-quality signal acquisition has led to their extensive use in neuroprosthetic limb control, where LFP-based BCIs enable individuals with motor impairments to achieve precise movement execution through direct neural interfacing. Additionally, LFPs play a crucial role in adaptive neurorehabilitation, as their detailed capture of neural dynamics allows for real-time feedback systems that can enhance neuroplasticity and promote motor recovery in patients with neurological disorders [81]. However, despite these advantages, the invasive nature of LFP acquisition necessitates chronic electrode implantation, which carries inherent surgical risks, including infection and long-term stability issues. Current research focuses on refining electrode materials and implantation techniques to improve biocompatibility, reduce immune responses, and extend the longevity of LFP-based interfaces for both clinical and assistive applications.
  • Single-Unit and Multi-Unit Recordings
Single-unit and multi-unit recordings involve the capture of action potentials from individual neurons using microelectrodes, offering the highest spatial resolution among all electrophysiological modalities (~10 µm). By directly measuring the electrical activity of neurons, this technique provides precise neural decoding, enabling detailed analysis of neural circuit dynamics [82]. Due to their exceptional accuracy, single-unit and multi-unit recordings are particularly valuable in high-precision robotic arm control, allowing individuals with severe motor impairments to perform complex, fine-motor tasks with neuroprosthetic devices [83,84]. Additionally, these recordings play a crucial role in sensory feedback integration, where neural signals are used to restore tactile perception in brain-controlled prosthetic limbs. This advancement significantly enhances the user's ability to interact with their environment by providing real-time sensory input alongside motor control [85]. However, despite their advantages, single-unit and multi-unit recordings remain highly invasive, requiring intracortical microelectrode implantation. This introduces challenges such as long-term stability issues, neuronal loss around electrode sites, and immune responses that can degrade signal quality over time. Ongoing research aims to develop biocompatible materials and advanced neural interface technologies to extend the longevity and reliability of these systems, making them more viable for long-term clinical applications.
  • Hemodynamic and Metabolic Modalities
Hemodynamic and metabolic modalities are neuroimaging techniques that assess neural activity by measuring changes in blood flow, oxygenation, or metabolic processes within the brain. These methods are based on the principle that active brain regions require increased oxygen and energy, leading to localized vascular and metabolic changes.
  • Functional Near-Infrared Spectroscopy (fNIRS)
fNIRS is a non-invasive neuroimaging technique that measures changes in oxygenated and deoxygenated hemoglobin using infrared light, providing insights into cortical activity. The technique relies on the absorption properties of near-infrared light to detect hemodynamic responses associated with neural activation, making it particularly valuable for studying cognitive and affective states [86]. One of the primary advantages of fNIRS is its portability, allowing for real-world applications such as cognitive workload monitoring and mental state classification. Additionally, its spatial resolution (approximately 1–3 cm) surpasses that of EEG, making it a viable alternative for certain BCI applications [87]. However, fNIRS has inherent limitations, including low temporal resolution (~1–2 seconds) and susceptibility to artifacts caused by scalp and skull thickness, which can affect signal reliability. Recent advancements have integrated fNIRS with EEG to create hybrid systems that enhance classification accuracy and robustness in mental state detection. This multimodal approach leverages the complementary strengths of both techniques, combining fNIRS’s hemodynamic insights with EEG’s high temporal resolution, resulting in improved BCI performance for applications such as neurorehabilitation and cognitive workload assessment.
  • Functional Magnetic Resonance Imaging (fMRI)
fMRI is a powerful neuroimaging technique that measures blood-oxygen-level-dependent (BOLD) signals to infer underlying neuronal activity. The technique capitalizes on changes in cerebral blood flow and oxygenation, offering high spatial resolution (~1 mm) and whole-brain coverage [88]. Due to its ability to capture deep cortical and subcortical structures, fMRI has become a critical tool for studying cognitive functions, neural disorders, and BCI applications.
One of the most promising applications of fMRI-BCI is neurofeedback training, where individuals learn to modulate their own brain activity through real-time feedback. This approach has been explored for rehabilitative purposes, including restoring brain function in substance use disorders and psychiatric conditions. Additionally, fMRI-based brain-state decoding has shown potential in psychiatric and neurological applications, enabling the identification of altered brain states in conditions such as disorders of consciousness and schizophrenia [89]. However, fMRI's poor temporal resolution (~2–6 seconds), high costs, and requirement for participant immobility limit its practical use in real-time BCI systems.
Despite these challenges, ongoing research aims to refine fMRI-BCI paradigms by integrating machine learning techniques for more accurate and rapid brain-state classification. The combination of fMRI with other modalities, such as EEG or fNIRS, may further enhance its applicability in real-world neurotechnology.
  • Magnetoencephalography (MEG)
MEG is an advanced neuroimaging modality that detects the weak magnetic fields generated by neuronal electrical activity. Unlike EEG, which measures voltage fluctuations on the scalp, MEG captures direct neuronal activity with high temporal resolution (~1 ms) and improved spatial localization (~3–5 mm) due to the minimal distortion of magnetic fields by the skull and scalp [90].
One of the primary applications of MEG in BCIs is motor imagery (MI)-based control systems. MEG-BCIs leverage motor-related cortical activity to facilitate hands-free interaction with external devices, showing promise in neurorehabilitation and assistive technology. Additionally, real-time neurofeedback applications using MEG enable users to modulate their brain activity for cognitive and clinical interventions, including attention training and psychiatric therapy [91].
However, the widespread adoption of MEG-BCIs is constrained by its high cost, the necessity of a magnetically shielded environment, and the limited portability of current MEG systems. Advances in optically pumped magnetometers (OPMs) are being explored to overcome these challenges, potentially enabling wearable MEG technology that preserves its high spatial and temporal resolution. Integrating MEG with EEG or fNIRS may further enhance the robustness and accessibility of BCI applications.
  • Emerging and Hybrid Modalities
To improve accuracy and usability, hybrid BCI systems integrate multiple neuroimaging modalities, leveraging the strengths of each while mitigating their respective limitations. By combining different signal acquisition techniques, these systems provide a more comprehensive understanding of neural activity, enhancing both spatial and temporal resolution.
  • EEG-fNIRS Hybrid Systems
EEG-fNIRS hybrid systems integrate the high temporal resolution of EEG with the improved spatial resolution of fNIRS. EEG detects rapid neural oscillations, while fNIRS provides information on hemodynamic responses associated with neuronal activity. The complementary nature of these modalities makes EEG-fNIRS particularly useful for applications requiring both fine-grained temporal insights and spatially localized brain activity measurements [92]. EEG-fNIRS hybrid systems have been employed for real-time emotion classification by capturing both electrophysiological and hemodynamic changes linked to affective states, with significant implications for affective computing and assistive technologies. The combination of EEG’s rapid detection of cognitive fluctuations and fNIRS’s ability to measure sustained metabolic responses also enhances mental workload monitoring, which is particularly useful in fields such as human-computer interaction, aviation, and neuroergonomics [93].
  • EEG-fMRI Hybrid Systems
EEG-fMRI hybrid systems merge the high spatial precision of fMRI with the millisecond-level temporal resolution of EEG, allowing researchers to simultaneously track both fast neural dynamics and precise anatomical localization of brain activity. This synergy is particularly beneficial for investigating brain-state transitions and neurological disorders [94]. EEG-fMRI is widely used in studying consciousness, epilepsy, schizophrenia, and other neuropsychiatric conditions. The integration of neural oscillations from EEG with fMRI’s whole-brain activity maps helps in identifying abnormal brain networks underlying these disorders [95].
  • Invasive Hybrid BCIs
Invasive BCIs utilize electrode arrays implanted directly on or within the brain tissue, offering unparalleled signal fidelity and precision. Hybrid configurations combine different invasive recording techniques to optimize motor control and sensory feedback. ECoG and LFP recordings enhance BCI performance by capturing both cortical surface activity and deeper subcortical signals. This dual-layer approach has shown promise in motor rehabilitation and assistive device control [96]. Hybrid single-unit recordings and ECoG enable high-precision decoding of fine motor movements and sensory feedback integration, improving prosthetic control and neurostimulation therapies. These systems are being explored for applications in spinal cord injury and locked-in syndrome patients [97,98,99].

3.4. Latest Developments

Advancements in multimodal BCI integration, wireless brain implants, and AI-enhanced neuroimaging are expected to improve BCI performance, accessibility, and real-world applications [100,101,102]. The integration of hybrid mathematical models, self-supervised learning, and neuro-symbolic AI is expected to further advance BCI performance and generalizability [103,104].

BCIs have emerged as a transformative technology in neurosurgery, offering real-time brain activity monitoring, neurofeedback for intraoperative precision, and assistive solutions for motor rehabilitation. This section examines the diverse applications of BCI in neurosurgical settings, focusing on preoperative planning, intraoperative monitoring, and postoperative rehabilitation. BCI-based neuroimaging techniques assist neurosurgeons in mapping functional areas of the brain, identifying eloquent cortical regions, and predicting surgical outcomes. AI algorithms facilitate precise delineation of brain tumors and critical anatomical structures, aiding in meticulous preoperative planning. For instance, fully automatic brain tumor segmentation techniques have been developed for 3D evaluation in augmented reality (AR) environments, enhancing the accuracy of surgical interventions [105].

AI-driven AR systems overlay critical information onto the surgeon's field of view, improving real-time decision-making during procedures. AR technology was adopted early in neurosurgery, amplifying user perception by integrating virtual content into the tangible world and displaying it simultaneously and in real time [106]. ECoG-guided functional mapping enables real-time localization of motor and language areas during neurosurgical procedures, whereas EEG-based source localization offers a non-invasive approach for mapping epileptic foci and assessing cortical function [107]. fMRI-BCI integration enhances pre-surgical brain mapping by combining hemodynamic and electrophysiological data.
BCI assists in distinguishing between functional and pathological brain tissue, optimizing resection margins while preserving critical cortical areas [108].
BCI-driven intraoperative neuromonitoring (IONM) plays a crucial role in ensuring surgical precision and minimizing neurological deficits. BCI-controlled somatosensory evoked potentials (SSEPs) provide continuous feedback on spinal cord integrity [109]. Motor-evoked potentials (MEPs) track motor pathway function, aiding in tumor resection and deep brain stimulation (DBS) procedures [110]. Adaptive BCI-guided cortical stimulation modulates brain activity to prevent functional deterioration [111]. AI-enhanced BCI systems detect and correct intraoperative neural disturbances in real time [112].

BCI technology extends beyond surgery, enabling motor recovery, cognitive training, and neuroprosthetic control for patients with neurological deficits. BCI-based neurorehabilitation utilizes motor imagery and real-time feedback to enhance neuroplasticity [113]. Exoskeletons and robotic prosthetics controlled by BCIs facilitate limb function recovery [114,115,116]. BCI-driven speech synthesis aids communication for patients with severe speech impairment [117], and neurofeedback therapy assists in cognitive recovery post-surgery. Future directions in BCI-guided neurosurgery include hybrid BCI models integrating fNIRS, EEG, and ECoG for enhanced neurosurgical guidance [118], alongside wireless and non-invasive BCIs for portable neuromonitoring.

4. AI-Driven Brain Image Segmentation: State-of-the-Art

4.1. Machine Learning and Deep Learning in Image Segmentation

The integration of ML and DL methodologies into medical image segmentation has fundamentally transformed neurosurgical planning and intervention. Historically, segmentation was performed manually by radiologists and neurosurgeons, a process that is both labor-intensive and susceptible to intra- and inter-observer variability [119]. The advent of AI-driven segmentation, particularly with CNNs, has significantly enhanced segmentation accuracy, efficiency, and reproducibility [120].
DL architectures, including U-Net, DeepLabV3+, and vision transformer models, have demonstrated superior capabilities in extracting hierarchical features from MRI and CT scans, facilitating the precise delineation of tumors, white matter lesions, and other anatomically significant brain structures [121,122]. Hybrid approaches that combine CNNs with transformer-based architectures have been developed to improve global contextual awareness while preserving high-resolution local features. Furthermore, DL models trained on extensive, diverse datasets exhibit strong generalization capabilities across different imaging modalities, thereby reducing dependence on manual annotations and enhancing clinical applicability [123].

4.2. Mathematical Formulation of CNN-Based Segmentation

Mathematically, CNN-based segmentation can be framed as a pixel-wise classification or regression task. Given an input brain scan $X$ and the corresponding ground-truth segmentation mask $Y$, the objective is to train a function $f_{\theta}$ such that:
$$\hat{Y} = f_{\theta}(X)$$
where $\hat{Y}$ represents the predicted segmentation mask and $\theta$ denotes the learnable parameters of the network.
Loss functions play a crucial role in optimizing segmentation models. Commonly employed loss functions include:
  • Cross-entropy loss, used for pixel-wise classification:
    $$\mathcal{L}_{CE} = -\sum_{i} \left[ Y_i \log \hat{Y}_i + (1 - Y_i) \log(1 - \hat{Y}_i) \right]$$
  • Dice loss, which quantifies the degree of overlap between the predicted and true segmentation masks:
    $$\mathcal{L}_{Dice} = 1 - \frac{2 \sum_{i} Y_i \hat{Y}_i}{\sum_{i} Y_i + \sum_{i} \hat{Y}_i}$$
  • Focal loss, designed to mitigate class imbalance by down-weighting easily classified examples:
    $$\mathcal{L}_{Focal} = -\sum_{i} \alpha (1 - \hat{Y}_i)^{\gamma} Y_i \log \hat{Y}_i$$
Training optimization techniques, such as stochastic gradient descent (SGD) and Adam, are employed to iteratively update network parameters and minimize the loss function, ensuring convergence to an optimal segmentation solution.
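As a concrete illustration, the three loss functions above can be sketched in a few lines of NumPy. The masks and predicted probabilities below are toy values chosen for clarity, not drawn from any dataset discussed in this review:

```python
import numpy as np

def cross_entropy_loss(y_true, y_pred, eps=1e-7):
    """Pixel-wise binary cross-entropy, averaged over all pixels."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def dice_loss(y_true, y_pred, eps=1e-7):
    """1 minus the (soft) Dice coefficient over the flattened masks."""
    inter = np.sum(y_true * y_pred)
    return 1.0 - (2.0 * inter + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)

def focal_loss(y_true, y_pred, alpha=0.25, gamma=2.0, eps=1e-7):
    """Focal loss for the positive class, down-weighting easy pixels."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(alpha * (1 - y_pred) ** gamma * y_true * np.log(y_pred))

# Toy 2x2 masks: a near-perfect prediction drives the Dice loss toward 0.
gt   = np.array([[1, 0], [0, 1]], dtype=float)
pred = np.array([[0.9, 0.1], [0.1, 0.9]])
print(round(dice_loss(gt, pred), 3))  # → 0.1
```

In a training loop, one of these (or a weighted combination, such as cross-entropy plus Dice) would serve as the objective minimized by SGD or Adam.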

4.3. Comparison of Traditional vs. AI-Based Segmentation Methods

  • Manual Segmentation
Historically, brain image segmentation has relied on manual delineation performed by radiologists and neurosurgeons using specialized software. While manual segmentation ensures expert-driven accuracy, it is highly time-consuming and introduces subjective variability in results.
  • AI-Based Segmentation
In contrast, AI-driven segmentation methods, particularly DL-based approaches, present several advantages over manual techniques:
  • Speed: AI models can process and segment brain images within seconds, significantly reducing analysis time compared to manual segmentation, which can take hours.
  • Accuracy: DL models often achieve Dice similarity coefficients exceeding 0.90, demonstrating expert-level performance.
  • Reproducibility: AI models eliminate inter-observer variability, ensuring consistent segmentation results across different datasets and clinical environments.
Despite these advantages, AI-based segmentation poses several challenges:
  • Dataset Bias: Models trained on limited datasets may exhibit reduced generalizability to unseen patient populations.
  • Interpretability: The inherent "black box" nature of DL models complicates clinical validation and trust in automated segmentation results.
  • Regulatory Constraints: AI-driven segmentation tools require extensive validation and regulatory approvals before integration into standard clinical workflows.
Ongoing advancements in AI-driven segmentation aim to address these limitations through domain adaptation strategies, self-supervised learning frameworks, and explainable AI (XAI) methodologies. These improvements are critical for enhancing the robustness, reliability, and clinical adoption of AI-assisted segmentation in neurosurgical applications.

5. Hybrid BCI and Image Segmentation Model for Precision Neurosurgery

As shown in Figure 4, the system workflow begins with real-time neural signal acquisition, where intracranial EEG and functional neuroimaging data are captured to assess the patient’s neurological state. These neural signals are processed using advanced machine learning algorithms, enabling the extraction of relevant features and the classification of critical neural activity patterns. Simultaneously, DL-based image segmentation techniques analyze high-resolution brain scans to delineate surgical targets with sub-millimeter precision. The integration of these two modalities ensures enhanced localization of pathological regions, thereby improving surgical planning and electrode placement accuracy.

5.1. System Architecture and Workflow

The hybrid BCI and image segmentation model integrates neural signal acquisition with DL-based medical imaging to enhance precision neurosurgery. The system architecture comprises the following key components:
  • Neural Signal Acquisition: EEG and ECoG signals are collected using high-resolution sensors to capture real-time brain activity. Recent advances in non-invasive and minimally invasive BCI techniques improve the spatial resolution and signal fidelity, enabling finer neuro-modulatory applications [79,91,125].
  • Preprocessing Pipeline: Raw neural signals undergo artifact removal, band-pass filtering, and feature extraction to ensure noise-free input for classification. State-of-the-art signal processing frameworks integrate ICA and wavelet decomposition to enhance the robustness of feature extraction [126].
  • DL-Based Image Segmentation: MRI and CT images are processed using transformer-based segmentation models, such as Swin UNETR, for precise delineation of brain structures. The combination of CNNs and self-attention mechanisms significantly improves segmentation accuracy in glioma detection and tumor boundary definition [38].
  • Decision Support System (DSS): The integration of BCI-derived cognitive feedback and AI-based image analysis aids neurosurgeons in optimizing surgical interventions. Multimodal data fusion techniques enhance real-time surgical decision-making, reducing intraoperative errors and improving patient outcomes [127].
  • Cloud Integration: A cloud-based AI/ML framework ensures scalability and real-time computational efficiency. Federated learning models deployed in cloud-based medical AI systems facilitate secure, distributed model training while maintaining patient data privacy [128].

5.2. Signal Processing for Real-Time Neurosurgical Assistance

Effective real-time neurosurgical assistance depends on advanced signal processing techniques. The following methods are employed:
  • Fourier and Wavelet Transforms: Fourier and wavelet transforms are essential mathematical tools for analyzing EEG signals in the frequency domain. The Fourier Transform (FT) decomposes EEG waveforms into constituent frequency components, allowing researchers to identify specific oscillatory patterns associated with cognitive processes and motor intentions. However, the FT assumes stationarity in the signal, which is not always applicable to dynamic brain activity [129].
To overcome this limitation, the Wavelet Transform (WT) is utilized, offering superior time-frequency resolution. The WT allows EEG signals to be analyzed across multiple frequency scales, making it particularly useful for detecting transient neurological events such as epileptic seizures, event-related potentials (ERPs), and motor imagery (MI) tasks in neurosurgical applications. By identifying characteristic frequency bands (e.g., alpha, beta, gamma waves), the system can classify neural states with high accuracy, facilitating real-time decision-making in surgical environments.
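The frequency-domain analysis described above can be illustrated with a minimal NumPy sketch. The signal is synthetic, with an artificial 10 Hz alpha component deliberately made stronger than a 20 Hz beta component:

```python
import numpy as np

fs = 256                              # sampling rate (Hz), typical for EEG
t = np.arange(0, 4, 1 / fs)           # 4-second epoch
# Synthetic EEG: a dominant 10 Hz alpha rhythm, a weaker 20 Hz beta
# rhythm, and Gaussian noise (all amplitudes are illustrative).
rng = np.random.default_rng(0)
eeg = (2.0 * np.sin(2 * np.pi * 10 * t)
       + 0.5 * np.sin(2 * np.pi * 20 * t)
       + 0.2 * rng.standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(eeg)) ** 2   # power spectrum (FT assumes stationarity)
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)

def band_power(lo, hi):
    """Total spectral power in the [lo, hi) Hz band."""
    return spectrum[(freqs >= lo) & (freqs < hi)].sum()

alpha = band_power(8, 13)
beta = band_power(13, 30)
print(alpha > beta)   # the alpha component dominates, as constructed
```

A wavelet decomposition would replace the single global FFT with localized time-frequency atoms, which is what makes it suitable for transient events such as seizures.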
  • Independent Component Analysis (ICA): Neural signal recordings, especially EEG, often contain artifacts from non-neural sources such as eye blinks, muscle movements, and external electrical noise. ICA is a powerful statistical technique used to separate and remove these unwanted artifacts while preserving relevant neural information.
ICA operates by assuming that recorded EEG signals are a mixture of independent sources. By applying an optimization algorithm, ICA isolates neural components from noise, significantly enhancing the signal quality for real-time decoding. This method is particularly beneficial in neurosurgical applications where precise neural activity monitoring is critical, as it ensures that the decoded signals accurately reflect the patient’s cognitive state rather than extraneous physiological artifacts [130].
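A minimal illustration of the ICA principle, assuming a toy two-channel mixture of a sinusoidal "neural" source and a square-wave "artifact" (both non-Gaussian, as ICA requires). The deflationary FastICA update with a tanh nonlinearity shown here is one standard variant, intended as a sketch rather than a production EEG pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
t = np.linspace(0, 8, n)
# Two independent sources mixed into two observed channels.
s1 = np.sin(2 * np.pi * 5 * t)                 # "neural" rhythm
s2 = np.sign(np.sin(2 * np.pi * 3 * t))        # square-wave "artifact"
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])         # unknown mixing matrix
X = A @ S                                      # observed channels

# Whitening: decorrelate and rescale the observations.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Xw = (E / np.sqrt(d)) @ E.T @ X

# Deflationary FastICA: extract one unit vector at a time.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    for _ in range(200):
        g = np.tanh(w @ Xw)
        w_new = (Xw * g).mean(axis=1) - (1 - g ** 2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)     # orthogonalize vs found rows
        w = w_new / np.linalg.norm(w_new)
    W[i] = w

recovered = W @ Xw
# Each recovered component should correlate strongly (up to sign and
# permutation) with one of the original sources.
corr = np.abs(np.corrcoef(np.vstack([S, recovered]))[:2, 2:])
print(np.round(corr.max(axis=1), 2))
```

In an EEG pipeline, the component resembling the artifact would be zeroed out and the remaining components projected back to channel space.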
  • Deep Neural Networks (DNNs): Recent advancements in DL have significantly improved EEG-based neural decoding. CNNs and RNNs are particularly effective in extracting spatial and temporal features from EEG data, enabling the classification of brain states with high precision [131].
    CNNs: These networks process EEG signals as spatially structured data, identifying patterns related to motor imagery, cognitive load, and surgical stress responses. CNNs efficiently learn hierarchical representations, making them robust against variations in electrode placement and signal noise.
    RNNs and Long Short-Term Memory (LSTM) Networks: Unlike CNNs, RNNs capture temporal dependencies in EEG signals. LSTM networks, a variant of RNNs, are particularly effective in modeling sequential EEG data, predicting user intent, and tracking dynamic changes in brain activity over time.
By integrating CNNs and RNNs, DL models can classify MI tasks in real time, allowing for precise control of neuroprosthetics, robotic assistants, or surgical guidance systems.
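The convolutional feature-extraction idea can be sketched without any DL framework. The kernel below stands in for a single learned CNN filter tuned to a 20 Hz rhythm, and the signal is a synthetic burst rather than real EEG; a trained network would learn many such filters from data:

```python
import numpy as np

def conv1d(x, kernel):
    """'Valid' 1-D convolution: slide the kernel across the signal."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def relu(x):
    return np.maximum(x, 0)

def max_pool(x, size):
    """Non-overlapping max pooling, truncating any remainder."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

# Synthetic single-channel epoch with a 20 Hz burst around t = 1 s.
fs = 128
t = np.arange(0, 2, 1 / fs)
sig = np.where((t > 0.9) & (t < 1.1), np.sin(2 * np.pi * 20 * t), 0.0)

# A 20 Hz sinusoidal kernel acts like one learned temporal filter.
kernel = np.sin(2 * np.pi * 20 * np.arange(0, 0.05, 1 / fs))
features = max_pool(relu(conv1d(sig, kernel)), size=16)

# The pooled feature map peaks in the bins covering the burst.
print(int(np.argmax(features)))
```

An RNN/LSTM stage would then consume such feature sequences over time to model temporal dependencies, e.g. transitions between motor-imagery states.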
  • Kalman Filters (KFs) and Hidden Markov Models (HMMs): Decoding neural signals in real time involves inherent uncertainty due to noise, signal fluctuations, and measurement errors. KFs and HMMs are probabilistic frameworks designed to address these challenges by smoothing and predicting neural signal patterns.
    Kalman Filters: These are widely used in brain-computer interfaces to estimate dynamic brain states based on noisy EEG measurements. In neurosurgical applications, Kalman filters improve the real-time tracking of neural activity, making it possible to predict intended movements with greater precision.
    Hidden Markov Models: HMMs are particularly effective for modeling sequential neural events, such as transitions between different mental states or MI patterns. HMMs assign probabilistic states to EEG sequences, enhancing the accuracy of neurofeedback and BCI-driven assistive technologies.
By combining KFs and HMMs, neurosurgical assistance systems can achieve enhanced robustness, ensuring reliable performance even under variable conditions [132].
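The predict-update cycle of a Kalman filter can be shown with a scalar example. The "steady neural feature" and noise levels below are hypothetical, chosen only to illustrate smoothing of noisy measurements:

```python
import numpy as np

def kalman_1d(measurements, q=1e-4, r=0.04):
    """Scalar Kalman filter for a near-constant state:
    x_k = x_{k-1} + w (process noise variance q),
    z_k = x_k + v     (measurement noise variance r)."""
    x, p = 0.0, 1.0               # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q                 # predict: variance grows by process noise
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the innovation (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(1)
true_level = 0.8                                   # hypothetical steady feature
z = true_level + 0.2 * rng.standard_normal(500)    # noisy readings
est = kalman_1d(z)
# The filtered trajectory has a much lower error than the raw measurements.
print(np.mean((est - true_level) ** 2) < np.mean((z - true_level) ** 2))
```

An HMM would sit on top of such filtered features, assigning probabilistic state labels (e.g. rest vs. motor imagery) to the smoothed sequence.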

5.3. Automated Brain Image Analysis Using DL

The automated brain image analysis pipeline integrates state-of-the-art DL models for robust segmentation. Advanced DL techniques are revolutionizing brain image analysis, offering automated, high-accuracy segmentation, classification, and diagnostic insights. The key methodologies used include:
  • Transformer-Based Segmentation: Traditional convolutional networks often struggle to maintain spatial consistency in brain MRI segmentation. Transformer-based models such as Swin UNETR and TransUNet address this limitation by incorporating self-attention mechanisms that improve feature representation across long-range spatial dependencies.
    Swin UNETR: A hierarchical vision transformer that refines feature extraction while preserving high-resolution structural details in brain MRI scans.
    TransUNet: A hybrid model that combines CNN feature extraction with transformer-based contextual modeling, leading to superior segmentation accuracy in neurosurgical planning and brain tumor delineation [133].
  • Hybrid Attention Mechanisms: DL-based brain segmentation benefits from hybrid attention models, which combine self-attention (global feature learning) and spatial attention (local feature refinement). This approach enhances the precision of region delineation, crucial for neurosurgical decision-making [134].
  • Self-Supervised Learning (SSL): One major limitation of DL in medical imaging is the reliance on large, manually labeled datasets. SSL mitigates this issue by leveraging contrastive learning techniques to pre-train models using unlabeled data. This method significantly reduces annotation requirements while maintaining high segmentation accuracy [135].
  • Multi-Modal Fusion: Combining data from multiple imaging modalities, including MRI, CT, and fMRI, enhances diagnostic accuracy by integrating complementary information. DL models perform multi-modal fusion using attention mechanisms, improving robustness against modality-specific noise and artifacts [136].

5.4. Integration with Cloud-Based AI/ML Platforms

To ensure seamless operation in clinical settings, the system is integrated with cloud-based AI/ML platforms. Real-time neurosurgical assistance requires seamless integration with cloud-based AI/ML platforms to enhance computational efficiency, security, and system adaptability.
  • Edge Computing for Low-Latency Processing: To ensure real-time inference in surgical settings, edge computing is employed, enabling on-device processing with minimal latency. This is critical for applications requiring immediate neural signal decoding and feedback mechanisms [137].
  • Federated Learning for Privacy-Preserving AI: Federated learning enables decentralized model training across multiple healthcare institutions while maintaining data privacy. This ensures compliance with regulatory frameworks such as GDPR and HIPAA [138,139].
  • AutoML for Continuous Model Optimization: AutoML techniques automate model selection, hyperparameter tuning, and retraining, allowing continuous improvement of neurosurgical AI models [140].
  • Blockchain for Data Integrity: Blockchain technology ensures tamper-proof medical records through smart contracts, enhancing transparency and security in neurosurgical data management. Smart contracts ensure unbiased and tamper-proof record-keeping of surgical decisions and patient data [141].

6. Performance Evaluation and Statistical Analysis

The evaluation of BCI systems and image segmentation models requires rigorous performance assessment through quantitative metrics and statistical validation. This section outlines the key evaluation metrics for BCI systems and segmentation accuracy, followed by statistical significance testing methods used to validate the experimental results.

6.1. Performance Metrics for BCI Systems

BCI systems are evaluated primarily based on their efficiency in translating neural signals into meaningful outputs. Information Transfer Rate (ITR) quantifies the speed and efficiency of a BCI system in transmitting information. It is measured in bits per minute (bpm) and is calculated using the formula:
$$ITR = \log_2 N + P \log_2 P + (1 - P) \log_2 \left( \frac{1 - P}{N - 1} \right)$$
where $N$ represents the number of possible classes (commands) and $P$ is the classification accuracy. ITR provides a measure of how effectively a BCI system can communicate information within a given time frame. Classification accuracy itself is the proportion of correctly classified trials over the total number of trials, expressed as a percentage; it serves as a direct measure of the reliability of the BCI system in distinguishing between different mental states or commands, with higher accuracy indicating better performance in decoding neural signals.
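The Wolpaw ITR formula above can be computed directly. The 4-class, 90%-accuracy, 20-selections-per-minute example below is purely illustrative:

```python
import math

def itr_bits_per_trial(n_classes, accuracy):
    """Wolpaw information transfer rate in bits per trial.
    Assumes 0 < accuracy < 1 (the formula is undefined at 0)."""
    n, p = n_classes, accuracy
    if p >= 1.0:
        return math.log2(n)       # perfect accuracy: log2(N) bits per trial
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

# Example: a 4-command BCI at 90% accuracy, 20 selections per minute.
bits = itr_bits_per_trial(4, 0.90)
print(round(bits * 20, 1))        # bits per minute
```

Note that a 2-class system at chance level (50% accuracy) transfers 0 bits per trial, which the formula reproduces.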

6.2. Evaluating Segmentation Accuracy

The accuracy of segmentation models in medical image analysis is commonly evaluated using overlap-based metrics, such as the Dice Similarity Coefficient, and distance-based metrics, such as the Hausdorff Distance. The Dice Similarity Coefficient (DSC) evaluates the spatial overlap between the predicted segmentation ($S$) and the ground truth ($G$) and is computed as:
$$DSC = \frac{2\,|S \cap G|}{|S| + |G|}$$
where $|S \cap G|$ represents the number of overlapping pixels between the predicted and ground-truth regions. A higher DSC value (closer to 1) indicates better segmentation performance. Similarly, the Jaccard Index (Intersection over Union, IoU) is another measure of segmentation accuracy, defined as:
$$Jaccard = \frac{|S \cap G|}{|S \cup G|}$$
This metric quantifies the proportion of common pixels between the predicted and actual segmentation masks, with values ranging from 0 to 1, where a higher value represents superior segmentation accuracy.
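Both overlap metrics are straightforward to compute on binary masks; the 3×3 masks below are toy examples, not segmentation outputs from any model discussed here:

```python
import numpy as np

def dice(s, g):
    """Dice similarity coefficient between binary masks s and g."""
    inter = np.logical_and(s, g).sum()
    return 2.0 * inter / (s.sum() + g.sum())

def jaccard(s, g):
    """Jaccard index (intersection over union) between binary masks."""
    inter = np.logical_and(s, g).sum()
    union = np.logical_or(s, g).sum()
    return inter / union

pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])  # predicted mask
gt   = np.array([[1, 1, 0], [0, 0, 0], [0, 0, 0]])  # ground truth
print(round(dice(pred, gt), 3), round(jaccard(pred, gt), 3))  # → 0.8 0.667
```

The two metrics are monotonically related (DSC = 2J / (1 + J)), so rankings agree, but DSC is the more common choice in the medical segmentation literature.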

6.3. Statistical Significance Testing

To ensure the robustness of the results and validate the effectiveness of the proposed models, statistical significance tests are conducted. Commonly used statistical tests include:
  • Paired t-test: Used when comparing the performance of two models on the same dataset, evaluating whether the mean difference between paired observations is statistically significant.
  • Wilcoxon Signed-Rank Test: A non-parametric alternative to the paired t-test, suitable when the data does not follow a normal distribution.
  • Analysis of Variance (ANOVA): Applied when comparing multiple models or experimental conditions to determine whether significant differences exist among them.
  • Permutation Testing: A robust statistical method used to assess the significance of performance differences by randomly shuffling labels and recalculating metrics to generate a null distribution.
These statistical tests ensure that observed improvements in performance are not due to random variation but rather reflect meaningful differences in model effectiveness.
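As an illustration of the permutation-testing approach listed above, the sketch below sign-flips paired differences to build a null distribution for the mean difference. The per-scan Dice scores of the two models are hypothetical values for demonstration:

```python
import numpy as np

def paired_permutation_test(a, b, n_perm=10000, seed=0):
    """Paired permutation (sign-flip) test: randomly flip the sign of each
    paired difference to build a null distribution for the mean difference,
    then return the two-sided p-value."""
    rng = np.random.default_rng(seed)
    d = np.asarray(a) - np.asarray(b)
    observed = d.mean()
    signs = rng.choice([-1.0, 1.0], size=(n_perm, d.size))
    null = (signs * d).mean(axis=1)
    # Fraction of null means at least as extreme as the observed mean.
    return (np.abs(null) >= abs(observed)).mean()

# Hypothetical Dice scores of two segmentation models on 10 scans.
model_a = np.array([0.91, 0.88, 0.93, 0.90, 0.92, 0.89, 0.94, 0.90, 0.91, 0.92])
model_b = np.array([0.87, 0.85, 0.90, 0.86, 0.89, 0.84, 0.91, 0.88, 0.86, 0.90])
p = paired_permutation_test(model_a, model_b)
print(p < 0.05)   # significant at the 5% level for these toy scores
```

Unlike the paired t-test, this procedure makes no normality assumption, which is convenient for the small, skewed samples typical of per-patient evaluation metrics.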

7. Challenges, Ethical Considerations, and Future Directions

Despite substantial progress in BCI technology, several critical challenges persist, hindering widespread adoption and real-world applicability. Addressing these challenges requires interdisciplinary efforts, integrating advancements in neuroscience, engineering, and artificial intelligence. The integration of BCI systems into real-world applications, particularly in neurosurgical and clinical settings, presents several challenges, including technological, ethical, and regulatory hurdles. This section explores the key obstacles in real-world BCI implementation, ethical concerns surrounding AI-driven neurosurgical systems, and potential future research directions to enhance the efficacy and safety of BCIs.

7.1. Challenges in Real-World Implementation

Implementing BCI systems in real-world clinical environments is challenging due to several factors, including technological limitations, user variability, data-processing latency, long-term stability, and regulatory acceptance. BCI systems, particularly invasive ones, face challenges related to electrode degradation, biocompatibility issues, and the risks associated with surgical implantation [142]. Non-invasive BCIs, while safer, often suffer from lower signal resolution due to interference from the skull and scalp, which reduces classification accuracy and real-time usability [143]. Non-invasive methods such as EEG-based systems are also highly susceptible to noise, including muscle artifacts, environmental interference, and overlapping neural signals. The resulting low signal-to-noise ratio (SNR) compromises signal reliability, necessitating advanced denoising algorithms such as wavelet transforms, ICA, and DL-based noise reduction techniques [144]. Improving SNR remains a primary research focus to enhance decoding accuracy and user experience.

BCI performance is also significantly affected by inter- and intra-user variability, as differences in brain physiology, cognitive states, and learning capabilities lead to inconsistent neural responses [145]. Some users naturally exhibit strong, distinct neural patterns, while others require extensive calibration and training. This variability necessitates adaptive BCI models capable of personalizing algorithms in real time to accommodate individual differences; emerging approaches such as transfer learning, few-shot learning, and user-specific feature selection hold promise in addressing this issue. Factors such as cognitive state, age, neurological conditions, and learning adaptation further affect signal reliability and model generalization. Finally, the real-time application of BCIs in neurosurgical procedures requires ultra-fast neural decoding with minimal latency.
However, existing machine learning algorithms often struggle with balancing speed and accuracy, limiting practical clinical deployment [146]. Neural tissue reactions to implanted electrodes can cause signal degradation over time, necessitating frequent recalibration or device replacement. Even non-invasive BCIs require extensive training to maintain stable classification performance, which can be burdensome for users [147]. Obtaining regulatory approval from agencies such as the USFDA or the EMA requires extensive clinical trials. Additionally, neurosurgeons and healthcare professionals need specialized training to integrate BCIs into surgical workflows [148].
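As a concrete illustration of the denoising step discussed above, the following sketch applies one-level Haar wavelet soft-thresholding to a synthetic, EEG-like trace. This is a minimal, hypothetical example using NumPy only; the signal, threshold, and sampling choices are illustrative assumptions, and practical pipelines would typically use multi-level wavelet decompositions, ICA, or learned denoisers on real recordings.

```python
import numpy as np

def haar_denoise(signal, threshold):
    """One-level Haar wavelet decomposition with soft-thresholding
    of the detail (high-frequency) coefficients."""
    n = len(signal) - (len(signal) % 2)  # truncate to even length
    x = np.asarray(signal[:n], dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass band (signal-rich)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass band (noise-rich)
    # Soft-threshold: shrink small detail coefficients (mostly noise) to zero.
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    # Inverse Haar transform.
    out = np.empty(n)
    out[0::2] = (approx + detail) / np.sqrt(2.0)
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out

# Synthetic "EEG-like" trace: a 10 Hz (alpha-band) oscillation plus broadband noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512)
clean = np.sin(2.0 * np.pi * 10.0 * t)
noisy = clean + 0.5 * rng.standard_normal(t.size)

denoised = haar_denoise(noisy, threshold=0.4)
mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
print(f"MSE before: {mse_noisy:.3f}, after: {mse_denoised:.3f}")
```

Because the slow oscillation lives almost entirely in the low-pass band while broadband noise splits across both bands, shrinking the detail coefficients reduces the error relative to the clean signal.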

7.2. Ethical Considerations in AI-Driven Neurosurgical Systems

BCI-driven neurosurgical systems introduce profound ethical dilemmas involving patient autonomy and informed consent, privacy and data security, equity and accessibility, dual-use concerns, and psychological and social impact. Patients must be fully informed about the capabilities, limitations, and risks associated with BCIs, including potential unintended consequences such as cognitive alterations [149]. BCIs generate highly sensitive neural data that, if improperly stored or shared, could lead to unprecedented privacy breaches; robust encryption and secure data-sharing frameworks are necessary to prevent unauthorized access [150]. Because BCIs enable direct access to neural activity, concerns regarding data privacy, cognitive autonomy, and the risk of unintended neural information leakage have emerged [151]. Unauthorized access to neural data could enable intrusive surveillance, neuromarketing, or manipulation, raising profound ethical implications. Additionally, issues such as mental fatigue, long-term neuroplastic effects, and the potential for subconscious bias in AI-driven BCIs necessitate robust legal and ethical frameworks to protect user rights. Advanced BCIs may also be prohibitively expensive, raising concerns about disparities in access to cutting-edge neurosurgical interventions; ethical frameworks must ensure that these technologies do not exacerbate existing healthcare inequalities [152]. BCI technologies have potential applications beyond medicine, including military and surveillance uses, which raise ethical and human-rights concerns [153], and international regulations must prevent misuse while enabling beneficial applications. Finally, the integration of AI-driven BCIs into human cognition raises questions about identity, agency, and the potential psychological effects of brain augmentation [154]; more research is needed to understand the long-term impact on mental health and self-perception.

7.3. Future Research Directions

Advancements in BCI technologies for neurosurgical applications require addressing several critical challenges. Enhancing neural signal acquisition calls for biocompatible materials and ultra-sensitive sensors that improve the longevity and stability of neural recordings, ensuring minimal signal degradation over time [155]. Novel high-density EEG systems, optically pumped magnetometers (OPMs), and hybrid neuroimaging techniques could improve signal clarity and resolution, while minimally invasive electrodes and wearable dry EEG sensors are being explored to enhance comfort and usability without sacrificing signal fidelity. Neural decoding can be improved by integrating adaptive machine learning models, such as transformer-based architectures and self-supervised learning techniques, which dynamically adjust to individual brain signals, enhancing real-time performance and reducing calibration time [156]. Combining multiple neural modalities, such as EEG with fNIRS or EMG, can enhance robustness and reduce dependency on single-signal sources; such hybrid BCIs leverage the complementary strengths of different modalities, improving classification accuracy and expanding application domains. Furthermore, personalizing BCI systems through adaptive algorithms that continuously learn from individual users can optimize signal classification accuracy, facilitating seamless interaction between patients and neuroprosthetic devices [157,158]. Ethical and regulatory concerns surrounding BCI technologies, including data privacy, informed consent, and equitable access, must be addressed through standardized global policies [159], and rigorous longitudinal studies are needed to assess the long-term neurological and psychological impact of invasive and non-invasive BCI systems before widespread clinical adoption. Advancements in DL, reinforcement learning, and neuroadaptive AI are expected to revolutionize BCI decoding.
AI models capable of real-time contextual learning and error correction will improve system reliability, reducing training time and increasing user engagement, while closed-loop BCI architectures with adaptive feedback mechanisms will enable more intuitive interaction between users and assistive technologies. A collaborative effort among neuroscientists, engineers, clinicians, ethicists, and regulatory bodies is essential to foster responsible innovation in BCI research and implementation. By addressing these challenges and integrating novel technological advancements, BCIs can evolve into reliable, secure, and ethically sound solutions for neurosurgical interventions, transforming healthcare, neurorehabilitation, and human-computer interaction and bridging the gap between cognitive intention and external control.
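To make the hybrid-fusion idea above concrete, the sketch below performs feature-level fusion of two modalities and compares a nearest-centroid classifier on EEG-only versus fused EEG+fNIRS features. All data, feature dimensions, and class offsets are synthetic, illustrative assumptions; a real system would use measured band-power and haemodynamic features and a properly cross-validated classifier.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_trials(n, offset):
    # Stand-ins for per-trial features: 8 hypothetical EEG band powers
    # and 4 hypothetical fNIRS haemoglobin measures per trial.
    eeg = rng.normal(offset, 1.0, size=(n, 8))
    fnirs = rng.normal(offset, 1.0, size=(n, 4))
    return eeg, fnirs

# Two synthetic classes of trials (e.g. two motor-imagery conditions).
eeg_a, fnirs_a = make_trials(50, 0.0)
eeg_b, fnirs_b = make_trials(50, 1.0)

def nearest_centroid_accuracy(train_a, train_b, test_a, test_b):
    # Classify each test trial by its nearest class centroid.
    ca, cb = train_a.mean(axis=0), train_b.mean(axis=0)
    hit_a = np.linalg.norm(test_a - ca, axis=1) < np.linalg.norm(test_a - cb, axis=1)
    hit_b = np.linalg.norm(test_b - cb, axis=1) < np.linalg.norm(test_b - ca, axis=1)
    return (hit_a.sum() + hit_b.sum()) / (len(test_a) + len(test_b))

# EEG-only baseline: train on the first 30 trials per class, test on the rest.
acc_eeg = nearest_centroid_accuracy(eeg_a[:30], eeg_b[:30], eeg_a[30:], eeg_b[30:])

# Feature-level fusion: concatenate EEG and fNIRS features per trial.
fused_a = np.hstack([eeg_a, fnirs_a])
fused_b = np.hstack([eeg_b, fnirs_b])
acc_fused = nearest_centroid_accuracy(fused_a[:30], fused_b[:30], fused_a[30:], fused_b[30:])

print(f"EEG-only accuracy: {acc_eeg:.2f}, fused accuracy: {acc_fused:.2f}")
```

Under these assumptions the fused feature vector carries more discriminative information per trial, which typically yields equal or higher test accuracy than either modality alone.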

8. Conclusions

This review has examined the intersection of BCIs and AI-driven image segmentation in the context of precision neurosurgery. We have explored how DL techniques, particularly CNNs, enhance neuroimaging modalities such as MRI and CT scans, enabling automated segmentation and precise brain mapping. The discussion has underscored the role of BCIs in neurosurgical procedures, focusing on real-time neural signal processing, robotic-assisted surgery, and AI-enhanced intraoperative decision-making. While these technologies significantly improve surgical precision and patient outcomes, they also present challenges related to signal reliability, latency, computational efficiency, and ethical concerns, particularly regarding data privacy, accessibility, and regulatory oversight.
The integration of BCIs and AI into neurosurgical workflows holds transformative potential. AI-driven image segmentation enhances the accuracy of preoperative planning and intraoperative guidance, reducing human error and ensuring optimal tissue differentiation. BCIs contribute by facilitating real-time neural monitoring, enabling adaptive surgical responses, and assisting patients with severe neurological impairments through neuroprosthetic applications. Intraoperative AI-driven robotic assistance minimizes invasiveness and optimizes precision, leading to reduced recovery times and improved long-term patient outcomes. However, widespread clinical adoption depends on addressing technological constraints, refining machine learning models for real-time adaptation, and developing robust ethical and regulatory frameworks to ensure patient safety and equitable access.
The synergy between BCI technology and AI-driven medical image processing represents a paradigm shift in neurosurgical practice. While current advancements demonstrate significant promise, ongoing interdisciplinary collaboration among neurosurgeons, engineers, data scientists, and ethicists is crucial to overcoming existing barriers. Future research should prioritize enhancing neural decoding accuracy, improving AI interpretability, and ensuring seamless human-AI interaction within surgical environments. With continued innovation and responsible implementation, BCI-integrated AI systems can redefine precision neurosurgery, ultimately advancing patient care and surgical outcomes.

Author Contributions

S.G.: Content planning and writing of the manuscript. P.S.: Writing of the manuscript and procurement of figures. D.K.K.: Formal analysis. B.G. and D.M.: Review, validation, and supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no funding.

Acknowledgments

D.M. acknowledges the support from the Department of Biophysics and Radiation Biology at Semmelweis University, the National Research, Development and Innovation Office, and the Ministry of Innovation. B.G. acknowledges the support from the Lee Kong Chian School of Medicine, the Data Science and AI Research (DSAIR) Centre of NTU, and the Cognitive Neuro Imaging Centre (CONIC) at NTU.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

BCI: Brain-Computer Interface; AI: Artificial Intelligence; CNN: Convolutional Neural Network; MRI: Magnetic Resonance Imaging; CT: Computed Tomography; ECE: European Conformité Européenne; EEG: Electroencephalography; EMA: European Medicines Agency; fNIRS: Functional Near-Infrared Spectroscopy; MEG: Magnetoencephalography; ECoG: Electrocorticography; EMG: Electromyography; DBS: Deep Brain Stimulation; SSVEP: Steady-State Visual Evoked Potential; P300: P300 Event-Related Potential; ERP: Event-Related Potential; ICA: Independent Component Analysis; LSTM: Long Short-Term Memory; SVM: Support Vector Machine; AR: Augmented Reality; VR: Virtual Reality; XAI: Explainable Artificial Intelligence; USFDA: United States Food and Drug Administration; GDPR: General Data Protection Regulation; HIPAA: Health Insurance Portability and Accountability Act.

References

  1. Mudgal, S.K.; Sharma, S.K.; Chaturvedi, J.; Sharma, A. Brain Computer Interface Advancement in Neurosciences: Applications and Issues. Interdiscip Neurosurg 2020, 20. [Google Scholar] [CrossRef]
  2. Vidal, J.J. Toward Direct Brain-Computer Communication. Annu Rev Biophys Bioeng 1973, 2, 157–180. [Google Scholar] [CrossRef] [PubMed]
  3. Maiseli, B.; Abdalla, A.T.; Massawe, L. V.; Mbise, M.; Mkocha, K.; Nassor, N.A.; Ismail, M.; Michael, J.; Kimambo, S. Brain–Computer Interface: Trend, Challenges, and Threats. Brain Inform 2023, 10, 1–16. [Google Scholar] [CrossRef] [PubMed]
  4. Wang, Y.; Wang, R.; Gao, X.; Hong, B.; Gao, S. A Practical Vep-Based Brain-Computer Interface. IEEE Trans Neural Syst Rehabil Eng 2006, 14, 234–240. [Google Scholar] [CrossRef]
  5. Ang, K.K.; Guan, C.; Phua, K.S.; Wang, C.; Zhou, L.; Tang, K.Y.; Ephraim Joseph, G.J.; Keong Kuah, C.W.; Geok Chua, K.S. Brain-Computer Interface-Based Robotic End Effector System for Wrist and Hand Rehabilitation: Results of a Three-Armed Randomized Controlled Trial for Chronic Stroke. Front Neuroeng 2014, 7. [Google Scholar] [CrossRef]
  6. Yuste, R.; Goering, S.; Agüera y Arcas, B.; Bi, G.; Carmena, J.M.; Carter, A.; Fins, J.J.; Friesen, P.; Gallant, J.; Huggins, J.E.; et al. Four Ethical Priorities for Neurotechnologies and AI. Nature 2017, 551, 159–163. [Google Scholar] [CrossRef]
  7. Brocal, F. Brain-Computer Interfaces in Safety and Security Fields: Risks and Applications. Saf Sci 2023, 160. [Google Scholar] [CrossRef]
  8. Xu, Y.; Quan, R.; Xu, W.; Huang, Y.; Chen, X.; Liu, F. Advances in Medical Image Segmentation: A Comprehensive Review of Traditional, Deep Learning and Hybrid Approaches. Bioengineering 2024, 11, 1034. [Google Scholar] [CrossRef]
  9. Khan, R.F.; Lee, B.D.; Lee, M.S. Transformers in Medical Image Segmentation: A Narrative Review. Quant Imaging Med Surg 2023, 13, 8747–8767. [Google Scholar] [CrossRef]
  10. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings 2015.
  11. Liu, Z.; Ding, L.; He, B. Integration of EEG/MEG with MRI and FMRI in Functional Neuroimaging. IEEE Eng Med Biol Mag 2006, 25, 46. [Google Scholar] [CrossRef]
  12. Christ, P.F.; Ettlinger, F.; Grün, F.; Elshaera, M.E.A.; Lipkova, J.; Schlecht, S.; Ahmaddy, F.; Tatavarty, S.; Bickel, M.; Bilic, P.; et al. Automatic Liver and Tumor Segmentation of CT and MRI Volumes Using Cascaded Fully Convolutional Neural Networks. 2017.
  13. Panayides, A.S.; Amini, A.; Filipovic, N.D.; Sharma, A.; Tsaftaris, S.A.; Young, A.; Foran, D.; Do, N.; Golemati, S.; Kurc, T.; et al. AI in Medical Imaging Informatics: Current Challenges and Future Directions. IEEE J Biomed Health Inform 2020, 24, 1837–1857. [Google Scholar] [CrossRef] [PubMed]
  14. Hamilton, L.S.; Chang, D.L.; Lee, M.B.; Chang, E.F. Semi-Automated Anatomical Labeling and Inter-Subject Warping of High-Density Intracranial Recording Electrodes in Electrocorticography. Front Neuroinform 2017, 11, 272432. [Google Scholar] [CrossRef]
  15. Kebaili, A.; Lapuyade-Lahorgue, J.; Ruan, S. Deep Learning Approaches for Data Augmentation in Medical Imaging: A Review. Journal of Imaging 2023, 9, 81. [Google Scholar] [CrossRef]
  16. Chen, X.; Konukoglu, E. Unsupervised Detection of Lesions in Brain MRI Using Constrained Adversarial Auto-Encoders. 2018.
  17. Pham, D.L.; Xu, C.; Prince, J.L. Current Methods in Medical Image Segmentation. Annu Rev Biomed Eng 2000, 2, 315–337. [Google Scholar] [CrossRef]
  18. Singh, A.; Sengupta, S.; Lakshminarayanan, V. Explainable Deep Learning Models in Medical Image Analysis. J Imaging 2020, 6, 52. [Google Scholar] [CrossRef]
  19. Chen, H.; Gomez, C.; Huang, C.M.; Unberath, M. Explainable Medical Imaging AI Needs Human-Centered Design: Guidelines and Evidence from a Systematic Review. npj Digital Medicine 2022, 5, 1–15. [Google Scholar] [CrossRef]
  20. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings 2015.
  21. Mahmood, A.; Patille, R.; Lam, E.; Mora, D.J.; Gurung, S.; Bookmyer, G.; Weldrick, R.; Chaudhury, H.; Canham, S.L. Correction: Mahmood et al. Aging in the Right Place for Older Adults Experiencing Housing Insecurity: An Environmental Assessment of Temporary Housing Program. Int. J. Environ. Res. Public Health 2022, 19, 14857. Int. J. Environ. Res. Public Health 2023, 20, 6260. [Google Scholar] [CrossRef] [PubMed]
  22. Fathi Kazerooni, A.; Arif, S.; Madhogarhia, R.; Khalili, N.; Haldar, D.; Bagheri, S.; Familiar, A.M.; Anderson, H.; Haldar, S.; Tu, W.; et al. Automated Tumor Segmentation and Brain Tissue Extraction from Multiparametric MRI of Pediatric Brain Tumors: A Multi-Institutional Study. Neurooncol Adv 2023, 5, 1–12. [Google Scholar] [CrossRef] [PubMed]
  23. Belliveau, J.W.; Kennedy, D.N.; McKinstry, R.C.; Buchbinder, B.R.; Weisskoff, R.M.; Cohen, M.S.; Vevea, J.M.; Brady, T.J.; Rosen, B.R. Functional Mapping of the Human Visual Cortex by Magnetic Resonance Imaging. Science (1979) 1991, 254, 716–719. [Google Scholar] [CrossRef]
  24. Ogawa, S.; Tank, D.W.; Menon, R.; Ellermann, J.M.; Kim, S.G.; Merkle, H.; Ugurbil, K. Intrinsic Signal Changes Accompanying Sensory Stimulation: Functional Brain Mapping with Magnetic Resonance Imaging. Proc Natl Acad Sci U S A 1992, 89, 5951–5955. [Google Scholar] [CrossRef]
  25. Sterman, M.B.; Friar, L. Suppression of Seizures in Epileptic Following on Sensorimotor EEG Feedback Training. Electroencephalogr. Clin. Neurophysiol. 1972, 33, 89–95. [Google Scholar] [CrossRef] [PubMed]
  26. Alarcon, G.; Garcia Seoane, J.J.; Binnie, C.D.; Martin Miguel, M.C.; Juler, J.; Polkey, C.E.; Elwes, R.D.C.; Ortiz Blasco, J.M. Origin and Propagation of Interictal Discharges in the Acute Electrocorticogram. Implications for Pathophysiology and Surgical Treatment of Temporal Lobe Epilepsy. Brain 1997, 120, 2259–2282. [Google Scholar] [CrossRef]
  27. Pan, R.; Yang, C.; Li, Z.; Ren, J.; Duan, Y. Magnetoencephalography-Based Approaches to Epilepsy Classification. Front Neurosci 2023, 17, 1183391. [Google Scholar] [CrossRef]
  28. Schupper, A.J.; Rao, M.; Mohammadi, N.; Baron, R.; Lee, J.Y.K.; Acerbi, F.; Hadjipanayis, C.G. Fluorescence-Guided Surgery: A Review on Timing and Use in Brain Tumor Surgery. Front Neurol 2021, 12, 682151. [Google Scholar] [CrossRef]
  29. Hassan, A.M.; Rajesh, A.; Asaad, M.; Nelson, J.A.; Coert, J.H.; Mehrara, B.J.; Butler, C.E. Artificial Intelligence and Machine Learning in Prediction of Surgical Complications: Current State, Applications, and Implications. Am Surg 2022, 89, 25. [Google Scholar] [CrossRef] [PubMed]
  30. Spyrantis, A.; Woebbecke, T.; Rueß, D.; Constantinescu, A.; Gierich, A.; Luyken, K.; Visser-Vandewalle, V.; Herrmann, E.; Gessler, F.; Czabanka, M.; et al. Accuracy of Robotic and Frame-Based Stereotactic Neurosurgery in a Phantom Model. Front Neurorobot 2022, 16, 762317. [Google Scholar] [CrossRef] [PubMed]
  31. Matsuzaki, K.; Kumatoriya, K.; Tando, M.; Kometani, T.; Shinohara, M. Polyphenols from Persimmon Fruit Attenuate Acetaldehyde-Induced DNA Double-Strand Breaks by Scavenging Acetaldehyde. Scientific Reports 2022, 12, 1–15. [Google Scholar] [CrossRef]
  32. Belkacem, A.N.; Jamil, N.; Khalid, S.; Alnajjar, F. On Closed-Loop Brain Stimulation Systems for Improving the Quality of Life of Patients with Neurological Disorders. Front Hum Neurosci 2023, 17, 1085173. [Google Scholar] [CrossRef]
  33. Mokienko, O.A. Brain–Computer Interfaces with Intracortical Implants for Motor and Communication Functions Compensation: Review of Recent Developments. Modern Technologies in Medicine 2024, 16, 78. [Google Scholar] [CrossRef]
  34. Vadhavekar, N.H.; Sabzvari, T.; Laguardia, S.; Sheik, T.; Prakash, V.; Gupta, A.; Umesh, I.D.; Singla, A.; Koradia, I.; Patiño, B.B.R.; et al. Advancements in Imaging and Neurosurgical Techniques for Brain Tumor Resection: A Comprehensive Review. Cureus 2024, 16, e72745. [Google Scholar] [CrossRef]
  35. Livanis, E.; Voultsos, P.; Vadikolias, K.; Pantazakos, P.; Tsaroucha, A. Understanding the Ethical Issues of Brain-Computer Interfaces (BCIs): A Blessing or the Beginning of a Dystopian Future? Cureus 2024, 16, e58243. [Google Scholar] [CrossRef] [PubMed]
  36. Iftikhar, M.; Saqib, M.; Zareen, M.; Mumtaz, H. Artificial Intelligence: Revolutionizing Robotic Surgery: Review. Annals of Medicine and Surgery 2024, 86, 5401. [Google Scholar] [CrossRef]
  37. Abu Mhanna, H.Y.; Omar, A.F.; Radzi, Y.M.; Oglat, A.A.; Akhdar, H.F.; Al Ewaidat, H.; Almahmoud, A.; Bani Yaseen, A.B.; Al Badarneh, L.; Alhamad, O.; et al. Systematic Review of Functional Magnetic Resonance Imaging (FMRI) Applications in the Preoperative Planning and Treatment Assessment of Brain Tumors. Heliyon 2025, 11, e42464. [Google Scholar] [CrossRef] [PubMed]
  38. Yue, W.; Zhang, H.; Zhou, J.; Li, G.; Tang, Z.; Sun, Z.; Cai, J.; Tian, N.; Gao, S.; Dong, J.; et al. Deep Learning-Based Automatic Segmentation for Size and Volumetric Measurement of Breast Cancer on Magnetic Resonance Imaging. Front Oncol 2022, 12, 984626. [Google Scholar] [CrossRef] [PubMed]
  39. Manakitsa, N.; Maraslidis, G.S.; Moysis, L.; Fragulis, G.F. A Review of Machine Learning and Deep Learning for Object Detection, Semantic Segmentation, and Human Action Recognition in Machine and Robotic Vision. Technologies 2024, 12, 15. [Google Scholar] [CrossRef]
  40. Agadi, K.; Dominari, A.; Tebha, S.S.; Mohammadi, A.; Zahid, S. Neurosurgical Management of Cerebrospinal Tumors in the Era of Artificial Intelligence : A Scoping Review. J Korean Neurosurg Soc 2022, 66, 632. [Google Scholar] [CrossRef]
  41. Rayed, M.E.; Islam, S.M.S.; Niha, S.I.; Jim, J.R.; Kabir, M.M.; Mridha, M.F. Deep Learning for Medical Image Segmentation: State-of-the-Art Advancements and Challenges. Inform Med Unlocked 2024, 47, 101504. [Google Scholar] [CrossRef]
  42. Ranjbarzadeh, R.; Bagherian Kasgari, A.; Jafarzadeh Ghoushchi, S.; Anari, S.; Naseri, M.; Bendechache, M. Brain Tumor Segmentation Based on Deep Learning and an Attention Mechanism Using MRI Multi-Modalities Brain Images. Scientific Reports 2021, 11, 1–17. [Google Scholar] [CrossRef]
  43. Fariba, K.A.; Gupta, V. Deep Brain Stimulation. Encyclopedia of Movement Disorders, Three-Volume Set. [CrossRef]
  44. Chandra, V.; Hilliard, J.D.; Foote, K.D. Deep Brain Stimulation for the Treatment of Tremor. J Neurol Sci 2022, 435, 120190. [Google Scholar] [CrossRef]
  45. Krüger, M.T.; Kurtev-Rittstieg, R.; Kägi, G.; Naseri, Y.; Hägele-Link, S.; Brugger, F. Evaluation of Automatic Segmentation of Thalamic Nuclei through Clinical Effects Using Directional Deep Brain Stimulation Leads: A Technical Note. Brain Sciences 2020, 10, 642. [Google Scholar] [CrossRef]
  46. Miller, K.J.; Fine, A.L. Decision Making in Stereotactic Epilepsy Surgery. Epilepsia 2022, 63, 2782. [Google Scholar] [CrossRef] [PubMed]
  47. Mirchi, N.; Warsi, N.M.; Zhang, F.; Wong, S.M.; Suresh, H.; Mithani, K.; Erdman, L.; Ibrahim, G.M. Decoding Intracranial EEG With Machine Learning: A Systematic Review. Front Hum Neurosci 2022, 16, 913777. [Google Scholar] [CrossRef] [PubMed]
  48. Courtney, M.R.; Sinclair, B.; Neal, A.; Nicolo, J.P.; Kwan, P.; Law, M.; O’Brien, T.J.; Vivash, L. Automated Segmentation of Epilepsy Surgical Resection Cavities: Comparison of Four Methods to Manual Segmentation. Neuroimage 2024, 296, 120682. [Google Scholar] [CrossRef]
  49. Hassija, V.; Chamola, V.; Mahapatra, A.; Singal, A.; Goel, D.; Huang, K.; Scardapane, S.; Spinelli, I.; Mahmud, M.; Hussain, A. Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence. Cognit Comput 2024, 16, 45–74. [Google Scholar] [CrossRef]
  50. Mienye, I.D.; Obaido, G.; Jere, N.; Mienye, E.; Aruleba, K.; Emmanuel, I.D.; Ogbuokiri, B. A Survey of Explainable Artificial Intelligence in Healthcare: Concepts, Applications, and Challenges. Inform Med Unlocked 2024, 51, 101587. [Google Scholar] [CrossRef]
  51. Liu, Y.; Yu, W.; Dillon, T. Regulatory Responses and Approval Status of Artificial Intelligence Medical Devices with a Focus on China. NPJ Digit Med 2024, 7, 255. [Google Scholar] [CrossRef]
  52. Khan, D.Z.; Valetopoulou, A.; Das, A.; Hanrahan, J.G.; Williams, S.C.; Bano, S.; Borg, A.; Dorward, N.L.; Barbarisi, S.; Culshaw, L.; et al. Artificial Intelligence Assisted Operative Anatomy Recognition in Endoscopic Pituitary Surgery. NPJ Digit Med 2024, 7, 314. [Google Scholar] [CrossRef]
  53. Nam, S.M.; Byun, Y.H.; Dho, Y.-S.; Park, C.-K. Envisioning the Future of the Neurosurgical Operating Room with the Concept of the Medical Metaverse. J Korean Neurosurg Soc 2025, 68, 137–149. [Google Scholar] [CrossRef] [PubMed]
  54. Brockmeyer, P.; Wiechens, B.; Schliephake, H. The Role of Augmented Reality in the Advancement of Minimally Invasive Surgery Procedures: A Scoping Review. Bioengineering 2023, 10, 501. [Google Scholar] [CrossRef]
  55. Tangsrivimol, J.A.; Schonfeld, E.; Zhang, M.; Veeravagu, A.; Smith, T.R.; Härtl, R.; Lawton, M.T.; El-Sherbini, A.H.; Prevedello, D.M.; Glicksberg, B.S.; et al. Artificial Intelligence in Neurosurgery: A State-of-the-Art Review from Past to Future. Diagnostics 2023, 13, 2429. [Google Scholar] [CrossRef]
  56. Mudgal, S.K.; Sharma, S.K.; Chaturvedi, J.; Sharma, A. Brain Computer Interface Advancement in Neurosciences: Applications and Issues. Interdisciplinary Neurosurgery 2020, 20, 100694. [Google Scholar] [CrossRef]
  57. Caiado, F.; Ukolov, A. The History, Current State and Future Possibilities of the Non-Invasive Brain Computer Interfaces. Med Nov Technol Devices 2025, 25, 100353. [Google Scholar] [CrossRef]
  58. Brookes, M.J.; Leggett, J.; Rea, M.; Hill, R.M.; Holmes, N.; Boto, E.; Bowtell, R. Magnetoencephalography with Optically Pumped Magnetometers (OPM-MEG): The next Generation of Functional Neuroimaging. Trends Neurosci 2022, 45, 621–634. [Google Scholar] [CrossRef]
  59. Acuña, K.; Sapahia, R.; Jiménez, I.N.; Antonietti, M.; Anzola, I.; Cruz, M.; García, M.T.; Krishnan, V.; Leveille, L.A.; Resch, M.D.; et al. Functional Near-Infrared Spectrometry as a Useful Diagnostic Tool for Understanding the Visual System: A Review. J Clin Med 2024, 13, 282. [Google Scholar] [CrossRef]
  60. Coles, L.; Ventrella, D.; Carnicer-Lombarte, A.; Elmi, A.; Troughton, J.G.; Mariello, M.; El Hadwe, S.; Woodington, B.J.; Bacci, M.L.; Malliaras, G.G.; et al. Origami-Inspired Soft Fluidic Actuation for Minimally Invasive Large-Area Electrocorticography. Nat Commun 2024, 15, 6290. [Google Scholar] [CrossRef] [PubMed]
  61. Hong, J.W.; Yoon, C.; Jo, K.; Won, J.H.; Park, S. Recent Advances in Recording and Modulation Technologies for Next-Generation Neural Interfaces. iScience 2021, 24, 103550. [Google Scholar] [CrossRef] [PubMed]
  62. Islam, M.K.; Rastegarnia, A.; Sanei, S. Signal Artifacts and Techniques for Artifacts and Noise Removal. Intelligent Systems Reference Library 2021, 192, 23–79. [Google Scholar] [CrossRef]
  63. Barnova, K.; Mikolasova, M.; Kahankova, R.V.; Jaros, R.; Kawala-Sterniuk, A.; Snasel, V.; Mirjalili, S.; Pelc, M.; Martinek, R. Implementation of Artificial Intelligence and Machine Learning-Based Methods in Brain–Computer Interaction. Comput Biol Med 2023, 163, 107135. [Google Scholar] [CrossRef]
  64. Xu, Y.; Zhou, Y.; Sekula, P.; Ding, L. Machine Learning in Construction: From Shallow to Deep Learning. Developments in the Built Environment 2021, 6, 100045. [Google Scholar] [CrossRef]
  65. Chaudhary, U. Machine Learning with Brain Data. Expanding Senses using Neurotechnology 2025, 179–223. [Google Scholar] [CrossRef]
  66. Si-Mohammed, H.; Petit, J.; Jeunet, C.; Argelaguet, F.; Spindler, F.; Evain, A.; Roussel, N.; Casiez, G.; Lecuyer, A. Towards BCI-Based Interfaces for Augmented Reality: Feasibility, Design and Evaluation. IEEE Trans Vis Comput Graph 2020, 26, 1608–1621. [Google Scholar] [CrossRef] [PubMed]
  67. Kim, S.; Lee, S.; Kang, H.; Kim, S.; Ahn, M. P300 Brain–Computer Interface-Based Drone Control in Virtual and Augmented Reality. Sensors 2021, 21, 5765. [Google Scholar] [CrossRef]
  68. Farwell, L.A.; Donchin, E. Talking off the Top of Your Head: Toward a Mental Prosthesis Utilizing Event-Related Brain Potentials. Electroencephalogr Clin Neurophysiol 1988, 70, 510–523. [Google Scholar] [CrossRef]
  69. McFarland, D.J.; Wolpaw, J.R. EEG-Based Brain-Computer Interfaces. Curr Opin Biomed Eng 2017, 4, 194–200. [Google Scholar] [CrossRef] [PubMed]
  70. Awuah, W.A.; Ahluwalia, A.; Darko, K.; Sanker, V.; Tan, J.K.; Tenkorang, P.O.; Ben-Jaafar, A.; Ranganathan, S.; Aderinto, N.; Mehta, A.; et al. Bridging Minds and Machines: The Recent Advances of Brain-Computer Interfaces in Neurological and Neurosurgical Applications. World Neurosurg 2024, 189, 138–153. [Google Scholar] [CrossRef] [PubMed]
  71. Pfurtscheller, G.; Neuper, C. Motor Imagery Activates Primary Sensorimotor Area in Humans. Neurosci Lett 1997, 239, 65–68. [Google Scholar] [CrossRef] [PubMed]
  72. Saibene, A.; Caglioni, M.; Corchs, S.; Gasparini, F. EEG-Based BCIs on Motor Imagery Paradigm Using Wearable Technologies: A Systematic Review. Sensors 2023, 23, 2798. [Google Scholar] [CrossRef]
  73. Pan, J.; Chen, X.N.; Ban, N.; He, J.S.; Chen, J.; Huang, H. Advances in P300 Brain–Computer Interface Spellers: Toward Paradigm Design and Performance Evaluation. Front Hum Neurosci 2022, 16, 1077717. [Google Scholar] [CrossRef]
  74. Norcia, A.M.; Gregory Appelbaum, L.; Ales, J.M.; Cottereau, B.R.; Rossion, B. The Steady-State Visual Evoked Potential in Vision Research: A Review. J Vis 2015, 15, 4. [Google Scholar] [CrossRef]
  75. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Graves, A.; Antonoglou, I.; Wierstra, D.; Riedmiller, M. Playing Atari with Deep Reinforcement Learning. 2013.
  76. Haslacher, D.; Akmazoglu, T.B.; van Beinum, A.; Starke, G.; Buthut, M.; Soekadar, S.R. AI for Brain-Computer Interfaces. 2024, 7, 3–28. [CrossRef]
  77. Siribunyaphat, N.; Punsawad, Y. Steady-State Visual Evoked Potential-Based Brain–Computer Interface Using a Novel Visual Stimulus with Quick Response (QR) Code Pattern. Sensors (Basel) 2022, 22, 1439. [Google Scholar] [CrossRef]
  78. Neuper, C.; Müller-Putz, G.R.; Scherer, R.; Pfurtscheller, G. Motor Imagery and EEG-Based Control of Spelling Devices and Neuroprostheses. Prog Brain Res 2006, 159, 393–409. [Google Scholar] [CrossRef]
  79. Pan, J.; Chen, X.N.; Ban, N.; He, J.S.; Chen, J.; Huang, H. Advances in P300 Brain–Computer Interface Spellers: Toward Paradigm Design and Performance Evaluation. Front Hum Neurosci 2022, 16, 1077717. [Google Scholar] [CrossRef] [PubMed]
  80. Adewole, D.O.; Serruya, M.D.; Harris, J.P.; Burrell, J.C.; Petrov, D.; Chen, H.I.; Wolf, J.A.; Cullen, D.K. The Evolution of Neuroprosthetic Interfaces. Crit Rev Biomed Eng 2016, 44, 123. [Google Scholar] [CrossRef] [PubMed]
  81. Branco, M.P.; Pels, E.G.M.; Sars, R.H.; Aarnoutse, E.J.; Ramsey, N.F.; Vansteensel, M.J.; Nijboer, F. Brain-Computer Interfaces for Communication: Preferences of Individuals With Locked-in Syndrome. Neurorehabil Neural Repair 2021, 35, 267–279. [Google Scholar] [CrossRef] [PubMed]
  82. Collinger, J.L.; Gaunt, R.A.; Schwartz, A.B. Progress towards Restoring Upper Limb Movement and Sensation through Intracortical Brain-Computer Interfaces. Curr Opin Biomed Eng 2018, 8, 84–92. [Google Scholar] [CrossRef]
  83. Collinger, J.L.; Wodlinger, B.; Downey, J.E.; Wang, W.; Tyler-Kabara, E.C.; Weber, D.J.; McMorland, A.J.C.; Velliste, M.; Boninger, M.L.; Schwartz, A.B. High-Performance Neuroprosthetic Control by an Individual with Tetraplegia. The Lancet 2013, 381, 557–564. [Google Scholar] [CrossRef]
  84. Hu, X.; Assaad, R.H. The Use of Unmanned Ground Vehicles (Mobile Robots) and Unmanned Aerial Vehicles (Drones) in the Civil Infrastructure Asset Management Sector: Applications, Robotic Platforms, Sensors, and Algorithms. Expert Syst Appl 2023, 232. [Google Scholar] [CrossRef]
  85. Flesher, S.N.; Downey, J.E.; Weiss, J.M.; Hughes, C.L.; Herrera, A.J.; Tyler-Kabara, E.C.; Boninger, M.L.; Collinger, J.L.; Gaunt, R.A. A Brain-Computer Interface That Evokes Tactile Sensations Improves Robotic Arm Control. Science (1979) 2021, 372, 831–836. [Google Scholar] [CrossRef]
  86. Karmakar, S.; Kamilya, S.; Dey, P.; Guhathakurta, P.K.; Dalui, M.; Bera, T.K.; Halder, S.; Koley, C.; Pal, T.; Basu, A. Real Time Detection of Cognitive Load Using FNIRS: A Deep Learning Approach. Biomed Signal Process Control 2023, 80, 104227. [Google Scholar] [CrossRef]
  87. Mughal, N.E.; Khan, M.J.; Khalil, K.; Javed, K.; Sajid, H.; Naseer, N.; Ghafoor, U.; Hong, K.S. EEG-FNIRS-Based Hybrid Image Construction and Classification Using CNN-LSTM. Front Neurorobot 2022, 16, 873239. [Google Scholar] [CrossRef]
  88. Murphy, E.; Poudel, G.; Ganesan, S.; Suo, C.; Manning, V.; Beyer, E.; Clemente, A.; Moffat, B.A.; Zalesky, A.; Lorenzetti, V. Real-Time FMRI-Based Neurofeedback to Restore Brain Function in Substance Use Disorders: A Systematic Review of the Literature. Neurosci Biobehav Rev 2024, 165, 105865. [Google Scholar] [CrossRef] [PubMed]
  89. Van Der Lande, G.J.M.; Casas-Torremocha, D.; Manasanch, A.; Dalla Porta, L.; Gosseries, O.; Alnagger, N.; Barra, A.; Mejías, J.F.; Panda, R.; Riefolo, F.; et al. Brain State Identification and Neuromodulation to Promote Recovery of Consciousness. Brain Commun 2024, 6, fcae362. [Google Scholar] [CrossRef]
  90. Papadopoulos, S.; Bonaiuto, J.; Mattout, J. An Impending Paradigm Shift in Motor Imagery Based Brain-Computer Interfaces. Front Neurosci 2022, 15, 824759. [Google Scholar] [CrossRef] [PubMed]
  91. Zhang, Y.; Yagi, K.; Shibahara, Y.; Tate, L.; Tamura, H. A Study on Analysis Method for a Real-Time Neurofeedback System Using Non-Invasive Magnetoencephalography. Electronics 2022, 11, 2473. [Google Scholar] [CrossRef]
  92. Aghajani, H.; Omurtag, A. Assessment of Mental Workload by EEG+FNIRS. Annu Int Conf IEEE Eng Med Biol Soc 2016, 2016, 3773–3776. [Google Scholar] [CrossRef]
  93. Warbrick, T. Simultaneous EEG-FMRI: What Have We Learned and What Does the Future Hold? Sensors (Basel) 2022, 22, 2262. [Google Scholar] [CrossRef] [PubMed]
  94. Padmanabhan, P.; Nedumaran, A.M.; Mishra, S.; Pandarinathan, G.; Archunan, G.; Gulyás, B. The Advents of Hybrid Imaging Modalities: A New Era in Neuroimaging Applications. Adv Biosyst 2017, 1. [Google Scholar] [CrossRef]
  95. Freudenburg, Z.V.; Branco, M.P.; Leinders, S.; van der Vijgh, B.H.; Pels, E.G.M.; Denison, T.; van den Berg, L.H.; Miller, K.J.; Aarnoutse, E.J.; Ramsey, N.F.; et al. Sensorimotor ECoG Signal Features for BCI Control: A Comparison Between People With Locked-In Syndrome and Able-Bodied Controls. Front Neurosci 2019, 13, 457334. [Google Scholar] [CrossRef]
  96. Zhao, Z.P.; Nie, C.; Jiang, C.T.; Cao, S.H.; Tian, K.X.; Yu, S.; Gu, J.W. Modulating Brain Activity with Invasive Brain–Computer Interface: A Narrative Review. Brain Sci 2023, 13, 134. [Google Scholar] [CrossRef]
  97. Alahi, M.E.E.; Liu, Y.; Xu, Z.; Wang, H.; Wu, T.; Mukhopadhyay, S.C. Recent Advancement of Electrocorticography (ECoG) Electrodes for Chronic Neural Recording/Stimulation. Mater Today Commun 2021, 29, 102853. [Google Scholar] [CrossRef]
  98. Merk, T.; Peterson, V.; Köhler, R.; Haufe, S.; Richardson, R.M.; Neumann, W.J. Machine Learning Based Brain Signal Decoding for Intelligent Adaptive Deep Brain Stimulation. Exp Neurol 2022, 351, 113993. [Google Scholar] [CrossRef] [PubMed]
  99. Rudroff, T. Decoding Thoughts, Encoding Ethics: A Narrative Review of the BCI-AI Revolution. Brain Res 2025, 1850, 149423. [Google Scholar] [CrossRef] [PubMed]
  100. Saha, S.; Mamun, K.A.; Ahmed, K.; Mostafa, R.; Naik, G.R.; Darvishi, S.; Khandoker, A.H.; Baumert, M. Progress in Brain Computer Interface: Challenges and Opportunities. Front Syst Neurosci 2021, 15, 578875. [Google Scholar] [CrossRef]
  101. Zhang, H.; Jiao, L.; Yang, S.; Li, H.; Jiang, X.; Feng, J.; Zou, S.; Xu, Q.; Gu, J.; Wang, X.; et al. Brain–Computer Interfaces: The Innovative Key to Unlocking Neurological Conditions. Int J Surg 2024, 110, 5745. [Google Scholar] [CrossRef] [PubMed]
  102. Merk, T.; Peterson, V.; Lipski, W.J.; Blankertz, B.; Turner, R.S.; Li, N.; Horn, A.; Richardson, R.M.; Neumann, W.J. Electrocorticography Is Superior to Subthalamic Local Field Potentials for Movement Decoding in Parkinson’s Disease. Elife 2022, 11, e75126. [Google Scholar] [CrossRef]
  103. Cao, T.D.; Truong-Huu, T.; Tran, H.; Tran, K. A Federated Deep Learning Framework for Privacy Preservation and Communication Efficiency. Journal of Systems Architecture 2022, 124. [Google Scholar] [CrossRef]
  104. Lebedev, M.A.; Nicolelis, M.A.L. Brain-Machine Interfaces: From Basic Science to Neuroprostheses and Neurorehabilitation. Physiol Rev 2017, 97, 767–837. [Google Scholar] [CrossRef]
  105. Fick, T.; Van Doormaal, J.A.M.; Tosic, L.; Van Zoest, R.J.; Meulstee, J.W.; Hoving, E.W.; Van Doormaal, T.P.C. Fully Automatic Brain Tumor Segmentation for 3D Evaluation in Augmented Reality. Neurosurg Focus 2021, 51, E14. [Google Scholar] [CrossRef]
  106. Kazemzadeh, K.; Akhlaghdoust, M.; Zali, A. Advances in Artificial Intelligence, Robotics, Augmented and Virtual Reality in Neurosurgery. Front Surg 2023, 10, 1241923. [Google Scholar] [CrossRef]
  107. Zhou, T.; Yu, T.; Li, Z.; Zhou, X.; Wen, J.; Li, X. Functional Mapping of Language-Related Areas from Natural, Narrative Speech during Awake Craniotomy Surgery. Neuroimage 2021, 245, 118720. [Google Scholar] [CrossRef]
  108. Sarubbo, S.; Annicchiarico, L.; Corsini, F.; Zigiotto, L.; Herbet, G.; Moritz-Gasser, S.; Dalpiaz, C.; Vitali, L.; Tate, M.; De Benedictis, A.; et al. Planning Brain Tumor Resection Using a Probabilistic Atlas of Cortical and Subcortical Structures Critical for Functional Processing: A Proof of Concept. Oper Neurosurg (Hagerstown) 2021, 20, E175–E183. [Google Scholar] [CrossRef] [PubMed]
  109. Lachance, B.; Wang, Z.; Badjatia, N.; Jia, X. Somatosensory Evoked Potentials (SSEP) and Neuroprognostication after Cardiac Arrest. Neurocrit Care 2020, 32, 847. [Google Scholar] [CrossRef]
  110. Nikolov, P.; Heil, V.; Hartmann, C.J.; Ivanov, N.; Slotty, P.J.; Vesper, J.; Schnitzler, A.; Groiss, S.J. Motor Evoked Potentials Improve Targeting in Deep Brain Stimulation Surgery. Neuromodulation 2022, 25, 888–894. [Google Scholar] [CrossRef] [PubMed]
  111. Esfandiari, H.; Troxler, P.; Hodel, S.; Suter, D.; Farshad, M.; Cavalcanti, N.; Wetzel, O.; Mania, S.; Cornaz, F.; Selman, F.; et al. Introducing a Brain-Computer Interface to Facilitate Intraoperative Medical Imaging Control – a Feasibility Study. BMC Musculoskelet Disord 2022, 23, 1–10. [Google Scholar] [CrossRef]
  112. Mridha, M.F.; Das, S.C.; Kabir, M.M.; Lima, A.A.; Islam, M.R.; Watanobe, Y. Brain-Computer Interface: Advancement and Challenges. Sensors 2021, 21. [Google Scholar] [CrossRef] [PubMed]
  113. Kim, M.S.; Park, H.; Kwon, I.; An, K.O.; Kim, H.; Park, G.; Hyung, W.; Im, C.H.; Shin, J.H. Efficacy of Brain-Computer Interface Training with Motor Imagery-Contingent Feedback in Improving Upper Limb Function and Neuroplasticity among Persons with Chronic Stroke: A Double-Blinded, Parallel-Group, Randomized Controlled Trial. J Neuroeng Rehabil 2025, 22, 1–13. [Google Scholar] [CrossRef]
  114. Pignolo, L.; Servidio, R.; Basta, G.; Carozzo, S.; Tonin, P.; Calabrò, R.S.; Cerasa, A. The Route of Motor Recovery in Stroke Patients Driven by Exoskeleton-Robot-Assisted Therapy: A Path-Analysis. Medical Sciences 2021, 9, 64. [Google Scholar] [CrossRef]
  115. Yang, S.; Li, R.; Li, H.; Xu, K.; Shi, Y.; Wang, Q.; Yang, T.; Sun, X. Exploring the Use of Brain-Computer Interfaces in Stroke Neurorehabilitation. Biomed Res Int 2021, 2021, 9967348. [Google Scholar] [CrossRef]
  116. Jin, W.; Zhu, X.X.; Qian, L.; Wu, C.; Yang, F.; Zhan, D.; Kang, Z.; Luo, K.; Meng, D.; Xu, G. Electroencephalogram-Based Adaptive Closed-Loop Brain-Computer Interface in Neurorehabilitation: A Review. Front Comput Neurosci 2024, 18, 1431815. [Google Scholar] [CrossRef]
  117. Mane, R.; Wu, Z.; Wang, D. Poststroke Motor, Cognitive and Speech Rehabilitation with Brain–Computer Interface: A Perspective Review. Stroke Vasc Neurol 2022, 7, 541–549. [Google Scholar] [CrossRef]
  118. Zhang, X.; Ma, Z.; Zheng, H.; Li, T.; Chen, K.; Wang, X.; Liu, C.; Xu, L.; Wu, X.; Lin, D.; et al. The Combination of Brain-Computer Interfaces and Artificial Intelligence: Applications and Challenges. Ann Transl Med 2020, 8, 712. [Google Scholar] [CrossRef] [PubMed]
  119. Menze, B.H.; Jakab, A.; Bauer, S.; Kalpathy-Cramer, J.; Farahani, K.; Kirby, J.; Burren, Y.; Porz, N.; Slotboom, J.; Wiest, R.; et al. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS). IEEE Trans Med Imaging 2015, 34, 1993–2024. [Google Scholar] [CrossRef] [PubMed]
  120. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 2015, 9351, 234–241. [Google Scholar] [CrossRef]
  121. Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans Pattern Anal Mach Intell 2018, 40, 834–848. [Google Scholar] [CrossRef]
  122. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. Adv Neural Inf Process Syst 2017, 30. [Google Scholar]
  123. Hatamizadeh, A.; Nath, V.; Tang, Y.; Yang, D.; Roth, H.R.; Xu, D. Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 2022, 12962 LNCS, 272–284. [Google Scholar] [CrossRef]
  124. Hong, K.S.; Khan, M.J. Hybrid Brain-Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review. Front Neurorobot 2017, 11, 275683. [Google Scholar] [CrossRef]
  125. Peng, C.J.; Chen, Y.C.; Chen, C.C.; Chen, S.J.; Cagneau, B.; Chassagne, L. An EEG-Based Attentiveness Recognition System Using Hilbert–Huang Transform and Support Vector Machine. J Med Biol Eng 2020, 40, 230–238. [Google Scholar] [CrossRef]
  126. Rakhmatulin, I.; Dao, M.S.; Nassibi, A.; Mandic, D. Exploring Convolutional Neural Network Architectures for EEG Feature Extraction. Sensors 2024, 24, 877. [Google Scholar] [CrossRef]
  127. Peksa, J.; Mamchur, D. State-of-the-Art on Brain-Computer Interface Technology. Sensors 2023, 23, 6001. [Google Scholar] [CrossRef]
  128. Mungoli, N. Scalable, Distributed AI Frameworks: Leveraging Cloud Computing for Enhanced Deep Learning Performance and Efficiency. 2023.
  129. Subasi, A. Practical Guide for Biomedical Signals Analysis Using Machine Learning Techniques: A MATLAB Based Approach 2019, 1–443. [Google Scholar] [CrossRef]
  130. Delorme, A.; Makeig, S. EEGLAB: An Open Source Toolbox for Analysis of Single-Trial EEG Dynamics Including Independent Component Analysis. J Neurosci Methods 2004, 134, 9–21. [Google Scholar] [CrossRef]
  131. Schirrmeister, R.T.; Springenberg, J.T.; Fiederer, L.D.J.; Glasstetter, M.; Eggensperger, K.; Tangermann, M.; Hutter, F.; Burgard, W.; Ball, T. Deep Learning with Convolutional Neural Networks for EEG Decoding and Visualization. Hum Brain Mapp 2017, 38, 5391–5420. [Google Scholar] [CrossRef] [PubMed]
  132. Wang, J.; Chen, Y.; Hao, S.; Peng, X.; Hu, L. Deep Learning for Sensor-Based Activity Recognition: A Survey. Pattern Recognit Lett 2019, 119, 3–11. [Google Scholar] [CrossRef]
  133. Hatamizadeh, A.; Nath, V.; Tang, Y.; Yang, D.; Roth, H.R.; Xu, D. Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 2022, 12962 LNCS, 272–284. [Google Scholar] [CrossRef]
  134. Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation. 2021.
  135. He, K.; Fan, H.; Wu, Y.; Xie, S.; Girshick, R. Momentum Contrast for Unsupervised Visual Representation Learning. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2019, 9726–9735. [Google Scholar] [CrossRef]
  136. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sánchez, C.I. A Survey on Deep Learning in Medical Image Analysis. Med Image Anal 2017, 42, 60–88. [Google Scholar] [CrossRef] [PubMed]
  137. Shi, W.; Cao, J.; Zhang, Q.; Li, Y.; Xu, L. Edge Computing: Vision and Challenges. IEEE Internet Things J 2016, 3, 637–646. [Google Scholar] [CrossRef]
  138. Yang, Q.; Liu, Y.; Chen, T.; Tong, Y. Federated Machine Learning: Concept and Applications. ACM Trans Intell Syst Technol 2019, 10, 19. [Google Scholar] [CrossRef]
  139. Yang, Q.; Liu, Y.; Chen, T.; Tong, Y. Federated Machine Learning. ACM Transactions on Intelligent Systems and Technology (TIST) 2019, 10. [Google Scholar] [CrossRef]
  140. Zoph, B.; Le, Q.V. Neural Architecture Search with Reinforcement Learning. 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings 2017.
  141. Chen, Z.; Jing, L.; Li, Y.; Li, B. Bridging the Domain Gap: Self-Supervised 3D Scene Understanding with Foundation Models. Adv Neural Inf Process Syst 2023, 36. [Google Scholar]
  142. Edelman, B.J.; Zhang, S.; Schalk, G.; Brunner, P.; Muller-Putz, G.; Guan, C.; He, B. Non-Invasive Brain-Computer Interfaces: State of the Art and Trends. IEEE Rev Biomed Eng 2025, 18. [Google Scholar] [CrossRef] [PubMed]
  143. Simon, C.; Bolton, D.A.E.; Kennedy, N.C.; Soekadar, S.R.; Ruddy, K.L. Challenges and Opportunities for the Future of Brain-Computer Interface in Neurorehabilitation. Front Neurosci 2021, 15, 699428. [Google Scholar] [CrossRef] [PubMed]
  144. Fan, L.; Zhang, F.; Fan, H.; Zhang, C. Brief Review of Image Denoising Techniques. Vis Comput Ind Biomed Art 2019, 2, 1–12. [Google Scholar] [CrossRef]
  145. Chen, Y.; Wang, F.; Li, T.; Zhao, L.; Gong, A.; Nan, W.; Ding, P.; Fu, Y. Considerations and Discussions on the Clear Definition and Definite Scope of Brain-Computer Interfaces. Front Neurosci 2024, 18, 1449208. [Google Scholar] [CrossRef]
  146. Mridha, M.F.; Das, S.C.; Kabir, M.M.; Lima, A.A.; Islam, M.R.; Watanobe, Y. Brain-Computer Interface: Advancement and Challenges. Sensors (Basel) 2021, 21, 5746. [Google Scholar] [CrossRef]
  147. Rajpura, P.; Cecotti, H.; Kumar Meena, Y. Explainable Artificial Intelligence Approaches for Brain-Computer Interfaces: A Review and Design Space. J Neural Eng 2023, 21. [Google Scholar] [CrossRef]
  148. Maiseli, B.; Abdalla, A.T.; Massawe, L. V.; Mbise, M.; Mkocha, K.; Nassor, N.A.; Ismail, M.; Michael, J.; Kimambo, S. Brain–Computer Interface: Trend, Challenges, and Threats. Brain Inform 2023, 10. [Google Scholar] [CrossRef]
  149. Salles, A.; Farisco, M. Neuroethics and AI Ethics: A Proposal for Collaboration. BMC Neurosci 2024, 25, 1–10. [Google Scholar] [CrossRef]
  150. Chaudhary, U.; Birbaumer, N.; Ramos-Murguialday, A. Brain-Computer Interfaces for Communication and Rehabilitation. Nat Rev Neurol 2016, 12, 513–525. [Google Scholar] [CrossRef]
  151. Sun, X.Y.; Ye, B. The Functional Differentiation of Brain–Computer Interfaces (BCIs) and Its Ethical Implications. Humanities and Social Sciences Communications 2023, 10, 1–9. [Google Scholar] [CrossRef]
  152. Keskinbora, K.H.; Keskinbora, K. Ethical Considerations on Novel Neuronal Interfaces. Neurol Sci 2018, 39, 607–613. [Google Scholar] [CrossRef] [PubMed]
  153. Vlek, R.J.; Steines, D.; Szibbo, D.; Kübler, A.; Schneider, M.J.; Haselager, P.; Nijboer, F. Ethical Issues in Brain-Computer Interface Research, Development, and Dissemination. J Neurol Phys Ther 2012, 36, 94–99. [Google Scholar] [CrossRef] [PubMed]
  154. McIntyre, C.C.; Hahn, P.J. Network Perspectives on the Mechanisms of Deep Brain Stimulation. Neurobiol Dis 2010, 38, 329–337. [Google Scholar] [CrossRef] [PubMed]
  155. Borton, D.A.; Yin, M.; Aceros, J.; Nurmikko, A. An Implantable Wireless Neural Interface for Recording Cortical Circuit Dynamics in Moving Primates. J Neural Eng 2013, 10. [Google Scholar] [CrossRef]
  156. Fernandez-Leon, J.A.; Parajuli, A.; Franklin, R.; Sorenson, M.; Felleman, D.J.; Hansen, B.J.; Hu, M.; Dragoi, V. A Wireless Transmission Neural Interface System for Unconstrained Non-Human Primates. J Neural Eng 2015, 12. [Google Scholar] [CrossRef]
  157. Yin, M.; Borton, D.A.; Aceros, J.; Patterson, W.R.; Nurmikko, A. V. A 100-Channel Hermetically Sealed Implantable Device for Chronic Wireless Neurosensing Applications. IEEE Trans Biomed Circuits Syst 2013, 7, 115–128. [Google Scholar] [CrossRef]
  158. Schirrmeister, R.T.; Springenberg, J.T.; Fiederer, L.D.J.; Glasstetter, M.; Eggensperger, K.; Tangermann, M.; Hutter, F.; Burgard, W.; Ball, T. Deep Learning with Convolutional Neural Networks for EEG Decoding and Visualization. ArXiv 2017, arXiv:1703.05051. [Google Scholar] [CrossRef]
  159. Ienca, M.; Andorno, R. Towards New Human Rights in the Age of Neuroscience and Neurotechnology. Life Sci Soc Policy 2017, 13, 1–27. [Google Scholar] [CrossRef]
Figure 1. Two sets of four MRI modalities and Z-score normalization. (Reproduced under the terms of the Creative Commons Attribution 4.0 International License from [42]).
Figure 2. Clinical testing outcomes for DBS, including side-effect thresholds and volume of tissue activated (VTA) models for the right lead. (Reproduced under the terms of the Creative Commons Attribution 4.0 International License from [45]).
Figure 3. AI-driven automated segmentation of epilepsy resection cavities. (Reproduced under the terms of the Creative Commons Attribution 4.0 International License (CC BY) from [48]).
Figure 4. Hybrid BCI and image segmentation system for precision neurosurgery. The system integrates real-time intracranial EEG with DL-based medical image analysis to optimize surgical planning. (Reproduced under the terms of the Creative Commons Attribution 4.0 International License from [124]).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and the preprint are cited in any reuse.