1. Introduction
Musical performance, when viewed from the musician’s perspective, is a demanding activity that represents the culmination of years of dedication, study, and practice. In high-profile contexts—such as performances with renowned orchestras, at prestigious venues, and during major festivals—professional musicians are expected, both by their audiences and, perhaps more critically, by themselves, to deliver performances that are not only technically flawless but also highly expressive and authentic. Achieving this level of artistry consistently requires a deep commitment that spans many years and encompasses a wide range of subdisciplines, including music history and theory, harmonic analysis, auditory training, instrumental technique, and both solo and ensemble performance practice [
1,
2]. In other words, musical performance requires a triad of knowledge (
Figure 1): musicians must know the piece (the music), the instrument, and themselves.
Consequently, a significant portion of a musician’s professional life focuses on developing and maintaining effective study and practice routines. These routines must be tailored not only to the specific instrument and musical genre but also to the unique challenges faced by each individual musician [
1]. Traditional practice methods typically involve a combination of solitary work and tutored practice sessions, where real-time feedback from an instructor plays a critical role in identifying and correcting both technical and expressive issues. Such feedback is especially vital in addressing technical aspects that are difficult to perceive from a first-person perspective, such as involuntary muscle movements, subtle tempo inconsistencies, and issues related to posture and ergonomics.
To address these challenges, musicians often employ various strategies, including practicing in front of mirrors, using physical aids (such as tape or braces) to control movement, and recording their practice sessions for later analysis [
2]. In this context, emerging human-computer interaction (HCI) tools—particularly electroencephalography (EEG)-based brain-computer interfaces (BCIs)—have been increasingly investigated as real-time feedback systems [
3,
4,
5]. These technologies could potentially be used to provide immediate information about low-level parameters associated with cognitive and biomechanical processes during instrumental performance. As a result, they could deliver both quantitative and qualitative insights into technical execution, supporting immediate correction as well as long-term skill development.
Theoretically, a set of functional requirements for an idealized Technology-Enhanced Musical Practice (TEMP) system based on EEG-based BCIs could be devised (see
Section 3). Such a system would be able to monitor physical and cognitive aspects of a musician’s performance during training and provide assessment and corrective feedback.
While prior reviews have examined the use of BCIs and music (listening) as a stimulus for guiding, entraining and modulating the brain in desired directions in applications ranging from neuro-rehabilitation to therapeutic interventions, stress management, diagnostics of neurological disorders, and sports performance enhancement [
6,
7], to the best of our knowledge, no prior work has examined how EEG-based BCIs might enable the operationalization of the functional requirements of a TEMP system. This review seeks to fill this gap by examining the existing literature on how EEG-based BCI technology could support the requirements of a TEMP system, thus augmenting current skilled musical instrument practice systems in terms of motor learning, cognitive feedback, and training efficiency.
In this work, we evaluate the potential use of EEG-based BCIs in supporting TEMP-related components of musical training. We describe a conceptual framework (TEMP) designed to group relevant training features, and from this analysis we assess the feasibility and current limitations of operationalizing such features via EEG-based BCI technology. Our focus is on specific, measurable parameters rather than abstract qualities such as emotion, expressiveness, or artistic intention. Although there is now sufficient knowledge to begin investigating some of these more complex dimensions, such aspects should be explored only in later iterations of system development. This is motivated by the goal of proposing a broadly applicable training system that musicians of any instrument could use. By excluding genre- or style-specific considerations, the framework aims to remain as generic and adaptable as possible. The scope is limited to technical performance on traditional acoustic or electroacoustic instruments (keys, strings, woodwinds, brass, and percussion) and does not address the generation of music or sound through signal mapping from biosensors or brain-computer interfaces.
The remainder of this paper is organized as follows.
Section 2 provides a summarized overview of BCIs, technology-enhanced music training, and relevant studies that used BCIs in music training and performance.
Section 3 describes the conceptual TEMP framework.
Section 4 explains the research methodology and objectives.
Section 5 summarizes the most relevant findings from the collected articles.
Section 6 presents a discussion correlating the state-of-the-art of BCI technology that emerged from the research results with the features delineated in the conceptual TEMP framework.
Section 7 presents our conclusion and future work.
2. Background
2.1. Brain-Computer Interfaces
Once a science-fiction dream, non-invasive BCIs are now a consumer-grade technology. Leveraging state-of-the-art machine learning (ML) algorithms, BCIs have been used as controllers and brain-state monitors, giving rise to neuro-rehabilitation, gaming, and assistive-technology applications [
8].
A BCI is a device that can measure neural activity during the performance of an active or passive task. BCI devices can be invasive, where electrodes are placed inside specific regions of the brain and directly measure voltage variations at that location, or non-invasive, where neural activity is measured indirectly.
Such is the case with electroencephalography (EEG), which measures electrical voltage variations on the scalp that are a consequence of the underlying electrical activity of the brain. Electrical patterns in EEG BCIs can be detected under passive and active paradigms. In passive paradigms, the user is presented with sensory or cognitive input, such as an actual image of a banana, and a characteristic neural response emerges. In active paradigms, neural activity patterns are elicited when the user performs or imagines performing a limb movement, such as taking a step, or thinks about a concept or idea, such as “fruit” [
9].
Despite remarkable progress, the utilization of BCIs remains primarily confined to the digital health and accessibility domains. In the music context, most research and applications are focused on the use of BCIs to generate or control musical parameters, and to modulate neural activity during cognitive or motor tasks [
6].
2.2. Technology-Enhanced Musical Practice
The path from musical apprentice to professional performer involves years of training; evidence suggests that musicians typically accumulate on the order of 10,000 hours of practice before reaching a professional level [
2]. This effort has lifelong effects and even shapes the practitioner’s anatomy and brain development [
1]. Hence, several strategies and tools that assist in this process have been developed over the years, and since the 1990s, digital technology has played an important role in this matter [
10]. The integration of digital technologies into music pedagogy has transformed the landscape of musical practice, moving beyond traditional methods toward more data-driven, interactive, and individualized approaches. Here, technology-enhanced musical practice (TEMP) refers to the use of computational tools — including software platforms, biosensors, motion capture, and brain-computer interfaces (BCIs) — to support and optimize the learning and refinement of musical performance [
10]. These innovations aim to augment the effectiveness of practice sessions by providing real-time feedback, quantitative assessments, and cognitive or physiological insights that were previously only possible in tutored practice sessions, or by analyzing video and audio recordings.
Early developments in this field focused on Machine Audition (MA)-based feedback systems for pitch and tempo accuracy. These systems laid the foundation for currently popular tools like MakeMusic and Yousician [
11,
12], which use music information retrieval algorithmic analysis to provide real-time correction and progress tracking [
10]. These are typical “play-along” applications focused on beginners and amateur music enthusiasts. They offer the possibility to play their instrument together with an interactive musical score, which can be of traditional notation or a simplified representation such as tablature, and provide instant feedback on the correctness of pitch and tempo. While this approach can be fruitful and fun for hobbyists, biomechanical and cognitive parameters can not be detected via machine audition strategies. For example, on instruments where fingering strategy is important (such as string and keyboard instruments), it is not possible to provide feedback on the fingering positioning just by audio analysis. Moreover, MA systems rely on note attack and envelope following for tempo estimation and music tonality for pitch estimation, and even though this can be very accurate for certain genres and instruments, like Pop music percussion and keyboard instruments, it performs poorly in non-rhythmic music that is written based on durations, and music that explores instrumental micro tonality [
10]. Thus, even though relevant and useful, to develop an advanced, professional training system, other sensing technologies and strategies that can deal with real-time feedback of biomechanic and cognitive information are required.
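To make the machine-audition limitation concrete, the envelope-following idea mentioned above can be sketched in a few lines. The following Python snippet is an illustrative toy, not any product's implementation: it rectifies and smooths a mono signal, detects onsets as threshold crossings of the envelope, and estimates tempo from the inter-onset intervals. Function names and parameters are our own.

```python
import numpy as np

def estimate_tempo(sig, sr, smooth_ms=20.0, threshold_ratio=0.5):
    """Toy envelope-following tempo estimator for a mono signal."""
    env = np.abs(sig)                                        # rectify
    win = max(1, int(sr * smooth_ms / 1000.0))
    env = np.convolve(env, np.ones(win) / win, mode="same")  # smooth
    above = env > threshold_ratio * env.max()
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) / sr    # rising edges
    if onsets.size < 2:
        return None
    ioi = np.diff(onsets)               # inter-onset intervals (seconds)
    return 60.0 / np.median(ioi)        # beats per minute

# Synthetic input: 10 ms clicks every 0.5 s, i.e. 120 BPM
sr = 8000
sig = np.zeros(sr * 4)
for k in range(8):
    start = int(k * 0.5 * sr)
    sig[start:start + 80] = 1.0
print(estimate_tempo(sig, sr))          # expect roughly 120
```

An estimator of this kind works well on percussive material, but, as discussed above, it has no access to fingering, posture, or cognitive state, which is precisely the gap the TEMP framework targets.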
With regard to applications for musical training and practice using EEG-based BCIs, Raffaella et al. devised a study to assess the effect of real-time neurofeedback in guitar practice [
11]. Their system used an EEG-based BCI to track user focus and motor performance. Visual feedback was computed as a ratio between score reading (focus) and playing (motor performance), extracted from the synchrony and asynchrony between sensorimotor and cognitive information decoded from the EEG signals; a good balance between reading attention and actual playing would indicate good coordination between reading and playing ability. They implemented an experimental design with 20 participants, divided into a BCI group and a control group, who took part in regular musical training sessions over a period of 2 months and were evaluated on their accuracy in performing a predetermined chord progression. After the training period, the players’ performance was evaluated for both note and tempo accuracy in the task. The results indicate that the BCI group performed significantly better on all evaluated parameters, leading the authors to conclude that real-time neurofeedback improved the players’ learning process.
A study by Jordana et al. aimed to determine the effects of alpha neurofeedback and EMG biofeedback protocols on improving musical performance in violinists and viola players [
4]. Their objective was to investigate the impact of this combined alpha-EEG/EMG biofeedback on electrophysiological and psychometric parameters, as well as to compare the responses of musicians with high and low individual alpha peak frequency (APF) to usual practice versus practice combined with biofeedback training. They also wanted to assess whether biofeedback computer software is an effective technology for improving psychomotor performance in musicians.
The experiment involved 12 music students (10 violinists, 2 viola players), divided into an experimental group that received biofeedback (alpha/EMG) combined with music practice, and a control group that only did music practice over a two-month period. The biofeedback system used EEG electrodes on the scalp and EMG electrodes on the forehead. Feedback, in the form of “applause” sounds, was provided in real-time when participants achieved simultaneous supra-threshold bursts of alpha activity and sub-threshold bursts of integrated EMG (IEMG). Participants practiced their usual repertoire during the sessions, with the stated goal of achieving high-quality performance accompanied by feelings of ease and comfort. The results indicated that alpha-EEG/EMG biofeedback training, when used during music performance, improved all measured EEG and EMG parameters associated with optimal psychomotor functioning. The efficiency of the biofeedback training was positively correlated with baseline alpha activity indices, including APF, individual alpha band width (IABW), and amount of alpha suppression (AAS) change. Practice combined with biofeedback led to an increase in alpha activity indices and a decrease in IEMG in both low and high APF groups, with changes being more pronounced in the high APF group. The study concluded that alpha-EEG/EMG biofeedback training is effective in alleviating psychosomatic disturbances during musical execution and can enhance desired self-regulation and musical performance quality.
Riquelme et al. explored whether musical training, specifically in pianists, influences the ability to control an EEG-based BCI system using motor imagery (MI) [
5]. They aimed to assess and compare the performance of pianists interacting with an MI-based BCI system against a control group of non-musicians. They hypothesized that the anatomical and functional differences developed through musical practice might lead to improved BCI control for musicians. The experimental setup involved testing the BCI performance of four pianists and four non-pianists using motor imagery of left and right hand movements to control a BCI system. The study followed a standard training protocol over three sessions, including a training-only trial followed by trials with real-time feedback based on the user’s interpreted brain activity. EEG signals were recorded using a 16-channel device and processed using Common Spatial Patterns (CSP) for feature extraction and Linear Discriminant Analysis (LDA) for classification. Both online and offline analyses of the BCI accuracy were conducted, focusing on the sessions where feedback was provided. The results showed that pianists achieved a significantly higher mean level of BCI control (74.69%) through MI during the final tested trial compared to the control group (63.13%). Offline analysis supported this finding, indicating a generally better performance for the pianist group across different data subsets. Scalp topography analysis further suggested that non-pianists exhibited a larger area of brain activation during the task compared to pianists. The study concluded that these findings provide indications that musical training is indeed a factor that improves performance with a BCI device operated via movement imagery.
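For readers unfamiliar with the CSP-plus-LDA pipeline used in this study, the following self-contained Python sketch illustrates the approach on synthetic two-class "EEG" epochs generated through a fixed mixing matrix. This is not the authors' code: all names, the boost factor, and the data generation are our own assumptions, and the closed-form two-class LDA stands in for a library implementation.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n_ch, n_t = 4, 256
A = rng.standard_normal((n_ch, n_ch))        # shared, unknown mixing matrix

def make_epochs(n_trials, boost_src):
    """Synthetic epochs: one latent source has boosted power per class."""
    out = []
    for _ in range(n_trials):
        S = rng.standard_normal((n_ch, n_t))
        S[boost_src] *= 3.0
        out.append(A @ S)
    return np.array(out)

def trial_cov(X):
    """Trace-normalized covariance averaged over trials."""
    return np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)

def csp_filters(X0, X1, n_pairs=1):
    """Spatial filters from the generalized eigenproblem C0 w = l (C0+C1) w."""
    C0, C1 = trial_cov(X0), trial_cov(X1)
    _, W = eigh(C0, C0 + C1)                 # eigenvalues ascending
    idx = list(range(n_pairs)) + list(range(n_ch - n_pairs, n_ch))
    return W[:, idx].T

def log_var(X, F):
    """Log-variance of spatially filtered trials: the classic CSP feature."""
    return np.log(np.einsum("fc,nct->nft", F, X).var(axis=2))

X0, X1 = make_epochs(40, 0), make_epochs(40, 1)
F = csp_filters(X0, X1)
feat = np.vstack([log_var(X0, F), log_var(X1, F)])
y = np.r_[np.zeros(40), np.ones(40)]

# Two-class LDA: project on Sw^-1 (mu1 - mu0), threshold at the midpoint
mu0, mu1 = feat[y == 0].mean(0), feat[y == 1].mean(0)
Sw = np.cov(feat[y == 0].T) + np.cov(feat[y == 1].T)
w = np.linalg.solve(Sw, mu1 - mu0)
b = -0.5 * w @ (mu0 + mu1)

# Evaluate on fresh synthetic trials
Xt0, Xt1 = make_epochs(20, 0), make_epochs(20, 1)
ft = np.vstack([log_var(Xt0, F), log_var(Xt1, F)])
yt = np.r_[np.zeros(20), np.ones(20)]
acc = ((ft @ w + b > 0).astype(float) == yt).mean()
print(f"held-out accuracy: {acc:.2f}")
```

The key design choice mirrored here is that CSP turns a spatial decoding problem into a low-dimensional variance-ratio problem, which is why a simple linear classifier such as LDA suffices downstream.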
Even though research on this topic is scarce, these studies highlight the potential of the technology and indicate that further research might reveal rewarding results.
3. Conceptual TEMP Framework
As a novel contribution, we propose the conceptual TEMP framework for high-performance training, integrating standard digital practice tools with advanced, sensor-informed functionalities aimed at monitoring physical and cognitive aspects of performance. The purpose of this framework is not to replicate the role of a human instructor, but to complement and enhance it by delivering detailed, real-time, and individualized feedback across multiple layers of the performer’s experience. The following points outline the core functional objectives of this conceptual framework, which serves as a basis for subsequent discussion on how current sensing technologies might support its implementation.
3.0.1. Biomechanical Awareness
To support technical refinement and injury prevention, the TEMP framework should provide awareness of the musician’s full-body biomechanics during practice. This includes:
Posture and Balance: Monitoring of overall playing posture, alignment, and weight distribution.
Movement and Muscle Activity: Real-time monitoring of muscle tension and relaxation patterns, especially in key areas such as forearms, hands, neck, shoulders, and lower back.
Fine Motor and Dexterity: Capture of detailed finger, hand, wrist, arm, and facial muscle movements.
Breathing Control: For wind and voice instrumentalists, diaphragm engagement and respiratory patterns are key parameters of the technique.
Head and Facial Movement: Monitoring facial tension and head alignment to identify strain or compensatory patterns that may indicate suboptimal technique. These capabilities are particularly valuable in identifying inefficiencies or compensatory behaviors that may not be easily perceived from a first-person perspective.
Movement intention: A core functionality of the TEMP framework would be the ability to distinguish intentional, goal-directed movement from involuntary or reflexive motion. This distinction is essential in helping musicians identify habits such as unwanted tension, tremor, or unintentional shifts in posture. By separating these movement types, the system can provide feedback that distinguishes between technical errors and unconscious physical responses, enhancing the performer’s body awareness and self-regulation during practice.
Coordination and Movement Fluidity: Evaluation of coordination and movement fluidity during transitions and articulations.
3.0.2. Tempo Processing
Tempo constitutes a foundational parameter of musical practice. From the earliest stages of training, performers are educated to develop an acute awareness of temporal regularity, enabling them to recognize externally presented tempi with precision. This perceptual skill is fundamental in ensemble coordination, stylistic authenticity, and interpretive nuance across repertoires and performance contexts. Equally central is the capacity to execute tempo reliably: motor functions must translate the musician’s internal pulse into consistent rhythmic output.
A third, cognitively mediated dimension involves the mental rehearsal or imagination of tempo in the absence of explicit sound or movement, supporting score study, silent practice, and preparatory planning. Because perception, production, and imagery interact continuously before and during performance, real-time, context-sensitive feedback on all three modes of tempo processing would represent an important training enhancement. This would allow for the diagnosis of discrepancies between intended and realized tempi and reinforce stable internal timing models.
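The diagnosis of discrepancies between intended and realized tempi can be illustrated with a small sketch. Assuming onset times are already available from some upstream detector (audio or MIDI), the hypothetical helper below, entirely our own construction, reports per-beat timing error and overall drift against a target tempo.

```python
import numpy as np

def tempo_deviation(onsets, target_bpm):
    """Per-beat timing error (ms) and drift (ms/beat) vs. a target tempo."""
    onsets = np.asarray(onsets, dtype=float)
    beat = 60.0 / target_bpm
    ideal = onsets[0] + beat * np.arange(onsets.size)   # metronomic grid
    err_ms = (onsets - ideal) * 1000.0
    drift = np.polyfit(np.arange(onsets.size), err_ms, 1)[0]  # linear trend
    return err_ms, drift

# A player who starts at 120 BPM and gradually rushes
onsets = np.cumsum([0.0] + [0.5 - 0.005 * k for k in range(15)])
err, drift = tempo_deviation(onsets, 120)
print(f"drift: {drift:.1f} ms/beat")   # negative values indicate rushing
```

Feedback of this kind addresses only the production mode of tempo processing; the perception and imagery modes discussed above require neural rather than acoustic measurement.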
3.0.3. Cognitive Engagement
As performers reach higher levels of proficiency, much of their motor control becomes automated. While this supports fluency, it can also lead to disengaged or reflexive execution. The TEMP framework should therefore aim to assess the extent to which playing is being performed with full awareness of artistic and motor intentions. This state of full awareness and engagement is also described as the “Flow State”.
Identifying the state of “Flow” and quantifying the degree of engagement and awareness can help advanced performers recognize when they are practicing mindfully versus when they are relying too heavily on muscle memory, thus encouraging more deliberate and reflective practice habits.
4. Methodology
This topical review aims to assess whether current EEG-based brain-computer interface (BCI) technologies are capable of realizing the physiological and cognitive features required by the TEMP framework. To do so, we conducted a targeted literature search designed to identify the state-of-the-art supporting each TEMP-related feature described in
Section 3, namely:
Posture and Balance.
Movement and Muscle Activity.
Fine Motor and Dexterity.
Breathing Control.
Head and Facial Movement.
Movement Intention.
Coordination and Movement Fluidity.
Tempo Processing.
Cognitive Engagement.
Rather than evaluating BCI applications strictly within the music domain, we aim to assess potential feasibility from empirical evidence presented in broad contexts (e.g., rehabilitation, human-computer interaction, gaming, or stress monitoring). We are primarily concerned with answering whether BCIs can feasibly support the real-time integration of the desired parameters using non-invasive EEG, and how near (or far) the technology is from a potential prototype development.
4.1. Search Strategy
For each TEMP feature (e.g., posture and balance, coordination and movement fluidity, movement and muscle activity, etc.), we constructed a tailored Boolean search phrase combining EEG/BCI-related terms with feature-specific keywords applied to the title and abstract fields. The full list of search terms for each TEMP feature is shown in
Appendix A.
4.2. Databases and Filters
Searches were conducted using PubMed and IEEE Xplore databases, covering both medical and technical perspectives of EEG decoding. To ensure the relevance, methodological rigor, and translational potential of the literature included in this topical review, a set of inclusion and exclusion criteria was systematically applied (
Table 1).
Only peer-reviewed journal articles published from 2020 onward were considered, reflecting recent advancements in EEG-based BCI technologies. Eligible studies were required to report empirical findings derived from human participants and to evaluate at least one component aligned with the functional objectives of the TEMP framework. Furthermore, included studies had to report technical performance metrics such as detection accuracy, latency, or signal reliability, to support a critical assessment of the feasibility of EEG-based detection mechanisms for real-time musical BCI applications. To maintain focus on non-invasive EEG systems and active user participation paradigms, studies employing alternative neuroimaging methods (e.g., fMRI, fNIRS), using exclusively passive BCI strategies based on external stimulation (e.g., P300, SSVEP), or purely motor imagery tasks lacking overt movement execution were excluded. Additionally, works focusing exclusively on facial expression analysis, emotion recognition, or simulations not based on human participant data were not considered.
These criteria aimed to ensure the inclusion of studies that provide ecologically valid and technically relevant insights into TEMP-feature implementation. Survey or review papers were also included when they provided synthesized comparisons of recent EEG-BCI advancements relevant to at least one TEMP feature.
4.3. Goal of Analysis
This analysis aims to (a) examine whether current EEG-based BCI research empirically addresses the functional domains defined in the TEMP framework; (b) identify underexplored areas; (c) assess the degree of technical progress toward real-time application; and (d) evaluate the translational potential of existing approaches for developing sensor-informed music training systems.
5. Results
From the search and screening process previously described in
Section 4, we proceed with an overview of the current state-of-the-art, key findings, and research evidence related to the core features of the TEMP framework.
5.1. Biomechanical Awareness
In our context, biomechanical awareness can be understood as the capacity of an EEG-based BCI to identify and interpret the brain’s electrical signals related to the body’s physical state: movement, posture, force production, and associated sensations. This field explores how the brain internally represents, monitors, and controls the mechanics of the body.
The analyzed sources collectively demonstrate that various aspects of biomechanics, from fine finger movements and upper limb actions to head rotations and overall posture and balance, have distinct correlates in EEG signals that can be decoded and potentially used for HCI purposes. In the following subsections, we present the most significant findings from each biomechanics feature specified in
Section 3.0.1.
5.1.1. Posture and Balance
Our analysis shows that EEG recordings can indeed provide a valuable window into the complex neural mechanisms underlying human posture and balance control [
12,
13]. Maintaining balance is a dynamic process involving the hierarchical organization and interconnectedness of neural ensembles throughout the central nervous system, including the cerebral cortex, and necessitates the continuous integration of multisensory inputs such as visual, vestibular, and somatosensory information [
13]. A prominent EEG signal elicited by sudden balance disturbances is the perturbation-evoked potential (PEP) [
14], characterized by components like the N1 peak (a negative voltage deflection typically observed 100-200 ms after the perturbation) localized in frontal and central brain regions. The N1 is considered an indicator of postural perturbation and is influenced by physical and psychological factors [
14]. Beyond event-related potentials, resting state brain activity and other evoked responses like Heartbeat-Evoked Potentials (HEPs) have also been investigated in relation to posture and interoception [
15].
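As an illustration of how the N1 component might be located in practice, the toy Python function below searches the 100-200 ms post-perturbation window of an averaged epoch for its most negative deflection. The data are synthetic and the function is our own sketch, not drawn from the cited studies.

```python
import numpy as np

def n1_peak(epoch, sr, t0=0.100, t1=0.200):
    """Latency and amplitude of the most negative deflection in the
    100-200 ms window after perturbation onset (sample 0)."""
    i0, i1 = int(t0 * sr), int(t1 * sr)
    k = int(np.argmin(epoch[i0:i1]))
    return (i0 + k) / sr, epoch[i0 + k]

# Synthetic frontocentral average: 10 Hz background + negativity at 150 ms
sr = 500
t = np.arange(int(0.5 * sr)) / sr
epoch = 0.5 * np.sin(2 * np.pi * 10 * t)
epoch = epoch - 8.0 * np.exp(-((t - 0.150) ** 2) / (2 * 0.01 ** 2))
lat, amp = n1_peak(epoch, sr)
print(lat, round(amp, 2))
```

In real recordings the N1 is identified on trial averages rather than single trials, since the response is small relative to background EEG.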
Analysis of EEG signals in the frequency domain has revealed that specific oscillatory patterns are associated with postural control and stability [
16]. Low-frequency spectral components, particularly the theta rhythm (3-10 Hz), carry information related to the direction of induced changes in postural stability [
17]. Other frequency bands such as alpha (8-12 Hz) and beta (13-40 Hz) are also relevant, showing modulation with balance perturbations and cognitive load [
16]. For instance, increased cognitive workload can lead to an increase in frontal theta power and a decrease in parietal alpha power [
18]. Functional connectivity analysis, which assesses the coordination between different brain regions, demonstrates that connections within brain networks are reconfigured during balance tasks, especially under dual-task conditions or with increasing difficulty, involving changes in connectivity in delta, theta, alpha, and beta bands [
21].
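Band-power features of the kind described above are commonly computed with Welch's method. The following sketch uses scipy on synthetic single-channel data with a dominant 10 Hz alpha rhythm; the band edges are one common convention and differ slightly from the ranges quoted in this section, which vary between studies.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (13, 30)}

def band_powers(x, sr):
    """Mean Welch PSD per frequency band for one EEG channel."""
    f, psd = welch(x, fs=sr, nperseg=2 * sr)   # 2 s windows
    return {name: psd[(f >= lo) & (f < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# Synthetic channel: strong 10 Hz alpha rhythm plus broadband noise
sr = 250
t = np.arange(30 * sr) / sr
rng = np.random.default_rng(1)
x = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
p = band_powers(x, sr)
print(max(p, key=p.get))                       # dominant band
```

Ratios between such band powers (e.g., frontal theta rising while parietal alpha falls) are the kind of workload index referenced above.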
These EEG insights are being applied in various research areas, including investigating the effects of interventions like cervical traction on brain activity in digital device users [
15], exploring how different body postures (sitting vs. standing) modulate interoceptive processing and visual processing [
19], and understanding the influence of posture and environment (e.g., Virtual Reality) on cognitive state decoding for passive Brain-Computer Interfaces (BCIs) [
18]. Studies have successfully classified the direction of postural stability changes from EEG data, even in individuals with neurological conditions like chronic stroke [
17]. Furthermore, the reliability of EEG signals like the N1 potential is being characterized to determine their potential as biomarkers for balance health [
12]. Challenges in these applications include dealing with movement-related artifacts in mobile or standing conditions and ensuring the robustness of mental state decoding across different contexts [
18].
5.1.2. Movement and Muscular Activity
Researchers have widely explored how EEG signals can be used to understand human movement, particularly in the upper limbs—hands, wrists, arms, and shoulders [
22,
23,
24,
25,
26,
27,
28,
29,
29]. A central goal has been to decode how the hand moves through space, including its position, speed, and motion path [
23,
30,
31,
32,
32]. Other studies focused on recognizing different types of hand actions, like grasping or pushing, and the forces involved [
25,
28,
33].
Movements like reaching, flexing the wrist or arm, and finger tapping were also analyzed, along with more complex or continuous actions split into smaller motion units [
22,
26,
27,
28,
34,
35,
36]. Although less common, similar decoding methods were applied to the lower limbs, especially to detect when a movement begins or to classify walking-related tasks [
23,
24,
29,
32,
37].
Besides identifying the type of movement, researchers also decoded how fast a limb moved, how strong the motion was, and in what direction it went [
22,
23,
31,
32,
33,
34,
38,
33].
Several EEG patterns were key for this decoding. For instance, slow brain waves in the delta band (below 4 Hz) often carry important information about how limbs move [
23,
28,
32,
39]. These slow waves also responded to planning and starting a movement [
39,
40]. Other frequency bands, like theta (4–8 Hz), beta, and low-gamma (up to 40 Hz), also contributed in specific cases [
23,
26,
40].
Another useful signal was the movement-related cortical potential (MRCP), which appears just before and during voluntary movement. MRCPs helped detect fast, planned actions known as ballistic movements and provided clues about movement direction and speed [
26,
33,
36,
39,
41].
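A minimal sketch of how an MRCP might be exposed from movement-aligned epochs: average the trials, then low-pass filter to reveal the slow pre-movement negativity. The data are synthetic and the cutoff, epoch layout, and names are our own assumptions rather than any cited study's pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def mrcp_average(epochs, sr, cutoff=3.0):
    """Average movement-aligned epochs, then low-pass filter to expose
    the slow pre-movement negativity (MRCP)."""
    b, a = butter(4, cutoff / (sr / 2), btype="low")
    return filtfilt(b, a, epochs.mean(axis=0))

# Synthetic trials: slow negative ramp toward movement onset (sample n//2)
rng = np.random.default_rng(3)
sr, n = 250, 500                       # 2 s epochs, movement onset at 1 s
ramp = np.concatenate([np.linspace(0.0, -5.0, n // 2), np.full(n // 2, -5.0)])
epochs = ramp + 2.0 * rng.standard_normal((30, n))
avg = mrcp_average(epochs, sr)
print(round(float(avg[n // 2 - 1]), 1))    # amplitude just before onset
```

Because the MRCP builds up over hundreds of milliseconds before movement, its slope and amplitude are the features typically exploited for the direction and speed cues mentioned above.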
In terms of results, EEG systems showed strong performance in classifying movement. For example, detecting when someone moved their arm versus staying still reached up to 88.94% accuracy [
22], and identifying movement direction in multi-choice tasks reached about 80% [
29,
33]. Even low-cost, commercial EEG devices performed well, with accuracies around 73% in some tasks [
31,
32]. Deep learning methods significantly improved these results, achieving up to 99% accuracy in decoding different movement features [
28].
Continuous tracking of movements—estimating motion over time rather than just classifying it—is clearly more difficult. On average, studies reported a moderate correlation (around 0.46) between decoded and actual movements [
42]. More advanced methods, like Kalman filters or deep learning, improved this to about 0.5–0.57 [
30,
32].
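The decoded-versus-actual correlation metric reported by these studies can be illustrated with a toy linear decoder. The sketch below trains a closed-form ridge regression from synthetic multichannel "EEG" to a one-dimensional trajectory and reports the held-out Pearson correlation; it is a deliberately simple stand-in for the Kalman-filter and deep-learning pipelines cited, with all data and names our own.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_ch = 4000, 8
traj = np.cumsum(rng.standard_normal(n)) * 0.01      # smooth 1-D trajectory
traj -= traj.mean()
mix = rng.standard_normal(n_ch)                      # channel mixing weights
eeg = np.outer(traj, mix) + 0.5 * rng.standard_normal((n, n_ch))

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression weights."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Train on the first half, evaluate on the second
half = n // 2
w = ridge_fit(eeg[:half], traj[:half])
pred = eeg[half:] @ w
r = np.corrcoef(pred, traj[half:])[0, 1]
print(f"decoded-vs-actual correlation: {r:.2f}")
```

Real EEG decoders face far lower signal-to-noise ratios than this toy, which is why the reported correlations cluster around 0.46-0.57 rather than near 1.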
The analyzed studies reported tasks involving reaching, following targets, or self-initiated actions [
23,
26,
30,
31,
32]. Practical uses include rehabilitation for motor impairments [
22,
23,
29,
31,
41], controlling prosthetic limbs and exoskeletons [
22,
23,
28,
30,
36,
37,
42], and developing more natural brain-computer interfaces [
22,
23,
23].
5.1.3. Fine Motor and Dexterity
Recent research has shown that EEG signals can successfully distinguish different fine motor tasks by analyzing brain activity patterns alongside behavioral data [
43,
44]. For example, tasks such as sinusoidal versus steady force tracking, performed with either hand, can be classified with high accuracy using EEG combined with force measurements, for both novices and experts [
43]. Across individuals, classification results consistently exceeded chance levels, indicating reliable extraction of task-relevant brain features [
43,
44]. Similarly, distinguishing between left and right voluntary finger movements using single-trial EEG data has been achieved with accuracies sometimes exceeding 90%, based on spatial and temporal signal characteristics [
44]. However, achieving such high accuracy with single-trial EEG remains challenging due to low signal-to-noise ratios (SNR) [
44].
Beyond general task classification, efforts have targeted identifying specific finger movements in 2-, 3-, 4-, and 5-class separation problems [
45,
46,
47,
48,
49]. Although overlapping brain activity for different fingers complicates this task, EEG-based decoding is possible. Studies using ultra-high-density (UHD) EEG, which offers more electrodes and better spatial resolution, report improved accuracy in classifying individual finger movements compared to traditional EEG setups [
45,
47], with one study reporting around 81% accuracy when distinguishing thumb from little finger movements using UHD EEG [
47].
Research has also progressed toward decoding detailed movement parameters like speed and hand position. Visually guided reaching tasks have been successfully classified for direction, speed, and force using EEG [
31]. Even with commercially available mobile EEG systems, movement speed classification accuracies reached about 73%, demonstrating practical feasibility [
31]. Furthermore, decoding levels of attempted finger extension (low, medium, high effort) is possible from EEG signals, including in stroke patients unable to physically perform the movements [
50].
Several key EEG features support effective decoding of fine motor control. ERDs and ERSs reflect decreases and increases in power in specific frequency bands, particularly the mu/alpha (8–13 Hz) and beta (13–30 Hz) bands, during motor tasks and imagery [
31,
44,
45,
48,
50]. These oscillatory patterns are essential in identifying individual finger movements and graded effort levels [
45,
50]. MRCPs provide additional information related to movement initiation and execution, including finger flexion and extension [
31,
48]. Overall, spectral power features across theta, alpha, and beta bands, and the spatial distribution of these signals, play major roles in decoding [
31,
45,
46,
50,
51,
52]. Some studies have also explored EEG phase patterns, which can outperform amplitude-based methods in classifying finger movements [
46].
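As an illustration of the oscillatory features discussed above, the following Python sketch computes band power with Welch's method and expresses a movement-related power change as an ERD percentage relative to a rest baseline. All signals are synthetic and the sampling rate is an assumption; real pipelines would operate on epoched, artifact-cleaned EEG.

```python
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate (Hz)

def band_power(x, fs, lo, hi):
    """Mean spectral power of x in the [lo, hi] Hz band (Welch PSD)."""
    f, psd = welch(x, fs=fs, nperseg=fs)    # 1-s windows -> 1 Hz resolution
    mask = (f >= lo) & (f <= hi)
    return psd[mask].mean()

def erd_percent(baseline, task, fs, band):
    """ERD/ERS as percent power change relative to rest.
    Negative = desynchronization (power drop), positive = synchronization."""
    p_base = band_power(baseline, fs, *band)
    return 100.0 * (band_power(task, fs, *band) - p_base) / p_base

# Synthetic demo: a 10 Hz mu rhythm attenuated during "movement".
rng = np.random.default_rng(1)
t = np.arange(0, 4, 1 / FS)
mu = np.sin(2 * np.pi * 10 * t)
noise = rng.normal(0, 0.5, t.size)
rest = 2.0 * mu + noise     # strong mu rhythm at rest
move = 0.5 * mu + noise     # attenuated mu during movement

erd = erd_percent(rest, move, FS, band=(8, 13))
print(f"mu-band ERD: {erd:.1f}%")
```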
Factors influencing decoding success include the EEG system’s spatial resolution; UHD EEG systems provide better differentiation of signals from closely spaced motor areas, though volume conduction effects may limit gains compared to lower-density setups [
47,
48]. The context of expertise matters as well—classification improves when tasks closely reflect the expert’s real-world activities [
43]. Individual differences in EEG patterns, especially in experts, complicate group-level decoding but underline the specialized neural adaptations from training [
43,
44]. Additionally, the choice of machine learning techniques and feature extraction methods strongly affects performance, with approaches like Support Vector Machines (SVM), Deep Learning, and Riemannian geometry-based methods showing promise [
31,
43,
44,
45,
46,
48,
51,
53,
54,
55]. Increasing the amount of training data also improves decoding accuracy [
48].
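A minimal sketch of the classification stage, assuming scikit-learn and purely synthetic band-power features (the feature dimensions and class separation are invented for illustration): features are standardized and fed to an RBF-kernel SVM evaluated with cross-validation, mirroring the common pipeline structure rather than any specific study.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic band-power features for two movement classes: each row is one
# trial, each column one (channel, band) power value.
rng = np.random.default_rng(2)
n_trials, n_features = 100, 16
X0 = rng.normal(0.0, 1.0, (n_trials, n_features))   # class 0
X1 = rng.normal(0.8, 1.0, (n_trials, n_features))   # class 1 (shifted power)
X = np.vstack([X0, X1])
y = np.array([0] * n_trials + [1] * n_trials)

# Standardize features, classify with an RBF-kernel SVM, and evaluate with
# 5-fold cross-validation (stratified by default for classifiers).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```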
5.1.4. Breathing Control
Breathing, while primarily a brainstem-regulated process, can also be consciously modulated and engages distributed cortical networks [
56,
57]. Recent studies emphasize not only the role of breathing in sustaining life but also its impact on cognition, emotion, and motor control [
58].
A range of breathing tasks has been explored in EEG studies, each shedding light on different neural dynamics. One line of research has focused on slow, controlled breathing and breath-holding. In these tasks, specific respiratory phases such as inhalation, exhalation, and their corresponding holds were examined in isolation [
56]. Other investigations have used inspiratory occlusions, resistive loads, or voluntary breath-holding to study respiratory control under more challenging conditions [
59,
60]. Voluntary breathing and its distinction from automatic respiration have also been a focal point [
61].
Several EEG signal patterns and features have emerged as especially relevant. Theta-band functional connectivity, in particular, has been identified as a discriminative marker for respiratory phase classification. One study found this signal feature to be highly effective in distinguishing between inhale, exhale, and breath-hold states using EEG connectivity patterns across 61 scalp electrodes [
56]. Respiratory-Related Evoked Potentials (RREPs), measured in response to mechanical stimuli such as airway occlusions, provide a window into the sensory processing of respiratory signals [
59]. Another set of findings demonstrated respiration-entrained brain oscillations that are widespread across cortical areas, highlighting the brain’s capacity to synchronize with the rhythm of breathing [
58].
Low-frequency EEG components, particularly in the sub-2 Hz range, have also been linked to voluntary breathing. These were found to increase in the frontal and right-parietal areas during conscious respiratory control and correlated positively with breathing intensity as measured by phase locking and entropy metrics [
61]. Similarly, EEG power fluctuations in the delta (1–3 Hz) and alpha (8–13 Hz) bands were observed in response to breath-holding, with hypercapnia playing a key modulatory role [
60]. Other novel approaches, such as cycle-frequency (C-F) analysis, have improved the temporal resolution and interpretability of EEG signals associated with cyclic respiratory patterns [
62].
Decoding accuracy varies across studies, depending on the complexity of the task and the EEG features used. The most notable performance was achieved using theta-band functional connectivity features, where a classifier reached an accuracy of 95.1% in distinguishing respiratory phases [
56]. Another study, focused on detecting respiratory discomfort, achieved a classification area under the curve (AUC) of 0.85 (where an AUC of 1 represents perfect classification), which increased to 0.89 when EEG data were fused with head accelerometry and smoothed over longer windows [
63].
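The theta-band functional connectivity features described above can be illustrated with a phase-locking value (PLV) computation between two channels. The sketch below (synthetic signals, assumed 250 Hz sampling rate) band-pass filters in theta, extracts instantaneous phase via the Hilbert transform, and measures phase consistency; note that a constant phase offset between channels does not lower the PLV, since the measure captures consistency rather than zero-lag similarity.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 250  # assumed sampling rate (Hz)

def theta_plv(x, y, fs=FS, band=(4.0, 8.0)):
    """Phase-locking value between two channels in the theta band.
    1.0 = perfectly consistent phase relation, ~0 = no phase relation."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phx = np.angle(hilbert(filtfilt(b, a, x)))
    phy = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phx - phy))))

# Synthetic demo: two channels sharing a 6 Hz rhythm (with a fixed phase
# offset) versus an unrelated noise channel.
rng = np.random.default_rng(3)
t = np.arange(0, 10, 1 / FS)
ch1 = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.normal(size=t.size)
ch2 = np.sin(2 * np.pi * 6 * t + 0.8) + 0.5 * rng.normal(size=t.size)
ch3 = rng.normal(size=t.size)

plv_locked = theta_plv(ch1, ch2)
plv_unrelated = theta_plv(ch1, ch3)
print(f"locked pair PLV: {plv_locked:.2f}, unrelated pair PLV: {plv_unrelated:.2f}")
```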
Applications of these findings span both clinical and cognitive domains. In a clinical context, EEG-based decoding of breathing is being explored for the development of brain-ventilator interfaces, which could detect patient discomfort in mechanically ventilated individuals and improve patient-ventilator synchrony [
63]. Studies have also emphasized the relevance of breathing-related EEG features for biofeedback training, cognitive load monitoring, and assessing the neural impact of respiratory diseases such as COPD, asthma, and sleep apnea [
56,
59,
61]. Furthermore, breathing tasks are being used to study fundamental brain functions, such as attention, emotion, and memory, by leveraging respiration as a modulator of cortical oscillations [
58,
64].
5.1.5. Head and Facial Movement
Our search for studies on EEG decoding of head, tongue, and facial movement yielded scarce results, with only three studies satisfying the inclusion criteria.
In terms of the types of motor tasks studied, one work focused on decoding tongue movements in four directions (left, right, up, and down) from pre-movement EEG activity [
65]. Participants in this study were ten able-bodied individuals who performed tongue movements while EEG data were recorded. The analysis excluded actual movement-related artifacts by focusing on signals before movement onset. Another study targeted head yaw rotations—left and right—triggered by visual cues [
66]. This work also involved ten participants and sought to establish a mapping between EEG signals and head position, aiming to enable movement recognition for human-computer interaction tasks, including potential driving applications. A third study explored the detection of brain signals associated with both right-hand and tongue movements using a low-cost EEG system positioned around the ear [
67]. Here, the aim was to assess whether such a minimalistic EEG setup could effectively support movement classification for control and rehabilitation purposes.
Several EEG signal patterns and features have proven relevant across these studies. In the case of tongue movements, decoding was based largely on MRCPs and SMRs, particularly those detectable before movement onset [
65]. MRCPs showed lateralized activation patterns: leftward movements had greater negativity in the right hemisphere and vice versa, while vertical movements displayed differences in amplitude [
65]. Features extracted for classification included temporal, spectral, entropy-based, and template-based measures, with temporal and template features offering the best performance [
65]. In the head movement study, EEG signals were primarily recorded from occipital and parietal regions, leveraging the roles of these areas in visual processing and motor coordination [
66]. The around-ear EEG study also employed MRCPs and SMRs, including event-related desynchronization/synchronization in mu and beta frequency bands [
67], with both temporal and spectral features used for classification.
Results across these studies demonstrate varying levels of decoding accuracy, depending on the movement type, number of classes, and EEG configuration. For tongue movement detection versus idle, accuracies ranged from 91.7% to 95.3% using a linear discriminant analysis (LDA) classifier, with rightward movements being the most accurately detected [
65]. When classifying between multiple movement types, accuracies decreased as the number of classes increased: 62.6% for four classes, 75.6% for three (left, right, up), and 87.7% for two (left and right) [
65]. LDA outperformed other classifiers like SVM, random forests, and multilayer perceptrons in these tasks [
65]. In the head movement study, classification accuracy was evaluated using correlation coefficients rather than percentage accuracy. Within-subject training and testing yielded strong correlations, up to r = 0.98, but performance dropped sharply in cross-subject evaluations [
66]. In the around-ear EEG study, classification of tongue movements achieved the highest median accuracy at 83%, followed by hand movements at 73% (for control purposes) and 70% (for rehabilitation purposes) [
67]. Classifier performance was generally consistent across LDA, SVM, and random forests, with kNN showing poorer results [
67].
These findings are being explored in distinct contexts and applications. Tongue movement decoding has significant implications for BCIs intended for individuals with high-level spinal cord injuries or ALS, who may have retained tongue control but limited or no hand mobility [
65]. The proximity of the tongue’s cortical representation to the ears suggests the possibility of using aesthetically unobtrusive, minimal EEG headsets for such applications [
65]. Head movement decoding is aimed at broader human-computer interaction scenarios, including vehicle or wheelchair control, where users might need to issue directional commands without using their limbs [
66]. The around-ear EEG approach aligns with efforts to make BCIs more practical, low-cost, and socially acceptable. This is especially important for long-term rehabilitation use or everyday assistive control, where bulky or conspicuous equipment can be a barrier [
67]. The study also emphasizes the need for improved electrode technologies and validation in real-world, online settings involving motor-impaired users [
67].
Notably, across the referenced sources, no direct research was identified on decoding general facial muscle movement from EEG signals. While facial EMG and eye blinking have been mentioned as possible control methods [
65], these approaches do not involve decoding motor intentions for facial expressions via EEG, as is done for tongue and head movements.
5.1.6. Movement Intention
Research in this domain has studied various types of motor-related tasks, focusing primarily on decoding voluntary motion intentions, including both imagined and executed movements. These tasks are embedded in experimental setups designed to isolate brain activity preceding movement, such as self-paced or cue-based actions, as well as tasks incorporating rhythmic temporal prediction to enhance anticipatory signals [
68,
69]. Studies have also explored spontaneous and self-initiated movements, revealing distinct neural patterns compared to externally cued tasks [
69,
70].
Among the most prominent EEG patterns and features are MRPs and the Bereitschaftspotential (BP), also known as the readiness potential: a slow-building negative electrical potential that precedes voluntary, self-initiated movement. Both are critical markers of motor preparation, reflected especially in low-frequency bands [
69,
71,
72]. ERS and ERD in beta and alpha frequency ranges are also consistently observed. For example, beta ERD in motor and frontal areas is sensitive to temporal cues and contributes significantly to decoding accuracy [
68]. Oscillatory activity in mu, alpha, and beta bands plays a central role, especially in motor imagery tasks [
29,
73]. Additionally, potentials such as the contingent negative variation (CNV), P300, and N1-P2/N2 complexes are linked to movement anticipation and timing [
72,
73].
Studies have reported high decoding accuracies, with notable improvements when incorporating temporal prediction or preparatory movement states. For example, a time-synchronized task with rhythmic prediction achieved left-right movement decoding accuracies of 89.71% using common spatial patterns (CSP) and 97.30% using Riemannian tangent space features [
68]. A spatio-temporal deep learning model reported 98.3% accuracy on a large multiclass dataset [
74], while a brain typing system reached 93% accuracy for selecting among five commands [
74]. Time series shapelet-based methods achieved an average F1-score of 0.82 in ankle movement detection, with low latency and a good true positive rate in pseudo-online settings [
72]. Introducing a preparatory state in movement tasks improved classification accuracy from 78.92% to 83.59% and yielded more consistent results when comparing spontaneous and prepared premovement states [
69].
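To illustrate the CSP step mentioned above, the following sketch implements common spatial patterns as a generalized eigendecomposition of the two class covariance matrices and extracts the standard log-variance features. The synthetic "classes" simply boost different channels; electrode counts and trial numbers are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Common spatial patterns via a generalized eigendecomposition.
    trials_* : (n_trials, n_channels, n_samples) band-passed EEG epochs."""
    Ca = np.mean([np.cov(tr) for tr in trials_a], axis=0)
    Cb = np.mean([np.cov(tr) for tr in trials_b], axis=0)
    vals, vecs = eigh(Ca, Ca + Cb)          # solves Ca w = lambda (Ca+Cb) w
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]  # both spectrum ends
    return vecs[:, picks].T                 # (2*n_pairs, n_channels)

def csp_features(trials, W):
    """Log-variance of spatially filtered trials: the standard CSP feature."""
    return np.array([np.log(np.var(W @ tr, axis=1)) for tr in trials])

# Synthetic demo: class A has an extra source on channel 0, class B on 3.
rng = np.random.default_rng(4)
def make_trials(strong_ch, n=30, n_ch=6, n_s=200):
    trials = rng.normal(0, 1, (n, n_ch, n_s))
    trials[:, strong_ch, :] *= 3.0          # class-specific strong channel
    return trials

A, B = make_trials(0), make_trials(3)
W = csp_filters(A, B)
fa, fb = csp_features(A, W), csp_features(B, W)
print("CSP feature 0 (B-dominant filter):", fa[:, 0].mean(), fb[:, 0].mean())
print("CSP feature 3 (A-dominant filter):", fa[:, 3].mean(), fb[:, 3].mean())
```

The filters at the two ends of the eigenvalue spectrum maximize variance for one class while minimizing it for the other, which is why the resulting log-variance features separate the classes well even before a classifier is applied.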
These decoding advances have been applied in a variety of contexts, primarily in neurorehabilitation and assistive technologies. BCIs using intention classification have been integrated into systems for post-stroke therapy, where detected movement intentions trigger neuromodulatory devices to promote plasticity [
72]. They are also used for controlling robotic limbs and assistive devices such as wheelchairs, enabling users with severe motor impairments to regain autonomy [
29,
74]. Emerging applications explore complex actions, including bimanual tasks like self-feeding, and propose frameworks for robust, user-specific systems capable of decoding multiple simultaneous intentions, such as motion and timing [
68,
73]. Future directions aim to enhance the naturalness, usability, and robustness of BCIs in real-world scenarios, emphasizing the need for distraction-resilient and multi-effector systems [
29].
Beyond decoding intentional actions, studies have investigated the neural basis of non-intentional or spontaneous movements, as well as executed versus imagined actions. This distinction is particularly valuable in clinical assessments for individuals with disorders of consciousness or severe communicative limitations [
75]. Here, EEG-based models are being developed to identify intentionality in neural activity, providing critical information in contexts where behavioral responses are absent [
75]. Comparisons between imagined and executed movements have revealed differential neural correlates that inform the design of more adaptive BCIs [
29,
71].
In addition, research on spontaneous movements—those not prompted by external cues—has highlighted unique EEG features, such as variations in MRCP and ERD patterns, that differ significantly from prepared movements [
69]. These insights could support the development of asynchronous BCIs that operate without the need for fixed external stimuli, thereby increasing the flexibility and autonomy of users [
70,
72].
5.1.7. Coordination and Movement Fluidity
Decoding movement coordination from EEG signals—especially bimanual movements—is an area of growing interest within brain-computer interface (BCI) research, with promising applications in motor enhancement and neurorehabilitation [
76]. While much of the current work has focused on decoding movements of a single hand [
77], there is increasing attention on bimanual motor tasks, which are essential for performing daily activities and achieving comprehensive functional recovery [
78].
Recent advances in this field have explored a variety of experimental paradigms. These include decoding coordinated spatial directions during task-oriented bimanual movements [
77], comparing simultaneous versus sequential movements toward the same target [
79], and decoding continuous movement parameters, such as position, velocity, and force, rather than simply classifying discrete tasks [
78]. Recent studies have applied advanced deep learning methods to improve decoding accuracy. For example, hybrid models that combine convolutional neural networks (CNNs) with bidirectional long short-term memory (BiLSTM) networks have been used to extract complex spatiotemporal features from EEG signals. These approaches have demonstrated the potential feasibility of decoding coordinated movement directions in bimanual tasks [
77].
Bimanual movements exhibit distinct neural signatures compared to unimanual movements. Coordinated bimanual tasks typically show bilateral event-related desynchronization (ERD), while unimanual tasks tend to elicit ERD primarily in the contralateral hemisphere [
77]. Furthermore, individual differences in motor abilities—such as hand dexterity (measured by the Purdue Pegboard Test) and motor imagery skills (assessed with the Movement Imagery Questionnaire-3)—are significantly associated with specific EEG patterns, particularly alpha-band relative ERD [
76]. EEG-based dynamical network analyses have also highlighted neural markers of visual-motor coordination in both the alpha and gamma frequency bands, which are associated with motor control and visual processing [
80].
These EEG decoding strategies are being increasingly applied in the context of neurorehabilitation, particularly for individuals with motor impairments due to stroke or spinal cord injury (SCI). BCI-based therapies are gaining recognition for their potential to enhance upper limb recovery, expand movement range, and support the execution of complex bimanual tasks. Ultimately, these technologies aim to empower patients to regain independence in activities of daily living (ADL) and to control external assistive devices—such as robotic arms, prosthetics, exoskeletons, robotic gloves, and virtual avatars [
78,
79].
5.2. Tempo and Rhythm
Contemporary research into the neural mechanisms of rhythm perception and synchronization has advanced significantly through sophisticated EEG analysis methodologies that exploit the technology's high temporal resolution, enabling deeper insight into rhythmic cognition and its underlying neural dynamics.
State-of-the-art EEG research employs diverse analytical frameworks, prominently featuring steady-state evoked potentials (SSEPs), which reliably capture stable neural responses at specific rhythmic frequencies and harmonics [
81,
82,
83,
84]. Additionally, time-frequency analyses examining alpha, beta, and mu frequency bands provide insights into rhythmic anticipation, motor preparation, and entrainment [
68,
85,
86,
87,
88]. Recent innovations include autocorrelation-based methods for detecting rhythmic periodicities in noisy EEG data, enhancing methodological robustness [
89]. Spatial filtering and advanced machine learning techniques, such as Random Forest and k-nearest neighbor (kNN) algorithms, have also been effectively employed for decoding beat frequencies from naturalistic music stimuli [
90].
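The autocorrelation-based approach to detecting rhythmic periodicities can be sketched as follows: the normalized autocorrelation of a noisy signal is searched for its strongest peak within a plausible tempo range (here 0.25 to 0.8 s per beat, roughly 75 to 240 BPM). All parameters and the synthetic "beat" signal are illustrative, not taken from the cited studies.

```python
import numpy as np

FS = 100  # assumed sampling rate of the analyzed signal (Hz)

def dominant_period(x, fs, min_lag_s=0.25, max_lag_s=0.8):
    """Dominant periodicity from the strongest autocorrelation peak within
    a plausible beat range (0.25-0.8 s per beat, roughly 75-240 BPM)."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]  # non-negative lags
    ac = ac / ac[0]                                    # lag 0 normalized to 1
    lo, hi = int(min_lag_s * fs), int(max_lag_s * fs)
    return (lo + np.argmax(ac[lo:hi])) / fs

# Synthetic demo: a 2 Hz "beat" (0.5 s period) buried in noise.
rng = np.random.default_rng(5)
t = np.arange(0, 20, 1 / FS)
signal = np.sin(2 * np.pi * 2.0 * t) + 0.7 * rng.normal(size=t.size)

period = dominant_period(signal, FS)
print(f"estimated beat period: {period:.2f} s ({60 / period:.0f} BPM)")
```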
The studied motor and cognitive tasks vary broadly, including externally paced sensorimotor synchronization (SMS), self-paced tapping, passive listening, and motor imagery paradigms [
81,
86,
87,
91]. Tasks have been designed to differentiate temporal and movement intentions, enabling precise decoding of compound cognitive-motor intentions [
68]. Studies have used ambiguous rhythms with contexts inducing specific metrical interpretations to link subjective beat perception with neural correlates [
84]. Imagined rhythmic tasks without actual physical performance provided opportunities to examine covert motor system involvement, revealing motor-to-auditory information flows and hierarchical metrical processing [
91,
92]. Additionally, experimental paradigms investigating simultaneous processing of competing rhythms and exploring auditory versus visual modality-specific motor entrainment have enriched the understanding of multimodal rhythmic cognition [
83,
86,
88].
Analyses have identified specific EEG signal patterns for decoding rhythmic tasks. SSEPs consistently indicate neural entrainment to periodic auditory stimuli, and their amplitude has been linked to conscious beat perception even in passive listening scenarios [
81,
82,
83,
84]. Oscillatory patterns, particularly alpha-beta event-related desynchronization (ERD), are robust indicators of temporal anticipation and motor execution [
68,
85]. Mu rhythm modulations, isolated through Independent Component Analysis (ICA), have provided crucial evidence for motor system activation during rhythm perception without overt movement, highlighting the topographic organization within the somatomotor cortex [
86,
88]. Corticomuscular coherence, assessed through EEG and electromyography (EMG), offers additional precision in understanding motor synchronization [
93]. Machine learning models have utilized spectral band power features derived from EEG segments to effectively decode dominant beat frequencies, further demonstrating neural tracking of musical rhythm [
90].
Empirical findings highlight the complex and nuanced nature of neural rhythmic processing. Notably, distinct brain regions are engaged during externally synchronized versus self-paced tapping, involving the inferior frontal gyrus and bilateral inferior parietal lobules, respectively, underscoring different neural mechanisms for internal and external rhythmic timing [
81]. Studies have indicated stronger motor entrainment for visual rather than auditory rhythms, challenging previous assumptions regarding modality dominance [
86]. Additionally, higher SSEP amplitudes were observed at frequencies matching consciously perceived metrical patterns, even without deliberate motor planning, emphasizing a neural-subjective link in rhythm perception [
84]. High accuracy (88.51%) in decoding complex cognitive-motor intentions has been achieved, highlighting the predictive power of rhythmic temporal expectations [
68]. EEG decoding performance has also demonstrated the benefit of utilizing longer EEG segments and dense spatial data for classifying the dominant beat frequency of naturalistic music, achieving accuracy significantly above chance [
90].
The explored contexts and applications demonstrate extensive practical implications. EEG-based rhythm decoding methods offer potential advancements in BCIs, providing nuanced motor and cognitive control for assistive technologies [
68,
90]. Autocorrelation and SSEP analyses expand applicability to developmental populations and rehabilitation contexts, enhancing rhythm-based therapeutic strategies [
84,
89]. Improved methodological rigor and statistical reporting standards have increased reproducibility and comparability across EEG studies, strengthening research quality [
94]. Furthermore, the identification of motor system involvement in rhythm perception, even without overt movement, enriches theories of rhythmic cognition across domains such as music, dance, language processing, and visual rhythm perception [
84,
87,
88,
91].
5.3. Cognitive Engagement
The Flow state, commonly described as an optimal psychological experience characterized by intense concentration, intrinsic motivation, and effortless involvement in activities, has been extensively studied in psychology and neuroscience. Individuals experiencing Flow report a sense of control, diminished self-consciousness, and an altered perception of time, making this state highly desirable for performance enhancement, learning, and well-being.
Research on EEG-based detection of Flow, engagement, and related cognitive–affective states has expanded rapidly, moving from laboratory-bound paradigms to wearable, multimodal, and even intracranial recordings. Early syntheses already highlighted a convergent emphasis on fronto-central alpha and theta rhythms during Flow, together with reduced medial prefrontal activity, but also pointed to considerable methodological heterogeneity that still hampers direct comparisons across studies [
95]. More recent work has addressed these gaps by combining low-cost headsets, peripheral sensors, and transfer-learning pipelines to increase ecological validity and generalisability [
96,
98]. At the same time, high-resolution recordings using Stereo-EEG or UHD-EEG systems have begun to map the spectro-spatial signatures of self-generated speech, music, or complex motor preparation, extending the state-of-the-art beyond classical stimulus–response designs [
99,
100].
Empirical evidence of Flow can be observed across a diverse set of behavioural contexts. Cognitive challenges such as mental arithmetic and reading aloud were used to elicit contrasting Flow and non-Flow states [
98]. Gameplay remains a popular paradigm, ranging from brief two-minute commercial games [
100] to serious educational games that manipulate learning-technology innovations [
101] and Tetris variants spanning boredom to overload [
102]. Fine-grained motor behaviour has equally featured: repetitive finger presses [
103], virtual-reality fingertip force control matched to individual skill [
51], and long-tone musical Go/NoGo tasks preceding performance [
99]. Naturalistic production modalities—reading stories aloud or playing the violin—have complemented passive listening comparisons [
100]. By allowing each participant to select the task that induced the strongest subjective flow, one study further emphasised the personalised nature of the phenomenon [
102].
Regarding the most relevant EEG signal features used for decoding the Flow state, absolute or relative band-power in delta (0.5–4 Hz) to gamma (31–50 Hz) ranges remains the principal descriptor [
51,
102,
103]. In fronto-central and parietal sites, moderate alpha power and increased beta–gamma activity have repeatedly marked intensified flow or team-flow experience [
51,
95]. Spectral ratios between band powers have been proposed as engagement indices, although their discriminative value varies [
100]. Beyond power, coherence and global phase synchrony better capture large-scale organisation: higher alpha and beta coherence accompany high-flow trials [
51], and preparation for musical performance shows frequency-specific network flexibility detected through dynamic phase-locking analyses [
99]. Convolutional neural networks operating directly on raw Emotiv EPOC signals have outperformed manually engineered features, underscoring the utility of automatic representation learning [
98].
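As an illustration of band-power ratio indices, the sketch below computes the classic beta over (alpha plus theta) engagement index on synthetic epochs. This is one widely cited formulation and not necessarily the exact ratio used in the studies reviewed here; the sampling rate and signal composition are likewise assumptions.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate (Hz)

def band_power(x, fs, lo, hi):
    """Mean Welch PSD of x in the [lo, hi) Hz band."""
    f, psd = welch(x, fs=fs, nperseg=fs * 2)
    return psd[(f >= lo) & (f < hi)].mean()

def engagement_index(x, fs=FS):
    """beta / (alpha + theta) band-power ratio, a widely cited engagement
    index; higher values are commonly read as stronger task engagement."""
    theta = band_power(x, fs, 4, 8)
    alpha = band_power(x, fs, 8, 13)
    beta = band_power(x, fs, 13, 30)
    return beta / (alpha + theta)

# Synthetic demo: an "engaged" epoch dominated by beta activity versus a
# "relaxed" epoch dominated by alpha.
rng = np.random.default_rng(6)
t = np.arange(0, 8, 1 / FS)
noise = rng.normal(0, 0.2, t.size)
engaged = np.sin(2 * np.pi * 20 * t) + 0.3 * np.sin(2 * np.pi * 10 * t) + noise
relaxed = 0.3 * np.sin(2 * np.pi * 20 * t) + np.sin(2 * np.pi * 10 * t) + noise

print(f"engaged index: {engagement_index(engaged):.2f}")
print(f"relaxed index: {engagement_index(relaxed):.2f}")
```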
Most studies report that Flow-related states can be decoded above chance. Using the consumer-grade Emotiv EEG, a subject-independent CNN reached 64.97% accuracy, which rose to 75.10% when emotional arousal knowledge was transferred from the pre-analysed DEAP dataset [
98]. A virtual-reality fingertip task achieved mean within-subject accuracies exceeding 80% in the beta band and comparable performance for coherence measures [
51]. In a pooled cross-subject analysis of short game sessions, an SVM driven by combined engagement indices reached 81% accuracy and 80% F1, meeting the benchmark generally deemed sufficient for rehabilitation feedback systems [
100]. Portable single-channel devices inevitably yield more modest predictive power; nevertheless, delta, theta, and gamma activity at Fpz explained a significant fraction of variance in subjective flow scores [
102]. Where classification is not the goal, signal-averaged MRPs identified tightly synchronous generators over primary and supplementary motor cortices during rhythmic tapping [
103], and single-case SEEG demonstrated stronger cortical encoding during first compared with repeated hearings of self-produced speech or music [
100].
Application domains mirror this methodological spectrum. Real-time detection of flow promises adaptive work-support systems that eschew self-report biases [
98]. In rehabilitation, decoding intrinsic engagement fluctuations could enable closed-loop neurofeedback for fine motor recovery [
51], while flexible network markers observed during musical preparation inform theories of motor planning [
99]. Education benefits from EEG-based monitoring of attention and flow within serious games, guiding the design of learning technology innovations [
101]. Lightweight headbands and single-channel sensors facilitate studies in naturalistic or mobile settings, broadening participation and paving the way for personalised neuroadaptive interfaces [
96,
102]. Collectively, the literature converges on the feasibility of objective, real-time assessment of flow and engagement, yet also highlights the need for larger, standardised datasets, multi-modal fusion strategies, and rigorous cross-subject validation if these insights are to generalise across users and contexts.
6. Discussion
This review set out to evaluate the extent to which current EEG-based BCI research supports the functional specifications of the TEMP framework. By mapping the empirical findings summarized in
Section 5 onto the TEMP features list introduced in
Section 3, three potential feasibility tiers emerge: (i) capabilities that are already technically viable and could be prototyped; (ii) capabilities that are within experimental reach; and (iii) capabilities that remain largely aspirational and demand substantive advances in EEG decoding, multimodal fusion, and ecological validation. In what follows, we discuss each TEMP pillar in turn and outline a staged development roadmap.
Several aspects of EEG decoding discussed in the literature are already technically mature and could potentially be prototyped in a musical practice context. For instance, discrete classification of bimanual coordination based on EEG markers such as bilateral
ERD and reconfigurations of alpha and gamma-band visual–motor networks is sufficiently robust to allow real-time feedback. This could potentially support musicians by flagging out-of-sync movements during piano playing, guitar fretting, or string bowing, directly addressing pedagogical needs related to inter-hand synchrony and bilateral motor coordination [
77,
78]. Similarly, EEG-based classifiers combining spatial patterns of mu and beta-bands ERD/ERS, slow delta–theta oscillations, and movement-related cortical potentials (MRCPs) achieve reliable decoding of multi-directional limb reaches, velocity categories, and coarse muscular effort levels [
22,
25,
27,
28]. These capabilities could potentially enable real-time detection and correction of gross kinematic errors such as lateral bow drift or excessive muscular exertion in instrumental practice.
At the finger level, EEG classifiers currently distinguish individual finger movements on a single hand, suggesting the possibility of real-time identification of fingering mistakes in keyboard or plucked-string instruments [
45,
46,
47]. Additionally, EEG markers have demonstrated the capacity to classify broad levels of finger velocity and muscular effort, which could potentially offer musicians feedback on the consistency of speed and the presence of excessive force during rapid arpeggios or scalar passages [
50,
52]. With respect to respiratory control, cortical connectivity in the theta-band reliably discriminates inhale, exhale, and breath-hold phases, indicating a possibility for EEG-driven breathing phase indicators—useful for singers and wind instrumentalists to confirm correct timing of inhalations and proper diaphragmatic support [
56,
58]. Furthermore, EEG decoding of cued movement intention, leveraging signals such as the Bereitschaftspotential and beta-band ERD, presents the potential to detect preparatory movements in advance, thereby alerting musicians to inadvertent gestures such as premature shoulder lifts or unnecessary tension before a movement is fully executed [
68,
69].
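The movement-preparation markers mentioned above (the Bereitschaftspotential and related MRCPs) are conventionally extracted by averaging EEG epochs aligned to movement onset and testing for a slow negative drift in the final few hundred milliseconds. A minimal sketch on synthetic data follows; the epoch count, ramp amplitude, and detection threshold are illustrative assumptions.

```python
import numpy as np

fs = 250
pre_s = 1.5            # seconds of EEG before movement onset per epoch
n_epochs = 30
t = np.arange(-pre_s, 0, 1 / fs)
rng = np.random.default_rng(2)

# Synthetic epochs: noise plus a slow negative ramp (a toy readiness
# potential) starting ~0.5 s before movement onset.
ramp = np.where(t > -0.5, (t + 0.5) * -10.0, 0.0)  # in µV, illustrative
epochs = ramp + 2.0 * rng.standard_normal((n_epochs, t.size))

avg = epochs.mean(axis=0)  # averaging suppresses noise, preserves the MRCP

def drift_slope(avg, t, window_s=0.5):
    """Least-squares slope (µV/s) of the averaged signal in the last window."""
    mask = t > -window_s
    return np.polyfit(t[mask], avg[mask], 1)[0]

slope = drift_slope(avg, t)
print("preparation detected" if slope < -2.0 else "no preparation")
```

Online detection would apply the same slope test to a sliding single-trial window, which is exactly where signal-to-noise becomes the limiting factor.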
Wearable EEG studies have also shown that moderate frontal alpha power in combination with elevated beta–gamma coherence can distinguish high-flow states from disengaged practice episodes in motor and game contexts [
51,
98]. In short practice sessions, engagement indices derived from these features classify flow versus non-flow states with around 80% accuracy [
100], suggesting that the TEMP framework could potentially use these signals to drive adaptive training protocols. For example, technical exercises might automatically accelerate when strong focus is detected or pause when neural signs of overload emerge. These possibilities highlight an emerging potential for EEG-informed feedback to help musicians self-regulate their attention and mental effort during demanding practice sessions.
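Engagement indices of this kind are commonly built from band-power ratios. The following is a hypothetical sketch: the beta/(alpha + theta) ratio and the flow threshold are illustrative assumptions, not the exact formula or accuracy figures of the cited studies.

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean spectral power of x within [lo, hi] Hz via an FFT periodogram."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

def engagement_index(window, fs):
    """Classic beta / (alpha + theta) engagement ratio (illustrative bands)."""
    theta = band_power(window, fs, 4, 8)
    alpha = band_power(window, fs, 8, 13)
    beta = band_power(window, fs, 13, 30)
    return beta / (alpha + theta)

def label_flow(index, threshold=1.0):
    """Hypothetical threshold separating engaged (flow) from disengaged windows."""
    return "flow" if index > threshold else "non-flow"

# Synthetic 2 s windows: beta-dominated (engaged) vs. alpha-dominated (disengaged).
fs = 250
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)
engaged = np.sin(2 * np.pi * 20 * t) + 0.2 * np.sin(2 * np.pi * 10 * t) \
    + 0.05 * rng.standard_normal(t.size)
disengaged = 0.2 * np.sin(2 * np.pi * 20 * t) + np.sin(2 * np.pi * 10 * t) \
    + 0.05 * rng.standard_normal(t.size)

print(label_flow(engagement_index(engaged, fs)))     # flow
print(label_flow(engagement_index(disengaged, fs)))  # non-flow
```

An adaptive training protocol of the kind described above would evaluate such an index on sliding windows and adjust exercise tempo or pause practice when it leaves the target range.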
Beyond immediate feasibility, several EEG decoding capabilities have demonstrated promise but still require further research to fully support nuanced musical applications. Continuous decoding of limb and finger trajectories has achieved only moderate accuracy in current EEG studies, limiting its suitability for precise real-time guidance on detailed kinematic control such as smooth bow trajectories, subtle wrist rotations, or intra-finger force distribution [
23,
30,
32,
48,
51,
54]. Likewise, EEG-based decoding of muscular effort, although effective at distinguishing broadly differing effort levels, remains inadequate for capturing the subtle gradations required for fine dynamic control [
33,
36,
39]. Thus, while coarse muscular overactivation during demanding passages might be detectable, the subtle muscular adjustments distinguishing mezzo-forte from forte are still beyond reliable EEG decoding.
Subtle respiratory dynamics, particularly the fine breath-pressure modulations crucial for nuanced phrasing, also present decoding challenges. EEG markers currently lose sensitivity at finer levels of respiratory effort, suggesting only limited applicability for nuanced control of sustained notes or gradual dynamics such as diminuendi [
60,
61,
63]. Furthermore, EEG-based detection of spontaneous or asynchronous movement intentions, crucial for applications in highly expressive or improvisatory performance contexts, has not yet been validated under the complex motor conditions typical of real-world instrumental practice [
72]. Similarly, EEG decoding of internalised tempo and rhythm has advanced significantly, with reliable decoding of dominant beats and subjective metrical interpretations achievable over longer EEG segments; however, the sub-20 ms timing accuracy required for virtuoso-level rhythmic precision still eludes current methods [
81,
84,
86,
90]. While flow-related EEG markers are promising, their generalisation across individuals remains limited. Convolutional neural networks trained on one user group require explicit domain adaptation before being applied elsewhere, and low-density or single-channel systems typically capture only coarse engagement rather than the nuanced “sweet-spot” of immersive concentration [
98,
102].
Several additional EEG decoding capabilities remain entirely aspirational, having not yet been experimentally demonstrated or validated in contexts directly relevant to music practice. Notably, continuous EEG-based decoding of subtle postural shifts or fine alignment adjustments typical of instrumental playing has not been reported. Existing EEG markers, such as the perturbation-evoked N1 response, are inherently limited to detecting discrete balance disturbances rather than tracking ongoing alignment or weight distribution [
12,
14]. Similarly, comprehensive EEG decoding of detailed facial musculature and expressions, important for instruments involving complex embouchure control or vocalists requiring nuanced facial tension management, has not yet been demonstrated. Current EEG approaches also fail to decode subtle embouchure adjustments or head alignment with sufficient cross-subject generalisation for practical use [
66]. Furthermore, EEG-only methods have not achieved the sub-millimetre precision needed to reconstruct finger trajectories and pad pressure distributions required for refining sophisticated articulation techniques or subtle rotational finger adjustments. Finally, although markers of cognitive engagement and flow states have been identified, a universally applicable, robust EEG signature capturing the nuanced cognitive "sweet spot" of creative musical engagement, independent of individual calibration or task-specific training, has not yet emerged [
98,
102].
Lastly, it is important to note that EEG signals primarily reflect cortical activity, leaving much of the subcortical dynamics that underpin musical practice and sensorimotor control, such as those involving the basal ganglia, largely inaccessible to current EEG-based BCIs. This limitation does not preclude the development of the TEMP framework, but it does highlight a promising direction for future research.
An integrated feasibility map of these potential applications is presented in Table 2.
7. Conclusion and Future Work
The reviewed literature demonstrates considerable potential for EEG-based BCI applications—conceptualized within the TEMP framework—to enhance high-level instrumental training by delivering detailed, real-time feedback on biomechanical, cognitive, and technical aspects of performance. The TEMP concept, as evidenced by current state-of-the-art EEG technologies, appears technically feasible, particularly in areas such as detecting bimanual coordination, discrete finger movements, general motor intentions, respiratory phases, and cognitive states like flow and engagement.
However, despite promising advancements, substantial technical and practical challenges remain. Real-time continuous decoding of subtle biomechanical adjustments, nuanced gradations of muscular effort, fine respiratory control, and precise internalized tempo processing still require further research and development. Achieving the millisecond timing precision necessary for professional rhythmic accuracy and nuanced musical articulation is currently beyond the capabilities of available EEG decoding methodologies.
To further advance the TEMP concept, several technological strategies merit exploration. The integration of multimodal sensing approaches, such as camera-based motion tracking and inertial measurement units (IMUs) for precise body and head tracking, could significantly enhance biomechanical awareness and overcome some limitations inherent to EEG. Hybrid systems combining EEG with electromyography (EMG) could likewise improve detection of fine motor control and muscular tension, albeit at the cost of burdening the musician with an intricate network of sensors. Machine learning algorithms, particularly deep learning models, should continue to be refined for enhanced pattern recognition and predictive capabilities.
In addition, potential barriers to technology acceptance must be addressed. Key challenges include the invasiveness and practicality of EEG setups, user discomfort, and the social stigma of wearing visible neurotechnology during practice. There are also concerns about data privacy, user consent, and the ethical use of biometric and neurological data. User training and education to foster familiarity with and trust in BCI systems will be critical for widespread acceptance. Future research should prioritize usability studies, addressing ergonomic design and system integration into existing musical training routines to ensure ease of use and acceptance by musicians and educators alike.
Author Contributions
Conceptualization, AVP; methodology, AVP, JE, JC, CPV; writing—original draft preparation, all authors; writing—review and editing, all authors. All authors have read and agreed to the published version of the manuscript.
Funding
This work is financed through national funds by FCT - Fundação para a Ciência e a Tecnologia, I.P., in the framework of the projects UIDB/00326/2025, UIDB/04501/2025 and UIDP/04501/2025.
Acknowledgments
The authors acknowledge the use of AI tools, specifically ChatGPT and NotebookLM. These tools were used to summarize and extract structured information from the selected articles (title, authors, year, objectives, method, summary of results, summary of conclusions); the article screening process, however, was performed entirely by the authors. In addition, ChatGPT was used throughout the document to correct grammar and spelling and, where necessary, to summarize and rephrase for clarity and conciseness. ChatGPT also assisted with LaTeX formatting issues.
Appendix A. Utilized Search Queries
Table A1.
Search Queries by Category.
| TEMP Feature | Search Query |
| --- | --- |
| Posture and Balance | (“EEG” OR “brain-computer interface” OR “brain computer interface” OR “BCI”) AND (“posture” OR “alignment” OR “weight distribution” OR “proprioception” OR “kinesthesia” OR “balance”) |
| Movement and Muscle Activity | (“EEG” OR “brain-computer interface” OR “brain computer interface” OR “BCI”) AND (“muscle activity” OR “motor detection” OR “motor execution” OR “movement detection” OR “movement”) |
| Fine Motor and Dexterity | (“EEG” OR “brain-computer interface” OR “brain computer interface” OR “BCI”) AND (“fine motor control” OR “fine motor skills” OR “finger movement” OR “motor dexterity” OR “manual dexterity” OR “finger tapping” OR “precise movement” OR “precision motor tasks” OR “finger control” OR “force” OR “pressure” OR “finger identification”) |
| Breathing Control | (“EEG” OR “brain-computer interface” OR “brain computer interface” OR “BCI”) AND (“breathing” OR “respiration” OR “respiratory control” OR “diaphragm” OR “respiratory effort” OR “respiratory patterns” OR “breath regulation” OR “inhalation” OR “exhalation”) |
| Head and Facial Movement | (“EEG” OR “brain-computer interface” OR “brain computer interface” OR “BCI”) AND (“facial movement” OR “facial muscle activity” OR “facial tension” OR “facial expression” OR “head movement” OR “head posture” OR “head position” OR “head tracking” OR “cranial muscle activity” OR “facial motor control”) |
| Movement Intention | (“EEG” OR “brain-computer interface” OR “brain computer interface” OR “BCI”) AND (“voluntary movement” OR “involuntary movement” OR “motor intention” OR “movement intention” OR “intent detection” OR “reflex movement” OR “automatic motor response” OR “conscious movement” OR “unconscious movement” OR “motor inhibition” OR “motor control” OR “volitional” OR “reflexive movement” OR “intentional movement” OR “purposeful movement” OR “spasmodic movement”) |
| Coordination and Movement Fluidity | (“EEG” OR “brain-computer interface” OR “brain computer interface” OR “BCI”) AND (“motor coordination” OR “movement fluidity”) |
| Tempo Processing | (“EEG” OR “brain-computer interface” OR “brain computer interface” OR “BCI”) AND (“tempo perception” OR “tempo tracking” OR “internal tempo” OR “imagined tempo” OR “motor imagery tempo” OR “rhythm perception” OR “timing perception” OR “sensorimotor timing” OR “mental tempo” OR “temporal processing” OR “beat perception” OR “rhythm processing” OR “timing accuracy”) |
| Cognitive Engagement | (“EEG” OR “brain-computer interface” OR “brain computer interface” OR “BCI”) AND (“flow” OR “musical flow” OR “musical performance” OR “music performance” OR “movement focus” OR “active movement control” OR “automatic performance” OR “performance engagement”) |
References
- Barrett, K.C.; Ashley, R.; Strait, D.L.; Kraus, N. Art and science: how musical training shapes the brain. Frontiers in Psychology 2013, 4, 713. [Google Scholar] [CrossRef] [PubMed]
- Williamon, A. Musical excellence: Strategies and techniques to enhance performance; Oxford University Press, 2004.
- Bazanova, O.; Kondratenko, A.; Kondratenko, O.; Mernaya, E.; Zhimulev, E. New computer-based technology to teach peak performance in musicians. In Proceedings of the 2007 29th International Conference on Information Technology Interfaces. IEEE; 2007; pp. 39–44. [Google Scholar]
- Pop-Jordanova, N.; Bazanova, O.; Kondratenko, A.; Kondratenko, O.; Markovska-Simoska, S.; Mernaya, J. Simultaneous EEG and EMG biofeedback for peak performance in musicians. In Proceedings of the Inaugural Meeting of EPE Society of Applied Neuroscience (SAN) in association with the EU Cooperation in Science and Technology (COST) B27; 2006; pp. 23–23. [Google Scholar]
- Riquelme-Ros, J.V.; Rodríguez-Bermúdez, G.; Rodríguez-Rodríguez, I.; Rodríguez, J.V.; Molina-García-Pardo, J.M. On the better performance of pianists with motor imagery-based brain-computer interface systems. Sensors 2020, 20, 4452. [Google Scholar] [CrossRef] [PubMed]
- Bhavsar, P.; Shah, P.; Sinha, S.; Kumar, D. Musical Neurofeedback Advancements, Feedback Modalities, and Applications: A Systematic Review. Applied Psychophysiology and Biofeedback 2024, 49, 347–363. [Google Scholar] [CrossRef] [PubMed]
- Sayal, A.; Direito, B.; Sousa, T.; Singer, N.; Castelo-Branco, M. Music in the loop: a systematic review of current neurofeedback methodologies using music. Frontiers in Neuroscience 2025, 19, 1515377. [Google Scholar] [CrossRef]
- Kawala-Sterniuk, A.; Browarska, N.; Al-Bakri, A.; Pelc, M.; Zygarlicki, J.; Sidikova, M.; Martinek, R.; Gorzelanczyk, E.J. Summary of over fifty years with brain-computer interfaces—a review. Brain Sciences 2021, 11, 43. [Google Scholar] [CrossRef]
- Nicolas-Alonso, L.F.; Gomez-Gil, J. Brain computer interfaces, a review. Sensors 2012, 12, 1211–1279. [Google Scholar] [CrossRef]
- Acquilino, A.; Scavone, G. Current state and future directions of technologies for music instrument pedagogy. Frontiers in Psychology 2022, 13, 835609. [Google Scholar] [CrossRef]
- Folgieri, R.; Lucchiari, C.; Gričar, S.; Baldigara, T.; Gil, M. Exploring the potential of BCI in education: an experiment in musical training. Information 2025, 16, 261. [Google Scholar] [CrossRef]
- Mirdamadi, J.L.; Poorman, A.; Munter, G.; Jones, K.; Ting, L.H.; Borich, M.R.; Payne, A.M. Excellent test-retest reliability of perturbation-evoked cortical responses supports feasibility of the balance N1 as a clinical biomarker. Journal of Neurophysiology 2025, 133, 987–1001. [Google Scholar] [CrossRef]
- Dadfar, M.; Kukkar, K.K.; Parikh, P.J. Reduced parietal to frontal functional connectivity for dynamic balance in late middle-to-older adults. Experimental Brain Research 2025, 243, 1–13. [Google Scholar] [CrossRef]
- Jalilpour, S.; Müller-Putz, G. Balance perturbation and error processing elicit distinct brain dynamics. Journal of Neural Engineering 2023, 20, 026026. [Google Scholar] [CrossRef] [PubMed]
- Jung, J.Y.; Kang, C.K.; Kim, Y.B. Postural supporting cervical traction workstation to improve resting state brain activity in digital device users: EEG study. Digital Health 2024, 10, 20552076241282244. [Google Scholar] [CrossRef] [PubMed]
- Chen, Y.C.; Tsai, Y.Y.; Huang, W.M.; Zhao, C.G.; Hwang, I.S. Cortical adaptations in regional activity and backbone network following short-term postural training with visual feedback for older adults. GeroScience 2025, 1–14. [Google Scholar] [CrossRef] [PubMed]
- Solis-Escalante, T.; De Kam, D.; Weerdesteyn, V. Classification of rhythmic cortical activity elicited by whole-body balance perturbations suggests the cortical representation of direction-specific changes in postural stability. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2020, 28, 2566–2574. [Google Scholar] [CrossRef]
- Gherman, D.E.; Klug, M.; Krol, L.R.; Zander, T.O. An investigation of a passive BCI’s performance for different body postures and presentation modalities. Biomedical Physics & Engineering Express 2025. [Google Scholar]
- Oknina, L.; Strelnikova, E.; Lin, L.F.; Kashirina, M.; Slezkin, A.; Zakharov, V. Alterations in functional connectivity of the brain during postural balance maintenance with auditory stimuli: a stabilometry and electroencephalogram study. Biomedical Physics & Engineering Express 2025, 11, 035006. [Google Scholar]
- Dohata, M.; Kaneko, N.; Takahashi, R.; Suzuki, Y.; Nakazawa, K. Posture-Dependent Modulation of Interoceptive Processing in Young Male Participants: A Heartbeat-Evoked Potential Study. European Journal of Neuroscience 2025, 61, e70021. [Google Scholar] [CrossRef]
- Borra, D.; Mondini, V.; Magosso, E.; Muller-Putz, G.R. Decoding movement kinematics from EEG using an interpretable convolutional neural network. Computers in Biology and Medicine 2023, 165, 107323. [Google Scholar] [CrossRef]
- Besharat, A.; Samadzadehaghdam, N. Improving Upper Limb Movement Classification from EEG Signals Using Enhanced Regularized Correlation-Based Common Spatio-Spectral Patterns. IEEE Access 2025. [Google Scholar] [CrossRef]
- Wang, P.; Li, Z.; Gong, P.; Zhou, Y.; Chen, F.; Zhang, D. MTRT: Motion trajectory reconstruction transformer for EEG-based BCI decoding. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2023, 31, 2349–2358. [Google Scholar] [CrossRef]
- Jia, H.; Feng, F.; Caiafa, C.F.; Duan, F.; Zhang, Y.; Sun, Z.; Solé-Casals, J. Multi-class classification of upper limb movements with filter bank task-related component analysis. IEEE Journal of Biomedical and Health Informatics 2023, 27, 3867–3877. [Google Scholar] [CrossRef] [PubMed]
- Gao, Z.; Xu, B.; Wang, X.; Zhang, W.; Ping, J.; Li, H.; Song, A. Multilayer Brain Networks for Enhanced Decoding of Natural Hand Movements and Kinematic Parameters. IEEE Transactions on Biomedical Engineering 2024. [Google Scholar] [CrossRef] [PubMed]
- Niu, J.; Jiang, N. Pseudo-online detection and classification for upper-limb movements. Journal of Neural Engineering 2022, 19, 036042. [Google Scholar] [CrossRef]
- Zolfaghari, S.; Rezaii, T.Y.; Meshgini, S.; Farzamnia, A.; Fan, L.C. Speed classification of upper limb movements through EEG signal for BCI application. IEEE Access 2021, 9, 114564–114573. [Google Scholar] [CrossRef]
- Kumar, N.; Michmizos, K.P. A neurophysiologically interpretable deep neural network predicts complex movement components from brain activity. Scientific Reports 2022, 12, 1101. [Google Scholar] [CrossRef]
- Wang, J.; Bi, L.; Fei, W. EEG-based motor BCIs for upper limb movement: current techniques and future insights. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2023, 31, 4413–4427. [Google Scholar] [CrossRef]
- Hosseini, S.M.; Shalchyan, V. State-based decoding of continuous hand movements using EEG signals. IEEE Access 2023, 11, 42764–42778. [Google Scholar] [CrossRef]
- Robinson, N.; Chester, T.W.J.; et al. Use of mobile EEG in decoding hand movement speed and position. IEEE Transactions on Human-Machine Systems 2021, 51, 120–129. [Google Scholar] [CrossRef]
- Wang, J.; Bi, L.; Fei, W.; Tian, K. EEG-based continuous hand movement decoding using improved center-out paradigm. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2022, 30, 2845–2855. [Google Scholar] [CrossRef]
- Fei, W.; Bi, L.; Wang, J.; Xia, S.; Fan, X.; Guan, C. Effects of cognitive distraction on upper limb movement decoding from EEG signals. IEEE Transactions on Biomedical Engineering 2022, 70, 166–174. [Google Scholar] [CrossRef]
- Wei, Y.; Wang, X.; Luo, R.; Mai, X.; Li, S.; Meng, J. Decoding movement frequencies and limbs based on steady-state movement-related rhythms from noninvasive EEG. Journal of Neural Engineering 2023, 20, 066019. [Google Scholar] [CrossRef] [PubMed]
- Falcon-Caro, A.; Ferreira, J.F.; Sanei, S. Cooperative Identification of Prolonged Motor Movement from EEG for BCI without Feedback. IEEE Access 2025. [Google Scholar] [CrossRef]
- Bi, L.; Xia, S.; Fei, W. Hierarchical decoding model of upper limb movement intention from EEG signals based on attention state estimation. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2021, 29, 2008–2016. [Google Scholar] [CrossRef] [PubMed]
- Asanza, V.; Peláez, E.; Loayza, F.; Lorente-Leyva, L.L.; Peluffo-Ordóñez, D.H. Identification of lower-limb motor tasks via brain–computer interfaces: A topical overview. Sensors 2022, 22, 2028. [Google Scholar] [CrossRef]
- Yan, Y.; Li, J.; Yin, M. EEG-based recognition of hand movement and its parameter. Journal of Neural Engineering 2025, 22, 026006. [Google Scholar] [CrossRef]
- Kobler, R.J.; Kolesnichenko, E.; Sburlea, A.I.; Müller-Putz, G.R. Distinct cortical networks for hand movement initiation and directional processing: an EEG study. NeuroImage 2020, 220, 117076. [Google Scholar] [CrossRef]
- Körmendi, J.; Ferentzi, E.; Weiss, B.; Nagy, Z. Topography of movement-related delta and theta brain oscillations. Brain Topography 2021, 34, 608–617. [Google Scholar] [CrossRef]
- Peng, B.; Bi, L.; Wang, Z.; Feleke, A.G.; Fei, W. Robust decoding of upper-limb movement direction under cognitive distraction with invariant patterns in embedding manifold. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2024, 32, 1344–1354. [Google Scholar] [CrossRef]
- Khaliq Fard, M.; Fallah, A.; Maleki, A. Neural decoding of continuous upper limb movements: a meta-analysis. Disability and Rehabilitation: Assistive Technology 2022, 17, 731–737. [Google Scholar] [CrossRef]
- Gaidai, R.; Goelz, C.; Mora, K.; Rudisch, J.; Reuter, E.M.; Godde, B.; Reinsberger, C.; Voelcker-Rehage, C.; Vieluf, S. Classification characteristics of fine motor experts based on electroencephalographic and force tracking data. Brain Research 2022, 1792, 148001. [Google Scholar] [CrossRef]
- Li, Y.; Gao, X.; Liu, H.; Gao, S. Classification of single-trial electroencephalogram during finger movement. IEEE Transactions on Biomedical Engineering 2004, 51, 1019–1025. [Google Scholar] [CrossRef] [PubMed]
- Nemes, Á.G.; Eigner, G.; Shi, P. Application of Deep Learning to Enhance Finger Movement Classification Accuracy From UHD-EEG Signals. IEEE Access 2024. [Google Scholar] [CrossRef]
- Wenhao, H.; Lei, M.; Hashimoto, K.; Fukami, T. Classification of finger movement based on EEG phase using deep learning. In Proceedings of the 2022 Joint 12th International Conference on Soft Computing and Intelligent Systems and 23rd International Symposium on Advanced Intelligent Systems (SCIS&ISIS). IEEE, 2022, pp. 1–4.
- Ma, Z.; Xu, M.; Wang, K.; Ming, D. Decoding of individual finger movement on one hand using ultra high-density EEG. In Proceedings of the 2022 16th ICME International Conference on Complex Medical Engineering (CME). IEEE, 2022, pp. 332–335.
- Sun, Q.; Merino, E.C.; Yang, L.; Van Hulle, M.M. Unraveling EEG correlates of unimanual finger movements: insights from non-repetitive flexion and extension tasks. Journal of NeuroEngineering and Rehabilitation 2024, 21, 228. [Google Scholar] [CrossRef]
- Anam, K.; Bukhori, S.; Hanggara, F.; Pratama, M. Subject-independent Classification on Brain-Computer Interface using Autonomous Deep Learning for finger movement recognition. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2020, Vol. 2020, pp. 447–450.
- Haddix, C.; Bates, M.; Garcia Pava, S.; Salmon Powell, E.; Sawaki, L.; Sunderam, S. Electroencephalogram Features Reflect Effort Corresponding to Graded Finger Extension: Implications for Hemiparetic Stroke. Biomedical Physics & Engineering Express 2025. [Google Scholar]
- Tian, B.; Zhang, S.; Xue, D.; Chen, S.; Zhang, Y.; Peng, K.; Wang, D. Decoding intrinsic fluctuations of engagement from EEG signals during fingertip motor tasks. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2025. [Google Scholar] [CrossRef]
- Peng, C.; Peng, W.; Feng, W.; Zhang, Y.; Xiao, J.; Wang, D. EEG correlates of sustained attention variability during discrete multi-finger force control tasks. IEEE Transactions on Haptics 2021, 14, 526–537. [Google Scholar] [CrossRef]
- Todd, N.P.; Govender, S.; Hochstrasser, D.; Keller, P.E.; Colebatch, J.G. Distinct movement related changes in EEG and ECeG power during finger and foot movement. Neuroscience Letters 2025, 853, 138207. [Google Scholar] [CrossRef]
- Jounghani, A.R.; Backer, K.C.; Vahid, A.; Comstock, D.C.; Zamani, J.; Hosseini, H.; Balasubramaniam, R.; Bortfeld, H. Investigating the role of auditory cues in modulating motor timing: insights from EEG and deep learning. Cerebral Cortex 2024, 34, bhae427. [Google Scholar] [CrossRef]
- Nielsen, A.L.; Norup, M.; Bjørndal, J.R.; Wiegel, P.; Spedden, M.E.; Lundbye-Jensen, J. Increased functional and directed corticomuscular connectivity after dynamic motor practice but not isometric motor practice. Journal of Neurophysiology 2025. [Google Scholar] [CrossRef]
- A.S., A.; G., P.K.; Ramakrishnan, A. Brain-scale theta band functional connectome as signature of slow breathing and breath-hold phases. Computers in Biology and Medicine 2025, 184, 109435. [CrossRef]
- Kumar, P.; Adarsh, A.; et al. Modulation of EEG by Slow-Symmetric Breathing incorporating Breath-Hold. IEEE Transactions on Biomedical Engineering 2024. [Google Scholar] [CrossRef] [PubMed]
- Watanabe, T.; Itagaki, A.; Hashizume, A.; Takahashi, A.; Ishizaka, R.; Ozaki, I. Observation of respiration-entrained brain oscillations with scalp EEG. Neuroscience Letters 2023, 797, 137079. [Google Scholar] [CrossRef] [PubMed]
- Herzog, M.; Sucec, J.; Jelinčić, V.; Van Diest, I.; Van den Bergh, O.; Chan, P.Y.S.; Davenport, P.; von Leupoldt, A. The test-retest reliability of the respiratory-related evoked potential. Biological Psychology 2021, 163, 108133. [Google Scholar] [CrossRef]
- Morelli, M.S.; Vanello, N.; Callara, A.L.; Hartwig, V.; Maestri, M.; Bonanni, E.; Emdin, M.; Passino, C.; Giannoni, A. Breath-hold task induces temporal heterogeneity in electroencephalographic regional field power in healthy subjects. Journal of Applied Physiology 2021, 130, 298–307. [Google Scholar] [CrossRef]
- Wang, Y.; Zhang, Y.; Zhang, Y.; Wang, Z.; Guo, W.; Zhang, Y.; Wang, Y.; Ge, Q.; Wang, D. Voluntary Respiration Control: Signature Analysis by EEG. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2023, 31, 4624–4634. [Google Scholar] [CrossRef]
- Navarro-Sune, X.; Raux, M.; Hudson, A.L.; Similowski, T.; Chavez, M. Cycle-frequency content EEG analysis improves the assessment of respiratory-related cortical activity. Physiological Measurement 2024, 45, 095003. [Google Scholar] [CrossRef]
- Hudson, A.L.; Wattiez, N.; Navarro-Sune, X.; Chavez, M.; Similowski, T. Combined head accelerometry and EEG improves the detection of respiratory-related cortical activity during inspiratory loading in healthy participants. Physiological Reports 2022, 10, e15383. [Google Scholar] [CrossRef]
- Goheen, J.; Wolman, A.; Angeletti, L.L.; Wolff, A.; Anderson, J.A.; Northoff, G. Dynamic mechanisms that couple the brain and breathing to the external environment. Communications Biology 2024, 7, 938. [Google Scholar] [CrossRef]
- Kæseler, R.L.; Johansson, T.W.; Struijk, L.N.A.; Jochumsen, M. Feature and classification analysis for detection and classification of tongue movements from single-trial pre-movement EEG. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2022, 30, 678–687. [Google Scholar] [CrossRef]
- Zero, E.; Bersani, C.; Sacile, R. Identification of brain electrical activity related to head yaw rotations. Sensors 2021, 21, 3345. [Google Scholar] [CrossRef]
- Gulyás, D.; Jochumsen, M. Detection of Movement-Related Brain Activity Associated with Hand and Tongue Movements from Single-Trial Around-Ear EEG. Sensors 2024, 24, 6004. [Google Scholar] [CrossRef] [PubMed]
- Meng, J.; Zhao, Y.; Wang, K.; Sun, J.; Yi, W.; Xu, F.; Xu, M.; Ming, D. Rhythmic temporal prediction enhances neural representations of movement intention for brain–computer interface. Journal of Neural Engineering 2023, 20, 066004. [Google Scholar] [CrossRef] [PubMed]
- Zhang, Y.; Li, M.; Wang, H.; Zhang, M.; Xu, G. Preparatory movement state enhances premovement EEG representations for brain–computer interfaces. Journal of Neural Engineering 2024, 21, 036044. [Google Scholar] [CrossRef] [PubMed]
- Bigand, F.; Bianco, R.; Abalde, S.F.; Nguyen, T.; Novembre, G. EEG of the Dancing Brain: Decoding Sensory, Motor, and Social Processes during Dyadic Dance. Journal of Neuroscience 2025, 45. [Google Scholar] [CrossRef]
- Ody, E.; Kircher, T.; Straube, B.; He, Y. Pre-movement event-related potentials and multivariate pattern of EEG encode action outcome prediction. Human Brain Mapping 2023, 44, 6198–6213. [Google Scholar] [CrossRef]
- Janyalikit, T.; Ratanamahatana, C.A. Time series shapelet-based movement intention detection toward asynchronous BCI for stroke rehabilitation. IEEE Access 2022, 10, 41693–41707. [Google Scholar] [CrossRef]
- Meng, J.; Li, X.; Li, S.; Fan, X.; Xu, M.; Ming, D. High-Frequency Power Reflects Dual Intentions of Time and Movement for Active Brain-Computer Interface. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2025. [Google Scholar] [CrossRef]
- Zhang, D.; Yao, L.; Chen, K.; Wang, S.; Chang, X.; Liu, Y. Making sense of spatio-temporal preserving representations for EEG-based human intention recognition. IEEE Transactions on Cybernetics 2019, 50, 3033–3044. [Google Scholar] [CrossRef]
- Derchi, C.; Mikulan, E.; Mazza, A.; Casarotto, S.; Comanducci, A.; Fecchio, M.; Navarro, J.; Devalle, G.; Massimini, M.; Sinigaglia, C. Distinguishing intentional from nonintentional actions through EEG and kinematic markers. Scientific Reports 2023, 13, 8496. [Google Scholar] [CrossRef]
- Gu, B.; Wang, K.; Chen, L.; He, J.; Zhang, D.; Xu, M.; Wang, Z.; Ming, D. Study of the correlation between the motor ability of the individual upper limbs and motor imagery induced neural activities. Neuroscience 2023, 530, 56–65. [Google Scholar] [CrossRef]
- Zhang, M.; Wu, J.; Song, J.; Fu, R.; Ma, R.; Jiang, Y.C.; Chen, Y.F. Decoding coordinated directions of bimanual movements from EEG signals. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2022, 31, 248–259. [Google Scholar] [CrossRef] [PubMed]
- Tantawanich, P.; Phunruangsakao, C.; Izumi, S.I.; Hayashibe, M. A Systematic Review of Bimanual Motor Coordination in Brain-Computer Interface. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2024. [Google Scholar] [CrossRef] [PubMed]
- Wang, J.; Bi, L.; Fei, W.; Xu, X.; Liu, A.; Mo, L.; Feleke, A.G. Neural correlate and movement decoding of simultaneous-and-sequential bimanual movements using EEG signals. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2024. [Google Scholar] [CrossRef] [PubMed]
- Li, X.; Mota, B.; Kondo, T.; Nasuto, S.; Hayashi, Y. EEG dynamical network analysis method reveals the neural signature of visual-motor coordination. PLoS ONE 2020, 15, e0231767. [Google Scholar] [CrossRef]
- De Pretto, M.; Deiber, M.P.; James, C.E. Steady-state evoked potentials distinguish brain mechanisms of self-paced versus synchronization finger tapping. Human Movement Science 2018, 61, 151–166. [Google Scholar] [CrossRef]
- Noboa, M.d.L.; Kertész, C.; Honbolygó, F. Neural entrainment to the beat and working memory predict sensorimotor synchronization skills. Scientific Reports 2025, 15, 10466. [Google Scholar] [CrossRef]
- Mondok, C.; Wiener, M. A coupled oscillator model predicts the effect of neuromodulation and a novel human tempo matching bias. Journal of Neurophysiology 2025. [Google Scholar] [CrossRef]
- Nave, K.M.; Hannon, E.E.; Snyder, J.S. Steady state-evoked potentials of subjective beat perception in musical rhythms. Psychophysiology 2022, 59, e13963. [Google Scholar] [CrossRef]
- Leske, S.; Endestad, T.; Volehaugen, V.; Foldal, M.D.; Blenkmann, A.O.; Solbakk, A.K.; Danielsen, A. Beta oscillations predict the envelope sharpness in a rhythmic beat sequence. Scientific Reports 2025, 15, 3510. [Google Scholar] [CrossRef]
- Comstock, D.C.; Balasubramaniam, R. Differential motor system entrainment to auditory and visual rhythms. Journal of Neurophysiology 2022, 128, 326–335. [Google Scholar] [CrossRef]
- Wang, X.; Zhou, C.; Jin, X. Resonance and beat perception of ballroom dancers: An EEG study. PLoS ONE 2024, 19, e0312302. [Google Scholar] [CrossRef] [PubMed]
- Ross, J.M.; Comstock, D.C.; Iversen, J.R.; Makeig, S.; Balasubramaniam, R. Cortical mu rhythms during action and passive music listening. Journal of Neurophysiology 2022, 127, 213–224. [Google Scholar] [CrossRef] [PubMed]
- Lenc, T.; Lenoir, C.; Keller, P.E.; Polak, R.; Mulders, D.; Nozaradan, S. Measuring self-similarity in empirical signals to understand musical beat perception. European Journal of Neuroscience 2025, 61, e16637. [Google Scholar] [CrossRef] [PubMed]
- Pandey, P.; Ahmad, N.; Miyapuram, K.P.; Lomas, D. Predicting dominant beat frequency from brain responses while listening to music. In Proceedings of the 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM); IEEE, 2021; pp. 3058–3064. [Google Scholar]
- Cheng, T.H.Z.; Creel, S.C.; Iversen, J.R. How do you feel the rhythm: Dynamic motor-auditory interactions are involved in the imagination of hierarchical timing. Journal of Neuroscience 2022, 42, 500–512. [Google Scholar] [CrossRef]
- Yoshimura, N.; Tanaka, T.; Inaba, Y. Estimation of Imagined Rhythms from EEG by Spatiotemporal Convolutional Neural Networks. In Proceedings of the 2023 IEEE Statistical Signal Processing Workshop (SSP); IEEE, 2023; pp. 690–694. [Google Scholar]
- de Vries, I.E.; Daffertshofer, A.; Stegeman, D.F.; Boonstra, T.W. Functional connectivity in the neuromuscular system underlying bimanual coordination. Journal of Neurophysiology 2016, 116, 2576–2585. [Google Scholar] [CrossRef]
- Keitel, A.; Pelofi, C.; Guan, X.; Watson, E.; Wight, L.; Allen, S.; Mencke, I.; Keitel, C.; Rimmele, J. Cortical and behavioral tracking of rhythm in music: Effects of pitch predictability, enjoyment, and expertise. Annals of the New York Academy of Sciences 2025, 1546, 120–135. [Google Scholar] [CrossRef]
- Alameda, C.; Sanabria, D.; Ciria, L.F. The brain in flow: A systematic review on the neural basis of the flow state. Cortex 2022, 154, 348–364. [Google Scholar] [CrossRef]
- Irshad, M.T.; Li, F.; Nisar, M.A.; Huang, X.; Buss, M.; Kloep, L.; Peifer, C.; Kozusznik, B.; Pollak, A.; Pyszka, A.; et al. Wearable-based human flow experience recognition enhanced by transfer learning methods using emotion data. Computers in Biology and Medicine 2023, 166, 107489. [Google Scholar] [CrossRef]
- Rácz, M.; Becske, M.; Magyaródi, T.; Kitta, G.; Szuromi, M.; Márton, G. Physiological assessment of the psychological flow state using wearable devices. Scientific Reports 2025, 15, 11839. [Google Scholar] [CrossRef]
- Lorenz, A.; Mercier, M.; Trébuchon, A.; Bartolomei, F.; Schon, D.; Morillon, B. Corollary discharge signals during production are domain general: An intracerebral EEG case study with a professional musician. Cortex 2025, 186, 11–23. [Google Scholar] [CrossRef]
- Uehara, K.; Yasuhara, M.; Koguchi, J.; Oku, T.; Shiotani, S.; Morise, M.; Furuya, S. Brain network flexibility as a predictor of skilled musical performance. Cerebral Cortex 2023, 33, 10492–10503. [Google Scholar] [CrossRef] [PubMed]
- Ahmed, Y.; Ferguson-Pell, M.; Adams, K.; Ríos Rincón, A. EEG-Based Engagement Monitoring in Cognitive Games. Sensors 2025, 25, 2072. [Google Scholar] [CrossRef] [PubMed]
- Wu, S.F.; Lu, Y.L.; Lien, C.J. Measuring effects of technological interactivity levels on flow with electroencephalogram. IEEE Access 2021, 9, 85813–85822. [Google Scholar] [CrossRef]
- Hang, Y.; Unenbat, B.; Tang, S.; Wang, F.; Lin, B.; Zhang, D. Exploring the neural correlates of Flow experience with multifaceted tasks and a single-Channel Prefrontal EEG Recording. Sensors 2024, 24, 1894. [Google Scholar] [CrossRef]
- van Schie, H.T.; Iotchev, I.B.; Compen, F.R. Free will strikes back: Steady-state movement-related cortical potentials are modulated by cognitive control. Consciousness and Cognition 2022, 104, 103382. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).