Preprint (Review). This version is not peer-reviewed; a peer-reviewed article of this preprint also exists.

Technology-Enhanced Musical Practice Using Brain-Computer Interfaces: A Topical Review

Submitted: 06 August 2025 | Posted: 08 August 2025


Abstract
High-performance musical instrument training is a demanding discipline that engages cognitive, neurological, and physical skills. Professional musicians invest substantial time and effort into mastering repertoire and developing the muscle memory and reflexes required to perform complex works in high-stakes settings. While existing surveys have explored the use of music in therapeutic and general training contexts, there is a notable lack of work focused specifically on the needs of professional musicians and advanced instrumental practice. This topical review explores the potential of EEG-based brain-computer interface (BCI) technologies to integrate real-time feedback on biomechanical and cognitive features into advanced musical practice. Building on a conceptual framework of technology-enhanced musical practice (TEMP), we review empirical studies from broader contexts, addressing EEG signal decoding of biomechanical and cognitive tasks that closely relate to the specified TEMP features (movement and muscle activity, posture and balance, fine motor control and dexterity, breathing control, head and facial movement, movement intention, tempo processing, pitch recognition, and cognitive engagement), and assessing their feasibility and limitations. Our analysis highlights current gaps and provides a foundation for the future development of BCI-supported musical training systems to support high-performance instrumental practice.

1. Introduction

Musical performance, when viewed from the musician’s perspective, is a demanding activity that represents the culmination of years of dedication, study, and practice. In high-profile contexts—such as performances with renowned orchestras, at prestigious venues, and during major festivals—professional musicians are expected, both by their audiences and, perhaps more critically, by themselves, to deliver performances that are not only technically flawless but also highly expressive and authentic. Achieving this level of artistry consistently requires a deep commitment that spans many years and encompasses a wide range of subdisciplines, including music history and theory, harmonic analysis, auditory training, instrumental technique, and both solo and ensemble performance practice [1,2]. In other words, musical performance requires a triad of knowledge (Figure 1): musicians must know the piece (the music), the instrument, and themselves.
Consequently, a significant portion of a musician’s professional life focuses on developing and maintaining effective study and practice routines. These routines must be tailored not only to the specific instrument and musical genre but also to the unique challenges faced by each individual musician [1]. Traditional practice methods typically involve a combination of solitary work and tutored practice sessions, where real-time feedback from an instructor plays a critical role in identifying and correcting both technical and expressive issues. Such feedback is especially vital in addressing technical aspects that are difficult to perceive from a first-person perspective, such as involuntary muscle movements, subtle tempo inconsistencies, and issues related to posture and ergonomics.
To address these challenges, musicians often employ various strategies, including practicing in front of mirrors, using physical aids (such as tape or braces) to control movement, and recording their practice sessions for later analysis [2]. In this context, emerging human-computer interaction (HCI) tools—particularly electroencephalography (EEG)-based brain-computer interfaces (BCIs)—have been increasingly investigated as real-time feedback systems [3,4,5]. These technologies could potentially be used to provide immediate information about low-level parameters associated with cognitive and biomechanical processes during instrumental performance. As a result, they could deliver both quantitative and qualitative insights into technical execution, supporting immediate correction as well as long-term skill development.
Theoretically, a set of functional requirements for an idealized Technology-Enhanced Musical Practice (TEMP) system based on EEG-based BCIs could be devised (see Section 3). Such a system would be able to monitor physical and cognitive aspects of a musician’s performance during training and provide assessment and corrective feedback.
While prior reviews have examined the use of BCIs with music (listening) as a stimulus for guiding, entraining, and modulating the brain in desired directions in applications ranging from neuro-rehabilitation to therapeutic interventions, stress management, diagnostics of neurological disorders, and sports performance enhancement [6,7], to the best of our knowledge, no prior work has examined how EEG-based BCIs might enable the operationalization of the functional requirements of a TEMP system. This review seeks to fill this gap by examining the existing literature on how EEG-based BCI technology could support the requirements of a TEMP system, thus augmenting current skilled musical instrument practice systems in terms of motor learning, cognitive feedback, and training efficiency.
In this work, we evaluate the potential use of EEG-based BCIs in supporting TEMP-related components of musical practice. We describe a conceptual framework (TEMP) designed to group relevant features and, from an analysis of the literature, assess the potential feasibility and current limitations of operationalizing these features via EEG-based BCI technologies. Our focus is on specific, measurable parameters, rather than abstract qualities such as emotion, expressiveness, or artistic intention. Although there is now sufficient knowledge to begin investigating some of these more complex dimensions, such aspects should be explored only in later iterations of system development. This is motivated by the goal of proposing a broadly applicable training system that musicians of any instrument could use. By excluding genre- or style-specific considerations, the framework aims to remain as generic and adaptable as possible. The scope is limited to technical performance on traditional acoustic or electroacoustic instruments (keys, strings, woodwinds, brass, and percussion) and does not address the generation of music or sound through signal mapping from biosensors or brain-computer interfaces.
The remainder of this paper is organized as follows. Section 2 provides a summarized overview of BCIs, technology-enhanced music training, and relevant studies that used BCIs in music training and performance. Section 3 describes the conceptual TEMP framework. Section 4 explains the research methodology and objectives. Section 5 summarizes the most relevant findings from the collected articles. Section 6 presents a discussion correlating the state-of-the-art of BCI technology that emerged from the research results with the features delineated in the conceptual TEMP framework. Section 7 presents our conclusion and future work.

2. Background

2.1. Brain-Computer Interfaces

Once a Sci-Fi dream, non-invasive BCIs are now a consumer-grade technology. Leveraging state-of-the-art Machine-Learning (ML) algorithms, BCIs have been used as controllers and brain-state monitors, giving rise to neuro-rehabilitation, gaming, and assistive-tech applications [8].
A BCI is a device that can measure neural activity during the performance of an active or passive task. BCI devices can be invasive, where electrodes are placed inside specific regions of the brain and directly measure voltage variations in that position, or non-invasive, where the neural activity is measured indirectly.
Such is the case of electroencephalography (EEG), which measures electrical voltage variations on the scalp that are a consequence of the underlying electrical activity of the brain. EEG BCI electrical patterns can be detected in passive and active paradigms. In passive paradigms, the user is stimulated with sensory or cognitive input, and in response, a characteristic neural activity emerges. In active strategies, neural activity patterns are elicited when the user thinks about (or imagines) performing a limb movement, such as taking a step, thinks about a concept or idea, such as “fruit”, or responds to a stimulus such as an actual image of a banana [9].
Despite remarkable progress, the utilization of BCIs remains primarily confined to the digital health and accessibility domains. In the music context, most research and applications are focused on the use of BCIs to generate or control musical parameters, and to modulate neural activity during cognitive or motor tasks [6].

2.2. Technology-Enhanced Musical Practice

The path from musical apprentice to professional performer involves years of training, and evidence suggests that a musician will have dedicated on the order of 10,000 hours of practice to become a professional player [2]. This effort has lifelong effects and will even shape the practitioner’s anatomy and brain development [1]. Hence, several strategies and tools that assist in this process have been developed over the years, and since the 1990s, digital technology has played an important role in this matter [10]. The integration of digital technologies into music pedagogy has transformed the landscape of musical practice, moving beyond traditional methods toward more data-driven, interactive, and individualized approaches. Here, technology-enhanced musical practice (TEMP) refers to the use of computational tools — including software platforms, biosensors, motion capture, and brain-computer interfaces (BCIs) — to support and optimize the learning and refinement of musical performance [10]. These innovations aim to augment the effectiveness of practice sessions by providing real-time feedback, quantitative assessments, and cognitive or physiological insights that were previously only possible in tutored practice sessions, or by analyzing video and audio recordings.
Early developments in this field focused on Machine Audition (MA)-based feedback systems for pitch and tempo accuracy. These systems laid the foundation for currently popular tools like MakeMusic and Yousician [11,12], which use music information retrieval algorithms to provide real-time correction and progress tracking [10]. These are typical “play-along” applications aimed at beginners and amateur music enthusiasts. They let users play their instrument together with an interactive musical score, which can use traditional notation or a simplified representation such as tablature, and provide instant feedback on the correctness of pitch and tempo. While this approach can be fruitful and fun for hobbyists, biomechanical and cognitive parameters cannot be detected via machine audition strategies. For example, on instruments where fingering strategy is important (such as string and keyboard instruments), it is not possible to provide feedback on fingering position by audio analysis alone. Moreover, MA systems rely on note attack and envelope following for tempo estimation and on music tonality for pitch estimation, and even though this can be very accurate for certain genres and instruments, like Pop music percussion and keyboard instruments, it performs poorly in non-rhythmic music that is written based on durations and in music that explores instrumental microtonality [10]. Thus, even though relevant and useful, other sensing technologies and strategies that can deal with real-time feedback of biomechanical and cognitive information are required to develop an advanced, professional training system.
With regard to applications for musical training and practice using EEG-based BCIs, Folgieri et al. [13] devised a study to assess the effect of real-time neurofeedback in guitar practice. Their system used an EEG-based BCI to track user focus and motor performance and provided visual feedback in the form of a calculated ratio between score reading (focus) and playing (motor performance), extracted from the synchrony and asynchrony between sensorimotor and cognitive information decoded from the EEG signals; a good balance between reading attention and actual playing would indicate good coordination between playing and reading ability. They implemented an experimental design with 20 participants, divided into a BCI group and a control group, who took part in regular musical training sessions over a period of two months and were evaluated on their accuracy in performing a predetermined chord progression. After the training period, the players’ performance was evaluated for both note and tempo accuracy in the task. The results indicate that the BCI group performed significantly better in all evaluated parameters, leading the authors to conclude that the use of real-time neurofeedback improved the players’ learning process.
A study by Pop-Jordanova et al. [4] aimed to determine the effects of alpha neurofeedback and EMG biofeedback protocols on improving musical performance in violinists and viola players. Their objective was to investigate the impact of this combined alpha-EEG/EMG biofeedback on electrophysiological and psychometric parameters, as well as to compare the responses of musicians with high and low individual alpha peak frequency (APF) to usual practice versus practice combined with biofeedback training. They also wanted to assess whether biofeedback computer software is an effective technology for improving psychomotor performance in musicians.
The experiment involved 12 music students (10 violinists, 2 viola players), divided into an experimental group that received biofeedback (alpha/EMG) combined with music practice, and a control group that only did music practice over a two-month period. The biofeedback system used EEG electrodes on the scalp and EMG electrodes on the forehead. Feedback, in the form of “applause” sounds, was provided in real-time when participants achieved simultaneous supra-threshold bursts of alpha activity and sub-threshold bursts of integrated EMG (IEMG). Participants practiced their usual repertoire during the sessions, with the stated goal of achieving high-quality performance accompanied by feelings of ease and comfort. The results indicated that alpha-EEG/EMG biofeedback training, when used during music performance, improved all measured EEG and EMG parameters associated with optimal psychomotor functioning. The efficiency of the biofeedback training was positively correlated with baseline alpha activity indices, including APF, individual alpha band width (IABW), and amount of alpha suppression (AAS) change. Practice combined with biofeedback led to an increase in alpha activity indices and a decrease in IEMG in both low and high APF groups, with changes being more pronounced in the high APF group. The study concluded that alpha-EEG/EMG biofeedback training is effective in alleviating psychosomatic disturbances during musical execution and can enhance desired self-regulation and musical performance quality.
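The core of the feedback rule described above is a simple dual-threshold criterion: reward the player when alpha power is above an individual threshold while integrated EMG is below another. The following minimal Python sketch illustrates that logic only; the streaming values, thresholds, and window length are illustrative assumptions and not the cited study's implementation.

```python
import numpy as np

def biofeedback_trigger(alpha_power, iemg, alpha_threshold, emg_threshold):
    """Return True when the dual criterion is met: supra-threshold alpha power
    together with sub-threshold integrated EMG (IEMG). Thresholds would be
    calibrated per participant in a real system."""
    return alpha_power > alpha_threshold and iemg < emg_threshold

# Illustrative per-window values (hypothetical numbers, one value per 1-s window).
alpha_stream = np.array([4.2, 6.1, 7.8, 5.0])    # alpha band power
iemg_stream  = np.array([12.0, 9.5, 6.2, 11.0])  # integrated forehead EMG

for alpha, iemg in zip(alpha_stream, iemg_stream):
    if biofeedback_trigger(alpha, iemg, alpha_threshold=6.0, emg_threshold=8.0):
        print("play 'applause' feedback")          # reward window detected
```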
Riquelme et al. [5] explored whether musical training, specifically in pianists, influences the ability to control an EEG-based BCI system using motor imagery (MI). They aimed to assess and compare the performance of pianists interacting with an MI-based BCI system against a control group of non-musicians. They hypothesized that the anatomical and functional differences developed through musical practice might lead to improved BCI control for musicians. The experimental setup involved testing the BCI performance of four pianists and four non-pianists using motor imagery of left and right hand movements to control a BCI system. The study followed a standard training protocol over three sessions, including a training-only trial followed by trials with real-time feedback based on the user’s interpreted brain activity. EEG signals were recorded using a 16-channel device and processed using Common Spatial Patterns (CSP) for feature extraction and Linear Discriminant Analysis (LDA) for classification. Both online and offline analyses of the BCI accuracy were conducted, focusing on the sessions where feedback was provided. The results showed that pianists achieved a significantly higher mean level of BCI control (74.69%) through MI during the final tested trial compared to the control group (63.13%). Offline analysis supported this finding, indicating a generally better performance for the pianist group across different data subsets. Scalp topography analysis further suggested that non-pianists exhibited a larger area of brain activation during the task compared to pianists. The study concluded that these findings provide indications that musical training is indeed a factor that improves performance with a BCI device operated via movement imagery.
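As a concrete illustration of the CSP plus LDA pipeline described in this study, the sketch below chains MNE-Python's CSP estimator with scikit-learn's LDA classifier and cross-validates it on placeholder motor-imagery epochs. The array shapes, filtering assumptions, and labels are illustrative; the cited work's exact parameters may differ.

```python
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

# Placeholder data: 80 trials, 16 channels, 2 s at 250 Hz. In practice X would
# hold band-pass filtered (e.g., 8-30 Hz) epochs of left- vs. right-hand motor
# imagery, and y the corresponding class labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((80, 16, 500))
y = np.repeat([0, 1], 40)  # 0 = left-hand MI, 1 = right-hand MI

clf = Pipeline([
    ("csp", CSP(n_components=4, log=True)),   # spatial filters maximizing class variance ratio
    ("lda", LinearDiscriminantAnalysis()),    # linear classifier on log-variance features
])

scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```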
Even though research on this topic is scarce, these studies highlight the potential of the technology and indicate that further research might reveal rewarding results.

3. Conceptual TEMP Framework

As a novel contribution, we propose the conceptual TEMP framework for high-performance training, integrating standard digital practice tools with advanced, sensor-informed functionalities aimed at monitoring physical and cognitive aspects of performance. The purpose of this framework is not to replicate the role of a human instructor, but to complement and enhance it by delivering detailed, real-time, and individualized feedback across multiple layers of the performer’s experience.
Also, the TEMP framework is conceived as a practice-oriented tool focused on the technical-motor dimensions of instrumental music training. While we acknowledge the importance of cognitive and expressive aspects of musical performance, including non-motoric musical imagination such as harmony, expression or timbre, these are currently outside the scope due to both the technological limitations of EEG decoding and the high degree of stylistic, genre-specific, and individual variability in these domains.
It is important to note that the outlined TEMP features are not presented as hierarchically equivalent in terms of musical importance. Rather, they are intended as complementary functional dimensions, whose relevance may vary depending on the instrumentalist’s profile, training goals, and pedagogical context. This flexible structure allows the TEMP framework to adapt to diverse approaches in instrumental practice.
The following points outline the core functional objectives of this conceptual framework, which serves as a basis for subsequent discussion on how current sensing technologies might support its implementation.

3.1. Biomechanical Awareness

To support technical refinement and injury prevention, the TEMP framework should provide awareness of the musician’s full-body biomechanics during practice. This includes:
  • Posture and Balance: Monitoring of overall playing posture, alignment, and weight distribution.
  • Movement and Muscle Activity: Real-time monitoring of muscle tension and relaxation patterns, especially in key areas such as forearms, hands, neck, shoulders, and lower back.
  • Fine Motor and Dexterity: Capture of detailed finger, hand, wrist, arm, and facial muscle movements.
  • Breathing Control: For wind and voice instrumentalists, diaphragm engagement and respiratory patterns are key parameters of the technique.
  • Head and Facial Movement: Monitoring facial tension and head alignment to identify strain or compensatory patterns that may indicate suboptimal technique. These capabilities are particularly valuable in identifying inefficiencies or compensatory behaviors that may not be easily perceived from a first-person perspective.
  • Movement intention: A core functionality of the TEMP framework would be the ability to distinguish intentional, goal-directed movement from involuntary or reflexive motion. This distinction is essential in helping musicians identify habits such as unwanted tension, tremor, or unintentional shifts in posture. By separating these movement types, the system can provide feedback that distinguishes between technical errors and unconscious physical responses, enhancing the performer’s body awareness and self-regulation during practice.
  • Coordination and Movement Fluidity: Evaluation of coordination and movement fluidity during transitions and articulations.

3.2. Tempo Processing

Tempo constitutes a foundational parameter of musical practice. From the earliest stages of training, performers are educated to develop an acute awareness of temporal regularity, enabling them to recognize externally presented tempi with precision. This perceptual skill is fundamental in ensemble coordination, stylistic authenticity, and interpretive nuance across repertoires and performance contexts. Equally central is the capacity to execute tempo reliably: motor functions must translate the musician’s internal pulse into consistent rhythmic output.
A third, cognitively mediated dimension involves the mental rehearsal or imagination of tempo in the absence of explicit sound or movement, supporting score study, silent practice, and preparatory planning. Because perception, production, and imagery interact continuously before and during performance, real-time, context-sensitive feedback on all three modes of tempo processing would represent an important training enhancement. This would allow for the diagnosis of discrepancies between intended and realized tempi and reinforce stable internal timing models.

3.3. Pitch Recognition

Pitch and interval recognition represent a pivotal parameter in instrumental music performance and practice, especially in non-tempered instruments such as fretless strings or variable tube length brass instruments, where correct pitch is directly correlated with motor actions. Correct pitch interpretation can also be indicative of good technique and the achievement of proper timbral goals. Moreover, the capability of detecting and providing pitch intention or imagination, combined with MA, could potentially be used to indicate melodic and harmonic accuracy.

3.4. Cognitive Engagement

As performers reach higher levels of proficiency, much of their motor control becomes automated. While this supports fluency, it can also lead to disengaged or reflexive execution. The TEMP framework should therefore aim to assess the extent to which playing is being performed with full awareness of artistic and motor intentions. This state of full awareness and engagement is also described as the “Flow State”.
Identifying the state of “Flow” and quantifying the degree of engagement and awareness can help advanced performers recognize when they are practicing mindfully versus when they are relying too heavily on muscle memory, thus encouraging more deliberate and reflective practice habits.

4. Review Strategy and Scope

This topical review explores whether current EEG-based brain-computer interface (BCI) technologies can support the physiological and cognitive components defined in the TEMP framework. Rather than following the structure of a systematic review or aiming for exhaustive coverage of the literature, this review adopts a targeted, feature-oriented mapping strategy, guided by the functional requirements outlined in Section 3. The analysis focused on identifying empirical studies that demonstrate the feasibility of decoding or monitoring the following TEMP-related components using non-invasive EEG:
  • Posture and Balance.
  • Movement and Muscle Activity.
  • Fine Motor and Dexterity.
  • Breathing Control.
  • Head and Facial Movement.
  • Movement Intention.
  • Coordination and Movement Fluidity.
  • Tempo Processing.
  • Pitch Recognition.
  • Cognitive Engagement.
Rather than evaluating BCI applications strictly within the music domain, we examined empirical studies from broader contexts (e.g., rehabilitation, human-computer interaction, gaming, or stress monitoring) to assess whether current EEG-based methods are technically capable of supporting the TEMP-defined features in real time. The focus is not on whether BCIs are already used in musical training, but on how close existing evidence brings us to a feasible prototype implementation.

4.1. Literature Search Approach

This work adopts an exploratory approach, proposing a conceptual model (TEMP) constructed from a synthesis of scientific literature on EEG-BCI and musical practice. It is not an empirically tested model, but rather a theoretical framework intended to guide future development and validation in both controlled and ecological environments.
To explore the feasibility of each TEMP feature, we conducted a series of targeted keyword searches combining EEG/BCI-related terms with feature-specific concepts (e.g., posture, coordination, muscle activity). These exploratory queries were applied to the title and abstract fields across relevant databases to identify recent empirical studies aligned with the TEMP framework. A full list of representative search terms used for each TEMP feature is available in Appendix A.
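To make the query construction concrete, the short sketch below shows how boolean title/abstract queries could be assembled by combining EEG/BCI terms with feature-specific keywords. The terms shown are examples only; the representative search terms actually used are listed in Appendix A.

```python
# Illustrative construction of boolean title/abstract queries combining
# EEG/BCI terms with TEMP-feature keywords (terms are examples only).
bci_terms = ['"EEG"', '"electroencephalography"', '"brain-computer interface"']
feature_terms = {
    "Posture and Balance": ['"posture"', '"balance"', '"postural control"'],
    "Breathing Control": ['"breathing"', '"respiration"', '"respiratory"'],
}

for feature, terms in feature_terms.items():
    query = f'({" OR ".join(bci_terms)}) AND ({" OR ".join(terms)})'
    print(f"{feature}: {query}")
```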

4.2. Search Scope and Selection Parameters

Searches were conducted using PubMed and IEEE Xplore databases, covering both medical and technical perspectives of EEG decoding. To identify studies relevant to the TEMP framework’s objectives and ensure technical and translational value, a set of practical selection criteria was applied (Table 1).
We prioritized peer-reviewed journal articles published from 2020 onward, reflecting recent advancements in EEG-based BCI technologies. Studies were selected when they reported empirical findings involving human participants, addressed at least one TEMP-related component, and provided technical performance indicators (e.g., detection accuracy, latency, or signal reliability) to inform feasibility assessments of EEG-based detection in real-time contexts.
To maintain focus on non-invasive EEG systems and active user participation paradigms, studies employing alternative neuroimaging methods (e.g., fMRI, fNIRS), relying exclusively on passive BCI strategies based on external stimulation (e.g., P300, SSVEP), or addressing purely motor imagery tasks without overt movement execution were not considered. Additionally, studies focused solely on facial expression analysis, emotion recognition, or simulations without human participant data were excluded.
These selection parameters were intended to highlight studies offering ecologically valid and technically informative insights for TEMP-feature implementation. Relevant review or survey articles were also included when they offered synthesized comparisons of recent EEG-BCI developments applicable to at least one TEMP domain.

4.3. Objectives of the Review

This review aims to (a) explore whether current EEG-based BCI research empirically addresses the functional domains defined in the TEMP framework; (b) identify underexplored areas; (c) consider the degree of technical progress toward real-time application; and (d) reflect on the translational potential of existing approaches for developing sensor-informed music training systems.

4.4. Search and Screening Results

The database search returned 8141 records (3376 PubMed + 4765 IEEE Xplore). After automatic duplicate removal and manual title/abstract screening, 96 articles met all inclusion criteria for full analysis. The search and screening results by TEMP category are displayed in Table 2. An overview of the highlights of each included paper is presented in Table A2.

5. Results

From the search and screening process previously described in Section 4, we proceed with an overview of the current state-of-the-art, key findings, and research evidence related to the core features of the TEMP framework.

5.1. Biomechanical Awareness

In our context, biomechanical awareness can be understood as the process by which an EEG-based BCI identifies and interprets the brain’s electrical signals related to the body’s physical state, movement, posture, force production, and associated sensations. This field explores how the brain internally represents, monitors, and controls the mechanics of the body.
The analyzed sources collectively demonstrate that various aspects of biomechanics, from fine finger movements and upper limb actions to head rotations and overall posture and balance, have distinct correlates in EEG signals that can be decoded and potentially used for HCI purposes. In the following subsections, we present the most significant findings for each biomechanical feature specified in Section 3.1.

5.1.1. Posture and Balance

Our analysis shows that EEG recordings can indeed provide a valuable window into the complex neural mechanisms underlying human posture and balance control [14,15]. Maintaining balance is a dynamic process involving the hierarchical organization and interconnectedness of neural ensembles throughout the central nervous system, including the cerebral cortex, and necessitates the continuous integration of multisensory inputs such as visual, vestibular, and somatosensory information [15]. A prominent EEG signal elicited by sudden balance disturbances is the perturbation-evoked potential (PEP) [16], characterized by components like the N1 peak (a negative voltage deflection typically observed 100-200 ms after the perturbation) localized in frontal and central brain regions. The N1 is considered an indicator of postural perturbation and is influenced by physical and psychological factors [16]. Beyond event-related potentials, resting state brain activity and other evoked responses like Heartbeat-Evoked Potentials (HEPs) have also been investigated in relation to posture and interoception [17].
Analysis of EEG signals in the frequency domain has revealed that specific oscillatory patterns are associated with postural control and stability [18]. Low-frequency spectral components, particularly the theta rhythm (3-10 Hz), carry information related to the direction of induced changes in postural stability [19]. Other frequency bands such as alpha (8-12 Hz) and beta (13-40 Hz) are also relevant, showing modulation with balance perturbations and cognitive load [18]. For instance, increased cognitive workload can lead to an increase in frontal theta power and a decrease in parietal alpha power [20]. Functional connectivity analysis, which assesses the coordination between different brain regions, demonstrates that connections within brain networks are reconfigured during balance tasks, especially under dual-task conditions or with increasing difficulty, involving changes in connectivity in delta, theta, alpha, and beta bands [21].
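The band-power features mentioned above (frontal theta, parietal alpha, beta) can be computed with standard spectral estimation. The sketch below uses Welch's method from SciPy on placeholder single-channel data; sampling rate, duration, and band limits are illustrative assumptions rather than values taken from the cited studies.

```python
import numpy as np
from scipy.signal import welch

fs = 250                              # sampling rate in Hz (placeholder)
eeg = np.random.randn(60 * fs)        # 60 s of one EEG channel (placeholder data)

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)

def band_power(freqs, psd, lo, hi):
    """Integrate the power spectral density between lo and hi Hz."""
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])

theta = band_power(freqs, psd, 4, 8)   # frontal theta tends to rise with workload
alpha = band_power(freqs, psd, 8, 12)  # parietal alpha tends to fall with workload
beta  = band_power(freqs, psd, 13, 30)
print(theta, alpha, beta)
```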
These EEG insights are being applied in various research areas, including investigating the effects of interventions like cervical traction on brain activity in digital device users [17], exploring how different body postures (sitting vs. standing) modulate interoceptive processing and visual processing [22], and understanding the influence of posture and environment (e.g., Virtual Reality) on cognitive state decoding for passive Brain-Computer Interfaces (BCIs) [20]. Studies have successfully classified the direction of postural stability changes from EEG data, even in individuals with neurological conditions like chronic stroke [19]. Furthermore, the reliability of EEG signals like the N1 potential is being characterized to determine their potential as biomarkers for balance health [14]. Challenges in these applications include dealing with movement-related artifacts in mobile or standing conditions and ensuring the robustness of mental state decoding across different contexts [20].

5.1.2. Movement and Muscular Activity

Researchers have widely explored how EEG signals can be used to understand human movement, particularly in the upper limbs—hands, wrists, arms, and shoulders [23,24,25,26,27,28,29,30,31]. A central goal has been to decode how the hand moves through space, including its position, speed, and motion path [23,25,32,33,34]. Other studies focused on recognizing different types of hand actions, like grasping or pushing, and the forces involved [27,30,35].
Movements like reaching, flexing the wrist or arm, and finger tapping were also analyzed, along with more complex or continuous actions split into smaller motion units [24,28,29,30,36,37,38]. Although less common, similar decoding methods were applied to the lower limbs, especially to detect when a movement begins or to classify walking-related tasks [25,26,31,34,39].
Besides identifying the type of movement, researchers also decoded how fast a limb moved, how strong the motion was, and in what direction it went [23,24,25,33,34,35,36,40].
Several EEG patterns were key for this decoding. For instance, slow brain waves in the delta band (below 4 Hz) often carry important information about how limbs move [23,30,34,41]. These slow waves also responded to planning and starting a movement [41,42]. Other frequency bands, like theta (4–8 Hz), beta, and low-gamma (up to 40 Hz), also contributed in specific cases [23,28,42].
Another useful signal was the movement-related cortical potential (MRCP), which appears just before and during voluntary movement. MRCPs helped detect fast, planned actions known as ballistic movements and provided clues about movement direction and speed [28,35,38,41,43].
In terms of results, EEG systems showed strong performance in classifying movement. For example, detecting when someone moved their arm versus staying still reached up to 88.94% accuracy [24], and identifying movement direction in multi-choice tasks reached about 80% [31,35]. Even low-cost, commercial EEG devices performed well, with accuracies around 73% in some tasks [33,34]. Deep learning methods significantly improved these results, achieving up to 99% accuracy in decoding different movement features [30].
Continuous tracking of movements—estimating motion over time rather than just classifying it—is clearly more difficult. On average, studies reported a moderate correlation (around 0.46) between decoded and actual movements [44]. More advanced methods, like Kalman filters or deep learning, improved this to about 0.5–0.57 [32,34].
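For intuition on how continuous decoding performance is quantified, the sketch below fits a simple linear (ridge) decoder to placeholder EEG-derived features and scores it with the Pearson correlation between decoded and measured kinematics, the metric behind the figures cited above. The data, feature construction, and regularization are illustrative assumptions, not the methods of any specific study.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Placeholder design: 5000 time samples, 64 EEG-derived features (e.g., lagged
# delta-band amplitudes), and a 1-D hand velocity trace to reconstruct.
rng = np.random.default_rng(1)
X = rng.standard_normal((5000, 64))
true_velocity = X[:, :5].sum(axis=1) + 0.5 * rng.standard_normal(5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, true_velocity, test_size=0.25, shuffle=False)

decoder = Ridge(alpha=1.0).fit(X_tr, y_tr)
predicted = decoder.predict(X_te)

# Decoding quality is reported as the correlation between decoded and measured
# kinematics (around 0.46 on average in the reviewed studies).
r = np.corrcoef(predicted, y_te)[0, 1]
print(f"decoded-vs-actual correlation: r = {r:.2f}")
```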
The analyzed studies reported tasks involving reaching, following targets, or self-initiated actions [23,28,32,33,34]. Practical uses include rehabilitation for motor impairments [24,25,31,33,43], controlling prosthetic limbs and exoskeletons [24,25,30,32,38,39,44], and developing more natural brain-computer interfaces [23,24,25].

5.1.3. Fine Motor and Dexterity

Recent research has shown that EEG signals can successfully distinguish different fine motor tasks by analyzing brain activity patterns alongside behavioral data [45,46]. For example, tasks such as sinusoidal versus steady force tracking, performed with either hand, can be classified with high accuracy using EEG combined with force measurements, for both novices and experts [45]. Across individuals, classification results consistently exceeded chance levels, indicating reliable extraction of task-relevant brain features [45,46]. Similarly, distinguishing between left and right voluntary finger movements using single-trial EEG data has been achieved with accuracies sometimes exceeding 90%, based on spatial and temporal signal characteristics [46]. However, achieving such high accuracy with single-trial EEG remains challenging due to low signal-to-noise ratios (SNR) [46].
Beyond general task classification, efforts have targeted identifying specific finger movements in 2-, 3-, 4-, and 5-class separation problems [47,48,49,50,51]. Although overlapping brain activity for different fingers complicates this task, EEG-based decoding is possible. Studies using ultra-high-density (UHD) EEG, which offers more electrodes and better spatial resolution, report improved accuracy in classifying individual finger movements compared to traditional EEG setups [47,49], with one study reporting around 81% accuracy when distinguishing thumb from little finger movements with UHD EEG [49].
Research has also progressed toward decoding detailed movement parameters like speed and hand position. Visually guided reaching tasks have been successfully classified for direction, speed, and force using EEG [33]. Even with commercially available mobile EEG systems, movement speed classification accuracies reached about 73%, demonstrating practical feasibility [33]. Furthermore, decoding levels of attempted finger extension (low, medium, high effort) is possible from EEG signals, including in stroke patients unable to physically perform the movements [52].
Several key EEG features support effective decoding of fine motor control. Event-related desynchronizations (ERDs) and synchronizations (ERSs) reflect decreases and increases in power in specific frequency bands, particularly the mu/alpha (8–13 Hz) and beta (13–30 Hz) bands, during motor tasks and imagery [33,46,47,50,52]. These oscillatory patterns are essential in identifying individual finger movements and graded effort levels [47,52]. MRCPs provide additional information related to movement initiation and execution, including finger flexion and extension [33,50]. Overall, spectral power features across theta, alpha, and beta bands, and the spatial distribution of these signals, play major roles in decoding [33,47,48,52,53,54]. Some studies have also explored EEG phase patterns, which can outperform amplitude-based methods in classifying finger movements [48].
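ERD/ERS is conventionally expressed as the percentage change in band power relative to a pre-movement baseline; negative values indicate desynchronization and positive values synchronization. The sketch below applies that convention to placeholder mu-band epochs; the sampling rate, trial counts, and data are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

fs = 250
# Placeholder single-channel epochs: 40 trials, 2 s baseline and 2 s movement.
rng = np.random.default_rng(2)
baseline = rng.standard_normal((40, 2 * fs))
movement = 0.8 * rng.standard_normal((40, 2 * fs))   # reduced amplitude mimics mu ERD

def mu_power(epochs):
    """Mean 8-13 Hz power across trials for one channel."""
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    mask = (freqs >= 8) & (freqs <= 13)
    return psd[:, mask].mean()

# Percentage change from baseline: negative = ERD, positive = ERS.
erd_percent = 100 * (mu_power(movement) - mu_power(baseline)) / mu_power(baseline)
print(f"mu-band ERD/ERS: {erd_percent:.1f} %")
```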
Factors influencing decoding success include the EEG system’s spatial resolution; UHD EEG systems provide better differentiation of signals from closely spaced motor areas, though volume conduction effects may limit gains compared to lower-density setups [49,50]. The context of expertise matters as well—classification improves when tasks closely reflect the expert’s real-world activities [45]. Individual differences in EEG patterns, especially in experts, complicate group-level decoding but underline the specialized neural adaptations from training [45,46]. Additionally, the choice of machine learning techniques and feature extraction methods strongly affects performance, with approaches like Support Vector Machines (SVM), Deep Learning, and Riemannian geometry-based methods showing promise [33,45,46,47,48,50,53,55,56,57]. Increasing the amount of training data also improves decoding accuracy [50].

5.1.4. Breathing Control

Breathing, while primarily a brainstem-regulated process, can also be consciously modulated and engages distributed cortical networks [58,59]. Recent studies emphasize not only the role of breathing in sustaining life but also its impact on cognition, emotion, and motor control [60].
A range of breathing tasks has been explored in EEG studies, each shedding light on different neural dynamics. One line of research has focused on slow, controlled breathing and breath-holding. In these tasks, specific respiratory phases such as inhalation, exhalation, and their corresponding holds were examined in isolation [58]. Other investigations have used inspiratory occlusions, resistive loads, or voluntary breath-holding to study respiratory control under more challenging conditions [61,62]. Voluntary breathing and its distinction from automatic respiration have also been a focal point [63].
Several EEG signal patterns and features have emerged as especially relevant. Theta-band functional connectivity, in particular, has been identified as a discriminative marker for respiratory phase classification. One study found this signal feature to be highly effective in distinguishing between inhale, exhale, and breath-hold states using EEG connectivity patterns across 61 scalp electrodes [58]. Respiratory-Related Evoked Potentials (RREPs), measured in response to mechanical stimuli such as airway occlusions, provide a window into the sensory processing of respiratory signals [61]. Another set of findings demonstrated respiration-entrained brain oscillations that are widespread across cortical areas, highlighting the brain’s capacity to synchronize with the rhythm of breathing [60].
Low-frequency EEG components, particularly in the sub-2 Hz range, have also been linked to voluntary breathing. These were found to increase in the frontal and right-parietal areas during conscious respiratory control and correlated positively with breathing intensity as measured by phase locking and entropy metrics [63]. Similarly, EEG power fluctuations in the delta (1–3 Hz) and alpha (8–13 Hz) bands were observed in response to breath-holding, with hypercapnia playing a key modulatory role [62]. Other novel approaches, such as cycle-frequency (C-F) analysis, have improved the temporal resolution and interpretability of EEG signals associated with cyclic respiratory patterns [64].
Decoding accuracy varies across studies, depending on the complexity of the task and the EEG features used. The most notable performance was achieved using theta-band functional connectivity features, where a classifier reached an accuracy of 95.1% in distinguishing respiratory phases [58]. Another study focused on detecting respiratory discomfort achieved a classification area under the curve (AUC) of 0.85 (where a perfect classification represents an AUC value of 1), which increased to 0.89 when EEG data were fused with head accelerometry and smoothed over longer windows [65].
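To illustrate the kind of connectivity-based pipeline behind these results, the sketch below computes theta-band phase-locking values (PLV) between channel pairs and feeds them to an SVM classifier for a three-class respiratory-phase problem. The channel count, epoch layout, and classifier choice are illustrative assumptions; the cited study used its own connectivity measures over 61 electrodes.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

fs = 250
b, a = butter(4, [4, 8], btype="bandpass", fs=fs)   # theta band

def theta_plv_features(epoch):
    """Phase-locking value between all channel pairs in the theta band.
    epoch: array of shape (n_channels, n_samples)."""
    phase = np.angle(hilbert(filtfilt(b, a, epoch, axis=-1), axis=-1))
    n_ch = epoch.shape[0]
    feats = []
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            plv = np.abs(np.mean(np.exp(1j * (phase[i] - phase[j]))))
            feats.append(plv)
    return np.array(feats)

# Placeholder epochs: 90 trials x 8 channels x 2 s, labeled inhale/exhale/hold.
rng = np.random.default_rng(3)
epochs = rng.standard_normal((90, 8, 2 * fs))
labels = np.repeat([0, 1, 2], 30)

X = np.array([theta_plv_features(ep) for ep in epochs])
print("cross-validated accuracy:", cross_val_score(SVC(), X, labels, cv=5).mean())
```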
Applications of these findings span both clinical and cognitive domains. In a clinical context, EEG-based decoding of breathing is being explored for the development of brain-ventilator interfaces, which could detect patient discomfort in mechanically ventilated individuals and improve patient-ventilator synchrony [65]. Studies have also emphasized the relevance of breathing-related EEG features for biofeedback training, cognitive load monitoring, and assessing the neural impact of respiratory diseases such as COPD, asthma, and sleep apnea [58,61,63]. Furthermore, breathing tasks are being used to study fundamental brain functions, such as attention, emotion, and memory, by leveraging respiration as a modulator of cortical oscillations [60,66].

5.1.5. Head and Facial Movement

Our search for studies focused on EEG decoding of head, tongue, and facial movement yielded very few results, with only three studies satisfying the inclusion criteria.
In terms of the types of motor tasks studied, one work focused on decoding tongue movements in four directions (left, right, up, and down) from pre-movement EEG activity [67]. Participants in this study were ten able-bodied individuals who performed tongue movements while EEG data were recorded. The analysis excluded actual movement-related artifacts by focusing on signals before movement onset. Another study targeted head yaw rotations—left and right—triggered by visual cues [68]. This work also involved ten participants and sought to establish a mapping between EEG signals and head position, aiming to enable movement recognition for human-computer interaction tasks, including potential driving applications. A third study explored the detection of brain signals associated with both right-hand and tongue movements using a low-cost EEG system positioned around the ear [69]. Here, the aim was to assess whether such a minimalistic EEG setup could effectively support movement classification for control and rehabilitation purposes.
Several EEG signal patterns and features have proven relevant across these studies. In the case of tongue movements, decoding was based largely on MRCPs and sensorimotor rhythms (SMRs), particularly those detectable before movement onset [67]. MRCPs showed lateralized activation patterns: leftward movements had greater negativity in the right hemisphere and vice versa, while vertical movements displayed differences in amplitude [67]. Features extracted for classification included temporal, spectral, entropy-based, and template-based measures, with temporal and template features offering the best performance [67]. In the head movement study, EEG signals were primarily recorded from occipital and parietal regions, leveraging the roles of these areas in visual processing and motor coordination [68]. The around-ear EEG study also employed MRCPs and SMRs, including event-related desynchronization/synchronization in the mu and beta frequency bands [69], with both temporal and spectral features used for classification.
Results across these studies demonstrate varying levels of decoding accuracy, depending on the movement type, number of classes, and EEG configuration. For tongue movement detection versus idle, accuracies ranged from 91.7% to 95.3% using a linear discriminant analysis (LDA) classifier, with rightward movements being the most accurately detected [67]. When classifying between multiple movement types, accuracies decreased as the number of classes increased: 62.6% for four classes, 75.6% for three (left, right, up), and 87.7% for two (left and right) [67]. LDA outperformed other classifiers like SVM, random forests (RF), and multilayer perceptrons in these tasks [67]. In the head movement study, classification accuracy was evaluated using correlation coefficients rather than percentage accuracy. Within-subject training and testing yielded strong correlations, up to r = 0.98, but performance dropped sharply in cross-subject evaluations [68]. In the around-ear EEG study, classification of tongue movements achieved the highest median accuracy at 83%, followed by hand movements at 73% (for control purposes) and 70% (for rehabilitation purposes) [69]. Classifier performance was generally consistent across LDA, SVM, and RF, with kNN showing poorer results [69].
These findings are being explored in distinct contexts and applications. Tongue movement decoding has significant implications for BCIs intended for individuals with high-level spinal cord injuries or ALS, who may have retained tongue control but limited or no hand mobility [67]. The proximity of the tongue’s cortical representation to the ears suggests the possibility of using aesthetically unobtrusive, minimal EEG headsets for such applications [67]. Head movement decoding is aimed at broader human-computer interaction scenarios, including vehicle or wheelchair control, where users might need to issue directional commands without using their limbs [68]. The around-ear EEG approach aligns with efforts to make BCIs more practical, low-cost, and socially acceptable. This is especially important for long-term rehabilitation use or everyday assistive control, where bulky or conspicuous equipment can be a barrier [69]. The study also emphasizes the need for improved electrode technologies and validation in real-world, online settings involving motor-impaired users [69].
Notably, across the referenced sources, no direct research was identified on decoding general facial muscle movement from EEG signals. While facial EMG and eye blinking have been mentioned as possible control methods [67], these approaches do not involve decoding motor intentions for facial expressions via EEG, as is done for tongue and head movements.

5.1.6. Movement Intention

Research in this domain has studied various types of motor-related tasks. Primarily, the focus is on decoding voluntary motion intentions, including both imagined and executed movements. These tasks are embedded in experimental setups designed to isolate brain activity preceding movement, such as self-paced or cue-based actions, as well as tasks incorporating rhythmic temporal prediction to enhance anticipatory signals [70,71]. Studies have also explored spontaneous and self-initiated movements, revealing distinct neural patterns compared to externally cued tasks [71,72].
Regarding the most prominent EEG patterns and features, movement-related potentials (MRPs) and the Bereitschaftspotential (BP), also known as the readiness potential (a slow-building negative electrical potential that occurs before voluntary, self-initiated movement), are critical markers of preparatory activity, especially in low-frequency bands [71,73,74]. ERS and ERD in the beta and alpha frequency ranges are also consistently observed. For example, beta ERD in motor and frontal areas is sensitive to temporal cues and contributes significantly to decoding accuracy [70]. Oscillatory activity in the mu, alpha, and beta bands plays a central role, especially in motor imagery tasks [31,75]. Additionally, potentials such as the contingent negative variation (CNV), P300, and N1-P2/N2 complexes are linked to movement anticipation and timing [74,75].
Studies have reported high decoding accuracies, with notable improvements when incorporating temporal prediction or preparatory movement states. For example, a time-synchronized task with rhythmic prediction achieved left-right movement decoding accuracies of 89.71% using CSP and 97.30% with Riemann tangent space [70]. A spatio-temporal deep learning model reported 98.3% accuracy on a large multiclass dataset [76], while a brain typing system reached 93% accuracy for selecting among five commands [76]. Time series shapelet-based methods achieved an average F1-score of 0.82 in ankle movement detection, with low latency and a good true positive rate in pseudo-online settings [74]. Introducing a preparatory state in movement tasks improved classification accuracy from 78.92% to 83.59% and enhanced consistency in comparing spontaneous premovement and prepared premovement [71].
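The Riemannian tangent-space approach mentioned above can be sketched with the pyriemann package: trial-wise spatial covariance matrices are projected to a tangent space and classified with a standard linear model. The data shapes, covariance estimator, and classifier below are illustrative assumptions and not the exact pipeline of the cited work.

```python
import numpy as np
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Placeholder epochs: 100 pre-movement trials, 32 channels, 1 s at 250 Hz,
# labeled as left- vs. right-hand movement intention.
rng = np.random.default_rng(4)
X = rng.standard_normal((100, 32, 250))
y = np.repeat([0, 1], 50)

clf = make_pipeline(
    Covariances(estimator="oas"),    # trial-wise spatial covariance matrices
    TangentSpace(metric="riemann"),  # project SPD matrices to a Euclidean tangent space
    LogisticRegression(max_iter=1000),
)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```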
These decoding advances have been applied in a variety of contexts, primarily in neurorehabilitation and assistive technologies. BCIs using intention classification have been integrated into systems for post-stroke therapy, where detected movement intentions trigger neuromodulatory devices to promote plasticity [74]. They are also used for controlling robotic limbs and assistive devices such as wheelchairs, enabling users with severe motor impairments to regain autonomy [31,76]. Emerging applications explore complex actions, including bimanual tasks like self-feeding, and propose frameworks for robust, user-specific systems capable of decoding multiple simultaneous intentions, such as motion and timing [70,75]. Future directions aim to enhance the naturalness, usability, and robustness of BCIs in real-world scenarios, emphasizing the need for distraction-resilient and multi-effector systems [31].
Beyond decoding intentional actions, studies have investigated the neural basis of non-intentional or spontaneous movements, as well as executed versus imagined actions. This distinction is particularly valuable in clinical assessments for individuals with disorders of consciousness or severe communicative limitations [77]. Here, EEG-based models are being developed to identify intentionality in neural activity, providing critical information in contexts where behavioral responses are absent [77]. Comparisons between imagined and executed movements have revealed differential neural correlates that inform the design of more adaptive BCIs [31,73].
In addition, research on spontaneous movements—those not prompted by external cues—has highlighted unique EEG features, such as variations in MRCP and ERD patterns, that differ significantly from prepared movements [71]. These insights could support the development of asynchronous BCIs that operate without the need for fixed external stimuli, thereby increasing the flexibility and autonomy of users [72,74].

5.1.7. Coordination and Movement Fluidity

Decoding movement coordination from EEG signals—especially bimanual movements—is an area of growing interest within brain-computer interface (BCI) research, with promising applications in motor enhancement and neurorehabilitation [78]. While much of the current work has focused on decoding movements of a single hand [79], there is increasing attention on bimanual motor tasks, which are essential for performing daily activities and achieving comprehensive functional recovery [80].
Recent advances in this field have explored a variety of experimental paradigms. These include decoding coordinated spatial directions during task-oriented bimanual movements [79], comparing simultaneous versus sequential movements toward the same target [81], and decoding continuous movement parameters, such as position, velocity, and force, rather than simply classifying discrete tasks [80]. Recent studies have applied advanced deep learning methods to improve decoding accuracy. For example, hybrid models that combine convolutional neural networks (CNNs) with bidirectional long short-term memory (BiLSTM) networks have been used to extract complex spatiotemporal features from EEG signals. These approaches have demonstrated the potential feasibility of decoding coordinated movement directions in bimanual tasks [79].
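To make the CNN + BiLSTM idea concrete, the PyTorch sketch below stacks a temporal/spatial convolution front end with a bidirectional LSTM and a linear classifier for epoch-wise decoding (e.g., four coordinated movement directions). Layer sizes, channel counts, and the class count are illustrative assumptions, not the architecture of the cited study.

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """Illustrative CNN + bidirectional LSTM decoder for epoch-wise EEG classification."""
    def __init__(self, n_channels=32, n_classes=4):
        super().__init__()
        # Temporal convolution along each channel, then spatial mixing across channels.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 4)),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * 64, n_classes)

    def forward(self, x):                    # x: (batch, channels, samples)
        x = x.unsqueeze(1)                   # -> (batch, 1, channels, samples)
        x = self.cnn(x)                      # -> (batch, 32, 1, samples/4)
        x = x.squeeze(2).permute(0, 2, 1)    # -> (batch, time, features)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])           # classify from the last time step

model = CNNBiLSTM()
dummy = torch.randn(8, 32, 500)              # 8 epochs, 32 channels, 2 s at 250 Hz
print(model(dummy).shape)                    # -> torch.Size([8, 4])
```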
Bimanual movements exhibit distinct neural signatures compared to unimanual movements. Coordinated bimanual tasks typically show bilateral event-related desynchronization (ERD), while unimanual tasks tend to elicit ERD primarily in the contralateral hemisphere [79]. Furthermore, individual differences in motor abilities—such as hand dexterity (measured by the Purdue Pegboard Test) and motor imagery skills (assessed with the Movement Imagery Questionnaire-3)—are significantly associated with specific EEG patterns, particularly alpha-band relative ERD [78]. EEG-based dynamical network analyses have also highlighted neural markers of visual-motor coordination in both the alpha and gamma frequency bands, which are associated with motor control and visual processing [82].
These EEG decoding strategies are being increasingly applied in the context of neurorehabilitation, particularly for individuals with motor impairments due to stroke or spinal cord injury (SCI). BCI-based therapies are gaining recognition for their potential to enhance upper limb recovery, expand movement range, and support the execution of complex bimanual tasks. Ultimately, these technologies aim to empower patients to regain independence in activities of daily living (ADL) and to control external assistive devices—such as robotic arms, prosthetics, exoskeletons, robotic gloves, and virtual avatars [80,81].

5.2. Tempo and Rhythm

Contemporary research into the neural mechanisms of rhythm perception and synchronization has advanced significantly through sophisticated EEG analysis methodologies that exploit the technology’s high temporal resolution, enabling deeper insights into rhythmic cognition and associated neural dynamics.
State-of-the-art EEG research employs diverse analytical frameworks, prominently featuring steady-state evoked potentials (SSEPs), which reliably capture stable neural responses at specific rhythmic frequencies and harmonics [83,84,85,86]. Additionally, time-frequency analyses examining alpha, beta, and mu frequency bands provide insights into rhythmic anticipation, motor preparation, and entrainment [70,87,88,89,90]. Recent innovations include autocorrelation-based methods for detecting rhythmic periodicities in noisy EEG data, enhancing methodological robustness [91]. Spatial filtering and advanced machine learning techniques, such as Random Forest and k-nearest neighbor (kNN) algorithms, have also been effectively employed for decoding beat frequencies from naturalistic music stimuli [92].
The studied motor and cognitive tasks vary broadly, including externally paced sensorimotor synchronization (SMS), self-paced tapping, passive listening, and motor imagery paradigms [83,88,89,93]. Tasks have been designed to differentiate temporal and movement intentions, enabling precise decoding of compound cognitive-motor intentions [70]. Studies have used ambiguous rhythms with contexts inducing specific metrical interpretations to link subjective beat perception with neural correlates [86]. Imagined rhythmic tasks without actual physical performance provided opportunities to examine covert motor system involvement, revealing motor-to-auditory information flows and hierarchical metrical processing [93,94]. Additionally, experimental paradigms investigating simultaneous processing of competing rhythms and exploring auditory versus visual modality-specific motor entrainment have enriched the understanding of multimodal rhythmic cognition [85,88,90].
Analyses have identified specific EEG signal patterns for decoding rhythmic tasks. SSEPs consistently indicate neural entrainment to periodic auditory stimuli, and their amplitude has been linked to conscious beat perception even in passive listening scenarios [83,84,85,86]. Oscillatory patterns, particularly alpha-beta event-related desynchronization (ERD), are robust indicators of temporal anticipation and motor execution [70,87]. Mu rhythm modulations, isolated through Independent Component Analysis (ICA), have provided crucial evidence for motor system activation during rhythm perception without overt movement, highlighting the topographic organization within the somatomotor cortex [88,90]. Machine learning models have utilized spectral band power features derived from EEG segments to effectively decode dominant beat frequencies, further demonstrating neural tracking of musical rhythm [92].
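The frequency-tagging logic behind SSEP analyses can be illustrated briefly: take the spectrum of the epoch-averaged EEG and read out the amplitude at the beat frequency and its harmonics, subtracting neighboring bins as a noise estimate. The beat frequency, epoch length, and data below are placeholders, not values from the cited studies.

```python
import numpy as np

fs = 250
# Placeholder: average of many EEG epochs recorded while listening to a rhythm
# with a 2.4 Hz beat; 20 s per epoch gives 0.05 Hz frequency resolution.
rng = np.random.default_rng(5)
t = np.arange(20 * fs) / fs
avg_eeg = 0.3 * np.sin(2 * np.pi * 2.4 * t) + rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(avg_eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

def ssep_amplitude(target_hz, n_neighbors=5):
    """Amplitude at target_hz minus the mean of neighboring bins (noise estimate)."""
    idx = np.argmin(np.abs(freqs - target_hz))
    neighbors = np.r_[spectrum[idx - n_neighbors - 1:idx - 1],
                      spectrum[idx + 2:idx + n_neighbors + 2]]
    return spectrum[idx] - neighbors.mean()

for f0 in (2.4, 4.8, 7.2):   # beat frequency and its first two harmonics
    print(f"{f0:.1f} Hz: {ssep_amplitude(f0):.3f}")
```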
Empirical findings highlight the complex and nuanced nature of neural rhythmic processing. Notably, distinct brain regions are engaged during externally synchronized versus self-paced tapping, involving the inferior frontal gyrus and bilateral inferior parietal lobules, respectively, underscoring different neural mechanisms for internal and external rhythmic timing [83]. Studies have indicated stronger motor entrainment for visual rather than auditory rhythms, challenging previous assumptions regarding modality dominance [88]. Additionally, higher SSEP amplitudes were observed at frequencies matching consciously perceived metrical patterns, even without deliberate motor planning, emphasizing a neural-subjective link in rhythm perception [86]. High accuracy (88.51%) in decoding complex cognitive-motor intentions has been achieved, highlighting the predictive power of rhythmic temporal expectations [70]. EEG decoding performance has also demonstrated the benefit of utilizing longer EEG segments and dense spatial data for classifying the dominant beat frequency of naturalistic music, achieving accuracy significantly above chance [92].
The explored contexts and applications demonstrate extensive practical implications. EEG-based rhythm decoding methods offer potential advancements in BCIs, providing nuanced motor and cognitive control for assistive technologies [70,92]. Autocorrelation and SSEP analyses expand applicability to developmental populations and rehabilitation contexts, enhancing rhythm-based therapeutic strategies [86,91]. Improved methodological rigor and statistical reporting standards have increased reproducibility and comparability across EEG studies, strengthening research quality [95]. Furthermore, the identification of motor system involvement in rhythm perception, even without overt movement, enriches theories of rhythmic cognition across domains such as music, dance, language processing, and visual rhythm perception [86,89,90,93].

5.3. Pitch Recognition

Recent work shows that pitch content can be decoded from EEG both during overt listening and during silent imagery. In one study, four Bach melodies were identified at the level of individual bars, with decoding accuracies above 70% for several participants; performance was driven mainly by low-frequency activity (< 1 Hz) that followed the pitch contour [96]. A separate experiment extended the approach to isolated tones: seven diatonic pitches imagined without acoustic input were classified above chance (mean 35%, chance = 14%), with the most informative features located in left–right hemispheric differences in the beta and low-gamma range [97].
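A minimal sketch of how such imagined-pitch classification might be set up is given below: beta and low-gamma band power is computed per channel, left-minus-right differences are taken for a few hypothetical homologous electrode pairs, and a linear SVM is cross-validated on imagined-pitch labels. The electrode pairs, band edges, and synthetic data are illustrative assumptions rather than the exact configuration used in the cited study.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Hypothetical left/right electrode index pairs (depends on the actual montage),
# e.g. (C3, C4), (T7, T8), (P3, P4).
LR_PAIRS = [(0, 1), (2, 3), (4, 5)]
BANDS = {"beta": (13, 30), "low_gamma": (30, 50)}

def hemispheric_diff_features(epochs, fs):
    """Left-minus-right log band-power differences, one value per pair and band."""
    feats = []
    for epoch in epochs:                                  # epoch: (n_channels, n_samples)
        freqs, psd = welch(epoch, fs=fs, nperseg=fs, axis=-1)
        row = []
        for lo, hi in BANDS.values():
            bp = psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
            row.extend(np.log(bp[l] + 1e-12) - np.log(bp[r] + 1e-12) for l, r in LR_PAIRS)
        feats.append(row)
    return np.array(feats)

# Placeholder data: 140 imagined-pitch trials (6 channels, 2 s at 256 Hz), seven pitch labels.
rng = np.random.default_rng(1)
X = rng.standard_normal((140, 6, 512))
y = rng.integers(0, 7, 140)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print(cross_val_score(clf, hemispheric_diff_features(X, fs=256), y, cv=5).mean())
```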
The electrophysiological correlates of pitch expectation and pitch surprisal have been tracked across multiple frequency bands, particularly in the alpha (8–12 Hz) and beta (13–30 Hz) ranges. Musicians demonstrate heightened encoding of pitch entropy and surprisal, particularly in the beta band, compared to non-musicians, indicating that musical expertise enhances cortical sensitivity to melodic structure [98]. Continuous recordings obtained with tonal and atonal excerpts revealed stronger envelope tracking—and stronger tracking of stochastic pitch surprisal—when the melodic structure was less predictable, consistent with an automatic allocation of cortical resources under heightened uncertainty [95].
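The encoding analyses referenced here are typically implemented as temporal response functions, i.e., regularized linear mappings from time-lagged stimulus regressors (envelope, surprisal) onto the continuous EEG. The sketch below shows the core idea with ridge regression; the regressor construction, lag window, and data are simplified placeholders, not the cited studies’ exact pipelines.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lagged_design(stimulus, max_lag):
    """Stack time-lagged copies of a stimulus feature (lags 0 .. max_lag samples)."""
    n = len(stimulus)
    X = np.zeros((n, max_lag + 1))
    for lag in range(max_lag + 1):
        X[lag:, lag] = stimulus[: n - lag]
    return X

# Hypothetical regressors sampled at the EEG rate (64 Hz): an acoustic envelope and
# a note-level pitch-surprisal series (zero between note onsets), plus one EEG channel.
fs = 64
rng = np.random.default_rng(2)
envelope = np.abs(rng.standard_normal(fs * 120))       # 2 min of "audio envelope"
surprisal = np.zeros_like(envelope)
surprisal[::fs] = rng.random(120)                      # one surprisal value per "note"
eeg = rng.standard_normal(len(envelope))               # one EEG channel (placeholder)

max_lag = int(0.4 * fs)                                # lags spanning 0-400 ms
X = np.hstack([lagged_design(envelope, max_lag), lagged_design(surprisal, max_lag)])

trf = Ridge(alpha=1.0).fit(X, eeg)                     # TRF weights across lags and regressors
pred = trf.predict(X)
print("reconstruction r =", np.corrcoef(pred, eeg)[0, 1])
```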
Pitch imagination tasks—such as pitch naming without auditory input—elicit measurable centro-parietal positivity (CPP) in EEG signals, previously associated with decision-making and confidence in perceptual judgments. However, CPP build-up appears to reflect perceptual evidence accumulation rather than metacognitive confidence alone, underscoring the potential utility of EEG in quantifying covert pitch processing [99].
Although the analyzed studies did not evaluate specific applications of pitch recognition, the available evidence shows that EEG can capture both the categorical content of pitch (individual tones) and its probabilistic structure (entropy and surprisal), during perception and imagery alike.

5.4. Cognitive Engagement

The Flow state, commonly described as an optimal psychological experience characterized by intense concentration, intrinsic motivation, and effortless involvement in activities, has been extensively studied in psychology and neuroscience. Individuals experiencing Flow report a sense of control, diminished self-consciousness, and an altered perception of time, making this state highly desirable for performance enhancement, learning, and well-being.
Research on EEG-based detection of Flow, engagement, and related cognitive–affective states has expanded rapidly, moving from laboratory-bound paradigms to wearable, multimodal, and even intracranial recordings. Early syntheses already highlighted a convergent emphasis on fronto-central alpha and theta rhythms during Flow, together with reduced medial prefrontal activity, but also pointed to considerable methodological heterogeneity that still hampers direct comparisons across studies [100]. More recent work has addressed these gaps by combining low-cost headsets, peripheral sensors, and transfer-learning pipelines to increase ecological validity and generalisability [101,102]. At the same time, high-resolution recordings using Stereo-EEG or UHD-EEG systems have begun to map the spectro-spatial signatures of self-generated speech, music, or complex motor preparation, extending the state-of-the-art beyond classical stimulus–response designs [103,104].
Empirical evidence of Flow can be observed across a diverse set of behavioural contexts. Cognitive challenges such as mental arithmetic and reading aloud were used to elicit contrasting Flow and non-Flow states [101]. Gameplay remains a popular paradigm, ranging from brief two-minute commercial games [105] to serious educational games that manipulate learning-technology innovations [106] and Tetris variants spanning boredom to overload [107]. Fine-grained motor behaviour has equally featured: repetitive finger presses [108], virtual-reality fingertip force control matched to individual skill [53], and long-tone musical Go/NoGo tasks preceding performance [104]. Naturalistic production modalities—reading stories aloud or playing the violin—have complemented passive listening comparisons [103]. By allowing each participant to select the task that induced the strongest subjective flow, one study further emphasised the personalised nature of the phenomenon [107].
Regarding the most relevant EEG signal features used for decoding the Flow state, absolute or relative band-power in delta (0.5–4 Hz) to gamma (31–50 Hz) ranges remains the principal descriptor [53,107,108]. In fronto-central and parietal sites, moderate alpha power and increased beta–gamma activity have repeatedly marked intensified flow or team-flow experience [53,100]. Spectral ratios such as β/(θ+α) and β/α have been proposed as engagement indices, although their discriminative value varies [105]. Beyond power, coherence and global phase synchrony better capture large-scale organisation: higher alpha and beta coherence accompany high-flow trials [53], and preparation for musical performance shows frequency-specific network flexibility detected through dynamic phase-locking analyses [104]. Convolutional neural networks operating directly on raw Emotiv EPOC signals have outperformed manually engineered features, underscoring the utility of automatic representation learning [101].
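For concreteness, the following sketch computes the β/(θ+α) engagement index mentioned above from Welch power spectra over consecutive EEG windows. The band limits, window length, and channel-averaging choice are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

def engagement_index(window, fs):
    """beta/(theta+alpha) ratio for one EEG window of shape (channels, samples)."""
    freqs, psd = welch(window, fs=fs, nperseg=min(window.shape[-1], fs * 2), axis=-1)

    def band(lo, hi):
        return psd[:, (freqs >= lo) & (freqs < hi)].mean()   # averaged over channels and bins

    theta, alpha, beta = band(4, 8), band(8, 13), band(13, 30)
    return beta / (theta + alpha)

# Example: score consecutive 4-s windows of a placeholder 1-min recording (8 channels, 256 Hz).
fs = 256
recording = np.random.randn(8, fs * 60)
step = fs * 4
scores = [engagement_index(recording[:, i:i + step], fs)
          for i in range(0, recording.shape[1] - step + 1, step)]
print(np.round(scores, 3))
```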
Most studies report that Flow-related states can be decoded above chance. Using the consumer-grade Emotiv EEG, a subject-independent CNN reached 64.97% accuracy, which rose to 75.10% when emotional arousal knowledge was transferred from a pre-analysed (DEAP) dataset [101]. A virtual-reality fingertip task achieved mean within-subject accuracies exceeding 80% in the beta band and comparable performance for coherence measures [53]. In a pooled cross-subject analysis of short game sessions, an SVM driven by combined engagement indices reached 81% accuracy and 80% F1, meeting the benchmark generally deemed sufficient for rehabilitation feedback systems [105]. Portable single-channel devices inevitably yield more modest predictive power; nevertheless, delta, theta, and gamma activity at Fpz explained a significant fraction of variance in subjective flow scores [107]. Where classification is not the goal, signal-averaged MRPs identified tightly synchronous generators over primary and supplementary motor cortices during rhythmic tapping [108], and single-case SEEG demonstrated stronger cortical encoding during first compared with repeated hearings of self-produced speech or music [103].
Application domains mirror this methodological spectrum. Real-time detection of flow promises adaptive work-support systems that eschew self-report biases [101]. In rehabilitation, decoding intrinsic engagement fluctuations could enable closed-loop neurofeedback for fine motor recovery [53], while flexible network markers observed during musical preparation inform theories of motor planning [104]. Education benefits from EEG-based monitoring of attention and flow within serious games, guiding the design of learning technology innovations [106]. Lightweight headbands and single-channel sensors facilitate studies in naturalistic or mobile settings, broadening participation and paving the way for personalised neuroadaptive interfaces [102,107]. Collectively, the literature converges on the feasibility of objective, real-time assessment of flow and engagement, yet also highlights the need for larger, standardised datasets, multi-modal fusion strategies, and rigorous cross-subject validation if these insights are to generalise across users and contexts.

6. Discussion

This review set out to evaluate the extent to which current EEG-based BCI research supports the functional specifications of the TEMP framework. By mapping the empirical findings summarized in Section 5 onto the TEMP feature list introduced in Section 3, three potential feasibility tiers emerge: (i) capabilities that are already technically viable and could be prototyped for ecologically valid experiments in musical contexts; (ii) capabilities that are within experimental reach; and (iii) capabilities that remain largely aspirational and demand substantive advances in EEG decoding, multimodal fusion, and ecological validation. In what follows, we discuss each TEMP pillar in turn and outline a staged development roadmap. Here, it is important to highlight that no statistical meta-synthesis was conducted and that the presented observations are intended to guide future research rather than substitute for experimental validation.
Several aspects of EEG decoding discussed in the literature are already technically mature and could potentially be prototyped in a musical practice context. For instance, discrete classification of bimanual coordination based on EEG markers such as bilateral μ/β ERD and reconfigurations of alpha- and gamma-band visual–motor networks is sufficiently robust to allow real-time feedback. This could potentially support musicians by flagging out-of-sync movements during piano playing, guitar fretting, or string bowing, directly addressing pedagogical needs related to inter-hand synchrony and bilateral motor coordination [79,80]. Similarly, EEG-based classifiers combining spatial patterns of mu- and beta-band ERD/ERS, slow delta–theta oscillations, and movement-related cortical potentials (MRCPs) achieve reliable decoding of multi-directional limb reaches, velocity categories, and coarse muscular effort levels [24,27,29,30]. These capabilities could potentially enable real-time detection and correction of gross kinematic errors such as lateral bow drift or excessive muscular exertion in instrumental practice.
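A representative pipeline for this class of discrete motor decoding combines spatial filtering with a linear classifier. The sketch below uses MNE’s CSP implementation followed by LDA on band-pass filtered epochs; the data shapes, labels, and the bimanual-synchrony framing are placeholders, since the cited studies each use their own feature sets and paradigms.

```python
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

# X: band-pass filtered (e.g. 8-30 Hz) epochs around movement, shape (n_epochs, n_channels, n_times).
# y: binary labels, e.g. synchronous vs. out-of-sync bimanual trials (placeholders).
rng = np.random.default_rng(3)
X = rng.standard_normal((100, 32, 512)).astype(np.float64)
y = rng.integers(0, 2, 100)

pipeline = Pipeline([
    ("csp", CSP(n_components=6, log=True)),              # spatial filters on band-limited epochs
    ("lda", LinearDiscriminantAnalysis()),               # linear classifier on log-variance features
])
print("CV accuracy:", cross_val_score(pipeline, X, y, cv=5).mean())
```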
At the finger level, EEG classifiers currently distinguish individual finger movements on a single hand, suggesting the possibility of real-time identification of fingering mistakes in keyboard or plucked-string instruments [47,48,49]. Additionally, EEG markers have demonstrated the capacity to classify broad levels of finger velocity and muscular effort, which could potentially offer musicians feedback on the consistency of speed and the presence of excessive force during rapid arpeggios or scalar passages [52,54]. With respect to respiratory control, cortical connectivity in the theta-band reliably discriminates inhale, exhale, and breath-hold phases, indicating a possibility for EEG-driven breathing phase indicators—useful for singers and wind instrumentalists to confirm correct timing of inhalations and proper diaphragmatic support [58,60]. Furthermore, EEG decoding of cued movement intention, leveraging signals such as the Bereitschaftspotential and beta-band ERD, presents the potential to detect preparatory movements in advance, thereby alerting musicians to inadvertent gestures such as premature shoulder lifts or unnecessary tension before a movement is fully executed [70,71].
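The respiratory-phase results cited above rest on theta-band functional connectivity. As a rough illustration, the sketch below computes pairwise theta-band phase-locking values for one epoch, which could serve as a connectivity feature vector for a phase classifier. The filter design, band edges, and epoch format are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_plv_features(epoch, fs, lo=4.0, hi=8.0):
    """Theta-band phase-locking value between every channel pair of one epoch.

    epoch: (n_channels, n_samples). Returns the upper-triangle PLVs as a feature vector.
    """
    b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
    phase = np.angle(hilbert(filtfilt(b, a, epoch, axis=-1), axis=-1))
    n_ch = epoch.shape[0]
    plvs = []
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            plvs.append(np.abs(np.mean(np.exp(1j * (phase[i] - phase[j])))))
    return np.array(plvs)

# Example: feature vector for one hypothetical 4-s breathing-phase epoch (8 channels, 256 Hz).
fs = 256
epoch = np.random.randn(8, fs * 4)
print(theta_plv_features(epoch, fs).shape)   # 8*7/2 = 28 pairwise PLVs
```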
Wearable EEG studies have also shown that moderate frontal alpha power in combination with elevated beta–gamma coherence can distinguish high-flow states from disengaged practice episodes in motor and game contexts [53,101]. In short practice sessions, engagement indices derived from these features classify flow versus non-flow states with around 80% accuracy [105], suggesting that the TEMP framework could potentially use these signals to drive adaptive training protocols. For example, technical exercises might automatically accelerate when strong focus is detected or pause when neural signs of overload emerge. These possibilities highlight an emerging potential for EEG-informed feedback to help musicians self-regulate their attention and mental effort during demanding practice sessions.
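Such an adaptive protocol could, in principle, be as simple as a thresholded mapping from a decoded engagement score to metronome tempo. The toy policy below illustrates the idea; the thresholds, step sizes, and tempo range are arbitrary placeholders with no empirical grounding in the reviewed studies.

```python
def adapt_tempo(current_bpm, engagement_score,
                low=0.4, high=0.7, step=4, bpm_range=(40, 160)):
    """Toy policy: speed up when engagement is high, back off when it drops.

    engagement_score: decoded value in [0, 1] (e.g. a normalised beta/(theta+alpha) index).
    Thresholds, step size, and tempo range are arbitrary placeholders, not validated values.
    """
    if engagement_score >= high:
        new_bpm = current_bpm + step          # strong focus: push the exercise slightly faster
    elif engagement_score < low:
        new_bpm = current_bpm - 2 * step      # signs of overload: back off more aggressively
    else:
        new_bpm = current_bpm                 # comfortable zone: hold tempo
    return max(bpm_range[0], min(bpm_range[1], new_bpm))

# Example: a sequence of decoded engagement scores drives the metronome.
bpm = 80
for score in [0.75, 0.8, 0.55, 0.3, 0.72]:
    bpm = adapt_tempo(bpm, score)
    print(f"engagement={score:.2f} -> metronome at {bpm} bpm")
```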
Beyond immediate feasibility, several EEG decoding capabilities have demonstrated promise but still require further research to fully support nuanced musical applications. Continuous decoding of limb and finger trajectories has achieved only moderate accuracy in current EEG studies, limiting its suitability for precise real-time guidance on detailed kinematic control such as smooth bow trajectories, subtle wrist rotations, or intra-finger force distribution [25,32,34,50,53,56]. Likewise, EEG-based decoding of muscular effort, although effective at distinguishing broadly differing effort levels, remains inadequate for capturing the subtle gradations required for fine dynamic control [35,38,41]. Thus, while coarse muscular overactivation during demanding passages might be detectable, the subtle muscular adjustments distinguishing nuanced dynamic variations (e.g., mezzo-forte from forte) are still beyond reliable EEG decoding.
Subtle respiratory dynamics, particularly the fine breath-pressure modulations crucial for nuanced phrasing, also present decoding challenges. EEG markers currently lose sensitivity at finer levels of respiratory effort, suggesting only limited applicability for nuanced control of sustained notes or gradual dynamics such as diminuendi [62,63,65]. Furthermore, EEG-based detection of spontaneous or asynchronous movement intentions, crucial for applications in highly expressive or improvisatory performance contexts, has not yet been validated under the complex motor conditions typical of real-world instrumental practice [74]. Similarly, EEG decoding of internalized tempo and rhythm has advanced significantly, with reliable decoding of dominant beats and subjective metrical interpretations achievable over longer EEG segments; however, the sub-20 ms timing accuracy required for virtuoso-level rhythmic precision still eludes current methods [83,86,88,92]. EEG-based decoding of pitch recognition has shown experimentally promising results, with above-chance classification of both perceived and imagined tones under controlled conditions [96,97,98]. These findings suggest that cortical responses to pitch content, including internal auditory imagery, can be captured and differentiated using non-invasive methods. However, current studies have focused primarily on isolated listening or imagery tasks, typically without concurrent instrumental performance. As a result, the extent to which pitch decoding remains reliable when combined with the complex motor and auditory demands of active playing is still unknown. Further research is needed to determine whether these signals can support real-time feedback during realistic musical practice. While flow-related EEG markers are promising, their generalization across individuals remains limited. Convolutional neural networks trained on one user group require explicit domain adaptation before being applied elsewhere, and low-density or single-channel systems typically capture only coarse engagement rather than the nuanced “sweet-spot” of immersive concentration [101,107].
Several additional EEG decoding capabilities remain entirely aspirational, having not yet been experimentally demonstrated or validated in contexts directly relevant to music practice. Notably, continuous EEG-based decoding of subtle postural shifts or fine alignment adjustments typical of instrumental playing has not been reported. Existing EEG markers, such as the perturbation-evoked N1 response, are inherently limited to detecting discrete balance disturbances rather than tracking ongoing alignment or weight distribution [14,16]. Similarly, comprehensive EEG decoding of detailed facial musculature and expressions, important for instruments involving complex embouchure control or vocalists requiring nuanced facial tension management, has not yet been demonstrated. Current EEG approaches also fail to decode subtle embouchure adjustments or head alignment with sufficient cross-subject generalisation for practical use [68]. Furthermore, EEG-only methods have not achieved the sub-millimetre precision needed to reconstruct finger trajectories and pad pressure distributions required for refining sophisticated articulation techniques or subtle rotational finger adjustments. Finally, although markers of cognitive engagement and Flow state have been identified, a universally applicable, robust EEG signature capturing the nuanced cognitive "sweet spot" of creative musical engagement, independent of individual calibration or task-specific training, has not yet emerged [101,107].
Lastly, it is important to note that EEG signals primarily reflect cortical activity, leaving a substantial portion of subcortical dynamics, such as those involving the basal ganglia, which play a key role in musical practice and sensorimotor control, largely inaccessible to current EEG-based BCIs. This limitation does not preclude the development of the TEMP framework, but it does highlight a promising direction for future research.
An integrated potential feasibility map is presented in Table 3.

7. Conclusions and Future Work

The reviewed literature demonstrates considerable potential for EEG-based BCI applications—conceptualized within the TEMP framework—to enhance high-level instrumental training by delivering detailed, real-time feedback on biomechanical, cognitive, and technical aspects of performance. The TEMP concept, as evidenced by current state-of-the-art EEG technologies, appears technically feasible, particularly in areas such as detecting bimanual coordination, discrete finger movements, general motor intentions, respiratory phases, and cognitive states like flow and engagement.
However, despite promising advancements, substantial technical and practical challenges remain. Real-time continuous decoding of subtle biomechanical adjustments, nuanced muscular effort gradations, fine respiratory control, and precise internalized tempo processing and pitch recognition still requires further research and development. Achieving the millisecond timing precision necessary for professional rhythmic accuracy and nuanced musical articulation is currently beyond the capabilities of available EEG decoding methodologies, and pitch recognition during active playing has, to the best of our knowledge, not been studied. To further advance the TEMP concept, several technological strategies merit exploration. The integration of multimodal sensing approaches, such as camera-based motion tracking and inertial measurement units (IMUs) for precise body and head tracking, could significantly enhance biomechanical awareness and overcome some limitations inherent to EEG. Additionally, hybrid systems combining EEG with electromyography (EMG) could improve detection accuracy for fine motor control and muscular tension, albeit at the risk of overwhelming the musician with an intricate network of sensors. Machine learning algorithms, particularly deep learning models, should continue to be refined for enhanced pattern recognition and predictive capabilities.
In addition, potential barriers to technology acceptance must be addressed. Key challenges include the invasiveness and practicality of EEG setups, user discomfort, and the social stigma of wearing visible neurotech equipment during practice. There are also concerns about data privacy, user consent, and ethical use of biometric and neurological data. User training and education to foster familiarity and trust in BCI systems will be critical for widespread acceptance. Future research should prioritize usability studies, addressing ergonomic design and system integration into existing musical training routines to ensure ease of use and acceptance by musicians and educators alike.
Finally, while the TEMP model offers a promising framework for structuring EEG-BCI-assisted musical practice, it is important to acknowledge that it remains conceptual and speculative. Its formulation is grounded in a critical review of the literature, but has not yet been empirically validated in ecological settings involving professional musicians. This article is therefore intended as a theoretical starting point, proposing functional categories and technical objectives to inform the development of future prototypes. As a next step, we recommend conducting pilot studies with professional musicians, incorporating co-design processes, usability testing, and qualitative analysis of user experience. Such validation with the target population will be essential to evaluate the applicability, relevance, and acceptability of the TEMP model in real-world musical practice.

Author Contributions

Conceptualization, AVP; methodology, AVP, JE, JC, CPV; writing—original draft preparation, all authors; writing—review and editing, all authors. All authors have read and agreed to the published version of the manuscript.

Funding

This work is financed through national funds by FCT - Fundação para a Ciência e a Tecnologia, I.P., in the framework of the projects UIDB/00326/2025, UIDB/04501/2025 and UIDP/04501/2025.

Acknowledgments

The authors would like to acknowledge the use of AI tools, specifically ChatGPT and NotebookLM. These tools were used for summarizing and extracting structured information from the selected articles (title, authors, year, objectives, method, summary of results, summary of conclusions). However, all articles were selected and analyzed entirely by the authors. In addition, ChatGPT was used throughout the document to correct grammar and spelling and, where necessary, to summarize and rephrase for clarity and conciseness. This process was guided and double-checked by the authors, and no generative AI information or insights were used. ChatGPT was also used to assist with LaTeX formatting issues.

Appendix A. Utilized Search Queries

Table A1. Search Queries by Category.
TEMP Feature | Search Query
Posture and Balance (“EEG” OR “brain-computer interface” OR “brain computer interface” OR “BCI”) AND (“posture” OR “alignment” OR “weight distribution” OR “proprioception” OR “kinesthesia” OR “balance”)
Movement and Muscle Activity (“EEG” OR “brain-computer interface” OR “brain computer interface” OR “BCI”) AND (“muscle activity” OR “motor detection” OR “motor execution” OR “movement detection” OR “movement”)
Fine Motor and Dexterity (“EEG” OR “brain-computer interface” OR “brain computer interface” OR “BCI”) AND (“fine motor control” OR “fine motor skills” OR “finger movement” OR “motor dexterity” OR “manual dexterity” OR “finger tapping” OR “precise movement” OR “precision motor tasks” OR “finger control” OR “force” OR “pressure” OR “finger identification”)
Breathing Control (“EEG” OR “brain-computer interface” OR “brain computer interface” OR “BCI”) AND (“breathing” OR “respiration” OR “respiratory control” OR “diaphragm” OR “respiratory effort” OR “respiratory patterns” OR “breath regulation” OR “inhalation” OR “exhalation”)
Head and Facial Movement (“EEG” OR “brain-computer interface” OR “brain computer interface” OR “BCI”) AND (“facial movement” OR “facial muscle activity” OR “facial tension” OR “facial expression” OR “head movement” OR “head posture” OR “head position” OR “head tracking” OR “cranial muscle activity” OR “facial motor control”)
Movement Intention (“EEG” OR “brain-computer interface” OR “brain computer interface” OR “BCI”) AND (“voluntary movement” OR “involuntary movement” OR “motor intention” OR “movement intention” OR “intent detection” OR “reflex movement” OR “automatic motor response” OR “conscious movement” OR “unconscious movement” OR “motor inhibition” OR “motor control” OR “volitional” OR “reflexive movement” OR “intentional movement” OR “purposeful movement” OR “spasmodic movement”)
Coordination and Movement Fluidity (“EEG” OR “brain-computer interface” OR “brain computer interface” OR “BCI”) AND (“motor coordination” OR “movement fluidity”)
Tempo Processing (“EEG” OR “brain-computer interface” OR “brain computer interface” OR “BCI”) AND (“tempo perception” OR “tempo tracking” OR “internal tempo” OR “imagined tempo” OR “motor imagery tempo” OR “rhythm perception” OR “timing perception” OR “sensorimotor timing” OR “mental tempo” OR “temporal processing” OR “beat perception” OR “rhythm processing” OR “timing accuracy”)
Pitch Recognition (“EEG” OR “brain-computer interface” OR “brain computer interface” OR “BCI”) AND (“pitch perception” OR “pitch tracking” OR “pitch recognition” OR “internal pitch” OR “imagined pitch” OR “pitch imagination” OR “pitch imagery” OR “auditory imagery” OR “sensorimotor pitch” OR “mental pitch” OR “pitch processing” OR “melody perception” OR “pitch accuracy”)
Cognitive Engagement (“EEG” OR “brain-computer interface” OR “brain computer interface” OR “BCI”) AND (“flow” OR “musical flow” OR “musical performance” OR “music performance” OR “movement focus” OR “active movement control” OR “automatic performance” OR “performance engagement”)

Appendix B. Included papers highlights

Table A2. Summary of Surveyed Literature
Citation Number | Objectives | BCI Task | Results
14 Quantify the reliability of perturbation-evoked N1 as a biomarker of balance. Repeated EEG recordings during surface translations in young and older adults across days and one year. N1 amplitude and latency showed excellent within-session and test-retest reliability (ICC > 0.9), supporting clinical feasibility.
15 Examine age-related changes in cortical connectivity during dynamic balance. EEG during sway-referenced posturography in young vs. late-middle/older adults. Older adults displayed weaker PCC-to-PMC connectivity that correlated with poorer COP entropy, indicating a shift toward cognitive control of balance.
16 Test whether perturbation N1 reflects cognitive error processing. Whole-body translations with expected vs. opposite directions while EEG recorded. Error trials elicited Pe and ERAS, whereas early N1 remained unchanged, evidencing distinct neural generators for perturbation and error signals.
17 Assess a head-weight support traction workstation’s effect on rest brain activity in heavy digital users. Compare supported traction vs. conventional traction during 5-min seated rest; EEG + EDA. Supported traction increased global alpha power and comfort without elevating high-frequency activity, sustaining alertness.
18 Investigate cortical reorganization after short-term visual-feedback balance training in older adults. Three-day stabilometer program with pre/post EEG and postural metrics. Training reduced RMS sway, increased ventral-pathway alpha power, altered alpha MST topology, and these changes correlated with balance gains.
19 Determine if EEG rhythms encode direction-specific postural perturbations. HD-EEG during four translation directions; CSP-based single-trial classification. Low-frequency (3–10 Hz) features enabled above-chance classification in both healthy and stroke groups, implicating theta in directional encoding.
20 Evaluate passive-BCI workload decoding across posture and VR display contexts. Sparkles mental-arithmetic paradigm while sitting/standing with screen or HMD-VR. Within-context accuracy was stable; adding VR or standing minimally affected SNR, but cross-context transfer reduced classifier performance.
21 Explore auditory-stimulus effects on functional connectivity during balance. Stabilometry + EEG with visual feedback alone vs. with music. Music reduced whole-brain FC (delta–gamma) and decreased balance-quality indices, suggesting cognitive load diversion.
22 Probe posture-dependent modulation of interoceptive processing via HEPs. EEG-HEP during sitting, stable standing, and unstable standing. HEP amplitudes over central sites were lower in standing and further reduced on an unstable surface, independent of cardiac or power-spectrum changes, implying attentional reallocation from interoception to balance control.
23 Develop an interpretable CNN that can decode 2-D hand kinematics from EEG and test within-, cross- and transfer-learning strategies to shorten calibration time Continuous trajectory decoding during a pursuit-tracking task ICNN out-performed previous decoders while keeping model size/training time low; transfer-learning further improved prediction quality
24 Propose an enhanced Regularized Correlation-based Common Spatio-Spectral Patterns (RCCSSP) framework to better separate ipsilateral upper-limb movements in EEG 3-class movement execution classification (right arm, right thumb, rest) Mean accuracy = 88.94%, an 11.66% gain over the best prior method
25 Introduce a Motion Trajectory Reconstruction Transformer (MTRT) that uses joint-geometry constraints to recover full 3-D shoulder–elbow–wrist trajectories Continuous multi-joint trajectory reconstruction (sign-language data) Mean Pearson ρ across 6 joints = 0.94 – highest among all baseline models; NRMSE = 0.159
26 Present a multi-class Filter-Bank Task-Related Component Analysis (mFBTRCA) pipeline for upper-limb movement execution 5- to 7-class movement classification Accuracy = 0.419 ± 0.078 (7 classes) and 0.403 ± 0.071 (5 classes), improving on earlier methods
27 Explore Multilayer Brain Networks (MBNs) combined with MRCP features to decode natural grasp types and kinematic parameters 4-class natural grasp type + binary grasp-taxonomy Four-class grasp accuracy = 60.56%; binary grasp-type and finger-number accuracies ≈ 79%
28 Design a pseudo-online pipeline (ensemble of SVM, EEGNet, Riemannian-SVM) for real-time detection and classification of wrist/finger extensions Movement-vs-rest detection + contra/ipsi classification Detection: TPR = 79.6 ± 8.8%, 3.1 ± 1.2 false positives per minute, 75 ms mean latency; contra/ipsi classification ≈ 67%
29 Investigate whether EEG can distinguish slow vs. fast speeds for eight upper-limb movements using FBCSP and CNN Subject-independent speed classification FBCSP-CNN reached 90% accuracy (κ = 0.80) for shoulder flexion/extension—the best across all movements tested
30 Create a neuro-physiologically interpretable 3D-CNN with topography-preserving inputs to decode reaction-time, mode (active/passive) and 4-direction movements Multi-label movement-component classification Leave-one-subject-out accuracies: RT 79.81%, Active/Passive 81.23%, Direction 82.00%; beats 2D-CNN and LSTM baselines
31 Provide the first comprehensive review of EEG-based motor BCIs for upper-limb movement, covering paradigms, decoding methods, artifact handling and future directions Narrative literature review (no experimental task) Synthesizes state-of-the-art techniques, identifies gaps and proposes research road-map
32 Propose a state-based decoder: CSP-SVM identifies movement axis, followed by axis-specific Gaussian Process Regression for trajectory reconstruction Hybrid discrete (axis) + continuous hand-trajectory decoding Axis classifier accuracy = 97.1%; mean correlation actual-vs-predicted trajectories = 0.54 (orthogonal targets) and 0.37 (random)
33 Demonstrate that an off-the-shelf mobile EEG (Emotiv EPOC) can decode hand-movement speed & position outside the lab and propose a tailored signal-processing pipeline Executed left/right-ward reaches at fast vs slow pace; classify speed and reconstruct continuous x-axis kinematics Speed-class accuracies: 73.36 ± 11.95% (overall); position & speed reconstruction ρ = 0.22–0.57 / 0.26–0.58, validating commercial EEG for real-world BCIs
34 Replace the classic centre-out task with an improved paradigm and create an adaptive decoder-ensemble to boost continuous hand-kinematic decoding Continuous 2-D reaches; decode six parameters (px, py, vx, vy, path length p, speed v) from low-δ EEG Ensemble raised Pearson r by ≈ 75% for directional parameters and ≈ 10% for non-directional ones versus the classic setup (e.g., px: 0.21 vs 0.14)
35 Quantify how a 2-back cognitive distraction degrades decoding and present a Riemannian Manifold–Gaussian NB (RM-GNBC) that is more distraction-robust 3-D hand-movement direction classification with and without distraction RM-GNBC outperformed TSLDA by 6% (p = 0.026); Riemannian approaches showed the smallest accuracy drop
36 Explore steady-state movement-related rhythms (SSMRR) and test whether limb (hand) and movement frequency can be decoded from EEG Rhythmic finger tapping at two frequencies with both hands; four-class (hand × frequency) classification 4-class accuracy = 73.14 ± 15.86% (externally paced) and 66.30 ± 17.26% (self-paced)
37 Introduce a diffusion-adaptation framework with orthogonal Bessel functions to recognize prolonged, continuous gestures from EEG even with small datasets Continuous upper-limb movements segmented into up to 10 sub-gestures; inter-subject gesture classification Leave-one-subject-out accuracy ≈ 70% for ten sub-gestures while keeping computational cost low
38 Propose a hierarchical model combining Attention-State Detection (ASD) and Motion-Intention Recognition (MIR) to keep upper-limb intention decoding reliable when subjects are distracted Binary right-arm movement-intention (move/no-move) under attended vs distracted conditions Hierarchical model accuracy = 75.99 ± 5.31% (6% higher than a conventional model)
39 Provide the first topical overview of EEG-BCIs for lower-limb motor-task identification, cataloging paradigms, pre-processing, features and classifiers Literature survey – contrasts active, imagined and assisted lower-limb tasks and associated BCI pipelines Maps 22 key studies; highlights Butterworth 0.1–30 Hz filtering, power/correlation features and LDA/SVM as most recurrent, achieving >90% in several benchmarks
40 Investigate raw low-frequency EEG and build a CNN-BiLSTM to recognise hand movements and force/trajectory parameters with near-perfect accuracy Classify four executed hand gestures, picking vs pushing forces, and four-direction displacements Accuracies: 4 gestures = 99.14 ± 0.49%, picking = 99.29 ± 0.11%, pushing = 99.23 ± 0.60%, 4-direction displacement = 98.11 ± 0.23%
41 Disentangle cortical networks for movement initiation vs directional processing using delta-band EEG during center-out reaches Four-direction center-out task; direction classifiers cue-aligned vs movement-aligned Windowed delta-band classifier peaked at 55.9 ± 8.6% (cue-aligned) vs 50.6 ± 7.5% (movement-aligned); parieto-occipital sources dominated direction coding
42 Map the topography of delta & theta oscillations during visually guided finger-tapping with high-density EEG Self-paced finger tapping; spectral analysis rather than online decoding Theta showed contralateral parietal + fronto-central double activation, while delta was confined to central contralateral sites, revealing distinct spatial roles in movement execution
43 Develop an embedding-manifold decoder whose neural representation of movement direction is invariant to cognitive distraction Binary 4-direction center-out reaching; classify direction from 1–4 Hz MRCPs while subjects are attentive vs. performing a 2-back distraction task Mixed-state model reached 76.9 ± 10.6% (attentive) / 76.3 ± 9.7% (distracted) accuracy, outperforming a PCA baseline and eliminating the need to detect the user’s attentional state
44 Perform the first meta-analysis of EEG decoding of continuous upper-limb kinematics, pooling 11 studies to assess overall feasibility and key moderators Narrative review (executed + imagined continuous trajectories); compares linear vs non-linear decoders Overall random-effects mean effect size r = 0.46 (95% CI 0.32–0.61); synergy-based decoding stood out (best study r ≈ 0.80), and non-linear models outperformed linear ones
45 Examine whether EEG + force-tracking signals can separate four force-modulation tasks and tell apart fine-motor experts from novices. Multi-class task-type classification and expert-vs-novice group classification. Task-type could be decoded with high accuracy (≈ 88–95%) in both groups, whereas group membership stayed at chance, revealing very individual EEG patterns in experts.
46 Present a spatio-temporal CSSD + LDA algorithm to discriminate single-trial EEG of voluntary left- vs-right finger movement. Binary executed finger-movement classification (L vs R index). Mean accuracy 92.1% on five subjects without trial rejection.
47 Evaluate SVM and deep MLP networks for decoding UHD-EEG during separate finger extensions; visualize salient time-channels. Pairwise binary classification for every finger pair (5 fingers → 10 pairs). MLP reached 65.68% average accuracy, improving on SVM (60.4%); saliency maps highlighted flexion & relaxation phases.
48 Test whether scalp-EEG phase features outperform amplitude for thumb-vs-index movement recognition using deep learning. Binary executed finger movement (thumb ↔ index). Phase-based CNN achieved 70.0% accuracy versus 62.3% for amplitude features.
49 Build an ultra-high-density (UHD) EEG system to decode individual finger motions on one hand and benchmark against lower-density montages. 2- to 5-class executed single-finger classification. UHD EEG accuracies: 80.86% (2-class), 66.19% (3-class), 50.18% (4-class), 41.57% (5-class) – all significantly above low-density setups.
50 Map broadband and ERD/ERS signatures of nine single/ coordinated finger flex-ext actions to gauge EEG decodability. Detection (movement vs rest) + pairwise discrimination among 9 actions. Combined low-freq amplitude + alpha/beta ERD features gave > 80% movement detection; Thumb vs other actions exceeded 60% LOSO accuracy.
51 Propose Autonomous Deep Learning (ADL) – a streaming, self-structuring network – for subject-independent five-finger decoding. 5-class executed finger-movement classification with online adaptation. ADL scored ≈ 77% (5-fold CV) across four subjects, beating CNN (72%) and RF (53%) and remaining stable in leave-one-subject-out tests.
52 Decode four graded effort levels of finger extension in stroke and control participants using EEG, EMG, and their fusion. Four-class effort-level classification (low, medium, high, none). Controls: EEG + EMG 71% (vs chance 25%); stroke paretic hand: EEG alone 65%, whereas EMG alone failed (41%).
53 Detect intrinsic flow/engagement fluctuations during a VR fine-finger task from EEG using ML. Binary high-flow vs low-flow state decoding. Spectral-coherence classifier exceeded 80% cross-validated accuracy; high-frequency bands contributed most.
54 Identify EEG markers of optimal vs sub-optimal sustained attention during visuo-haptic multi-finger force control. Compare EEG in low-RT-variability (optimal) vs high-variability (sub-optimal) trials. Optimal state showed 20–40 ms frontal-central haptics-potential drop and widespread alpha suppression—proposed biomarkers for closed-loop attention BCIs.
55 Determine whether movement of the index finger and foot elicits distinct movement-related power changes over sensorimotor cortex (EEG) and cerebellum (ECeG) Executed left & right finger-extension and foot-dorsiflexion while recording 104-channel EEG + 10% cerebellar extension montage Robust movement-related β-band desynchronisation over Cz, δ/θ synchronisation, and a premovement high-δ power decrease over the cerebellum, demonstrating opposing low-frequency dynamics between cortex and cerebellum
56 Explore how auditory cues modulate motor timing and test if deep-learning models can decode pacing vs continuation phases of finger tapping from single-trial EEG Right-hand finger-tapping task with four conditions (synchronized / syncopated × pacing / continuation) CNN achieved 70% mean accuracy (stimulus-locked) and 66% (response-locked) in the 2-class pacing-vs-continuation problem—20% and 16% above chance, respectively
57 Compare dynamic position-control (PC) versus isometric force-control (FC) wrist tasks and examine how each type of motor practice alters cortico- and inter-muscular coherence 40-trial wrist-flexion “ramp-and-hold” tasks with pre- and post-practice blocks for PC or FC training β-band CM coherence rose significantly after PC practice and was accompanied by a stronger descending (cortex → muscle) component; FC practice showed no coherence change
58 Examine how EEG functional connectivity (FC) changes across inhale, inhale-hold, exhale and exhale-hold during ultra-slow breathing (2 cpm). Multi-class EEG classifier that labels the four respiratory phases from FC features. Random-committee classifier reached 95.1% accuracy with 403 theta-band connections, showing the theta-band connectome is a reliable phase “signature”.
59 Probe neural dynamics during slow-symmetric breathing (10, 6, 4 cpm) with and without breath-holds. Within-subject comparison of EEG metrics (coherence, phase-amplitude coupling, modulation index) across breathing patterns. Slow-symmetric breathing increased coherence and phase-amplitude coupling; alpha/beta power highest during breath-holds, but adding holds did not change the overall EEG-breathing coupling.
60 Test whether respiration-entrained oscillations can be detected with high-density scalp EEG in humans. Coherence analysis between respiration signal and EEG channels during quiet breathing. Significant respiration–EEG coherence was found in most participants, confirming scalp EEG can capture respiration-locked brain rhythms.
61 Determine the one-week test–retest reliability of respiratory-related evoked potentials (RREP) under unloaded and resistive-loaded breathing. Repeated RREP acquisition; reliability quantified with intraclass-correlation coefficients (ICC). Reliability ranged from moderate to excellent (ICC 0.57–0.92) for all RREP components in both conditions, supporting RREP use in longitudinal BCI studies.
62 Investigate how 30-s breath-hold cycles and the resulting CO2/O2 swings modulate regional EEG power over time. Cross-correlation of end-tidal gases with EEG global & regional field power (delta, alpha bands). Apnea termination raised delta and lowered alpha power; CO2 positively, O2 negatively, correlated with EEG, with area-specific time lags, indicating heterogeneous cortical chemoreflex pathways.
63 Identify scalp-EEG “signatures” of voluntary (intentional) respiration for future conscious-breathing BCIs. Compute 0-2 Hz power, EEG–respiration phase-lock value (PLV) and sample-entropy while subjects vary breathing effort. Voluntary breathing enhanced low-frequency power frontally & right-parietally, increased PLV, and lowered entropy—evidence of a strong EEG marker set for detecting respiratory intention.
64 Introduce cycle-frequency (C-F) analysis as an alternative to conventional time–frequency EEG analysis for rhythmic breathing tasks. Compare C-F versus standard T–F on synthetic and real EEG during spontaneous vs. loaded breathing. C-F gave sharper time & frequency localization and required fewer trials, improving assessment of cycle-locked cortical activity.
65 Assess whether adding head-accelerometer data boosts EEG-based detection of respiratory-related cortical activity during inspiratory loading. Covariance-based EEG classifier alone vs. “Fusion” (EEG + accelerometer); performance via ROC/AUC. Fusion plus 50-s smoothing raised detection (AUC) above EEG-only; head motion information allows fewer EEG channels while preserving accuracy.
66 Review and demonstrate mechanisms that couple brain, breathing, and external rhythmic stimuli. Entrain localized EEG (Cz) and breathing to variable-interval auditory tones; analyze synchronization & “dynamic attunement”. Both breathing and Cz power spectra attuned to stimulus timing, increasing brain–breath synchronization during inter-trial intervals, supporting long-timescale alignment mechanisms.
67 Show that scalp EEG recorded before movement can reliably decode four tongue directions (left, right, up, down) and explore which features/classifiers work best for a multi-class tongue BCI. Offline detection of movement-related cortical potentials and classification of tongue movements (2-, 3-, 4-class scenarios) from single-trial pre-movement EEG. LDA gave 92–95% movement-vs-idle accuracy; 4-class accuracy 62.6%, 3-class 75.6%, 2-class 87.7%. Temporal + template features were most informative.
68 Build and validate a neural-network model that links occipital + central EEG to head yaw (left/right) rotations elicited by a light cue, aiming at hands-free HCI for assistive robotics. System-identification BCI: predict continuous head-position signal from sliding windows of EEG; evaluation within- and across-subjects. Within-subject testing reached up to r = 0.98, MSE = 0.02, while cross-subject generalization was poor, showing the model works but needs user-specific calibration.
69 Assess whether a low-cost, 3-channel around-ear EEG setup can detect hand- and tongue-movement intentions. Binary classification of single-trial ear-EEG (movement vs idle) in three scenarios: hand-rehab, hand-control, tongue-control. Mean accuracies: 70% (hand-rehab), 73% (hand-control), 83% (tongue-control) – all above chance, indicating practical ear-EEG BCIs are feasible.
70 Add rhythmic temporal prediction to a motor-BCI so that “time + movement” can be encoded together and movement detection is easier Visual finger-tapping task – left vs right taps under 0 ms, 1000 ms or 1500 ms prediction; 4-class “time × side” decoding 1000 ms prediction yielded 97.30% left-right accuracy and 88.51% 4-class accuracy – both significantly above the no-prediction baseline
71 Test whether putting subjects into a preparatory movement state before the action strengthens pre-movement EEG and boosts decoding Two-button task; “prepared” vs “spontaneous” pre-movement; MRCP + ERD features fused with CSP Preparation raised pre-movement decoding from 78.92% to 83.59% and produced earlier/larger MRCP & ERD signatures
72 In a real-world setting (dyadic dance), disentangle four simultaneous neural processes. Mobile EEG + mTRF modelling during spontaneous paired dance mTRF isolated all four processes and revealed a new occipital EEG marker that specifically tracks social coordination better than self- or partner-kinematics alone
73 Ask if pre-movement ERPs (RP / LRP) and MVPA encode action-outcome prediction Active vs passive finger presses that trigger visual / auditory feedback; MVPA decoding Decoding accuracy ramps from ≈-800 ms and peaks ≈ 85% at press time; late RP more negative when an action will cause feedback, confirming RP carries sensory-prediction info
74 Propose a time-series shapelet algorithm for asynchronous movement-intention detection aimed at stroke rehab Self-paced ankle-dorsiflexion; classification + pseudo-online detection Best F1 = 0.82; pseudo-online: 69% TPR, 8 false-positives / min, 44 ms latency – outperforming six other methods
75 Show that 20–60 Hz power can simultaneously encode time (500 vs 1000 ms) and movement (left/right) after timing-prediction training Time-movement synchronization task, before (BT) and after (AT) training; 4-class decoding After training, high-γ ERD gave 73.27% mean 4-class accuracy (best subject = 93.81%) while keeping movement-only accuracy near 98%
76 Convert 1-D EEG to 2-D meshes and feed to CNN + RNN for cross-subject, multiclass intention recognition PhysioNet dataset (108 subjects, 5 gestures) + real-world brain-typing prototype 98.3% accuracy on the large dataset; 93% accuracy for 5-command brain-typing; test-time inference ≈ 10 ms
77 Determine whether EEG (RP) adds value over kinematics for telling intentional vs spontaneous eye-blinks Blink while resting (spontaneous) vs instructed fast / slow blinks; logistic-regression models with EEG + EOG EEG cumulative amplitude (RP) alone classified intentional vs spontaneous blinks with 87.9% accuracy and boosted full-model AUC to 0.88; model generalized to 3 severely injured patients
78 Determine whether individual upper-limb motor ability (dexterity, grip strength, MI vividness) explains the inter-subject variability of motor-imagery EEG features. Motor-imagery EEG recording; correlate relative ERD patterns ( μ / β bands) with behavioral and psychological scores. Alpha-band rERD magnitude tracks hand-dexterity (Purdue Pegboard) and imagery ability
79 Develop a task-oriented bimanual-reaching BCI and test deep-learning models for decoding movement direction. Classify three coordinated directions (left, mid, right) from movement-related cortical potentials with a hybrid CNN + BiLSTM network. CNN-BiLSTM decodes three-direction bimanual reach from EEG at 73% peak accuracy
80 Systematically review recent progress in bimanual motor coordination BCIs, covering sensors, paradigms and algorithms. Literature review (36 studies, 2010–2024) spanning motor-execution, imagery, attempt and action-observation BCIs. Bilateral beta/alpha ERD plus larger MRCP peaks emerge as core EEG markers of bimanual coordination
81 Compare neural encoding of simultaneous vs. sequential bimanual reaching and evaluate if EEG can distinguish them before movement. 3-class manifold-learning decoder (LSDA + LDA) applied to pre-movement and execution-period EEG. Low-frequency MRCP/ERD patterns allow pre-movement separation of sequential vs simultaneous bimanual reaches
82 Introduce an eigenvector-based dynamical network analysis to reveal meta-stable EEG connectivity states during visual-motor tracking. Track functional-connectivity transitions while participants follow a moving target vs. observe/idle. Eigenvector-based dynamics expose meta-stable alpha/gamma networks that differentiate visual-motor task states
83 Isolate the neural mechanisms that distinguish self-paced finger tapping from beat-synchronization tapping using steady-state evoked potentials (SSEPs). EEG while participants (i) tapped at their own pace, (ii) synchronized taps to a musical beat, or (iii) only listened. Synchronization recruited the left inferior frontal gyrus, whereas self-paced tapping engaged bilateral inferior parietal lobule—indicating functionally different beat-production networks.
84 Test whether neural entrainment to rhythmic patterns, working-memory capacity, and musical background predict sensorimotor synchronization skill. SS-EPs recorded during passive listening to syncopated/unsyncopated rhythms; separate finger-tapping and counting-span tasks. Stronger entrainment to unsyncopated rhythms surprisingly predicted worse tapping accuracy; working memory (not musical training) was the positive predictor of tapping consistency.
85 Use a coupled-oscillator model to explain tempo-matching bias and test whether 2 Hz tACS can modulate that bias. Dual-tempo-matching task with simultaneous rhythms; EEG entrainment measured; fronto-central 2 Hz tACS vs sham. Listeners biased matches toward the starting tempo; tACS reduced both under- and over-estimation biases, validating model predictions about strengthened coupling.
86 Determine if SSEPs track subjective beat perception rather than stimulus acoustics. Constant rhythm with context → ambiguous → probe phases; listeners judged beat-match; EEG SSEPs analyzed. During the ambiguous phase, spectral amplitude rose at the beat frequency cued by the prior context, showing SSEPs mirror conscious beat perception.
87 Ask whether the predicted sharpness of upcoming sound envelopes is encoded in beta-band activity and influences temporal judgements. Probabilistic cues signalled envelope sharpness in a timing-judgement task; EEG beta (15–25 Hz) analyzed. Pre-target beta power scaled with expected envelope sharpness and correlated with individual timing precision, linking beta modulation to beat-edge prediction.
88 Compare motor-cortex entrainment to isochronous auditory vs visual rhythms. Auditory or flashing-visual rhythms, with tapping or passive attention; motor-component EEG isolated via ICA. Motor entrainment at the beat frequency was stronger for visual than auditory rhythms and strongest during tapping; μ-power rose for both modalities, suggesting modality-specific use of motor timing signals.
89 Examine whether ballroom dancers show superior neural resonance and behavior during audiovisual beat synchronization. Finger-tapping to 400 ms vs 800 ms beat intervals; EEG resonance metrics compared between dancers and non-dancers. Dancers exhibited stronger neural resonance but no behavioral advantage; the 800 ms tempo impaired both groups and demanded more attentional resources.
90 Clarify how mu-rhythms behave during passive music listening without overt movement. 32-ch EEG during silence, foot/hand tapping, and music listening while movement was suppressed. Music listening produced mu-rhythm enhancement over sensorimotor cortex—similar to effector-specific inhibition—supporting covert motor suppression during beat perception.
91 Introduce an autocorrelation-based extension of frequency-tagging to measure beat periodicity as self-similarity in empirical signals. Applied to adult & infant EEG, finger-taps and other datasets. The new method accurately recovered beat-periodicity across data types and resolved specificity issues of classic magnitude-spectrum tagging, broadening tools for beat-BCI research.
92 Predict which dominant beat frequency a listener is tracking by classifying short EEG segments with machine learning. EEG from 20 participants hearing 12 pop songs; band-power features fed to kNN, RF, SVM. Dense spatial filtering reached 70% binary and 56% ternary accuracy—≈ 20% above chance—showing beat-frequency decoding from just 5 s of EEG.
93 Test the motor-auditory hypothesis for hierarchical meter imagination: does the motor system create and feed timing information back to auditory cortex when we “feel” binary or ternary meter in silence? High-density EEG + ICA isolated motor vs. auditory sources while participants listened, imagined or tapped rhythms. Bidirectional coupling appeared in all tasks, but motor-to-auditory flow became marginally stronger during pure imagination, showing a top-down drive from motor areas even without movement.
94 Determine whether imagined, non-isochronous beat trains can be detected from EEG with lightweight deep models. Binary beat-present/absent classification during 17 subjects’ silent imagery of two beat patterns. EEGNet + Focal-Loss yielded the best performance, using < 3% of the CNN’s weights and showing robustness to label imbalance—making it the preferred model for imagined-rhythm BCIs.
95 Ask how pitch predictability, enjoyment and musical expertise modulate cortical and behavioral tracking of musical rhythm. Passive listening to tonal vs. atonal piano pieces while EEG mutual-information tracked envelope & pitch-surprisal; finger-tapping to short excerpts measured beat following. Envelope tracking was stronger for atonal music; tapping was more consistent for tonal music; in tonal pieces, stronger envelope tracking predicted better tapping; envelope tracking rose with both expertise and enjoyment.
96 Test whether whole melodies can be identified from EEG not only while listening but also while imagining them, and introduce a maximum-correlation (maxCorr) decoding framework. 4-melody classification (Bach chorales) from single-subject, single-bar EEG segments during listening and imagery. Melodies were decoded well above chance in every participant; maxCorr out-performed bTRF, and low-frequency (< 1 Hz) EEG carried additional pitch information, demonstrating practical imagined-melody BCIs.
97 Evaluate the feasibility of decoding individual imagined pitches (C4–B4) from scalp EEG and determine the best feature/classifier combination. Random imagery of seven pitches; features: multiband spectral power per channel. A 7-class SVM with IC features achieved 35.7 ± 7.5% mean accuracy (chance ≈ 14%), maximum 50%, ITR = 0.37 bit/s, demonstrating the first-ever pitch-decoding BCI.
98 Disentangle how prediction uncertainty (entropy) and prediction error (surprisal) for note onset and pitch are encoded across EEG frequency bands, and whether musical expertise modulates this encoding. Multivariate temporal-response-function (TRF) models reconstruct narrow-band EEG (δ, θ, α, β; all < 30 Hz) while listeners hear Bach melodies; compare acoustic-only vs. acoustic + melodic-expectation regressors. Adding melodic-expectation metrics improved EEG reconstruction in all sub-30 Hz bands; entropy contributed more than surprisal, with δ- and β-band activity encoding temporal entropy before note onset. Musicians showed extra β-band gains and non-musicians α-band gains, highlighting frequency-specific predictive codes useful for rhythm-pitch BCIs.
99 Determine whether the centro-parietal-positivity (CPP) build-up rate reflects confidence rather than merely accuracy/RT in a new auditory pitch-identification task. 2-AFC pitch-label selection (24 tones) with 4-level confidence rating; simultaneous 32-ch EEG to quantify CPP slope. Confidence varied with tonal distance, yet CPP slope tracked accuracy and reaction time, not confidence, indicating CPP is a first-order evidence-accumulation signal in audition. Provides a novel paradigm to probe confidence-aware auditory BCIs.
100 Systematically review the neural correlates of the flow state across EEG, fMRI, fNIRS and tDCS studies. Literature synthesis (25 studies, 471 participants). Converging evidence implicates anterior attention/executive and reward circuits, but findings remain sparse and inconsistent, underscoring methodological gaps and small-sample bias.
101 Build an automatic flow-vs-non-flow detector from multimodal wearable signals and test emotion-based transfer learning. Arithmetic & reading tasks recorded with EEG (Emotiv Epoc X), wrist PPG/GSR, and Respiban; ML classifiers with and without transfer learning from the DEAP emotion dataset. EEG alone: 64.97% accuracy; sensor fusion: 73.63%; emotion-transfer model boosted accuracy to 75.10% (AF1 = 74.92%), showing emotion data can enhance flow recognition.
102 Demonstrate on-body detection of flow and anti-flow (boredom, anxiety) while gaming. Tetris at graded difficulty; EEG, HR, SpO2, GSR, and head/hand motion analyzed. Flow episodes showed fronto-central α/θ dominance, U-shaped HRV, inverse-U SpO2, and minimal motion, confirming that lightweight wearables can monitor flow in real time.
103 Test whether motor corollary-discharge attenuation is domain-general across speech and music. Stereo-EEG in a professional musician during self-produced vs external speech & music. Self-produced sounds evoked widespread 4–8 Hz suppression and 8–80 Hz enhancement in auditory cortex, with reduced acoustic-feature encoding for both domains, proving domain-general predictive signals.
104 Examine whether brain-network flexibility predicts skilled musical performance. Pre-performance resting EEG; sliding-window graph community analysis vs. piano-timbre precision. Higher whole-brain flexibility just before playing predicted finer timbre control when feedback was required, highlighting flexibility as a biomarker of expert sensorimotor skill.
105 Relate game difficulty, EEG engagement indices, and self-reported flow; build a high/low engagement classifier. "Stunt-plane" video game (easy/optimal/hard) with β/(θ+α), β/α, and 1/α indices; ML classification (an illustrative computation of these indices is sketched below, before the reference list). Self-rated flow peaked at optimal difficulty; combining the three indices yielded F1 = 89% (within-subject) and 81% (cross-subject); the older-adult model reached F1 = 85%.
106 Investigate how technological interactivity level (LTI) plus balance-of-challenge (BCS) and sense-of-control (SC) shape EEG-defined flow. 9th-grade game-based learning with low / mid / high LTI; chi-square, decision-tree & logistic models on EEG flow states. High LTI + high short-term SC + high BCS increased odds of flow 8-fold, confirming interface interactivity as a key flow driver.
107 Identify EEG signatures of flow across multiple task types with a single prefrontal channel. Mindfulness, drawing, free recall, and three Tetris levels; correlate δ/θ/γ power with flow scores. Flow scores correlated positively with δ, θ, and γ power, peaking ∼2 min after onset (max R² = 0.163), showing that portable one-channel EEG can index flow in naturalistic settings.
108 Test whether cognitive control can modulate steady-state movement-related potentials, challenging “no-free-will” views. Repetitive finger tapping; participants voluntarily reduce pattern predictability; EEG SSEPs analyzed. Participants successfully de-automatized tapping; SSEPs over motor areas were modulated by control, supporting a role for higher-order volition in motor preparation.
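The three sketches below are illustrative companions to entries 91, 96, and 105 above; all function names, parameters, band edges, and thresholds are our own assumptions and are not taken from the cited studies. The first sketch shows the core idea behind measuring beat periodicity as self-similarity [91]: the autocorrelation of a neural or movement signal, normalized to 1 at lag zero, is read out at multiples of a candidate beat period and contrasted with its average over all lags.

```python
import numpy as np

def beat_self_similarity(signal, fs, beat_hz):
    """Illustrative only: self-similarity of `signal` at a candidate beat rate.

    Computes the autocorrelation (normalized to 1 at lag zero) and compares
    its mean value at beat-period multiples with the mean over all lags.
    Positive values indicate periodicity at `beat_hz`.
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]   # lags 0 .. N-1
    ac = ac / ac[0]
    beat_lag = int(round(fs / beat_hz))                  # samples per beat
    beat_lags = np.arange(beat_lag, ac.size, beat_lag)   # 1, 2, 3, ... beats
    return ac[beat_lags].mean() - ac[1:].mean()

# Hypothetical usage: score an EEG channel sampled at 500 Hz for a 2 Hz beat.
# score = beat_self_similarity(eeg_channel, fs=500, beat_hz=2.0)
```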
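The second sketch reduces maximum-correlation (maxCorr) decoding [96] to its simplest form: each candidate melody is represented by a template (for example, the average of its training segments), and a test segment is assigned to the template with which it correlates most strongly. This is a schematic of the general principle, assuming equal-length, preprocessed single-trial segments; it is not the authors' implementation.

```python
import numpy as np

def max_corr_decode(segment, templates):
    """Schematic maximum-correlation classifier for single-trial segments.

    templates: dict mapping a melody label to a 1-D template of the same
    length as `segment` (e.g., the mean of that melody's training trials).
    Returns the best-matching label and the full score dictionary.
    """
    scores = {
        label: float(np.corrcoef(segment, template)[0, 1])
        for label, template in templates.items()
    }
    best = max(scores, key=scores.get)
    return best, scores

# Hypothetical usage with four Bach-chorale templates t1..t4 built from
# training data:
# label, scores = max_corr_decode(test_bar, {"melody_1": t1, "melody_2": t2,
#                                            "melody_3": t3, "melody_4": t4})
```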
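The last sketch computes the three engagement indices listed for entry 105 (β/(θ+α), β/α, and 1/α) from a single EEG channel using Welch power spectral estimates. The band edges, window length, and single-channel assumption are ours, chosen only for illustration; the cited study and the engagement row of Table 3 describe the broader classification pipelines.

```python
import numpy as np
from scipy.signal import welch

# Assumed band edges in Hz; studies differ in their exact definitions.
BANDS = {"theta": (4.0, 8.0), "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

def engagement_indices(eeg, fs):
    """Illustrative band-power engagement indices for one EEG channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))  # ~0.5 Hz resolution
    band_power = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        band_power[name] = np.trapz(psd[mask], freqs[mask])
    return {
        "beta/(theta+alpha)": band_power["beta"]
        / (band_power["theta"] + band_power["alpha"]),
        "beta/alpha": band_power["beta"] / band_power["alpha"],
        "1/alpha": 1.0 / band_power["alpha"],
    }
```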

References

  1. Barrett, K.C.; Ashley, R.; Strait, D.L.; Kraus, N. Art and science: how musical training shapes the brain. Frontiers in Psychology 2013, 4, 713. [Google Scholar] [CrossRef]
  2. Williamon, A. Musical excellence: Strategies and techniques to enhance performance; Oxford University Press, 2004.
  3. Bazanova, O.; Kondratenko, A.; Kondratenko, O.; Mernaya, E.; Zhimulev, E. New computer-based technology to teach peak performance in musicians. In Proceedings of the 2007 29th International Conference on Information Technology Interfaces. IEEE; 2007; pp. 39–44. [Google Scholar]
  4. Pop-Jordanova, N.; Bazanova, O.; Kondratenko, A.; Kondratenko, O.; Markovska-Simoska, S.; Mernaya, J. Simultaneous EEG and EMG biofeedback for peak performance in musicians. In Proceedings of the Inaugural Meeting of EPE Society of Applied Neuroscience (SAN) in association with the EU Cooperation in Science and Technology (COST) B27; 2006; pp. 23–23. [Google Scholar]
  5. Riquelme-Ros, J.V.; Rodríguez-Bermúdez, G.; Rodríguez-Rodríguez, I.; Rodríguez, J.V.; Molina-García-Pardo, J.M. On the better performance of pianists with motor imagery-based brain-computer interface systems. Sensors 2020, 20, 4452. [Google Scholar] [CrossRef] [PubMed]
  6. Bhavsar, P.; Shah, P.; Sinha, S.; Kumar, D. Musical Neurofeedback Advancements, Feedback Modalities, and Applications: A Systematic Review. Applied psychophysiology and biofeedback 2024, 49, 347–363. [Google Scholar] [CrossRef]
  7. Sayal, A.; Direito, B.; Sousa, T.; Singer, N.; Castelo-Branco, M. Music in the loop: a systematic review of current neurofeedback methodologies using music. Frontiers in Neuroscience 2025, 19, 1515377. [Google Scholar] [CrossRef]
  8. Kawala-Sterniuk, A.; Browarska, N.; Al-Bakri, A.; Pelc, M.; Zygarlicki, J.; Sidikova, M.; Martinek, R.; Gorzelanczyk, E.J. Summary of over fifty years with brain-computer interfaces—a review. Brain sciences 2021, 11, 43. [Google Scholar] [CrossRef] [PubMed]
  9. Nicolas-Alonso, L.F.; Gomez-Gil, J. Brain computer interfaces, a review. Sensors 2012, 12, 1211–1279. [Google Scholar] [CrossRef] [PubMed]
  10. Acquilino, A.; Scavone, G. Current state and future directions of technologies for music instrument pedagogy. Frontiers in Psychology 2022, 13, 835609. [Google Scholar] [CrossRef]
  11. MakeMusic, Inc. MakeMusic. 2025. Available online: https://www.makemusic.com/ (accessed on 29 May 2025).
  12. Yousician Ltd. Yousician. 2025. Available online: https://yousician.com (accessed on 29 May 2025).
  13. Folgieri, R.; Lucchiari, C.; Gričar, S.; Baldigara, T.; Gil, M. Exploring the potential of BCI in education: an experiment in musical training. Information 2025, 16, 261. [Google Scholar] [CrossRef]
  14. Mirdamadi, J.L.; Poorman, A.; Munter, G.; Jones, K.; Ting, L.H.; Borich, M.R.; Payne, A.M. Excellent test-retest reliability of perturbation-evoked cortical responses supports feasibility of the balance N1 as a clinical biomarker. Journal of Neurophysiology 2025, 133, 987–1001. [Google Scholar] [CrossRef]
  15. Dadfar, M.; Kukkar, K.K.; Parikh, P.J. Reduced parietal to frontal functional connectivity for dynamic balance in late middle-to-older adults. Experimental Brain Research 2025, 243, 1–13. [Google Scholar] [CrossRef]
  16. Jalilpour, S.; Müller-Putz, G. Balance perturbation and error processing elicit distinct brain dynamics. Journal of Neural Engineering 2023, 20, 026026. [Google Scholar] [CrossRef]
  17. Jung, J.Y.; Kang, C.K.; Kim, Y.B. Postural supporting cervical traction workstation to improve resting state brain activity in digital device users: EEG study. Digital Health 2024, 10, 20552076241282244. [Google Scholar] [CrossRef]
  18. Chen, Y.C.; Tsai, Y.Y.; Huang, W.M.; Zhao, C.G.; Hwang, I.S. Cortical adaptations in regional activity and backbone network following short-term postural training with visual feedback for older adults. GeroScience 2025, 1–14. [Google Scholar] [CrossRef]
  19. Solis-Escalante, T.; De Kam, D.; Weerdesteyn, V. Classification of rhythmic cortical activity elicited by whole-body balance perturbations suggests the cortical representation of direction-specific changes in postural stability. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2020, 28, 2566–2574. [Google Scholar] [CrossRef]
  20. Gherman, D.E.; Klug, M.; Krol, L.R.; Zander, T.O. An investigation of a passive BCI’s performance for different body postures and presentation modalities. Biomedical Physics & Engineering Express 2025. [Google Scholar]
  21. Oknina, L.; Strelnikova, E.; Lin, L.F.; Kashirina, M.; Slezkin, A.; Zakharov, V. Alterations in functional connectivity of the brain during postural balance maintenance with auditory stimuli: a stabilometry and electroencephalogram study. Biomedical Physics & Engineering Express 2025, 11, 035006. [Google Scholar] [CrossRef]
  22. Dohata, M.; Kaneko, N.; Takahashi, R.; Suzuki, Y.; Nakazawa, K. Posture-Dependent Modulation of Interoceptive Processing in Young Male Participants: A Heartbeat-Evoked Potential Study. European Journal of Neuroscience 2025, 61, e70021. [Google Scholar] [CrossRef]
  23. Borra, D.; Mondini, V.; Magosso, E.; Muller-Putz, G.R. Decoding movement kinematics from EEG using an interpretable convolutional neural network. Computers in Biology and Medicine 2023, 165, 107323. [Google Scholar] [CrossRef] [PubMed]
  24. Besharat, A.; Samadzadehaghdam, N. Improving Upper Limb Movement Classification from EEG Signals Using Enhanced Regularized Correlation-Based Common Spatio-Spectral Patterns. IEEE Access 2025. [Google Scholar] [CrossRef]
  25. Wang, P.; Li, Z.; Gong, P.; Zhou, Y.; Chen, F.; Zhang, D. MTRT: Motion trajectory reconstruction transformer for EEG-based BCI decoding. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2023, 31, 2349–2358. [Google Scholar] [CrossRef] [PubMed]
  26. Jia, H.; Feng, F.; Caiafa, C.F.; Duan, F.; Zhang, Y.; Sun, Z.; Solé-Casals, J. Multi-class classification of upper limb movements with filter bank task-related component analysis. IEEE Journal of Biomedical and Health Informatics 2023, 27, 3867–3877. [Google Scholar] [CrossRef] [PubMed]
  27. Gao, Z.; Xu, B.; Wang, X.; Zhang, W.; Ping, J.; Li, H.; Song, A. Multilayer Brain Networks for Enhanced Decoding of Natural Hand Movements and Kinematic Parameters. IEEE Transactions on Biomedical Engineering 2024. [Google Scholar] [CrossRef]
  28. Niu, J.; Jiang, N. Pseudo-online detection and classification for upper-limb movements. Journal of Neural Engineering 2022, 19, 036042. [Google Scholar] [CrossRef] [PubMed]
  29. Zolfaghari, S.; Rezaii, T.Y.; Meshgini, S.; Farzamnia, A.; Fan, L.C. Speed classification of upper limb movements through EEG signal for BCI application. IEEE Access 2021, 9, 114564–114573. [Google Scholar] [CrossRef]
  30. Kumar, N.; Michmizos, K.P. A neurophysiologically interpretable deep neural network predicts complex movement components from brain activity. Scientific reports 2022, 12, 1101. [Google Scholar] [CrossRef]
  31. Wang, J.; Bi, L.; Fei, W. EEG-based motor BCIs for upper limb movement: current techniques and future insights. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2023, 31, 4413–4427. [Google Scholar] [CrossRef]
  32. Hosseini, S.M.; Shalchyan, V. State-based decoding of continuous hand movements using EEG signals. IEEE Access 2023, 11, 42764–42778. [Google Scholar] [CrossRef]
  33. Robinson, N.; Chester, T.W.J.; et al. Use of mobile EEG in decoding hand movement speed and position. IEEE Transactions on Human-Machine Systems 2021, 51, 120–129. [Google Scholar] [CrossRef]
  34. Wang, J.; Bi, L.; Fei, W.; Tian, K. EEG-based continuous hand movement decoding using improved center-out paradigm. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2022, 30, 2845–2855. [Google Scholar] [CrossRef]
  35. Fei, W.; Bi, L.; Wang, J.; Xia, S.; Fan, X.; Guan, C. Effects of cognitive distraction on upper limb movement decoding from EEG signals. IEEE Transactions on Biomedical Engineering 2022, 70, 166–174. [Google Scholar] [CrossRef]
  36. Wei, Y.; Wang, X.; Luo, R.; Mai, X.; Li, S.; Meng, J. Decoding movement frequencies and limbs based on steady-state movement-related rhythms from noninvasive EEG. Journal of Neural Engineering 2023, 20, 066019. [Google Scholar] [CrossRef]
  37. Falcon-Caro, A.; Ferreira, J.F.; Sanei, S. Cooperative Identification of Prolonged Motor Movement from EEG for BCI without Feedback. IEEE Access 2025. [Google Scholar] [CrossRef]
  38. Bi, L.; Xia, S.; Fei, W. Hierarchical decoding model of upper limb movement intention from EEG signals based on attention state estimation. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2021, 29, 2008–2016. [Google Scholar] [CrossRef] [PubMed]
  39. Asanza, V.; Peláez, E.; Loayza, F.; Lorente-Leyva, L.L.; Peluffo-Ordóñez, D.H. Identification of lower-limb motor tasks via brain–computer interfaces: A topical overview. Sensors 2022, 22, 2028. [Google Scholar] [CrossRef]
  40. Yan, Y.; Li, J.; Yin, M. EEG-based recognition of hand movement and its parameter. Journal of Neural Engineering 2025, 22, 026006. [Google Scholar] [CrossRef] [PubMed]
  41. Kobler, R.J.; Kolesnichenko, E.; Sburlea, A.I.; Müller-Putz, G.R. Distinct cortical networks for hand movement initiation and directional processing: an EEG study. NeuroImage 2020, 220, 117076. [Google Scholar] [CrossRef]
  42. Körmendi, J.; Ferentzi, E.; Weiss, B.; Nagy, Z. Topography of movement-related delta and theta brain oscillations. Brain Topography 2021, 34, 608–617. [Google Scholar] [CrossRef] [PubMed]
  43. Peng, B.; Bi, L.; Wang, Z.; Feleke, A.G.; Fei, W. Robust decoding of upper-limb movement direction under cognitive distraction with invariant patterns in embedding manifold. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2024, 32, 1344–1354. [Google Scholar] [CrossRef]
  44. Khaliq Fard, M.; Fallah, A.; Maleki, A. Neural decoding of continuous upper limb movements: a meta-analysis. Disability and Rehabilitation: Assistive Technology 2022, 17, 731–737. [Google Scholar] [CrossRef]
  45. Gaidai, R.; Goelz, C.; Mora, K.; Rudisch, J.; Reuter, E.M.; Godde, B.; Reinsberger, C.; Voelcker-Rehage, C.; Vieluf, S. Classification characteristics of fine motor experts based on electroencephalographic and force tracking data. Brain Research 2022, 1792, 148001. [Google Scholar] [CrossRef] [PubMed]
  46. Li, Y.; Gao, X.; Liu, H.; Gao, S. Classification of single-trial electroencephalogram during finger movement. IEEE Transactions on biomedical engineering 2004, 51, 1019–1025. [Google Scholar] [CrossRef] [PubMed]
  47. Nemes, Á.G.; Eigner, G.; Shi, P. Application of Deep Learning to Enhance Finger Movement Classification Accuracy From UHD-EEG Signals. IEEE Access 2024. [Google Scholar] [CrossRef]
  48. Wenhao, H.; Lei, M.; Hashimoto, K.; Fukami, T. Classification of finger movement based on EEG phase using deep learning. In Proceedings of the 2022 Joint 12th International Conference on Soft Computing and Intelligent Systems and 23rd International Symposium on Advanced Intelligent Systems (SCIS&ISIS); IEEE, 2022; pp. 1–4. [Google Scholar]
  49. Ma, Z.; Xu, M.; Wang, K.; Ming, D. Decoding of individual finger movement on one hand using ultra high-density EEG. In Proceedings of the 2022 16th ICME International Conference on Complex Medical Engineering (CME). IEEE; 2022; pp. 332–335. [Google Scholar]
  50. Sun, Q.; Merino, E.C.; Yang, L.; Van Hulle, M.M. Unraveling EEG correlates of unimanual finger movements: insights from non-repetitive flexion and extension tasks. Journal of NeuroEngineering and Rehabilitation 2024, 21, 228. [Google Scholar] [CrossRef] [PubMed]
  51. Anam, K.; Bukhori, S.; Hanggara, F.; Pratama, M. Subject-independent Classification on Brain-Computer Interface using Autonomous Deep Learning for finger movement recognition. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference; 2020; Vol. 2020, pp. 447–450. [Google Scholar]
  52. Haddix, C.; Bates, M.; Garcia Pava, S.; Salmon Powell, E.; Sawaki, L.; Sunderam, S. Electroencephalogram Features Reflect Effort Corresponding to Graded Finger Extension: Implications for Hemiparetic Stroke. Biomedical Physics & Engineering Express 2025. [Google Scholar]
  53. Tian, B.; Zhang, S.; Xue, D.; Chen, S.; Zhang, Y.; Peng, K.; Wang, D. Decoding intrinsic fluctuations of engagement from EEG signals during fingertip motor tasks. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2025. [Google Scholar] [CrossRef]
  54. Peng, C.; Peng, W.; Feng, W.; Zhang, Y.; Xiao, J.; Wang, D. EEG correlates of sustained attention variability during discrete multi-finger force control tasks. IEEE Transactions on Haptics 2021, 14, 526–537. [Google Scholar] [CrossRef]
  55. Todd, N.P.; Govender, S.; Hochstrasser, D.; Keller, P.E.; Colebatch, J.G. Distinct movement related changes in EEG and ECeG power during finger and foot movement. Neuroscience Letters 2025, 853, 138207. [Google Scholar] [CrossRef]
  56. Jounghani, A.R.; Backer, K.C.; Vahid, A.; Comstock, D.C.; Zamani, J.; Hosseini, H.; Balasubramaniam, R.; Bortfeld, H. Investigating the role of auditory cues in modulating motor timing: insights from EEG and deep learning. Cerebral Cortex 2024, 34, bhae427. [Google Scholar] [CrossRef]
  57. Nielsen, A.L.; Norup, M.; Bjørndal, J.R.; Wiegel, P.; Spedden, M.E.; Lundbye-Jensen, J. Increased functional and directed corticomuscular connectivity after dynamic motor practice but not isometric motor practice. Journal of Neurophysiology 2025. [Google Scholar] [CrossRef] [PubMed]
  58. A.S., A.; G., P.K.; Ramakrishnan, A. Brain-scale theta band functional connectome as signature of slow breathing and breath-hold phases. Computers in Biology and Medicine 2025, 184, 109435. [CrossRef]
  59. Kumar, P.; Adarsh, A.; et al. Modulation of EEG by Slow-Symmetric Breathing incorporating Breath-Hold. IEEE Transactions on Biomedical Engineering 2024. [Google Scholar] [CrossRef]
  60. Watanabe, T.; Itagaki, A.; Hashizume, A.; Takahashi, A.; Ishizaka, R.; Ozaki, I. Observation of respiration-entrained brain oscillations with scalp EEG. Neuroscience Letters 2023, 797, 137079. [Google Scholar] [CrossRef]
  61. Herzog, M.; Sucec, J.; Jelinčić, V.; Van Diest, I.; Van den Bergh, O.; Chan, P.Y.S.; Davenport, P.; von Leupoldt, A. The test-retest reliability of the respiratory-related evoked potential. Biological psychology 2021, 163, 108133. [Google Scholar] [CrossRef] [PubMed]
  62. Morelli, M.S.; Vanello, N.; Callara, A.L.; Hartwig, V.; Maestri, M.; Bonanni, E.; Emdin, M.; Passino, C.; Giannoni, A. Breath-hold task induces temporal heterogeneity in electroencephalographic regional field power in healthy subjects. Journal of applied physiology 2021, 130, 298–307. [Google Scholar] [CrossRef] [PubMed]
  63. Wang, Y.; Zhang, Y.; Zhang, Y.; Wang, Z.; Guo, W.; Zhang, Y.; Wang, Y.; Ge, Q.; Wang, D. Voluntary Respiration Control: Signature Analysis by EEG. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2023, 31, 4624–4634. [Google Scholar] [CrossRef]
  64. Navarro-Sune, X.; Raux, M.; Hudson, A.L.; Similowski, T.; Chavez, M. Cycle-frequency content EEG analysis improves the assessment of respiratory-related cortical activity. Physiological Measurement 2024, 45, 095003. [Google Scholar] [CrossRef]
  65. Hudson, A.L.; Wattiez, N.; Navarro-Sune, X.; Chavez, M.; Similowski, T. Combined head accelerometry and EEG improves the detection of respiratory-related cortical activity during inspiratory loading in healthy participants. Physiological Reports 2022, 10, e15383. [Google Scholar] [CrossRef]
  66. Goheen, J.; Wolman, A.; Angeletti, L.L.; Wolff, A.; Anderson, J.A.; Northoff, G. Dynamic mechanisms that couple the brain and breathing to the external environment. Communications biology 2024, 7, 938. [Google Scholar] [CrossRef]
  67. Kæseler, R.L.; Johansson, T.W.; Struijk, L.N.A.; Jochumsen, M. Feature and classification analysis for detection and classification of tongue movements from single-trial pre-movement EEG. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2022, 30, 678–687. [Google Scholar] [CrossRef]
  68. Zero, E.; Bersani, C.; Sacile, R. Identification of brain electrical activity related to head yaw rotations. Sensors 2021, 21, 3345. [Google Scholar] [CrossRef] [PubMed]
  69. Gulyás, D.; Jochumsen, M. Detection of Movement-Related Brain Activity Associated with Hand and Tongue Movements from Single-Trial Around-Ear EEG. Sensors 2024, 24, 6004. [Google Scholar] [CrossRef]
  70. Meng, J.; Zhao, Y.; Wang, K.; Sun, J.; Yi, W.; Xu, F.; Xu, M.; Ming, D. Rhythmic temporal prediction enhances neural representations of movement intention for brain–computer interface. Journal of Neural Engineering 2023, 20, 066004. [Google Scholar] [CrossRef] [PubMed]
  71. Zhang, Y.; Li, M.; Wang, H.; Zhang, M.; Xu, G. Preparatory movement state enhances premovement EEG representations for brain–computer interfaces. Journal of Neural Engineering 2024, 21, 036044. [Google Scholar] [CrossRef]
  72. Bigand, F.; Bianco, R.; Abalde, S.F.; Nguyen, T.; Novembre, G. EEG of the Dancing Brain: Decoding Sensory, Motor, and Social Processes during Dyadic Dance. Journal of Neuroscience 2025, 45. [Google Scholar] [CrossRef]
  73. Ody, E.; Kircher, T.; Straube, B.; He, Y. Pre-movement event-related potentials and multivariate pattern of EEG encode action outcome prediction. Human Brain Mapping 2023, 44, 6198–6213. [Google Scholar] [CrossRef]
  74. Janyalikit, T.; Ratanamahatana, C.A. Time series shapelet-based movement intention detection toward asynchronous BCI for stroke rehabilitation. IEEE Access 2022, 10, 41693–41707. [Google Scholar] [CrossRef]
  75. Meng, J.; Li, X.; Li, S.; Fan, X.; Xu, M.; Ming, D. High-Frequency Power Reflects Dual Intentions of Time and Movement for Active Brain-Computer Interface. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2025. [Google Scholar] [CrossRef]
  76. Zhang, D.; Yao, L.; Chen, K.; Wang, S.; Chang, X.; Liu, Y. Making sense of spatio-temporal preserving representations for EEG-based human intention recognition. IEEE transactions on cybernetics 2019, 50, 3033–3044. [Google Scholar] [CrossRef] [PubMed]
  77. Derchi, C.; Mikulan, E.; Mazza, A.; Casarotto, S.; Comanducci, A.; Fecchio, M.; Navarro, J.; Devalle, G.; Massimini, M.; Sinigaglia, C. Distinguishing intentional from nonintentional actions through eeg and kinematic markers. Scientific Reports 2023, 13, 8496. [Google Scholar] [CrossRef]
  78. Gu, B.; Wang, K.; Chen, L.; He, J.; Zhang, D.; Xu, M.; Wang, Z.; Ming, D. Study of the correlation between the motor ability of the individual upper limbs and motor imagery induced neural activities. Neuroscience 2023, 530, 56–65. [Google Scholar] [CrossRef] [PubMed]
  79. Zhang, M.; Wu, J.; Song, J.; Fu, R.; Ma, R.; Jiang, Y.C.; Chen, Y.F. Decoding coordinated directions of bimanual movements from EEG signals. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2022, 31, 248–259. [Google Scholar] [CrossRef]
  80. Tantawanich, P.; Phunruangsakao, C.; Izumi, S.I.; Hayashibe, M. A Systematic Review of Bimanual Motor Coordination in Brain-Computer Interface. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2024. [Google Scholar] [CrossRef]
  81. Wang, J.; Bi, L.; Fei, W.; Xu, X.; Liu, A.; Mo, L.; Feleke, A.G. Neural correlate and movement decoding of simultaneous-and-sequential bimanual movements using EEG signals. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2024. [Google Scholar] [CrossRef]
  82. Li, X.; Mota, B.; Kondo, T.; Nasuto, S.; Hayashi, Y. EEG dynamical network analysis method reveals the neural signature of visual-motor coordination. Plos one 2020, 15, e0231767. [Google Scholar] [CrossRef]
  83. De Pretto, M.; Deiber, M.P.; James, C.E. Steady-state evoked potentials distinguish brain mechanisms of self-paced versus synchronization finger tapping. Human movement science 2018, 61, 151–166. [Google Scholar] [CrossRef]
  84. Noboa, M.d.L.; Kertész, C.; Honbolygó, F. Neural entrainment to the beat and working memory predict sensorimotor synchronization skills. Scientific Reports 2025, 15, 10466. [Google Scholar] [CrossRef] [PubMed]
  85. Mondok, C.; Wiener, M. A coupled oscillator model predicts the effect of neuromodulation and a novel human tempo matching bias. Journal of Neurophysiology 2025. [Google Scholar] [CrossRef]
  86. Nave, K.M.; Hannon, E.E.; Snyder, J.S. Steady state-evoked potentials of subjective beat perception in musical rhythms. Psychophysiology 2022, 59, e13963. [Google Scholar] [CrossRef] [PubMed]
  87. Leske, S.; Endestad, T.; Volehaugen, V.; Foldal, M.D.; Blenkmann, A.O.; Solbakk, A.K.; Danielsen, A. Beta oscillations predict the envelope sharpness in a rhythmic beat sequence. Scientific Reports 2025, 15, 3510. [Google Scholar] [CrossRef] [PubMed]
  88. Comstock, D.C.; Balasubramaniam, R. Differential motor system entrainment to auditory and visual rhythms. Journal of Neurophysiology 2022, 128, 326–335. [Google Scholar] [CrossRef] [PubMed]
  89. Wang, X.; Zhou, C.; Jin, X. Resonance and beat perception of ballroom dancers: An EEG study. Plos one 2024, 19, e0312302. [Google Scholar] [CrossRef] [PubMed]
  90. Ross, J.M.; Comstock, D.C.; Iversen, J.R.; Makeig, S.; Balasubramaniam, R. Cortical mu rhythms during action and passive music listening. Journal of neurophysiology 2022, 127, 213–224. [Google Scholar] [CrossRef] [PubMed]
  91. Lenc, T.; Lenoir, C.; Keller, P.E.; Polak, R.; Mulders, D.; Nozaradan, S. Measuring self-similarity in empirical signals to understand musical beat perception. European Journal of Neuroscience 2025, 61, e16637. [Google Scholar] [CrossRef]
  92. Pandey, P.; Ahmad, N.; Miyapuram, K.P.; Lomas, D. Predicting dominant beat frequency from brain responses while listening to music. In Proceedings of the 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM); IEEE, 2021; pp. 3058–3064. [Google Scholar]
  93. Cheng, T.H.Z.; Creel, S.C.; Iversen, J.R. How do you feel the rhythm: Dynamic motor-auditory interactions are involved in the imagination of hierarchical timing. Journal of Neuroscience 2022, 42, 500–512. [Google Scholar] [CrossRef]
  94. Yoshimura, N.; Tanaka, T.; Inaba, Y. Estimation of Imagined Rhythms from EEG by Spatiotemporal Convolutional Neural Networks. In Proceedings of the 2023 IEEE Statistical Signal Processing Workshop (SSP); IEEE, 2023; pp. 690–694. [Google Scholar]
  95. Keitel, A.; Pelofi, C.; Guan, X.; Watson, E.; Wight, L.; Allen, S.; Mencke, I.; Keitel, C.; Rimmele, J. Cortical and behavioral tracking of rhythm in music: Effects of pitch predictability, enjoyment, and expertise. Annals of the New York Academy of Sciences 2025, 1546, 120–135. [Google Scholar] [CrossRef]
  96. Di Liberto, G.M.; Marion, G.; Shamma, S.A. Accurate decoding of imagined and heard melodies. Frontiers in Neuroscience 2021, 15, 673401. [Google Scholar] [CrossRef] [PubMed]
  97. Chung, M.; Kim, T.; Jeong, E.; Chung, C.K.; Kim, J.S.; Kwon, O.S.; Kim, S.P. Decoding Imagined Musical Pitch From Human Scalp Electroencephalograms. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2023, 31, 2154–2163. [Google Scholar] [CrossRef]
  98. Galeano-Otálvaro, J.D.; Martorell, J.; Meyer, L.; Titone, L. Neural encoding of melodic expectations in music across EEG frequency bands. European Journal of Neuroscience 2024, 60, 6734–6749. [Google Scholar] [CrossRef]
  99. Tang, T.; Samaha, J.; Peters, M.A. Behavioral and neural measures of confidence using a novel auditory pitch identification task. Plos one 2024, 19, e0299784. [Google Scholar] [CrossRef]
  100. Alameda, C.; Sanabria, D.; Ciria, L.F. The brain in flow: A systematic review on the neural basis of the flow state. Cortex 2022, 154, 348–364. [Google Scholar] [CrossRef]
  101. Irshad, M.T.; Li, F.; Nisar, M.A.; Huang, X.; Buss, M.; Kloep, L.; Peifer, C.; Kozusznik, B.; Pollak, A.; Pyszka, A.; et al. Wearable-based human flow experience recognition enhanced by transfer learning methods using emotion data. Computers in Biology and Medicine 2023, 166, 107489. [Google Scholar] [CrossRef] [PubMed]
  102. Rácz, M.; Becske, M.; Magyaródi, T.; Kitta, G.; Szuromi, M.; Márton, G. Physiological assessment of the psychological flow state using wearable devices. Scientific Reports 2025, 15, 11839. [Google Scholar] [CrossRef]
  103. Lorenz, A.; Mercier, M.; Trébuchon, A.; Bartolomei, F.; Schon, D.; Morillon, B. Corollary discharge signals during production are domain general: An intracerebral EEG case study with a professional musician. Cortex 2025, 186, 11–23. [Google Scholar] [CrossRef]
  104. Uehara, K.; Yasuhara, M.; Koguchi, J.; Oku, T.; Shiotani, S.; Morise, M.; Furuya, S. Brain network flexibility as a predictor of skilled musical performance. Cerebral Cortex 2023, 33, 10492–10503. [Google Scholar] [CrossRef]
  105. Ahmed, Y.; Ferguson-Pell, M.; Adams, K.; Ríos Rincón, A. EEG-Based Engagement Monitoring in Cognitive Games. Sensors 2025, 25, 2072. [Google Scholar] [CrossRef]
  106. Wu, S.F.; Lu, Y.L.; Lien, C.J. Measuring effects of technological interactivity levels on flow with electroencephalogram. IEEE Access 2021, 9, 85813–85822. [Google Scholar] [CrossRef]
  107. Hang, Y.; Unenbat, B.; Tang, S.; Wang, F.; Lin, B.; Zhang, D. Exploring the neural correlates of Flow experience with multifaceted tasks and a single-Channel Prefrontal EEG Recording. Sensors 2024, 24, 1894. [Google Scholar] [CrossRef] [PubMed]
  108. van Schie, H.T.; Iotchev, I.B.; Compen, F.R. Free will strikes back: Steady-state movement-related cortical potentials are modulated by cognitive control. Consciousness and Cognition 2022, 104, 103382. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The triad of knowledge for the musician. Musicians must know: i) the instrument, its potential and its limitations; ii) the piece, its context and possible interpretations; and iii) themselves, their potential and limitations.
Table 1. Relevance and Scope Parameters for Literature Mapping.

Inclusion parameters:
- Journal articles.
- Published in 2020 or later.
- Reports on a non-invasive EEG-based BCI system.
- Presents experimental results with human participants (n ≥ 1).
- Evaluates a parameter relevant to at least one TEMP feature.
- Includes technical performance metrics (e.g., latency, accuracy, detection reliability).

Exclusion conditions:
- Uses invasive recordings or non-EEG extracranial neuroimaging such as functional near-infrared spectroscopy (fNIRS) or functional magnetic resonance imaging (fMRI).
- Uses externally stimulus-dependent passive BCI strategies, such as P300 oddball or visually evoked potentials.
- Analyzes only motor imagery tasks, in which users perform the mental action without the corresponding physical motion.
- Studies facial expression or emotion recognition (a criterion specific to the facial-movement search).
- Does not report empirical results, or was not tested on human-generated data.
Table 2. Search and Screening Results by Category.

Category: PubMed results / IEEE Xplore results / included in survey
- Posture and Balance: 92 / 202 / 4
- Movement and Muscle Activity: 327 / 2564 / 22
- Fine Motor and Dexterity: 1195 / 320 / 13
- Breathing Control: 35 / 258 / 9
- Head and Facial Movement: 143 / 243 / 3
- Movement Intention: 535 / 547 / 8
- Coordination and Movement Fluidity: 352 / 36 / 7
- Tempo Processing: 85 / 41 / 3
- Pitch Recognition: 15 / 20 / 4
- Cognitive Engagement: 597 / 534 / 9
Table 3. Re-thought feasibility and implementation strategies for each TEMP feature, grounded in the results and discussion of the review.

- Posture & balance. Feasibility: (i) technically viable. Strategy: 8–32-ch wearable EEG targeting perturbation-evoked N1 and fronto-parietal θ/α modulations. Bottlenecks: movement-related EEG artifacts; reliable calibration in standing/playing positions.
- Gross arm/hand trajectory. Feasibility: (i) technically viable. Strategy: CSP → CNN on μ/β ERD plus δ/θ MRCPs. Bottlenecks: ∼150 ms latency still perceptible; angular rather than millimeter-level precision.
- Finger individuation & force. Feasibility: (ii) within experimental reach. Strategy: ultra-high-density EEG (256+ channels) or ear-EEG; SVM/Riemannian classifier flags wrong-finger presses. Bottlenecks: UHD caps are cumbersome; overlap of finger maps lowers SNR.
- Breathing control. Feasibility: (ii) within experimental reach. Strategy: 32-ch EEG θ-band connectivity distinguishes inhale/exhale/hold after resistive-load calibration. Bottlenecks: wind-instrument mouthpiece artifacts; fine pressure gradations remain elusive.
- Bimanual coordination. Feasibility: (ii) within experimental reach. Strategy: μ/β ERD and reconfigurations of α- and γ-band visual–motor networks. Bottlenecks: requires context-appropriate research.
- Tempo processing. Feasibility: (i) technically viable. Strategy: beat-locked SSEPs (1–5 Hz) plus SMA β ERD track internal vs. external tempo. Bottlenecks: sub-20 ms micro-timing lies below EEG resolution; expressive rubato confounds error metrics.
- Movement intention. Feasibility: (iii) aspirational. Strategy: detect BP/MRCPs 150–300 ms pre-onset; dual threshold for unplanned twitches. Bottlenecks: requires large labeled datasets; day-to-day variability.
- Facial/head muscle tension. Feasibility: (iii) aspirational. Strategy: requires further research. Bottlenecks: strong EMG/blink contamination; no robust decoding of subtle embouchure changes.
- Pitch recognition. Feasibility: (ii) within experimental reach. Strategy: left–right hemispheric differences in the β and low-γ range. Bottlenecks: no study combines decoding with actual instrument playing.
- Engagement & flow state. Feasibility: (ii) within experimental reach. Strategy: wearable frontal EEG feeding a small CNN fine-tuned via transfer learning; uses moderate α plus high β/γ coherence. Bottlenecks: signatures are idiosyncratic; single-channel headsets give only a coarse signal.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.