Preprint
Article

This version is not peer-reviewed.

A Safe and Efficient Brain-Computer Interface Using Moving Object Trajectories and LED-Controlled Activation

A peer-reviewed article of this preprint also exists.

Submitted:

13 February 2025

Posted:

14 February 2025


Abstract

Nowadays, Brain-Computer Interface (BCI) systems are frequently used to connect individuals who have lost their mobility with the outside world. These systems enable such individuals to control external devices using brain signals; however, they also have certain disadvantages for users. This paper proposes a novel approach to minimize the harm that the visual stimuli used in Visual Evoked Potential (VEP) and P300-based BCI systems cause to users' eye health. The approach employs moving objects with different trajectories instead of visual stimuli and uses a Light Emitting Diode (LED) flickering at 7 Hz as the condition for the BCI system to become active. The LED prevents the system from being triggered by involuntary eye movements that are independent of the system. Thus, the user can operate a safe BCI system through moving balls and a single visual stimulus blinking at the edge of the display, without needing to focus on any stimulus. Data were recorded in two phases: with the LED on and with the LED off. The recorded data were processed using a Butterworth filter and the Power Spectral Density (PSD) method. In the first classification phase, in which the system detects the LED in the background, the highest accuracy rate of 99.57% was achieved with the Random Forest (RF) classification algorithm. In the second classification phase, which classifies the moving objects of the proposed approach, the highest accuracy rate of 97.89% and an ITR of 36.75 bits/min were achieved, again with the RF classifier.


1. Introduction

Nowadays, many serious health problems negatively affect people’s quality of life. One of these problems is stroke, which results in a person losing their ability to move and becoming bedridden. Although individuals with this condition have normal brain activity, they are confined to bed due to neurological disorders and have difficulty meeting their daily needs. Brain-computer interface (BCI) systems have been developed to assist these individuals in fulfilling their basic daily needs. BCI systems are designed to produce meaningful outputs by measuring the potential differences in the human brain. Neurons located on the human brain’s surface constantly interact with each other. The rhythms resulting from these interactions are grouped according to their frequency values. These groups are known as alpha, beta, theta, delta, and gamma waves. Table 1 provides the frequency and amplitude values of these waves.
The delta wave, which has a frequency range of 0.5-4 Hz, is the slowest wave with the highest amplitude (Table 1). This type of wave is seen in infants up to one year old and in adults during deep sleep [1]. Theta waves, ranging from 4-7 Hz, are observed during light sleep and relaxation. Alpha waves, appearing in the 7-12 Hz range, are associated with eye closure and relaxation. Beta waves, observed in the 12-30 Hz range, dominate during states of alertness and anxiety. These waves are also elevated in individuals solving mathematical problems [2]. Gamma waves, with frequencies above 30 Hz, have the lowest amplitude and play a crucial role in detecting neurological diseases. They relate to perception, recognition, and similar cognitive functions [3]. These waves, categorized by their frequency values, are frequently used in BCI systems.
BCI systems typically use Electroencephalography (EEG), a non-invasive method that records a person's brain signals without requiring surgical intervention. In this method, the signals measured from the individual's brain activity are processed and used to train machine learning models. Associating the output obtained from signal processing with a command allows for controlling electronic devices such as electric wheelchairs, beds, and lamps [4]. This function enables individuals with paralysis to interact with their environment and meet their needs. The basic scheme used in BCI systems is depicted in Figure 1.
As shown in Figure 1, BCI systems are fundamentally composed of four groups. These groups are signal acquisition, preprocessing, feature extraction, and classification stages. In the signal acquisition stage, brain signals are recorded using EEG devices. During the preprocessing stage, operations such as trimming, filtering, and normalization are applied to the recorded data. Feature extraction methods are applied to the filtered data. The customized data are then classified in the classification stage, the final step of signal processing, using different learning algorithms. Then, an output command is generated based on the values obtained from classification.
In EEG-based BCI systems, methods such as Visual Evoked Potential (VEP) [5], Motor Imagery (MI) [6] and P300 [7] are frequently used during the signal acquisition phase. In the VEP method, visual stimuli such as flashing lights or images at different frequency values are presented to the subject, and the resulting voltage changes in the brain are recorded. In the VEP method, when the frequency value exceeds 6 Hz, a state known as Steady-State Visual Evoked Potential (SSVEP) occurs. The P300 method involves a positive deflection in brain waves in response to certain stimuli such as lights, sounds, or various visual cues. P300 waves exhibit a positive deflection within a 300 ms to 600 ms time window and are recorded using EEG devices. In the MI method, the physical movements of the individual are replaced by imagined mental movements. When a person imagines any physical movement, specific patterns emerge in the brain, which are recorded using EEG devices. The recorded EEG signals from all these methods are processed using signal-processing techniques in a computer environment. Output is generated through classification based on the detected value from signal processing.
The primary goal of developing EEG-based BCI systems is to benefit individuals who have lost their ability to move. However, while these systems offer advantages, they also have some disadvantages. These issues can be listed as follows:
Research and testing of the systems are only conducted in laboratory environments;
The comfort of these systems, which users must rely on for life, is insufficient;
The systems operate slowly;
They are generally high-cost, which limits their accessibility to a broader audience;
The systems are not particularly suited for long-term use by the user [8].
In addition to these disadvantages, EEG devices are relatively cost-effective compared to recording techniques such as Magnetoencephalography (MEG) and Functional Magnetic Resonance Imaging (fMRI); however, they remain prohibitively expensive for daily use by individuals with disabilities [7]. Another problem is that the visual and mental fatigue caused by vibrating stimuli in visual stimulus-based BCI systems hinders users' adoption of such systems and complicates their usage [9]. Furthermore, users must continuously focus on visual stimuli for the system to be active, and research has shown that users develop eye problems over time due to prolonged and continuous exposure to visual stimuli [10]. This serious issue impedes long-term use of these systems and threatens eye health. Additionally, in visual stimulus-based systems, the frequency values of the stimuli create potentials in the brain via the eyes, rendering these systems unsuitable for individuals with preexisting eye health issues.
In this study, a new hybrid system that combines the SSVEP method with the Electrooculography (EOG) artifacts contained in EEG signals was developed to provide an alternative to the eye health problems that the visual stimuli of visual stimulus-based BCI systems cause in users, and to increase the usage time, comfort, usability, and safety of the system. The SSVEP method is used more frequently than other common BCI paradigms because it can be detected more easily in EEG signals and does not require a training set [11]. For this reason, the SSVEP method was preferred for activating the system in the current study. Additionally, synchronized operation is a known challenge of BCI systems [12].
In EEG signals recorded with an EEG device, EOG artifacts occur as a result of eye movements, mostly in the channels located in the frontal lobe of the brain. Artifacts are unwanted signals that can negatively impact neurological processes. These signals may influence the characteristics of neurological events and may even be mistakenly perceived as control signals in BCI systems [13]. EOG artifacts are filtered through various methods when they are not intended to affect neurological conditions. However, in studies where eye movements need to be detected [14], EOG artifacts found in EEG signals can be used as a source. Particular eye movements, including blinks, upward and downward glances, leftward and rightward shifts, as well as eye closures, can be identified, isolated, and categorized using EEG data. These detected movements can subsequently be linked to distinct command outputs for use in a BCI typing system [15].
The hybrid system designed in this study proposes an innovative approach that uses moving objects, each following a different trajectory, instead of the visual stimuli found in visual stimulus-based BCI systems. In the system, moving balls are shown to the user on a computer screen. The command associated with the trajectory the user follows is sent as a control signal to peripheral devices (bed, wheelchair, etc.). However, this approach has a drawback: the user may involuntarily or unconsciously trace the same trajectory as the moving balls without looking at the monitor, unintentionally activating the system. To solve this problem, a Light-Emitting Diode (LED) flickering at 7 Hz was placed at the top center of the screen as the condition for the system to become active. When the user wants to activate the system with conscious eye movements, the system checks for the presence of the LED at the point of gaze through the SSVEP method and does not activate when the LED cannot be detected. The LED thus serves as the safety valve of the system. The designed BCI system is shown in Figure 2.
In the system, whose general scheme is given in Figure 2, only one LED is used, so little light is emitted. In addition, the system does not require the user to look directly at or focus on the visual stimulus. In common SSVEP-based BCI systems, candidates are required to focus on at least four flashing lights [16]. In P300-based systems, it is necessary to look at the light at least two or more times for each command [17]. In the proposed system, however, the candidate only needs to look in the direction of a single flashing visual stimulus without focusing on it. Thus, the user can operate the system safely through a single visual stimulus, without a sensation of glare and without being exposed to the disturbing effect of the light.
The designed system uses the 14-channel Emotiv EPOC X device, which is cost-effective, portable, wireless, and easy to clean and use. Data were recorded at a sampling frequency of 256 Hz. Recording was performed in two stages: with the light on (illuminated) and with the light off (non-illuminated). Pre-processing steps such as trimming and filtering were applied to the recorded data, features were extracted, and effective channels were selected. The features extracted from the active channels were classified in two stages. In the first classification stage, the data recorded with the light on and off were distinguished from each other via the SSVEP method using the trapezoidal feature. In the second classification stage, the EEG data containing EOG artifacts for which the LED was detected in the background in the first stage were classified in order to distinguish objects moving up-down, left-right, right-cross, and left-cross.
This study aims to address the sensation of glare in the user's eyes and the eye health problems that develop over time due to the visual stimuli used in visual stimulus-based BCI systems. Since these problems directly reduce system usage time, usability, and comfort, the proposed hybrid method and approach aim at a more usable, safe, and comfortable system design. In the study, EOG artifacts contained in EEG data were recorded through the moving objects of the proposed approach, both while the 7 Hz LED (the condition for the system to be active) was on and while it was off. Active channels were selected from the recorded signals, and the moving objects were classified with machine learning algorithms using the EOG artifacts of the detected active channels. The LED, which acts as a safety valve, was detected via the SSVEP method using the trapezoidal feature. The proposed study is a hybrid system because it uses EOG artifacts in EEG signals to classify moving objects and the SSVEP method to detect the presence of the LED. The study showed that the LED placed in the background can be detected without the need for focusing, and that moving objects can be classified from the EOG artifacts in the EEG signals through the proposed approach. Thus, a hybrid BCI system is proposed that, compared with visual stimulus-based BCI systems, is relatively harmless in terms of eye health, relatively more comfortable, suitable for long-term use, and safer in terms of control.

2. Related Works

One of the important disadvantages of BCI systems is their high cost [8]. For this reason, researchers have focused on designing high-performance systems using low-cost EEG devices. The Emotiv EPOC wearable EEG device is frequently preferred by researchers due to its low cost, long battery life, short setup time, and ease of use [18].
In this study, the SSVEP method was used via a 7 Hz LED as the condition for the system to be active. SSVEP has a higher signal-to-noise ratio (SNR) and requires less training than other methods such as P300 and MI [19]. Since the method does not require mental effort as in the MI method, it is easier to apply and less tiring [20]. In addition, studies have shown that the method yields relatively higher accuracy and ITR rates compared to other methods [21], and that SSVEPs can be detected more easily in EEG signals [11]. Considering these advantages, the SSVEP method was preferred in this study, since the negative impact of the flashing light on the user is minimized through the proposed approach.
In this section, a literature review was conducted on hybrid BCI systems in which SSVEP and EOG signals are used together and visual stimulus-based BCI systems using the Emotiv EPOC EEG device.
In one study, researchers [22] proposed a hybrid BCI system combining the SSVEP method and EOG signals. They used the SSVEP method to obtain the prior probability distribution of the target in order to reduce the transition time between tasks of the SSVEP-based system, and used EOG signals to optimize this probability distribution and obtain the target prediction output. In another study [14] in which EOG and SSVEP signals were used together, researchers controlled a robotic arm with six degrees of freedom. Designing an EOG-based switch using a triple eye blink, they achieved an accuracy rate of 92.09% and an information transfer rate (ITR) of 35.98 bits/min. In another study [23], researchers proposed a new hybrid asynchronous BCI system based on a combination of SSVEP and EOG signals. They used 12 buttons, each representing 12 characters with different frequency values, to trigger SSVEPs in the interface they designed, and recorded signals from 10 subjects while changing the size of the buttons. They concluded that the asynchronous hybrid BCI system has great potential for communication and control. In another study [24] in which the SSVEP method and EOG signals were used in the same system, researchers used 20 buttons corresponding to 20 characters in their interface. Ten healthy subjects participated, and each was asked to look at the lights, all of which flashed simultaneously during the experiment; EOG signals were recorded as the buttons moved in different directions. The experiments yielded an accuracy rate of 94.75%. In another study [25] combining the SSVEP method and EOG signals, a hybrid speller system was developed using 36 targets. The researchers divided the targets into nine groups of letters, numbers, and characters; EOG signals were used to detect the target group, and the target within the selected group was identified with SSVEP. They tested the proposed system on ten subjects and obtained an accuracy rate of 94.16%. In a different study [26] that used both methods together, researchers presented a comparison dataset for BCI systems, consisting of data from an SSVEP-based BCI system and from SSVEP-EMG- and SSVEP-EOG-based BCI systems. The experiments used a virtual keyboard containing nine visual stimuli flashing between 5.58 Hz and 11.1 Hz. Ten participants performed a copy-spelling task, and data were collected over 10 sessions for each system. The systems were evaluated with criteria such as accuracy, ITR, and the NASA-TLX workload index.
In a visual stimulus-based BCI system [27], researchers designed a screen containing visual stimuli at frequencies of 7 Hz, 9 Hz, 11 Hz, and 13 Hz and recorded signals over 10-min sessions. To augment the dataset, white noise with amplitudes of 0.5 and 5 was added, tripling the size of the training set. Classification was performed using SVM and k-NN classifiers; without data augmentation, accuracy rates of 51% and 54% were achieved, respectively, improving to 55% and 58% with the augmented data. In another study [28], a drone controlled by EEG signals was developed and tested on 10 healthy subjects, detecting visual stimuli at frequencies of 5.3 Hz, 7 Hz, 9.4 Hz, and 13.5 Hz; the system achieved an average accuracy of 92.5% and an ITR of 10 bits/min. In another study [29], four LEDs with frequencies of 13 Hz, 14 Hz, 15 Hz, and 16 Hz were placed outside a visual interface, and the system was tested on five participants who completed image-flickering experiments at four different frequencies in four directions. In this research, 23 participants were asked to complete tasks in different rooms; however, 12 participants either could not complete the tasks or did not achieve sufficient results. Those who completed all three tasks obtained an average accuracy of 79.2% and an ITR of 15.23 bits/min. These studies indicate that systems incorporating multiple visual stimuli with different frequency values pose risks to users' eye health due to the need for sustained focus. Consequently, these systems are uncomfortable and unsuitable for prolonged use. Indeed, some studies [29] observed that users experienced visual stimulus-related issues during the experiment, leading to task failure. Besides, the signal recording durations in these studies are often long, and the ITR values of the systems are relatively low.
Various studies have been conducted in the literature to mitigate the negative effects of visual stimuli on system users. For instance, in [30], a BCI system operating at high frequencies (56-70 Hz) was proposed to reduce the sensation of flicker caused by vibrating stimuli, and the system was also tested with low-frequency (26-40 Hz) stimuli. The study achieved accuracy rates of 98% for low-frequency stimuli and 87.19% for high-frequency stimuli, with an ITR of 29.64 bits/min. This study demonstrated that accuracy rates decreased at higher frequency values, and the system did not provide a solution to the negative effects of visual stimuli on users. In another study [31], a BCI system based on a rotating wing structure was proposed, in which five healthy subjects aged between 27 and 32 participated. The designed interface had a black screen divided into four sections, each featuring a wing with an "A" mark, and each wing completed its rotation at a different speed and direction. Using the Cubic SVM method, the researchers recorded data for 125 seconds per class and achieved a highest success rate of 93.72%. Reviewing these studies indicates that their results do not provide a permanent solution to the existing problems. Moreover, these systems have long recording times, relatively low accuracy rates, and vulnerability to unintended eye movements.
Overall, relevant studies have focused on eye movements for BCI systems as an alternative to visual stimuli. For instance, in [32], eye movements were used to control a wheelchair. This research proposed a brain activity paradigm based on imagined tasks, including closing the eyes for alpha responses and focusing attention on upward, rightward, and leftward eye movements. The experiment was conducted with twelve volunteers to collect EEG signals. Employing a traditional motor imagery paradigm, the researchers achieved an average accuracy of 83.7% for left and right commands. Another study [33] examined the relationship between eye blinking activity and the human brain, using channels AF3 and F7 for the left eye and AF4 and F8 for the right eye and analyzing the activity through a Convolutional Neural Network (CNN) structure. In a different study focusing on eye movements [34], researchers investigated the effect of visual stimuli on the classification accuracy of human brain signals. The study involved 16 healthy participants who were shown arrows indicating right and left directions while their brain beta waves were recorded. Using SVM methods, the researchers achieved an average accuracy of 70% in standard tests and 76% in tests with effective visual stimuli. These studies primarily focus on the eye movements of the system user; however, the systems remain vulnerable to unintended eye movements made by the user. Implementing sequential movement coding makes it challenging for users to adapt to the system, resulting in increased error rates. Additionally, these systems often have long recording times and relatively low accuracy rates.
The P300 method is among the frequently used techniques in the field. Using this method, researchers [35] classified P300 signals obtained from six healthy subjects aged 20 to 37 using Deep Learning (DL) techniques. The interface they designed required participants to follow two different scenarios, and a 5-layer CNN was employed for classification. With deep learning classification, 100% success was achieved on the training data, while on the test data 80% success was achieved with a 125-ms inter-stimulus interval and 40% with a 250-ms inter-stimulus interval. In another study using the P300 method [36], researchers developed a hybrid BCI hardware platform incorporating both SSVEP and P300. They created a chipboard platform with four independent radial green visual stimuli at frequencies of 7 Hz, 8 Hz, 9 Hz, and 10 Hz to evoke SSVEPs, and four high-power red LEDs flashing at random intervals to evoke P300 events. The platform was tested with five healthy subjects, and the researchers successfully detected P300 events concurrently with four event markers in the EEG signals. In another study [37], researchers investigated pattern recognition using the P300 component induced by visual stimuli. The study involved 19 healthy participants, who were instructed to look at a screen and count how many times it flashed while the data were recorded. The researchers found that Bayesian Networks (BN) achieved the highest accuracy rate of 99.86%. These studies indicate that while visual stimuli are actively used in these methods, the systems often do not provide sufficient comfort for the user.
In [38], the Motor Imagery method was used to examine the control of a spider robot. The researchers recorded EEG signals and tested the detection of imagined hand movements for controlling the robot. Specifically, the imagined opening of the hand was associated with forward movement of the robot, while the imagined closing of the hand was associated with backward movement. Through a CNN, the researchers achieved a maximum classification accuracy rate of 87.6%. In another study on motor imagery [39], researchers analyzed motor imagery data obtained over 20 days from a participant. The study involved commands for right, left, up, and down movements. During the classification phase, they employed the Ensemble subspace discriminant classifier and achieved an optimal daily average accuracy of 61.44% for the 4-class classification. For five participants, the average accuracy for the 4-class classification was 50.42%, while the binary classification accuracy for right and left movements was 71.84%.
In [40], the control of a wheelchair was investigated. To this end, three white LEDs with frequencies of 8 Hz, 9 Hz, and 10 Hz were installed in the screen's right, left, and upper corners, respectively. The setup was tested on five different participants aged between 29 and 58 years, with movements including left, right, and forward. The authors employed Canonical Correlation Analysis (CCA) and Multivariate Synchronization Index (MSI) methods to determine the dominant frequency and achieved an accuracy rate of 96% for both methods. In [41], the authors differentiated Error-Related Potentials (ERP) in both online and offline conditions with 14 participants in a visual feedback task. Participants were shown red, blue, and green visual stimuli at periods of 500 ms, 700 ms, and 3000 ms. The results showed an accuracy rate of 81% using deep learning techniques. In another study [42], a system was designed to detect emotional parameters such as excitement, stress, focus, relaxation, and interest. Participants were shown a 15-min mathematics competition video to evoke excitement, attention, and focus. The researchers tested the collected experimental datasets using Naive Bayes and Linear Regression learning algorithms; the Linear Regression classifier achieved an accuracy rate of 62%, while the Naive Bayes classifier achieved 69%.
In a study investigating the impact of adjustable visual stimulus intensity, researchers [43] designed a system to examine the effects of LED brightness on evoking SSVEP in the brain. The LED frequencies were set to 7 Hz, 8 Hz, 9 Hz, and 10 Hz, the brightness levels were adjusted to 25%, 50%, 75%, and 100%, and the system was tested on 5 individuals. The study found that the highest median response was achieved with a brightness level of 75%, which provided the highest SSVEP responses for all five participants. However, the 75% brightness level, despite yielding the best response, was found to be uncomfortably high for system users. Additionally, the number of visual stimuli used in the system was quite large. Thus, the applied method and obtained results do not provide an effective solution.

3. Materials and Method

3.1. EEG Device

When designing BCI systems, the goal is to select a wearable, wireless, low-cost, and high-comfort EEG device for data recording. Besides, factors such as the number of channels, cost, setup time, ease of use, and the type of BCI application play a crucial role in selecting the EEG device [44]. The Emotiv Epoc X device has 14 channels. This system is powered by lithium batteries, providing active mobile use for up to approximately 12 hours. The device’s wireless capability and adjustability to head sizes facilitate its use. Its affordability, setup, and installation time of approximately 15 min also provide advantages for BCI systems. The EEG device is shown in Figure 3.
Figure 3 shows that the Emotiv Epoc X EEG device uses a special saline (saltwater) solution to reduce the impedance between the electrodes and the scalp. Using the saline solution means the participant does not need to shower after data collection, providing significant convenience for easy cleaning. The Emotiv Epoc X device offers users a 16-bit resolution with 128 SPS or 256 SPS options. The EmotivPRO application provided by Emotiv records and manages the data collected with the device. The electrode arrangement of the device is shown in Figure 4.
Figure 4 exhibits the electrode arrangement of the Emotiv Epoc X EEG device. The electrodes are positioned according to the international 10/20 system [45]. Electrodes labeled F3, F4, AF3, AF4, F7, and F8 are used to monitor the participant’s neural activity. Electrodes T7, T8, FC5, and FC6 are positioned for auditory, visual, and speech functions, while P8 and P7 electrodes measure perception and numerical processing states. Electrodes placed at the back of the head, labeled O2 and O1, are for visual perception, response, and memory. CMS and DRL are ground electrodes [18].

3.2. Power Spectral Density (PSD)

EEG signals recorded from the scalp using surface electrodes are captured as voltage and time series data through computers. Understanding and analyzing these signals in terms of time and voltage is quite challenging. Therefore, it is necessary to convert the data from the time domain to the frequency domain. This study used the Welch Power Spectral Density (PSD) method to transform the data into the frequency domain and identify frequency ranges.
In the Welch method, the length of the data is divided into K equal segments with an overlap. Each overlapping block has a length of L. The Welch estimate of the PSD is the average of the periodograms of the overlapping segments, as shown in Eq. (1).
$$P_{\text{welch}}(f) = \frac{1}{K \cdot L \cdot U} \sum_{i=0}^{K-1} \left| \sum_{n=0}^{L-1} x_i(n)\, \omega(n)\, e^{-j 2\pi f n} \right|^2 \qquad (1)$$

$$U = \frac{1}{L} \sum_{n=0}^{L-1} \omega^2(n) \qquad (2)$$
In Eq. (1), $K$ represents the number of overlapping segments, $L$ denotes their length, $N$ is the length of the signal, and $f$ is the frequency. Also, $x_i(n)$ is the input signal and $\omega(n)$ is the windowing function. In Eq. (2), $U$ represents the power of the window function used in Eq. (1) [46].
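As a minimal illustration of how a Welch PSD estimate can be obtained in practice, the MATLAB sketch below uses the pwelch function with a Hamming window; the sampling rate, window length, and overlap mirror the values reported in Section 4, while the placeholder signal and variable names are assumptions for illustration only.

% Minimal sketch: Welch PSD estimate of one EEG channel (MATLAB, Signal Processing Toolbox)
fs  = 256;                 % sampling rate used in this study (Hz)
x   = randn(768, 1);       % placeholder for one 3-s EEG segment (768 samples)
wl  = 637;                 % Hamming window length reported in Section 4
nov = 636;                 % window overlap reported in Section 4
[pxx, f] = pwelch(x, hamming(wl), nov, [], fs);   % pxx: PSD estimate, f: frequency vector (Hz)
plot(f, pxx); xlabel('Frequency (Hz)'); ylabel('PSD');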

3.3. Numerical Integration (Trapezoidal)

The present study used the trapezoidal numerical integration method to distinguish between data recorded with the LED (7 Hz) active and data recorded with the LED turned off. The trapezoidal rule is a commonly used method for calculating the area under curves in science and engineering applications. This method divides the integration data into small trapezoids and approximates the area to perform numerical integration. The general trapezoidal formula is given in Eq. (3).
$$\int_a^b f(x)\,dx \approx \frac{h}{2}\left[f(x_0) + 2\sum_{i=1}^{n-1} f(x_i) + f(x_n)\right] \qquad (3)$$
where $a$ and $b$ represent the integration interval of the function, $h$ denotes the width of each sub-trapezoid, and the points $(x_0, x_1, \ldots, x_n)$ are the equally spaced points within the interval.
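In this study the rule is applied to PSD values over selected frequency bands (Section 4). As a hedged sketch, assuming pxx and f come from a pwelch call such as the one in Section 3.2, the area under the PSD curve in a band can be computed with MATLAB's built-in trapz function:

% Minimal sketch: trapezoidal integration of a PSD estimate over one frequency band
band = (f >= 6) & (f <= 8);                 % 6-8 Hz band around the 7 Hz LED frequency
bandArea = trapz(f(band), pxx(band));       % area under the PSD curve in that band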

3.4. Normalization (Z-Score)

Normalization is a scaling method that makes the analyzed data easier to process and distinguish in machine learning models. Different methods (e.g., min-max, L2-norm) are used for normalizing data. In this study, the Z-score method was used for normalization. The Z-score normalization process (Eq. 4) involves adjusting the data to have a mean of zero and a standard deviation of one:
$$x' = \frac{x - \mu}{\sigma} \qquad (4)$$
where $x'$ is the normalized value, $x$ represents the data to be normalized, $\mu$ is the mean of the data, and $\sigma$ is the standard deviation of the data.
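Eq. (4) corresponds to a single line of MATLAB; the zscore function from the Statistics and Machine Learning Toolbox performs the same operation (the vector x below is an assumed placeholder):

% Minimal sketch: Z-score normalization of a feature vector (Eq. 4)
x = [12.4 9.7 15.1 11.0];            % placeholder feature values
xNorm = (x - mean(x)) ./ std(x);     % zero mean, unit standard deviation
% equivalently: xNorm = zscore(x);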

3.5. Analysis of Variance (ANOVA)

In the field of signal processing, features are extracted from datasets to distinguish between different classes. Since not all extracted features may be discriminative for the data, selecting or eliminating features may be necessary. In such cases where features are present, ANOVA is often used for selection and elimination. Most statistical software performs ANOVA on raw data [47]. ANOVA is a method used to test the variance differences between groups statistically. This method is expressed by Eq. (5):
$$\mathrm{SST} = \sum_{i=1}^{k} \sum_{j=1}^{n_i} \left(Y_{ij} - \hat{Y}\right)^2 \qquad (5)$$

where $Y_{ij}$ represents the $j$-th observation in group $i$, $\hat{Y}$ denotes the mean of all observations, $k$ indicates the number of groups, and $n_i$ represents the number of observations in group $i$.
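As a hedged sketch of how ANOVA can drive feature selection (used later in Section 4.3), each feature column can be tested against the class labels with a one-way ANOVA and the features ranked by their F-statistic; the placeholder data, variable names, and ranking scheme below are illustrative assumptions, not the authors' exact procedure.

% Minimal sketch: ranking feature columns by their one-way ANOVA F-statistic
X = randn(40, 160);                         % placeholder feature matrix (samples x features)
y = repmat({'c1'; 'c2'; 'c3'; 'c4'}, 10, 1); % placeholder class labels
nFeat = size(X, 2);
F = zeros(nFeat, 1);
for k = 1:nFeat
    [~, tbl] = anova1(X(:, k), y, 'off');   % 'off' suppresses the ANOVA table and box plot
    F(k) = tbl{2, 5};                       % F-statistic of feature k
end
[~, featRank] = sort(F, 'descend');         % most discriminative features first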

3.6. Performance Parameters of BCI Systems

3.6.1. Accuracy (Acc)

Accuracy is one of the most commonly used metrics in BCI systems. This metric is obtained by dividing the number of correctly identified examples by the total number of correctly and incorrectly identified examples. Acc is calculated using Eq. (6).
$$Acc = \frac{\text{correctly classified predictions}}{\text{total number of examples}} \qquad (6)$$
In the studies conducted, accuracy is frequently calculated using Hold-Out methods. This approach divides the data into training and test sets. The system is trained with the training set, and a model is created. The created model is then tested with the test set. Typically, the data are shuffled and repeated, with the average of the obtained values being taken. This approach ensures the stability of the system [8].

3.6.2. Information Transfer Rate (ITR)

ITR is a comprehensive parameter that provides information about the real-time usability of BCI systems. As an important performance metric, the ITR value is calculated using parameters such as the BCI system’s accuracy, the signal’s duration, and the number of classes. According to Shannon’s theory, ITR is expressed by Eq. (7) [48].
$$B_t = \log_2 K + p \log_2 p + (1 - p)\log_2\!\left(\frac{1-p}{K-1}\right) \qquad (7)$$

where $K$ denotes the number of choices available to the system user, and $p$ represents the system's accuracy. The ITR value is calculated using $B_t$ (Eq. 8).

$$ITR = \frac{60 \cdot B_t}{T} \qquad (8)$$
where T represents the time allocated for a single prediction to be recognized by the system during the classification phase [31].
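As a worked sketch of Eqs. (7) and (8), the values below use this study's four-class case and the 3-s segment length as an illustrative decision time; they roughly reproduce the magnitude of the ITR reported in Section 4 but are not the authors' exact computation.

% Minimal sketch: ITR in bits/min from Eqs. (7)-(8)
K = 4;        % number of selectable classes (four movement trajectories)
p = 0.9789;   % classification accuracy (RF result reported in Section 4)
T = 3;        % seconds per decision (3-s segments; illustrative assumption)
Bt  = log2(K) + p*log2(p) + (1 - p)*log2((1 - p)/(K - 1));
ITR = 60 * Bt / T;    % approximately 36 bits/min for these values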

3.7. Classification Algorithms

3.7.1. K-Nearest Neighbors (k-NN)

K-Nearest Neighbors (k-NN) algorithm is frequently used in BCI systems. This machine learning algorithm employs a simple approach where the class of an element to be classified is assigned based on the class of the nearest element to its value. The number of neighbors to be considered is determined by a pre-defined parameter k. Thus, the performance of the classifier is directly related to the value of k. Typically, k is chosen to be smaller than the square root of the total number of samples. The distance to the neighbors in k-NN is calculated using methods such as Euclidean (Eq. 9), Manhattan (Eq. 10), and Minkowski (Eq. 11) distances [49].
$$d_{xy} = \sqrt{\sum_{i=1}^{k} (x_i - y_i)^2} \qquad (9)$$

$$d_{xy} = \sum_{i=1}^{k} \left| x_i - y_i \right| \qquad (10)$$

$$d_{xy} = \left( \sum_{i=1}^{k} \left| x_i - y_i \right|^q \right)^{1/q} \qquad (11)$$

In Eqs. (9) to (11), $k$ represents the number of data points, $i$ the index of the data, and $d$ the distance.
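For completeness, Eqs. (9) to (11) map directly onto one-line MATLAB expressions for two feature vectors x and y; the vectors and the Minkowski order q below are illustrative values.

% Minimal sketch: the distance metrics of Eqs. (9)-(11)
x = [1 2 3]; y = [2 0 4];                % placeholder feature vectors
dEuclidean = sqrt(sum((x - y).^2));
dManhattan = sum(abs(x - y));
q = 3;                                   % illustrative Minkowski order
dMinkowski = sum(abs(x - y).^q)^(1/q);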

3.7.2. Support Vector Machine (SVM)

Support Vector Machine (SVM) is a widely used supervised learning method for classification tasks. This method aims to create a linear decision surface that provides the best separation between non-linearly separable data points. While some data can be linearly separated, other data cannot be. In such cases, kernel functions are used to map data into higher dimensions to achieve linear separation [50]. The study used the Gaussian Radial Basis Function (RBF) kernel function. The Gaussian RBF is formulated as shown in Eq. (12):
$$K(x_i, x_j) = \varphi(x_i) \cdot \varphi(x_j) \qquad (12)$$

where $x_i$ and $x_j$ are two data points. The weight vector of the decision surface created by the RBF kernel function is calculated using Eq. (13).

$$W = \sum_i a_i y_i \varphi(x_i) \qquad (13)$$
The classifier balances the trade-off between dimensionality and flexibility by minimizing Eq. (14) [50].
$$\left[ \frac{1}{n} \sum_{i=1}^{n} \max\!\left(0,\; 1 - y_i\left(w \cdot x_i - b\right)\right) \right] + \lambda \lVert w \rVert^2 \qquad (14)$$

3.7.3. Random Forest (RF)

The Random Forest (RF) classifier algorithm emerged as an alternative to Boosting. As an extension of Bagging [51], it offers advantages in training time compared to other algorithms. The algorithm's ease of application to parallel systems and high-dimensional data, along with its small number of parameters, are among its significant advantages over other classification methods. RF is a tree-based ensemble method in which each tree relies on a randomly selected subset of variables. In this context, a random vector of real-valued predictors with an unknown joint distribution with the target variable is considered. The main goal of the RF algorithm is to find a prediction function $f(x)$ for the target variable that minimizes the expected value of a loss function [52].

3.7.4. Linear Discriminant Analysis (LDA)

Linear Discriminant Analysis (LDA) is a commonly used machine-learning algorithm for distinguishing and classifying groups of attributes within a dataset. The algorithm aims to maximize the differences between attribute groups while minimizing the variations within each class [53]. In addition to classification, LDA is used for dimensionality reduction. The within-class and between-class scatter matrices of the algorithm are computed using Eq. (15):
$$S_w = \frac{\sum_{k} \sum_{i \in c_k} (x_i - m_k)(x_i - m_k)^T}{N}, \qquad S_b = \frac{\sum_{k} n_k (m_k - m)(m_k - m)^T}{N} \qquad (15)$$

where $\frac{1}{n_k}\sum_{i \in c_k} x_i$ is used to calculate the mean value $m_k$ of class $k$, and $m = \frac{1}{N}\sum_i x_i$ represents the mean of the dataset. The algorithm can be differentiated by applying a Gaussian Mixture Model to the training data. The obtained models can be used to classify examples of the classes represented in the training data, although they are not suitable for new classes [54].

3.8. Participants

The research participants were aged between 18 and 45 years (average 30.7) and were selected from individuals who voluntarily agreed to participate and had no health problems or dependencies (except smoking). The selected participants were thoroughly informed about the study before the experiments and were included after filling out a Volunteer Consent Form. The experiments were conducted with 10 participants, consisting of 4 females and 6 males. The study was conducted with the ethical approval of the Trabzon Kanuni Training and Research Hospital Medical Faculty, numbered 23618724.

3.9. Data Acquisition

The experimental phase was prepared by placing an LCD monitor with a refresh rate of 144 Hz, a response time of 1 ms, and a screen size of 27 inches (68 cm) on a flat table in an empty room. A light-emitting diode (LED) with a frequency of 7 Hz was mounted at the center of the upper part of the screen. A chair was positioned perpendicular to the screen at a distance of 120 cm from it for the subject. The subject’s horizontal and vertical eye angles, relative to the distance from the screen, are depicted in visual representations in Figure 5a,b, respectively.
The human eye has a curved structure with a visual field of 178° horizontally and 135° vertically. The limiting visual angle in the temporal direction is generally considered 105° [55]. An individual’s eye movements are crucial in eye movement-based systems, as more pronounced eye movements result in more meaningful recorded signals. However, excessively large eye angles indicate that the person is close to the visual target, which can be uncomfortable for the individual. In one study, individuals’ eye angles were 20-30° [56]. Table 2 presents the maximum eye movement angles resulting from the distance depicted in Figure 5 calculated for the X and Y axes.
In the human eye, there is an electrical potential called the corneoretinal potential (CRP) between the cornea and the retina: the cornea carries a positive electrical charge, while the retina carries a negative one. The CRP can be recorded as an EOG signal by placing electrodes around the eyes. Because EOG signals indicate eye position, they can be used to detect eye movements [57]. The angles between the four movement trajectories in the study were determined by considering the angle changes occurring on the X and Y axes of the eye. Only the horizontal axis angles of the eye are used in the right-left trajectory, and only the vertical axis angles are used in the up-down trajectory. In the right-cross trajectory, the horizontal and vertical axis angles of the eye are used together, while in the left-cross trajectory they are used in the direction opposite to the right-cross trajectory. The angular changes in the eye axes for the movement trajectories used are given in Table 3.
Many different movement trajectories were tested during the development of the current study. For example, signals were recorded for circular motion trajectory and triangular motion trajectory, and the recorded data were processed with the signal processing methods applied in the study. As a result of signal processing, it was seen that these two movements were not distinguishable by the BCI system since they had similar axis angles. As a result of the experiments, four trajectories that gave the best results were determined and used for the study. While trajectories with different axis motion angles can be classified by the system with high accuracy rates, it has been observed that trajectories with similar axis motion angles have a low separability rate.
After preparing the appropriate environment for the subject, the Emotiv EPOC X EEG device and the laptop to be used for data recording were set up in the room. The experimental room was isolated from external factors that could potentially distract the subject’s attention (e.g., noise and light).
Once the environment for data recording was prepared, participants were brought into the experimental room one by one. They were informed that neither the devices used in the study nor any of the experimental phases posed any health risk, and they were asked to fill out a Volunteer Consent Form. The next step was to fit the participant with the EEG device. The Emotiv EPOC X EEG device is wearable and is applied using a saline solution. After the device was fitted, the quality of the electrode signals was checked on the computer via the EmotivBCI application provided by Emotiv. Once the signal quality reached an adequate level (98%), the EEG headset was considered properly installed. The participant, now ready for the experiment, was instructed on the sequence in which to follow the white balls. The proposed approach involves four different movements for the participant to follow: moving right and left, moving up and down, moving diagonally to the right (right-cross), and moving diagonally to the left (left-cross). The sequence for applying the approach is exhibited in Figure 6.
In the initial data recording phase, the LED with a frequency of 7 Hz was activated (Figure 7a). While the LED was active, the experiment for the right-left movement began with a 3-s beep sound. A white ball, completing one full movement cycle per second, was displayed to the subject for 10 s, after which the recording was concluded with another 3-s beep sound. The experiment was repeated 10 times for the right-left movement in the same manner, with a 1-min rest period between recordings. The experiment was then conducted in the same way for the up-down, right-cross, and left-cross movements, each with 10 repetitions. The phases of the experiment with the light on and off are illustrated in Figure 7.
In the second phase of data recording, the 7 Hz LED was turned off (Figure 7b). The subject was first shown the right-left movement for 10 s, preceded by a 3-s start beep sound, and the recording was concluded with a 3-s end beep sound. The experiment was repeated 10 times for the right-left movement in the same manner. Recordings were then made for the other movements with 10 repetitions each, and the experiment was completed. In studies on eye movement-based BCIs [33,58,59], channels AF3, F7, F8, and AF4 (located on the brain's frontal lobe) are frequently used as active channels; in this study, these channels were considered likely to be effective.
Raw EEG data containing EOG artifacts recorded from the AF3, F7, F8, and AF4 channels are shown in Figure 8a for the up-down movement, Figure 8b for the left-cross movement, Figure 9a for the right-left movement, and Figure 9b for the right-cross movement. In the remainder of the paper, the data recorded with the light on are referred to as "illuminated data" and the data recorded with the light off as "non-illuminated data".
A total of 800 recordings were made throughout the experiments. The data were sampled at a frequency of 256 Hz. The recorded data were converted into .csv files using the EmotivPRO application from Emotiv. The converted data were then imported into the MATLAB (R2023b) environment for signal processing.

4. Results

In this study, 40 recordings were made for each subject in the light-on condition and 40 in the light-off condition during the data-recording phase. Overall, 80 recordings (each 16 s long) were obtained per subject, covering four different object trajectories with 10 repetitions each. All recorded data were transferred to the MATLAB environment, where the 3-s start and end beep sections were removed. As a result, raw data with a duration of 10 s (2560x14) were obtained. The raw data were segmented into 3-s segments (768x14) with 1-s overlap; consequently, five 3-s segments were obtained from each 10-s recording, resulting in 200 illuminated and 200 non-illuminated data segments per subject. The segmented 3-s illuminated and non-illuminated data were subjected to a fifth-order band-pass Butterworth filter in the 1-45 Hz range. The filtering process was performed using the filter command in MATLAB. An example of the unfiltered (a) and filtered (b) data from a randomly selected channel (AF3) is illustrated in Figure 10.
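A minimal MATLAB sketch of the trimming, segmentation, and filtering steps described above follows; the file name, the exact segment stepping, and the filter design call are illustrative assumptions rather than the authors' code (the text reports using the filter command here and filtfilt in the second stage).

% Minimal sketch: 3-s segmentation with overlap and 1-45 Hz Butterworth band-pass filtering
fs  = 256;                            % sampling rate (Hz)
eeg = readmatrix('record.csv');       % placeholder file name; one trimmed 10-s recording (2560 x 14)
segLen = 3*fs;                        % 3-s segments (768 samples)
step   = 2*fs;                        % illustrative stepping; see text for the exact segmentation
[b, a] = butter(5, [1 45]/(fs/2), 'bandpass');   % Butterworth band-pass design, 1-45 Hz
segments = {};
for s = 1:step:(size(eeg,1) - segLen + 1)
    seg = eeg(s:s+segLen-1, :);                  % 768 x 14 segment
    segments{end+1} = filter(b, a, seg);         % filter each channel (column-wise)
end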
The filtered, illuminated, and non-illuminated data were analyzed using the Welch PSD method. Windowing was performed using the Hamming method, with a window length (WL) of 637 and an overlap value (noverlap) of 636, selected through trial and error. The Welch frequency resolution was used to determine the frequency ranges for both illuminated and non-illuminated data: 0-4 Hz (delta), 4-8 Hz (theta), 8-13 Hz (alpha), 13-30 Hz (beta), and 30-45 Hz (gamma). For each identified frequency band, features including kurtosis, mean, skewness, trapz, entropy, variance, mobility, and complexity were extracted, resulting in 40 features.
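A hedged sketch of the band-wise feature extraction for one channel follows. The text does not fully specify whether the eight features are computed on the PSD samples within each band or on band-filtered time series, so the version below (features computed on the PSD values per band, with Hjorth-style definitions of mobility and complexity) is an assumption for illustration.

% Minimal sketch: eight features per frequency band from a Welch PSD estimate
fs = 256; [pxx, f] = pwelch(randn(768, 1), hamming(637), 636, [], fs);   % placeholder PSD
bands = [0 4; 4 8; 8 13; 13 30; 30 45];       % delta, theta, alpha, beta, gamma (Hz)
feats = [];
for bIdx = 1:size(bands, 1)
    idx = f >= bands(bIdx, 1) & f < bands(bIdx, 2);
    p   = pxx(idx);                           % PSD samples in this band
    pn  = p / sum(p);                         % normalized spectrum for the entropy term
    mob  = sqrt(var(diff(p)) / var(p));                       % Hjorth-style mobility (assumed definition)
    comp = sqrt(var(diff(diff(p))) / var(diff(p))) / mob;     % Hjorth-style complexity (assumed definition)
    feats = [feats, kurtosis(p), mean(p), skewness(p), trapz(f(idx), p), ...
             -sum(pn .* log2(pn + eps)), var(p), mob, comp];
end
% feats is a 1 x 40 vector (5 bands x 8 features) for one channel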

4.1. Channel Selection

Both illuminated and non-illuminated data were classified using RF, LDA, SVM, and k-NN classifiers across all channels to identify effective channels. The classification process was repeated 10 times, and the average accuracy values were computed. The results of the RF classifier, which provided the highest accuracy for both illuminated and non-illuminated data, are presented in Figure 11.
As can be inferred from Figure 11, channels AF3, F7, F8, and AF4 exhibit the highest accuracy rates for both illuminated and non-illuminated data. The locations of these channels on the head are shown in Figure 12.
From Figure 12, it is observed that the channels positioned around the eyes of the EEG device are effective for the study. Thus, these channels were selected as active channels for this study.

4.2. Eye Movement-Based BCI

Our BCI system consists of two sequential classification stages. In the first stage, the system differentiates between illuminated and non-illuminated data to identify the visual stimulus in the background. The illuminated data correctly predicted by the classifier in the first stage form the test set for the second stage, which is conducted to detect the movements (left-right, up-down, right-cross, and left-cross) within the correctly predicted data. The algorithm of the designed BCI system is presented in Figure 13.
In the first classification stage (i.e., classification of illuminated and non-illuminated data), a fifth-order band-pass Butterworth filter with a 1-15 Hz range was applied to illuminated and non-illuminated data. The filtered data were then processed using the PSD method with a Hamming window of 637 sample length and 636 overlap, values chosen through trial and error. Using Welch frequency resolution, the frequency ranges of 1-10 Hz and 6-8 Hz were identified. Trapezoidal features were extracted from the identified 6-8 Hz and 1-10 Hz ranges using the trapz command in MATLAB. The two trapezoidal features obtained from the two frequency bands were normalized by calculating their ratio (Eq. 16) to yield a single normalized trapezoidal feature.
$$z' = \frac{\mathrm{trapz}(\theta)}{\mathrm{trapz}(\rho)} \qquad (16)$$

where $\mathrm{trapz}(\theta)$ is the trapezoidal value for the 6-8 Hz frequency range and $\mathrm{trapz}(\rho)$ the trapezoidal value for the 1-10 Hz frequency range. The ratio of these two values was used to derive a single normalized trapezoidal feature.
In the designed BCI system, a potential pattern occurs in the 6-8 Hz band of the illuminated data, since the LED flickers in this range. Owing to this pattern, the normalized trapezoidal feature obtained from the ratio is greater than 1 for illuminated data. For non-illuminated data, since the LED is off, no such potential occurs in the 6-8 Hz range, and the normalized trapezoidal feature takes values close to 1. The trapezoidal features obtained in both cases are classified using machine learning algorithms. Figure 14 shows representative potentials occurring when the LED is off (a) and when the LED is on (b).
Looking at Figure 14a (LED off), no potential change is observed in the 6-8 Hz range, whereas in Figure 14b (LED on) a potential change in the 6-8 Hz range is evident. Whether the data are illuminated or non-illuminated is determined according to the magnitude of the trapezoidal value obtained by taking the ratio of the L1 and L2 distances, which represent the frequency ranges of the occurring potentials. Thus, the BCI system decides whether light is present in the area the user is looking at. In cases where light cannot be detected, the system does not activate, and the user is provided with a safe BCI system.
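A minimal sketch of the normalized trapezoidal feature of Eq. (16) follows, assuming pxx and f come from the pwelch step described above; the band limits follow the text, and the feature grows when a 7 Hz (LED-driven) peak appears in the 6-8 Hz band.

% Minimal sketch of Eq. (16): ratio of trapezoidal band areas from the Welch PSD
fs = 256; [pxx, f] = pwelch(randn(768, 1), hamming(637), 636, [], fs);   % placeholder PSD
idxTheta = f >= 6 & f <= 8;      % 6-8 Hz band containing the 7 Hz LED response
idxRho   = f >= 1 & f <= 10;     % 1-10 Hz reference band
zPrime = trapz(f(idxTheta), pxx(idxTheta)) / trapz(f(idxRho), pxx(idxRho));
% zPrime grows when a 7 Hz (LED-driven) peak dominates the 6-8 Hz band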
The 3-dimensional feature scatter plot created using the selected effective channels AF3, F8, and AF4 is illustrated in Figure 15.
In the first classification stage, a single feature was extracted per channel; combining the data obtained from the selected effective channels (AF3, F7, F8, and AF4) therefore yielded four features. The feature data were partitioned into 75% training and 25% test sets using the Hold-out method to classify illuminated versus non-illuminated data. In order to test the robustness of the system, the data of a randomly selected subject (subject-2) were evaluated with the machine learning algorithms using only limited training data. The accuracy rates obtained are shown in Table 4.
Examination of Table 4 shows that the accuracy rates obtained are within acceptable ranges, that there is no under-fitting or over-fitting, and that the number of trials is sufficient to demonstrate the method's robustness. After examining the robustness of the system, the first classification was performed using the RF, LDA, k-NN, and SVM classifiers. The classification processes were repeated 10 times, and the average results were computed to determine the accuracy rate for each subject. The accuracy results of the first classification stage are shown in Table 5 and compared in Figure 16. In Table 5, the illuminated data are given as class 1 and the non-illuminated data as class 2.
According to Table 5 and Figure 16, accuracy rates of 99.57%, 99.11%, 95.74%, and 98.22% were achieved for the RF, SVM, LDA, and k-NN algorithms, respectively. It was observed that the RF and SVM algorithms yielded relatively better results compared to the other methods. The classification results provided by these algorithms indicate that the approach successfully detects the visual stimulus used for safety purposes in the background system. In the first classification stage, the illuminated data correctly predicted by the classifiers served as the test data for the second classification stage.
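A hedged sketch of the first classification stage follows: the four normalized trapezoidal features (one per active channel) are split with a 75/25 Hold-out and classified, here with a Random Forest via MATLAB's TreeBagger; the placeholder data, number of trees, label format, and repetition loop are illustrative assumptions rather than the authors' exact settings.

% Minimal sketch: first-stage classification (illuminated vs. non-illuminated)
X = [randn(100, 4) + 1.5; randn(100, 4)];                          % placeholder trapezoidal features (AF3, F7, F8, AF4)
y = [repmat({'illuminated'}, 100, 1); repmat({'non-illuminated'}, 100, 1)];   % placeholder labels
nRep = 10; acc = zeros(nRep, 1);
for r = 1:nRep
    cv   = cvpartition(y, 'HoldOut', 0.25);                        % 75% training, 25% test
    rf   = TreeBagger(100, X(training(cv), :), y(training(cv)));   % Random Forest (tree count assumed)
    pred = predict(rf, X(test(cv), :));
    acc(r) = mean(strcmp(pred, y(test(cv))));                      % Eq. (6): correct / total
end
fprintf('Mean accuracy over %d repetitions: %.2f%%\n', nRep, 100*mean(acc));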
In the second classification stage, all accurately predicted illuminated data were processed using a fifth-order Butterworth filter within the 1-45 Hz range. The filtering process utilized the filtfilt function in MATLAB. Following the filtering, the Welch PSD method was applied to the filtered data using a Hamming window with a window length of 637 and an overlap value of 636. Using Welch frequency resolution, the frequency ranges of 0-4 Hz (delta), 4-8 Hz (theta), 8-13 Hz (alpha), 13-30 Hz (beta), and 30-45 Hz (gamma) were identified. For each identified frequency band, features including kurtosis, mean, skewness, trapz, entropy, variance, mobility, and complexity were extracted. As an example, Figure 17 presents a 3-dimensional feature scatter plot created using randomly selected entropy, mean, and skewness features from the randomly selected channel AF3.
Since 8 features were extracted for each frequency band, 40 features were obtained for all frequency bands. Combining the data from the identified active channels increased the number of features to 160. The feature data were divided into 75% training and 25% test sets using the Hold-out method. The correctly predicted data from the first classification stage were identified from the test set, while incorrectly predicted data were excluded from the test set. The data were classified using RF, LDA, k-NN, and SVM classifiers. The classification was repeated 10 times, and each subject’s accuracy rates and ITR values were calculated by averaging the results. The accuracy and ITR rates obtained from the classification are provided in Table 6 and compared in Figure 18. Table 6 shows up-down movement as class 1, right-left movement as class 2, left-cross movement as class 3, and right-cross movement as class 4.
Table 6 and Figure 18 reveal accuracy rates of 97.89%, 97.37%, 95.12%, and 90.39% and ITR of 36.75, 36.06, 33.4, and 28.01 (bits/min) for the RF, SVM, LDA, and k-NN algorithms, respectively. The RF classifier provided the best results, while the SVM classifier yielded very close results. In comparison, LDA and k-NN performed relatively lower than the other classifiers. In the classification process for distinguishing illuminated data in the second stage, the RF classifier demonstrated the best performance with an accuracy rate of 97.89% and an ITR value of 36.75 bits/min.

4.3. Feature Selection

For the classification of illuminated data, feature selection was performed using the ANOVA method among the 160 features. The best features were identified for each subject. The classification was then performed using the RF algorithm, starting from the best feature and including the next one iteratively. Accuracy rate graphs based on the number of features for each subject are provided at the end of the study (Figure A1). The graph for the top 20 features for each subject is depicted in Figure 19.
Figure 19 shows that the features numbered 82 and 83 from channel F8 (i.e., the mean and skewness parameters of the delta band) were identified as the best features for all participants. Additionally, the features numbered 92 and 93 from channel F8, corresponding to the theta band’s trapezoidal and entropy features, are among the top 20 features for 9 participants. Moreover, features 42, 43, 52, and 53 from channel F7, corresponding to the mean and skewness of the delta band and the trapezoidal and entropy of the theta band, were identified as the best features for 9 participants. In conclusion, the study indicates that the Delta band is the most dominant, with the best features being mean and skewness. The trapezoidal and entropy features and the theta band are also prominent after the delta band.
A new feature set was created using the mean and skewness features from the delta band and the trapezoidal and entropy features from the theta band, as identified through the ANOVA feature selection method. Since each channel contains 4 features, the total feature set consists of 16 features. The second classification stage was repeated using the new feature set. Accuracy rates and ITR values obtained from classification results are presented in Table 7 and Figure 20. In Table 7, up-down movement is given as class 1, right-left movement as class 2, left-cross movement as class 3, and right-cross movement as class 4.
Table 7 and Figure 20 indicate that the accuracy rate of the k-NN classifier increased markedly, from 90.39% to 93.93%, with the ITR rising from 28.01 to 32.04 bits/min. The RF and LDA classifiers yielded very similar values before and after ANOVA. The SVM classifier's accuracy rate, however, decreased to 96.53%, and its ITR declined from 36.06 to 34.98 bits/min. Overall, the number of features was reduced from 160 to 16, and the ANOVA feature selection method proved effective for the present study.
The general performance of the system, including both the classification-1 and classification-2 stages, is given in Table 8. The overall accuracy rate was calculated by dividing the number of illuminated data correctly predicted in the second classification stage by the total number of illuminated data detected in the first classification stage. As Table 8 shows, the designed BCI system achieves its best performance with the RF classifier, with an accuracy rate of 97.42% and an ITR of 35.75 bits/min.
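As a short illustration of this cascaded accuracy measure, the toy calculation below uses invented counts, not data from the study:

```python
# Toy illustration of the stage-1 x stage-2 accuracy used in Table 8
# (the trial counts below are invented for this example only).
n_detected_stage1 = 960   # illuminated trials that stage 1 passed on to stage 2
n_correct_stage2 = 935    # of those, correctly classified in stage 2
overall_acc = 100.0 * n_correct_stage2 / n_detected_stage1
print(f"Overall system accuracy: {overall_acc:.2f}%")   # -> 97.40%
```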

5. Discussion and Future Work

A review of studies in the field shows that a significant part of the research on visual evoked potential-based BCI systems focuses on system performance [60]. However, the visual stimuli used in these systems seriously threaten users' eye health, reduce comfort, and shorten usage time [9,10]. In some studies, subjects could not even complete the experimental sessions because of the negative effects of the visual stimuli [29]. Visual stimulus-based BCI systems are also unsuitable for individuals with inadequate eye health, as these systems require the ability to perceive the frequency of the visual stimuli. Additionally, while the EEG devices used in these systems are more cost-effective than many other recording modalities (such as MEG and fMRI), they are still not affordable enough to be widely adopted, and they remain inadequate in terms of usability and comfort [7].
This study proposed an innovative hybrid approach to minimize the negative effects of the visual stimuli used in visual stimulus-based BCI systems on the user. Instead of visual stimuli, four moving objects (white balls), each following a different trajectory, were used. The moving objects were classified using the EOG artifacts present in the recorded EEG signals. However, such a system may be activated unintentionally when the user happens to make the same eye movement as one of the moving objects, a problem also observed in many other eye movement-based BCI studies [32,33,34]. Although various actions (for example, sequential blinking or looking in a specific direction) have been tried to solve this problem, these methods complicate the use of the system, make it harder for users to focus, and increase error rates. As a solution, in the developed system the user activates the system through the 7 Hz LED placed at the top of the computer screen, without being exposed to the negative effects of a visual stimulus and without having to focus on it. When EEG data are received, the system checks for the presence of the background LED via the SSVEP method. Detection of the LED indicates that the user made a conscious eye movement to operate the system; when the LED is not detected, the eye movements are judged to be independent of the system and the system remains inactive. Since there is only one LED, the overall stimulus brightness is low and the user does not have to focus directly on the LED. Thus, the system prevents users from experiencing a visual flash sensation and allows them to safely control it with moving objects without focusing on any visual stimulus.
In summary, the moving objects were controlled using the EOG artifacts contained in the EEG signals, while the LED placed in the background was detected using the SSVEP method. Classifying moving objects through EOG artifacts is proposed as an alternative to the visual stimuli of conventional visual stimulus-based BCI systems. The aim is thereby to address the glare sensation caused by visual stimuli, the resulting loss of comfort and usability, the eye disorders caused by long-term use, and the reduction in usable system time. The background LED and the SSVEP method, in turn, allow the system to be activated and deactivated safely by the user. The reliability of this activation stage is important, since a system that makes wrong activation decisions would clearly create serious difficulties for paralyzed individuals. The approach also provides an alternative to activation methods commonly used in the field, such as repeated blinking.
Table 9 provides basic information on some visual stimulus-based studies using the Emotiv Epoc EEG device.
As can be seen in previous studies [35,61,64], the number of subjects is often small relative to the general standard, and the number of subjects directly affects the reported performance metrics. Compared to these studies, the number of subjects used in this study is moderate. The accuracy rate is among the most critical parameters for the usability of such systems, and as shown in Table 8, the accuracy rate obtained in this study can be considered effective. Additionally, in some studies [62,64,65,66] the ITR values are either not reported or are low. The ITR reflects the speed of the system and is a key parameter for assessing its practical applicability; the ITR of 36.7 bits/min achieved in this study compares favorably with the values reported in the studies reviewed.
During the experiments, the illuminated data were recorded without requiring any focus on, or eye contact with, visual stimuli. Nevertheless, the visual stimulus still caused some discomfort for the subjects over time: although this study greatly reduces the reliance on visual stimuli, they can still be bothersome in various ways. Furthermore, the Emotiv EPOC X EEG device has its own disadvantages, including the excessive pressure it applies to the user's head when worn for more than 30 minutes, the inability to adjust the device to different head sizes, electrode oxidation, and electrodes that dry out continuously. In addition, the device's sensor contact points cannot be altered, leading to variability in contact points for individuals with different head sizes. Despite these issues, the device was preferred for its advantages, such as affordability, portability, long battery life, ease of use and setup, and the absence of any cleaning requirement after use.
In BCI systems, using fewer EEG channels makes the system more efficient. According to the literature [61,65], researchers often increase the number of channels to enhance accuracy rates. In contrast, the current study utilized only channels F7, AF3, AF4, and F8 of the Emotiv EPOC X EEG device. Future research could further reduce the number of channels by evaluating the performance of effective waveforms for each channel.
In this study, the number of visual stimuli was reduced to one, eliminating the need for the user to focus on the stimulus. However, the monitor used to display the moving objects emits light and is uncomfortable and impractical because it must remain constantly in front of the patient. Moreover, almost all of the studies reviewed consist of single-session or single-day recordings, whereas a practical system should produce consistent responses for the same user on different days; in other words, a model trained on data recorded from Person A on the first day should remain valid for data recorded from the same person on the second day. Another limitation is that the designed system has not yet been tested in real time. As this work represents an initial stage, real-time operation is planned for the next study, and future research will address the issues identified here.

6. Conclusions

This study proposes a hybrid BCI system that uses the SSVEP and EOG methods to minimize the negative effects of visual stimulus-based BCI systems on the user. Visual stimuli can be quite detrimental to the user's eye health because they require direct focus during system use, which also degrades comfort and reduces usage time. To address these problems, the study proposes an innovative approach that uses objects (white balls) moving along different trajectories instead of visual stimuli. The white balls elicit EOG artifact potentials within the EEG signals, and the moving objects are classified from these artifacts. The study also acknowledges the main drawback of this approach: since such systems are based on eye movements, they can be activated by eye movements unrelated to the system. This problem is solved through the SSVEP method, using a single low-brightness 7 Hz LED placed in the upper middle part of the screen, on which the user does not need to focus. The LED is used to detect whether the user is looking at the screen containing the moving objects: when the user looks at the screen, the SSVEP response is elicited and the system is activated; when the screen is not being viewed, the system is not triggered and remains disabled. This ensures that the user can operate the system safely.
In the study, a two-stage classification process was applied with 10 healthy subjects. In the first classification stage, the SSVEP response elicited by the LED was used to check whether the LED was active; the system does not react when the LED is inactive, and the second classification stage starts only when it is active. In the second classification stage, the recorded EOG signals were classified using machine learning algorithms. In the experiments, the RF algorithm achieved an accuracy rate of 99.57% in the first classification stage and 97.89% with an ITR of 36.75 bits/min in the second. Considering the overall performance of the system across both stages, the RF algorithm again gave the best result, with an accuracy rate of 97.42% and an ITR of 35.75 bits/min. The study also identified the effective channels and wavebands for the proposed system. The proposed hybrid BCI system eliminates the need to focus on visual stimuli and reduces the number of visual stimuli in VEP-based BCI systems to a minimal and manageable level. In addition, the study offers an innovative perspective to the field by showing that visual stimulus-based BCI systems can be operated in a different way, through moving balls, without requiring direct focus on the LED.

Funding

This work was supported by “Tokat Gaziosmanpasa University Scientific Research Projects (2023/86)”.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Data Availability

The data used in this study were recorded at Gumushane University Vocational School with ethical approval number 23618724 from the Trabzon Kanuni Training and Research Hospital Medical Faculty. Upon request, the data may be shared with the requesting party by the corresponding author.

Appendix A

Figure A1. Stepwise ANOVA score accuracy rates for all subjects.

References

  1. R. T. Hammond, “Intense ultrashort electromagnetic pulses and the equation of motion,” arXiv physics/0607285., 2006.
  2. X. Zhang et al., “Class 1 integronase gene and tetracycline resistance genes tetA and tetC in different water environments of Jiangsu Province, China,” Springer, vol. 18, no. 6, pp. 652–660, Aug. 2009. [CrossRef]
  3. G. Rangaswamy, S. Raghuwanshi, and G. Rout, “Optimum borehole gamma ray log signal processing-a cubic spline based weighting,” 2009.
  4. D. Regan, “Evoked Potentials and Evoked Magnetic Fields,” 1989.
  5. S. M. Fernandez-Fraga, M. A. Aceves-Fernandez, and J. C. Pedraza-Ortega, “EEG data collection using visual evoked, steady state visual evoked and motor image task, designed to brain computer interfaces (BCI) development,” Data Brief, vol. 25, Aug. 2019. [CrossRef]
  6. M. N. Fakhruzzaman, E. Riksakomara, and H. Suryotrisongko, “EEG Wave Identification in Human Brain with Emotiv EPOC for Motor Imagery,” Procedia Comput Sci, vol. 72, pp. 269–276, 2015. [CrossRef]
  7. M. Duvinage, T. Castermans, M. Petieau, T. Hoellinger, G. Cheron, and T. Dutoit, “Performance of the Emotiv Epoc headset for P300-based applications,” Biomed Eng Online, vol. 12, no. 1, Jun. 2013. [CrossRef]
  8. T. Kayikcioglu, M. Maleki, and N. Manshouri, “A New Brain-Computer Interface System Based on Classification of the Gaze on Four Rotating Vanes,” Article in International Journal of Computer Science and Information Security, vol. 15, no. 2, pp. 437–443, 2017.
  9. Y. Dong and S. Tian, “A large database towards user-friendly SSVEP-based BCI,” Brain Science Advances, vol. 9, no. 4, pp. 297–309, Dec. 2023. [CrossRef]
  10. B. Allison, “The i of BCIs: Next generation interfaces for brain-computer interface systems that adapt to individual users,” Lecture Notes in Computer Science, vol. 5611, no. 2, pp. 558–568, 2009. [CrossRef]
  11. Bashashati, M. Fatourechi, R. K. Ward, and G. E. Birch, “A survey of signal processing algorithms in brain-computer interfaces based on electrical brain signals,” J Neural Eng, vol. 4, no. 2, Jun. 2007. [CrossRef]
  12. H. Bashashati, R. K. Ward, G. E. Birch, and A. Bashashati, “Comparing Different Classifiers in Sensory Motor Brain Computer Interfaces,” PLoS One, vol. 10, no. 6, Jun. 2015. [CrossRef]
  13. M. Fatourechi, A. Bashashati, R. K. Ward, and G. E. Birch, “EMG and EOG artifacts in brain computer interface systems: A survey,” Clinical Neurophysiology, vol. 118, no. 3, pp. 480–494, Mar. 2007. [CrossRef]
  14. Y. Zhu, Y. Li, J. Lu, and P. Li, “A Hybrid BCI Based on SSVEP and EOG for Robotic Arm Control,” Front Neurorobot, vol. 14, Nov. 2020. [CrossRef]
  15. X. Liu, B. Hu, Y. Si, and Q. Wang, “The role of eye movement signals in non-invasive brain-computer interface typing system.,” Med Biol Eng Comput, vol. 62, no. 7, pp. 1981–1990, Jul. 2024. [CrossRef]
  16. D. Kamińska, K. Smółka, and G. Zwoliński, “Detection of Mental Stress through EEG Signal in Virtual Reality Environment,” Electronics 2021, Vol. 10, Page 2840, vol. 10, no. 22, p. 2840, Nov. 2021. [CrossRef]
  17. F. Akram, S. M. Han, and T. S. Kim, “An efficient word typing P300-BCI system using a modified T9 interface and random forest classifier,” Comput Biol Med, vol. 56, pp. 30–36, Jan. 2015. [CrossRef]
  18. N. Williams, G. McArthur, and N. B. BioRxiv, “10 years of EPOC: A scoping review of Emotiv’s portable EEG device,” biorxiv.org, 2020. [CrossRef]
  19. S. Ge, Y. Jiang, P. Wang, H. Wang, and W. Zheng, “Training -Free Steady-State Visual Evoked Potential Brain-Computer Interface Based on Filter Bank Canonical Correlation Analysis and Spatiotemporal Beamforming Decoding,” IEEE Trans Neural Syst Rehabil Eng, vol. 27, no. 9, pp. 1714–1723, Sep. 2019. [CrossRef]
  20. K. Friganovic, M. Medved, and M. Cifrek, “Brain-computer interface based on steady-state visual evoked potentials,” 2016 39th International Convention on Information and Communication Technology, Electronics and Microelectronics, MIPRO 2016 - Proceedings, pp. 391–395, Jul. 2016. [CrossRef]
  21. X. Chen, Y. Wang, M. Nakanishi, X. Gao, T. P. Jung, and S. Gao, “High-speed spelling with a noninvasive brain-computer interface,” Proc Natl Acad Sci U S A, vol. 112, no. 44, pp. E6058–E6067, Nov. 2015. [CrossRef]
  22. X. Mai, J. Ai, M. Ji, X. Zhu, and J. Meng, “A hybrid BCI combining SSVEP and EOG and its application for continuous wheelchair control,” Biomed Signal Process Control, vol. 88, p. 105530, Feb. 2024. [CrossRef]
  23. Kubacki, “Use of Force Feedback Device in a Hybrid Brain-Computer Interface Based on SSVEP, EOG and Eye Tracking for Sorting Items,” Sensors 2021, Vol. 21, Page 7244, vol. 21, no. 21, p. 7244, Oct. 2021. [CrossRef]
  24. J. Zhang, S. Gao, K. Zhou, Y. Cheng, and S. Mao, “An online hybrid BCI combining SSVEP and EOG-based eye movements,” Front Hum Neurosci, vol. 17, p. 1103935, Feb. 2023. [CrossRef]
  25. D. Saravanakumar and M. Ramasubba Reddy, “A virtual speller system using SSVEP and electrooculogram,” Advanced Engineering Informatics, vol. 44, p. 101059, Apr. 2020. [CrossRef]
  26. S. Sadeghi and A. Maleki, “A comprehensive benchmark dataset for SSVEP-based hybrid BCI,” Expert Syst Appl, vol. 200, Aug. 2022. [CrossRef]
  27. V. Asanza et al., “SSVEP-EEG Signal Classification based on Emotiv EPOC BCI and Raspberry Pi,” IFAC-PapersOnLine, vol. 54, no. 15, pp. 388–393, Jan. 2021. [CrossRef]
  28. Chiuzbaian, J. Jakobsen, and S. Puthusserypady, “Mind Controlled Drone: An Innovative Multiclass SSVEP based Brain Computer Interface,” 7th International Winter Conference on Brain-Computer Interface, BCI 2019, Feb. 2019. [CrossRef]
  29. C. P. Brennan, P. J. McCullagh, L. Galway, and G. Lightbody, “Promoting autonomy in a smart home environment with a smarter interface,” Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, vol. 2015-November, pp. 5032–5035, Nov. 2015. [CrossRef]
  30. S. Kondo and H. Tanaka, “High-frequency SSVEP–BCI with less flickering sensation using personalization of stimulus frequency,” Artif Life Robot, vol. 28, no. 4, pp. 803–811, Nov. 2023. [CrossRef]
  31. M. Melek, N. Manshouri, and T. Kayikcioglu, “Low-cost brain-computer interface using the emotiv epoc headset based on rotating vanes,” Traitement du Signal, vol. 37, no. 5, pp. 831–837, 2020, doi: 10.18280/ts.370516.
  32. T. Saichoo, P. Boonbrahm, and Y. Punsawad, “Investigating User Proficiency of Motor Imagery for EEG-Based BCI System to Control Simulated Wheelchair,” Sensors 2022, Vol. 22, Page 9788, vol. 22, no. 24, p. 9788, Dec. 2022. [CrossRef]
  33. B. V. Ngo, T. H. Nguyen, and T. N. Nguyen, “EEG Signal-Based Eye Blink Classifier Using Convolutional Neural Network for BCI Systems,” Proceedings - 2021 15th International Conference on Advanced Computing and Applications, ACOMP 2021, pp. 176–180, 2021. [CrossRef]
  34. G. Dimitrov et al., “Increasing the Classification Accuracy of EEG based Brain-computer Interface Signals,” Proceedings - International Conference on Advanced Computer Information Technologies, ACIT, pp. 386–390, 2020. [CrossRef]
  35. O. Selvi, A. Ferikoglu, and D. Guzel, “Comparing the stimulus time of the P300 Based Brain Computer Interface Systems with the Deep Learning Method,” ISMSIT 2018 - 2nd International Symposium on Multidisciplinary Studies and Innovative Technologies, Proceedings, Dec. 2018. [CrossRef]
  36. S. Mouli and R. Palaniappan, “DIY hybrid SSVEP-P300 LED stimuli for BCI platform using EMOTIV EEG headset,” HardwareX, vol. 8, Oct. 2020. [CrossRef]
  37. Yayik and Y. Kutlu, “Beyin Bilgisayar Arayüzü Tabanli Görsel Tespit Sistemi,” 2017 25th Signal Processing and Communications Applications Conference, SIU 2017, Jun. 2017. [CrossRef]
  38. T. Mwata-Velu, J. Ruiz-Pinales, H. Rostro-Gonzalez, M. A. Ibarra-Manzano, J. M. Cruz-Duarte, and J. G. Avina-Cervantes, “Motor Imagery Classification Based on a Recurrent-Convolutional Architecture to Control a Hexapod Robot,” Mathematics 2021, Vol. 9, Page 606, vol. 9, no. 6, p. 606, Mar. 2021. [CrossRef]
  39. E. Kaya and I. Saritas, “Identifying optimal channels and features for multi-participant motor imagery experiments across a participant’s multi-day multi-class EEG data,” Cogn Neurodyn, vol. 18, no. 3, pp. 987–1003, Jun. 2024. [CrossRef]
  40. Trigui, W. Zouch, and M. Ben Messaoud, “A comparison study of SSVEP detection methods using the Emotiv Epoc headset,” 16th International Conference on Sciences and Techniques of Automatic Control and Computer Engineering, STA 2015, pp. 48–53, Jul. 2016. [CrossRef]
  41. D.-M. Ancau, M. Ancau, and M. Ancau, “Deep-learning online EEG decoding brain-computer interface using error-related potentials recorded with a consumer-grade headset,” Biomed Phys Eng Express, vol. 8, no. 2, p. 025006, Jan. 2022. [CrossRef]
  42. M. J. H. Faruk, M. Valero, and H. Shahriar, “An Investigation on Non-Invasive Brain-Computer Interfaces: Emotiv Epoc+ Neuroheadset and Its Effectiveness,” Proceedings - 2021 IEEE 45th Annual Computers, Software, and Applications Conference, COMPSAC 2021, pp. 580–589, Jun. 2022. [CrossRef]
  43. S. Mouli and R. Palaniappan, “Eliciting higher SSVEP response from LED visual stimulus with varying luminosity levels,” 2016 International Conference for Students on Applied Engineering, ICSAE 2016, pp. 201–206, Jan. 2017. [CrossRef]
  44. J. Jiang, Z. Zhou, E. Yin, and Y. Yu, “Hybrid Brain-Computer Interface (BCI) based on the EEG and EOG signals,” Bio-medical materials and engineering, 2014, 2014. [CrossRef]
  45. F. Sharbrough, G. E. Chatrian, and R. Lesser, “American Electroencephalographic Society guidelines for standard electrode position nomenclature,” 1991. [CrossRef]
  46. S. Tiwari, S. Goel, and A. Bhardwaj, “MIDNN-a classification approach for the EEG based motor imagery tasks using deep neural network,” Applied Intelligence, vol. 52, no. 5, pp. 4824–4843, Mar. 2022. [CrossRef]
  47. L. St and S.- Wold, “Analysis of variance (ANOVA),” Chemometrics and intelligent laboratory, Elsevier, 1989.
  48. J. Wolpaw, H. Ramoser, and M. DJ, “EEG-based communication: improved accuracy by response verification,” IEEE transactions on Rehabilitation Engineering, 1998.
  49. P. Mulak and N. T.-Int. J. Res, “Analysis of distance measures using k-nearest neighbor algorithm on kdd dataset,” International Journal of Science and Research (IJSR), vol. 4, no. 7, pp. 2101–2104, 2015.
  50. C. Cortes and V. Vapnik, “Support-vector networks,” Mach Learn, vol. 20, no. 3, pp. 273–297, Sep. 1995. [CrossRef]
  51. L. Breiman, “Bagging predictors,” Mach Learn, vol. 24, no. 2, pp. 123–140, 1996. [CrossRef]
  52. C. Zhang and Y. Ma, “Ensemble Machine Learning,” 2005. [CrossRef]
  53. C. Bishop, “Neural networks for pattern recognition,” 1995.
  54. S. Ioffe, “Probabilistic linear discriminant analysis,” Lecture Notes in Computer Science, vol. 3954 LNCS, pp. 531–542, 2006. [CrossRef]
  55. M. J. Simpson, “Mini-review: Far peripheral vision,” Vision Res, vol. 140, pp. 96–105, Nov. 2017. [CrossRef]
  56. T. Roescher, B. Randles, and J. Welcher, “Estimation of Seated Driver Eye Height based on Standing Height, Weight, Seatback Angle, and Seat Bottom Angle,” SAE Technical Papers, Apr. 2023. [CrossRef]
  57. F. Fang and T. Shinozaki, “Electrooculography-based continuous eye-writing recognition system for efficient assistive communication systems,” PLoS One, vol. 13, no. 2, Feb. 2018. [CrossRef]
  58. P. Bobrov, A. Frolov, C. Cantor, I. Fedulova, M. Bakhnyan, and A. Zhavoronkov, “Brain-computer interface based on generation of visual images,” PLoS One, vol. 6, no. 6, 2011. [CrossRef]
  59. T. Saichoo, P. Boonbrahm, and Y. Punsawad, “Facial-machine interface-based virtual reality wheelchair control using EEG artifacts of emotiv neuroheadset,” ECTI-CON 2021 - 2021 18th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology: Smart Electrical System and Technology, Proceedings, pp. 781–784, May 2021. [CrossRef]
  60. K. Värbu, N. Muhammad, and Y. Muhammad, “Past, Present, and Future of EEG-Based BCI Applications,” Sensors, vol. 22, no. 9, May 2022. [CrossRef]
  61. S. M. Sarhan, M. Z. Al-Faiz, and A. M. Takhakh, “EEG-Based Control of a 3D-Printed Upper Limb Exoskeleton for Stroke Rehabilitation,” International Journal of Online and Biomedical Engineering (iJOE), vol. 20, no. 09, pp. 99–112, Jun. 2024. [CrossRef]
  62. S. Tiwari, S. Goel, and A. Bhardwaj, “Classification of imagined speech of vowels from EEG signals using multi-headed CNNs feature fusion network,” Digit Signal Process, vol. 148, p. 104447, May 2024. [CrossRef]
  63. K. Glavas, K. D. Tzimourta, A. T. Tzallas, N. Giannakeas, and M. G. Tsipouras, “Empowering Individuals With Disabilities: A 4-DoF BCI Wheelchair Using MI and EOG Signals,” IEEE Access, vol. 12, pp. 95417–95433, 2024. [CrossRef]
  64. A. Al-Hamadani, M. J. Mohammed, and S. M. Tariq, “Normalized deep learning algorithms based information aggregation functions to classify motor imagery EEG signal,” Neural Comput Appl, vol. 35, no. 30, pp. 22725–22736, Oct. 2023. [CrossRef]
  65. S. A. Kabir, F. Farhan, A. A. Siddiquee, O. L. Baroi, T. Marium, and J. Rahimi, “Effect of Input Channel Reduction on EEG Seizure Detection,” Przeglad Elektrotechniczny, vol. 2023, no. 12, pp. 195–200, 2023. [CrossRef]
  66. S. N. Syakiylla Sayed Daud, R. Sudirman, and T. Wee Shing, “Safe-level SMOTE method for handling the class imbalanced problem in electroencephalography dataset of adult anxious state,” Biomed Signal Process Control, vol. 83, p. 104649, May 2023. [CrossRef]
  67. M. T. Irshad et al., “Wearable-based human flow experience recognition enhanced by transfer learning methods using emotion data,” Comput Biol Med, vol. 166, Nov. 2023. [CrossRef]
  68. Ç. Uyulan, A. E. Gümüş, and Z. Güleken, “EEG-induced Fear-type Emotion Classification Through Wavelet Packet Decomposition, Wavelet Entropy, and SVM,” Hittite Journal of Science and Engineering, vol. 9, no. 4, pp. 241–251, Dec. 2022. [CrossRef]
Figure 1. The basic scheme of BCI systems.
Figure 2. General schematic of the designed BCI system.
Figure 3. Emotiv Epoc X EEG device.
Figure 4. Electrode locations of the Emotiv Epoc X EEG device.
Figure 5. Representative maximum gaze angles formed along the subject’s X (a) and Y (b) axes.
Figure 6. Stages of applying the approach.
Figure 7. The EEG signal recording stage under light on (a) and light off (b) conditions.
Figure 8. Up-down movement (left) and left-cross movement (right) raw signals of AF3, F7, F8, AF4 channels caused by EOG artifacts in EEG signals.
Figure 9. Right-left movement (left) and right-cross movement (right) raw signals of AF3, F7, F8, AF4 channels caused by EOG artifacts in EEG signals.
Figure 10. (a) Unfiltered and (b) filtered data examples for channel AF3.
Figure 11. Classification accuracy rates of illuminated and non-illuminated data using all channels with an RF classifier.
Figure 12. The locations of channels AF3, F7, F8, and AF4 on the head.
Figure 13. Algorithm of the designed BCI system.
Figure 14. SSVEP potentials occurring in the LED off (a) and LED on (b) positions.
Figure 15. 3D feature scatter plot created using channels AF3, F8, and AF4.
Figure 16. Classification accuracy rate results for illuminated and non-illuminated data.
Figure 17. 3D scatter plot of entropy, skewness, and mean features for illuminated data from channel AF3.
Figure 18. Classification results of illuminated data.
Figure 19. When the 160 features are ranked according to their ANOVA scores, the number of occurrences of the top 20 features with the highest ANOVA scores in the participants.
Figure 20. Comparison of the results of the illuminated (moving objects) classification stage before and after ANOVA accuracy rate and ITR value.
Table 1. Characteristics of EEG wave bands.
Wave Frequency Range (Hz) Amplitude Range (μV)
Delta (δ) 0.5-4 1-120
Theta (θ) 4-7 20-100
Alpha (α) 7-12 30-50
Beta (β) 12-30 5-30
Gamma (γ) 30+ variable
Table 2. Maximum eye angles of the subject on the X-Y axes as a function of Z distance.
Axis X Y Z
Distance (cm) 61 34 120
Gaze Angle (°) 28.5 16.1 -
Table 3. Angle changes caused by movement trajectories in the eye axes.
Trajectory X axis Y axis
Right-left right left right left
+14.2 -14.2 0 0
Up-down up down up down
0 0 +8.1 -8.1
Right-cross top right bottom left top right bottom left
+14.2 -14.2 +8.1 -8.1
Left-cross top left bottom right top left bottom right
-14.2 +14.2 +8.1 -8.1
Table 4. Classification accuracy rate results of Holdout test and training data of a randomly selected subject (subject-2) to measure the robustness of the system.
Classifier Holdout data Trial -1 Trial-2 Trial-3 Trial-4 Trial-5 Trial-6 Trial-7 Trial-8 Trial-9 Trial-10 Avg. (%)
SVM train 100.0 99.34 100.0 100.0 100.0 100.0 99.34 100.0 100.0 99.34 99.80
test 100.0 100.0 97.92 97.92 97.92 100.0 97.92 97.92 100.0 97.92 98.75
k-NN train 100.0 98.02 95.83 98.02 98.02 98.02 100.0 100.0 100.0 99.34 98.72
test 95.83 89.58 91.67 93.75 93.75 93.75 93.75 95.83 95.83 97.92 94.16
LDA train 98.68 99.34 99.34 100.0 99.34 99.34 98.68 99.34 100.0 99.34 99.34
test 97.92 100.0 100.0 100.0 100.0 97.92 97.92 97.92 97.92 97.92 98.75
RF train 99.34 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0 99.93
test 100.0 100.0 95.83 100.0 95.83 97.92 97.92 97.92 100.0 100.0 98.54
Table 5. Classification accuracy rates of illuminated and non-illuminated data using all features of channels AF3, F7, F8, and AF4.
Classifier Classes Sub-1 Sub-2 Sub-3 Sub-4 Sub-5 Sub-6 Sub-7 Sub-8 Sub-9 Sub-10 Avg. (%)
SVM Class 1 100.0 100.0 99.37 100.0 99.68 97.50 100.0 100.0 100.0 99.37 99.59
Class 2 97.08 100.0 100.0 100.0 100.0 100.0 95.20 95.41 98.75 100.0 98.64
Acc. 98.54 100.0 99.68 100.0 99.37 98.75 97.60 97.70 99.37 99.68 99.11
k-NN Class 1 100.0 100.0 98.95 100.0 98.95 100.0 100.0 98.95 100.0 98.95 99.58
Class 2 96.45 98.95 100.0 99.79 100.0 96.45 98.95 100.0 99.79 100.0 99.03
Acc. 98.22 99.47 99.79 99.89 99.47 98.22 99.47 99.79 99.89 99.47 99.30
LDA Class 1 96.12 97.50 99.11 100.0 100.0 99.37 100.0 98.95 98.95 100.0 99.00
Class 2 92.00 91.66 94.37 96.04 89.58 92.29 91.45 89.37 95.20 92.91 92.48
Acc. 94.06 94.58 96.74 98.02 94.79 95.83 95.72 94.16 97.07 96.45 95.74
RF Class 1 100.0 99.37 100.0 99.79 100.0 99.68 99.79 98.95 100.0 98.95 99.65
Class 2 100.0 100.0 98.54 100.0 99.37 98.75 100.0 100.0 98.33 100.0 99.49
Acc. 100.0 99.68 99.27 99.89 99.68 99.21 99.89 99.47 99.16 99.47 99.57
Table 6. Illuminated data classification accuracy results using channels AF3, F7, F8, and AF4.
Classifier Classes Sub-1 Sub-2 Sub-3 Sub-4 Sub-5 Sub-6 Sub-7 Sub-8 Sub-9 Sub-10 Avg. (%)
SVM Class 1 93.33 100.0 97.50 95.83 97.50 94.16 99.16 100.0 97.50 97.50 97.24
Class 2 100.0 100.0 92.50 99.16 95.00 91.66 100.0 98.33 100.0 90.00 96.66
Class 3 100.0 100.0 100.0 100.0 99.16 100.0 100.0 100.0 100.0 100.0 99.91
Class 4 91.66 98.33 95.00 93.33 95.00 99.16 95.00 97.50 100.0 91.66 95.66
Acc. 96.25 99.58 96.25 97.08 96.66 96.25 98.54 98.95 99.37 94.79 97.37
ITR 34.44 39.28 34.46 35.52 35.04 34.58 37.66 38.20 38.92 32.58 36.06
k-NN Class 1 89.58 98.33 83.33 100.0 91.67 91.67 91.67 70.83 78.33 98.33 89.37
Class 2 93.64 90.83 91.67 91.67 83.33 75.00 83.33 100.0 91.25 75.83 87.65
Class 3 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0
Class 4 95.83 87.50 83.33 75.00 75.00 91.67 91.67 83.33 77.08 85.00 84.54
Acc. 94.76 94.16 89.58 91.67 87.50 89.58 91.67 88.54 86.66 89.79 90.39
ITR 32.54 32.04 27.05 29.08 25.46 27.05 29.08 26.10 24.43 27.25 28.01
LDA Class 1 83.33 100.0 100.0 95.00 100.0 89.16 90.0 96.66 100.0 94.16 94.83
Class 2 93.33 100.0 91.66 92.50 96.66 83.33 95.83 97.50 98.33 70.00 91.91
Class 3 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0
Class 4 90.00 95.00 100.0 95.00 81.66 95.83 95.83 97.50 91.66 95.00 93.74
Acc. 91.66 98.75 97.91 95.62 94.58 92.08 95.41 97.91 97.50 89.79 95.12
ITR 29.32 37.93 36.58 33.98 32.29 29.97 33.44 36.78 36.18 27.59 33.40
RF Class 1 95.83 100.0 100.0 98.33 97.50 95.00 98.33 97.50 100.0 96.66 97.91
Class 2 100.0 100.0 97.50 98.33 98.33 93.33 98.33 96.66 98.33 95.00 97.58
Class 3 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0
Class 4 93.33 95.83 98.33 96.66 96.66 91.66 95.00 99.16 99.16 95.00 96.07
Acc. 97.29 98.96 98.95 98.33 98.12 95.00 97.91 98.33 99.37 96.66 97.89
ITR 35.88 38.20 38.20 37.30 37.06 32.91 36.78 37.42 39.00 34.80 36.75
Table 7. Classification results of features obtained from illuminated data using ANOVA with channels AF3, F7, F8, and AF4.
Classifier Classes Sub-1 Sub-2 Sub-3 Sub-4 Sub-5 Sub-6 Sub-7 Sub-8 Sub-9 Sub-10 Avg. (%)
SVM Class 1 93.33 98.33 93.33 90.0 94.16 95.0 99.16 96.66 100.0 99.16 95.91
Class 2 100.0 100.0 85.83 96.66 96.66 91.66 96.66 94.16 99.16 90.0 95.07
Class 3 100.0 100.0 100.0 100.0 99.16 100.0 100.0 100.0 100.0 100.0 99.91
Class 4 95.0 96.66 93.33 93.33 98.33 95.0 97.5 96.66 99.16 87.5 95.24
Acc (%) 97.08 98.75 93.12 95.0 97.08 95.41 98.33 96.87 99.58 94.16 96.53
ITR 35.44 37.93 30.79 32.91 35.79 33.43 37.21 35.25 39.28 31.81 34.98
k-NN Class 1 97.5 100.0 96.66 97.5 91.66 92.5 90.0 93.33 92.5 99.16 95.08
Class 2 79.16 99.16 91.66 99.16 91.66 80.0 87.5 90.0 95.83 85.83 89.99
Class 3 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0
Class 4 90.0 95.83 95.83 95.83 73.33 91.66 90.0 90.83 96.66 86.66 90.66
Acc (%) 91.66 98.75 96.04 98.12 89.16 91.04 91.87 93.54 96.25 92.91 93.93
ITR 29.35 37.93 34.10 37.20 26.84 28.73 29.85 31.33 34.43 30.69 32.04
LDA Class 1 86.66 97.5 98.33 92.5 95.83 90.0 93.33 90.0 99.16 91.66 93.49
Class 2 95.83 98.33 85.0 95.0 99.16 86.66 97.5 92.5 96.66 83.33 92.99
Class 3 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0
Class 4 88.33 99.16 85.83 92.5 86.66 90.0 95.83 98.33 100.0 100.0 93.66
Acc (%) 92.70 98.75 92.29 95.0 95.41 91.66 96.66 95.20 98.95 93.75 95.03
ITR 30.39 37.93 30.11 32.84 33.50 29.18 35.04 33.07 38.29 31.27 33.15
RF Class 1 93.33 98.33 98.33 94.16 98.33 94.16 99.16 95.83 100.0 95.0 96.66
Class 2 100.0 100.0 95.83 100.0 97.5 96.66 95.83 99.16 100.0 98.33 98.33
Class 3 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0
Class 4 93.33 97.5 96.66 95.83 94.16 96.66 97.5 96.66 100.0 95.0 96.33
Acc (%) 96.66 98.95 97.70 97.5 97.5 96.87 98.12 97.91 100.0 97.08 97.83
ITR 35.01 38.20 36.22 36.04 36.15 35.28 36.94 36.58 40.0 35.35 36.57
Table 8. General performance table of the system including all classification stages.
Classifier Parameter Sub-1 Sub-2 Sub-3 Sub-4 Sub-5 Sub-6 Sub-7 Sub-8 Sub-9 Sub-10 Avg.
SVM Acc. 95.63 98.75 92.80 95.0 96.46 94.20 95.94 94.64 98.95 93.85 95.62
ITR 33.43 37.66 30.25 32.68 34.46 31.77 33.81 32.27 37.98 31.38 33.56
k-NN Acc. 90.00 98.22 95.83 98.01 88.68 89.41 91.38 93.34 96.14 92.41 93.34
ITR 27.45 36.85 33.67 36.55 26.22 26.89 28.79 30.82 34.05 29.84 31.11
LDA Acc. 87.19 93.39 89.28 93.11 90.43 78.95 87.68 85.37 91.62 88.07 88.49
ITR 24.89 30.88 26.77 30.57 27.86 18.47 25.32 23.32 29.03 25.67 26.27
RF Acc. 96.66 98.63 96.98 97.39 97.18 96.10 98.01 97.39 99.16 96.56 97.42
ITR 34.71 37.47 35.13 35.68 35.40 34.01 36.55 35.68 38.33 34.58 35.75
Table 9. Comparison of the current study with previous studies using the Emotiv EPOC EEG device.
Ref. Year Method Datasets Number of participants Acc(%) ITR(bits/min)
[61] 2024 SVM Own data 3 90.1 27.3
[62] 2024 CNN Own data 16 97.6 -
[63] 2024 SVM Own data 28 88.4 27.7
[64] 2023 R-CNN Own data 4 84.0 15.8
[65] 2023 CNN-LSTM Guinea-Bissau and Nigeria epilepsy - 92.5 -
[66] 2023 k-NN arXiv (1901.02942) 23 89.5 -
[67] 2023 CNN Own data 25 75.1 17.1
[68] 2022 SVM Own data 15 91.0 28.4
[35] 2021 LSTM Own data 5 96.9 40.3
Our Work 2024 RF Our data 10 97.9 36.7