Preprint Article, Version 2 (preserved in Portico; this version is not peer-reviewed)

Imagined Speech Classification Using EEG and Deep Learning

Version 1 : Received: 18 April 2023 / Approved: 19 April 2023 / Online: 19 April 2023 (08:45:48 CEST)
Version 2 : Received: 6 May 2023 / Approved: 8 May 2023 / Online: 8 May 2023 (05:30:02 CEST)
Version 3 : Received: 13 May 2023 / Approved: 15 May 2023 / Online: 15 May 2023 (05:43:54 CEST)

A peer-reviewed article of this Preprint also exists.

Abdulghani, M.M.; Walters, W.L.; Abed, K.H. Imagined Speech Classification Using EEG and Deep Learning. Bioengineering 2023, 10, 649, doi:10.3390/bioengineering10060649.

Abstract

In this paper, we propose an imagined-speech brain-wave pattern recognition approach based on deep learning. Multiple features were extracted concurrently from eight-channel electroencephalography (EEG) signals. To obtain classifiable EEG data with fewer sensors, we placed the EEG sensors on carefully selected spots on the scalp. To reduce the dimensionality and complexity of the EEG dataset and to avoid overfitting during deep learning training, we utilized the wavelet scattering transformation, which extracts stable features by passing the EEG data through a series of filtration processes; filtration was applied to each individual command in the EEG datasets. A low-cost 8-channel EEG headset was used with MATLAB 2023a to acquire the EEG data. A Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) was used to decode the acquired EEG signals into four commands: Up, Down, Left, and Right. The proposed approach achieved a 92.50% overall classification accuracy, which is promising for the design of trustworthy, real-time imagined-speech Brain-Computer Interface (BCI) systems. For a fuller evaluation of classification performance, we also computed precision, recall, and F1-score, obtaining 92.74%, 92.50%, and 92.62%, respectively.
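As a rough illustration of the pipeline the abstract describes, the sketch below extracts wavelet scattering features from one EEG epoch and defines an LSTM classifier over those features. It is a minimal sketch using MATLAB's Wavelet Toolbox (waveletScattering, featureMatrix) and Deep Learning Toolbox (lstmLayer, trainNetwork), not the authors' actual code; the sampling rate, epoch length, hidden-layer size, and the XTrain/YTrain training variables are assumptions introduced here for illustration.

% Minimal sketch of the abstract's pipeline: wavelet scattering
% features feeding an LSTM classifier. Sampling rate and epoch
% length are assumed values, not the authors' acquisition settings.
fs = 250;                        % assumed sampling rate (Hz)
winLen = 2 * fs;                 % assumed 2-second epoch (samples)
numChannels = 8;                 % 8-channel headset, per the abstract
classes = categorical(["Up" "Down" "Left" "Right"]);

% Wavelet scattering network: a cascade of wavelet filters and
% averaging that yields translation-stable, low-variance features.
sf = waveletScattering('SignalLength', winLen, 'SamplingFrequency', fs);

% Scattering features for one epoch (winLen-by-numChannels matrix).
% featureMatrix returns paths x timeWindows x channels; flatten the
% channel axis into the feature axis to form an LSTM input sequence.
epoch = randn(winLen, numChannels);     % stand-in for a real EEG epoch
feat  = featureMatrix(sf, epoch);       % paths x timeWindows x channels
seq   = reshape(permute(feat, [1 3 2]), [], size(feat, 2));

% LSTM classifier ending in a 4-way softmax over the commands.
layers = [
    sequenceInputLayer(size(seq, 1))
    lstmLayer(64, 'OutputMode', 'last') % hidden size is an assumption
    fullyConnectedLayer(numel(classes))
    softmaxLayer
    classificationLayer];

% Hypothetical training call: XTrain is a cell array of sequences
% shaped like `seq`, YTrain the matching categorical labels.
% opts = trainingOptions('adam', 'MaxEpochs', 30, 'Shuffle', 'every-epoch');
% net  = trainNetwork(XTrain, YTrain, layers, opts);

Wavelet scattering is a natural fit here because it produces compact, stable representations from short recordings, which helps a recurrent classifier avoid overfitting on a small EEG dataset.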

Keywords

Inner Speech; Imagined Speech; EEG Decoding; Brain-Computer Interface; BCI; LSTM; Wavelet Scattering Transformation; WST.

Subject

Engineering, Bioengineering

Comments (1)

Comment 1
Received: 8 May 2023
Commenter: Khalid Abed
Commenter's Conflict of Interests: Author
Comment: Inner changed to Imagined

In the revised paper, we compared our method with others such as [42]. We wrote: “Using the auditory stimuli by asking a question to the participants showed that more accuracy in an offline BCI system could be achieved to classify an imagined speech, and we were able to obtain better results than what was achieved in [42] where a mixed visual and auditory stimuli were used.”
