Preprint Article, Version 1. Preserved in Portico. This version is not peer-reviewed.

Classification of EEG Motor Imagery Using Deep Learning for Brain-Computer Interface Systems

Version 1 : Received: 1 June 2022 / Approved: 6 June 2022 / Online: 6 June 2022 (03:23:36 CEST)

How to cite: Gallo, A.; Phung, M.D. Classification of EEG Motor Imagery Using Deep Learning for Brain-Computer Interface Systems. Preprints 2022, 2022060053. https://doi.org/10.20944/preprints202206.0053.v1

Abstract

Objective: A Convolutional Neural Network (CNN) trained on the T1 class is examined for its ability to identify motor imagery when fed pre-processed electroencephalography (EEG) data. If the model has been trained accurately, it should identify a class and label it accordingly. The trained CNN is then restored and used to identify the same class of motor imagery from much smaller data samples, in an attempt to simulate live data.

Approach: The CNN is implemented and run in Python using the PyCharm IDE. The raw data used to train the CNN is sourced from the PhysioBank website. The EEG signals are pre-processed using Brainstorm, a toolbox used in conjunction with MATLAB. The sample data used to validate and test the trained CNN is also extracted from Brainstorm, but is much smaller than the training set, which comprises thousands of images. The sample size is comparable to what a person wearing a Brain-Computer Interface (BCI) would provide: approximately 20 seconds of motor imagery signal data.

Results: The raw EEG data was successfully extracted and pre-processed. The deep learning model was trained on the extracted image data and the corresponding labels, after which it identified the T1 class label with 100 percent accuracy. The Python code was then modified to restore the trained model and feed it the test samples, where it recognised 6 of 10 lines of T1 signal image data. This result suggests that the initial training of the model requires a different, more varied approach so that it can detect varying sample signal data. With such training, the model could be used in applications where multiple patients wear the same BCI hardware to control a device or interface.
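The abstract does not specify the CNN architecture, so the following is only a minimal NumPy sketch of the forward pass such a classifier performs (convolution, ReLU, max pooling, a dense layer, and softmax over class labels). All shapes, layer sizes, and the two-class setup (T1 vs. rest) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def conv2d(image, kernel):
    # Valid 2-D cross-correlation of a single-channel image with one kernel.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    # Non-overlapping max pooling; trims edges that do not divide evenly.
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(image, kernel, weights, bias):
    # Forward pass: conv -> ReLU -> pool -> flatten -> dense -> softmax.
    feat = np.maximum(conv2d(image, kernel), 0.0)
    pooled = max_pool(feat)
    return softmax(pooled.ravel() @ weights + bias)

rng = np.random.default_rng(0)
image = rng.standard_normal((16, 16))    # stand-in for one EEG signal image
kernel = rng.standard_normal((3, 3))     # one (untrained) convolutional filter
weights = rng.standard_normal((49, 2))   # 14x14 conv out -> 7x7 pooled = 49 features, 2 classes
bias = np.zeros(2)

probs = predict(image, kernel, weights, bias)
label = int(np.argmax(probs))            # hypothetical mapping: 1 = "T1 motor imagery"
```

In a real system the kernel, weights, and bias would be learned during training and then restored from disk, which is the "restore the trained model" step described in the Results.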

Keywords

Brain-Computer Interface Systems; Convolutional Neural Network; Deep Learning

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning



×
Alerts
Notify me about updates to this article or when a peer-reviewed version is published.
We use cookies on our website to ensure you get the best experience.
Read more about our cookies here.