Working Paper · Version 1 · This version is not peer-reviewed

Deep Convolutional Neural Network Regularization for Alcoholism Detection Using EEG Signals

Version 1 : Received: 19 July 2021 / Approved: 20 July 2021 / Online: 20 July 2021 (09:34:53 CEST)

A peer-reviewed article of this Preprint also exists.

Mukhtar, H.; Qaisar, S.M.; Zaguia, A. Deep Convolutional Neural Network Regularization for Alcoholism Detection Using EEG Signals. Sensors 2021, 21, 5456.

Abstract

Alcoholism results from regular or excessive drinking of alcohol and disturbs the neuronal system in the human brain. The resulting malfunctioning of neurons can be detected by an electroencephalogram (EEG) recorded through electrodes placed at appropriate positions on the scalp. It is of great interest to classify an EEG activity as that of a normal person or an alcoholic person using data from the minimum possible number of electrodes (or channels). Due to the complex nature of EEG signals, accurate classification of alcoholism from only a small amount of data is a challenging task. Artificial neural networks, specifically convolutional neural networks (CNN), provide efficient and accurate results in various pattern-based classification problems. In this work, we apply a CNN to raw EEG data and demonstrate how we achieved 98% average accuracy by optimizing a baseline CNN model, outperforming its results across a range of performance evaluation metrics on the UCI-KDD EEG dataset. This article explains the step-wise improvement of the baseline model using the dropout, batch normalization, and kernel regularization techniques, and provides a comparison of the two models that can be beneficial for aspiring practitioners who aim to develop similar CNN classification models. A performance comparison is also provided with other approaches using the same dataset.
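The three regularization techniques named in the abstract can be sketched as standalone operations. The following NumPy snippet is an illustrative sketch only, not the authors' implementation: the function names and parameter values here are our own, and in a real CNN these operations would be layers applied between convolutions during training.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, rate, training=True):
    """Inverted dropout: randomly zero a fraction `rate` of activations
    and rescale the survivors so the expected activation is unchanged."""
    if not training or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch to zero mean and unit
    variance, then apply a learnable scale (gamma) and shift (beta)."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

def l2_kernel_penalty(weights, lam):
    """Kernel (L2/weight-decay) regularization: a penalty proportional
    to the squared weights, added to the training loss."""
    return lam * np.sum(weights ** 2)

# Toy batch: 8 samples, 4 features (stand-ins for flattened EEG features).
x = rng.normal(loc=3.0, scale=2.0, size=(8, 4))
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
d = dropout(y, rate=0.5)
w = rng.normal(size=(4, 2))
penalty = l2_kernel_penalty(w, lam=0.01)
```

At inference time (`training=False`), dropout becomes the identity, and batch normalization would use running statistics accumulated during training rather than the per-batch mean and variance shown here.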

Keywords

classification; optimization; batch normalization; kernel regularization; convolution; pooling; dropout layer; learning rate

Subject

Computer Science and Mathematics, Algebra and Number Theory

