Preprint Article Version 1 Preserved in Portico This version is not peer-reviewed

Four-Layer ConvNet to Facial Emotion Recognition with Minimal Epochs and the Significance of Data Diversity

Version 1 : Received: 16 May 2021 / Approved: 18 May 2021 / Online: 18 May 2021 (11:34:19 CEST)

How to cite: Debnath, T.; Reza, M.M.; Rahman, A.; Band, S.; Alinejad Rokny, H. Four-Layer ConvNet to Facial Emotion Recognition with Minimal Epochs and the Significance of Data Diversity. Preprints 2021, 2021050424. https://doi.org/10.20944/preprints202105.0424.v1

Abstract

Emotion recognition, defined as identifying human emotion, is directly related to fields such as human-computer interfaces, human emotional processing, irrational analysis, medical diagnostics, data-driven animation, human-robot communication, and many more. The purpose of this study is to propose a new facial emotion recognition model based on a convolutional neural network. Our proposed model, “ConvNet”, detects seven specific emotions from image data: anger, disgust, fear, happiness, neutrality, sadness, and surprise. This research focuses on the model’s training accuracy within a small number of epochs, so that a real-time scheme that can easily fit the model and sense emotions can be developed. Furthermore, this work addresses a person’s mental or emotional state through behavioral aspects. To train the CNN model, we use the FER2013 database, and we test the system’s success by identifying facial expressions in real time. ConvNet consists of four convolutional layers together with two fully connected layers. The experimental results show that ConvNet achieves 96% training accuracy, which is much better than currently existing models. ConvNet also achieved a validation accuracy of 65% to 70% (across the different datasets used in the experiments), resulting in higher classification accuracy than other existing models. We have made all materials publicly accessible to the research community at: https://github.com/Tanoy004/Emotion-recognition-through-CNN.
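The abstract states only that ConvNet has four convolutional layers followed by two fully connected layers, trained on FER2013 (48×48 grayscale faces) with seven output classes. A minimal sketch of such an architecture in PyTorch is shown below; the channel widths, 3×3 kernels, max-pooling after each convolution, and dropout rate are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class ConvNet(nn.Module):
    """Sketch of a four-conv-layer emotion classifier for 48x48 grayscale input."""

    def __init__(self, num_classes: int = 7):
        super().__init__()
        # Four convolutional layers; each 2x2 max-pool halves the spatial size.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),    # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 24 -> 12
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 12 -> 6
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 6 -> 3
        )
        # Two fully connected layers, as described in the abstract.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 3 * 3, 512), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(512, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = ConvNet()
logits = model(torch.randn(1, 1, 48, 48))  # one FER2013-sized sample
print(tuple(logits.shape))  # (1, 7) — one score per emotion class
```

A softmax over the seven logits would give the per-emotion probabilities used at inference time.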

Keywords

Convolutional Neural Network (CNN); Emotion Recognition; Facial Expression; Classification; Accuracy

Subject

Computer Science and Mathematics, Algebra and Number Theory
