Submitted:
16 April 2024
Posted:
23 April 2024
Abstract
Keywords:
1. Introduction
2. Related Works
3. Methodology
3.1. Histogram Equalization
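The extracted text carries no implementation detail for this step; as an illustrative sketch only (not the authors' preprocessing code), histogram equalization of an 8-bit grayscale image can be written with NumPy alone:

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Spread an 8-bit grayscale image's intensities over the full [0, 255] range."""
    hist = np.bincount(img.ravel(), minlength=256)  # per-intensity pixel counts
    cdf = hist.cumsum()                             # cumulative distribution
    cdf_min = cdf[cdf > 0][0]                       # first non-zero CDF value
    if img.size == cdf_min:                         # constant image: nothing to equalize
        return img.copy()
    # Map each intensity through the normalized CDF (standard equalization formula).
    lut = np.clip(np.round((cdf - cdf_min) / (img.size - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]
```

In practice a library routine such as OpenCV's `cv2.equalizeHist` does the same mapping; the sketch above just makes the CDF remapping explicit.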
3.2. Cosine Annealing
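The schedule named here follows the usual half-cosine decay, η(t) = η_min + ½(η_max − η_min)(1 + cos(πt/T)); the values of η_max, η_min, and T below are illustrative, not taken from the paper:

```python
import math

def cosine_annealing_lr(epoch: int, total_epochs: int,
                        lr_max: float = 1e-3, lr_min: float = 0.0) -> float:
    """Decay the learning rate from lr_max to lr_min along a half cosine wave."""
    cos_term = 1 + math.cos(math.pi * epoch / total_epochs)
    return lr_min + 0.5 * (lr_max - lr_min) * cos_term
```

The rate starts at `lr_max` at epoch 0, falls slowly at first, fastest at the midpoint, and reaches `lr_min` at the final epoch.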
3.3. Evaluation Metrics
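The per-class precision, recall, and F1 reported in the results tables follow the standard definitions; a minimal sketch from raw confusion counts:

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Per-class metrics from true-positive, false-positive, false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0   # of predicted positives, how many correct
    recall = tp / (tp + fn) if tp + fn else 0.0      # of actual positives, how many found
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)            # harmonic mean of the two
    return precision, recall, f1
```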
4. Implementation
4.1. Datasets & Augmentation Techniques
KDEF
CK+
The Filtered FER2013
Data Augmentation
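The specific augmentations applied are not recoverable from this extracted text; as a hedged sketch, common choices such as horizontal mirroring and small translations can be expressed with NumPy (the shift amount is illustrative):

```python
import numpy as np

def augment(img: np.ndarray, shift: int = 2) -> list[np.ndarray]:
    """Return simple augmented variants: a mirror image and a shifted copy."""
    flipped = np.fliplr(img)               # faces are roughly left-right symmetric
    shifted = np.roll(img, shift, axis=1)  # crude horizontal translation
    return [flipped, shifted]
```

Frameworks typically wrap such transforms (e.g. Keras's `ImageDataGenerator`), adding random rotation, zoom, and brightness jitter on the fly during training.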
4.2. Experimental Setup
4.3. VGG Architectures
4.4. Hyperparameters
5. Results
5.1. KDEF
5.2. CK+
5.3. Filtered FER2013
| Class | Precision | Recall | F1-score |
|---|---|---|---|
| Angry | 0.62 | 0.62 | 0.62 |
| Disgust | 0.81 | 0.71 | 0.76 |
| Fear | 0.58 | 0.51 | 0.54 |
| Happy | 0.87 | 0.88 | 0.87 |
| Neutral | 0.63 | 0.66 | 0.64 |
| Sad | 0.57 | 0.61 | 0.59 |
| Surprise | 0.81 | 0.80 | 0.80 |
| Model | Dataset | Accuracy (%) | Weighted-F1 (%) | AUC-ROC (%) | AUC-PRC (%) |
|---|---|---|---|---|---|
| VGG19 with all layers frozen + No Histogram | CK+ | 90.91 | 91.00 | 99.00 | 96.00 |
| Finetuned VGG19 + No Histogram | CK+ | 97.98 | 98.00 | 100.00 | 99.00 |
| Finetuned VGG19 + Histogram | CK+ | 98.99 | 99.00 | 100.00 | 100.00 |
| VGG16 with all layers frozen + No Histogram | CK+ | 96.97 | 97.00 | 100.00 | 99.00 |
| Finetuned VGG16 + No Histogram | CK+ | 97.98 | 98.00 | 100.00 | 100.00 |
| Finetuned VGG16 + Histogram | CK+ | 100.00 | 100.00 | 100.00 | 100.00 |
| VGG19 with all layers frozen + No Histogram | KDEF | 54.76 | 53.93 | 86.65 | 59.29 |
| Finetuned VGG19 + No Histogram | KDEF | 94.22 | 94.20 | 99.64 | 98.25 |
| Finetuned VGG19 + Histogram | KDEF | 95.92 | 95.90 | 99.60 | 98.53 |
| VGG16 with all layers frozen + No Histogram | KDEF | 56.80 | 56.62 | 88.97 | 62.80 |
| Finetuned VGG16 + No Histogram | KDEF | 92.18 | 92.12 | 99.62 | 98.08 |
| Finetuned VGG16 + Histogram | KDEF | 92.86 | 92.87 | 99.69 | 98.34 |
| VGG19 with all layers frozen + No Histogram | FER2013 | 35.99 | 29.00 | 57.70 | 19.73 |
| Finetuned VGG19 + No Histogram | FER2013 | 69.06 | 68.57 | 80.87 | 52.26 |
| Finetuned VGG19 + Histogram | FER2013 | 69.44 | 68.34 | 80.20 | 51.58 |
| VGG16 with all layers frozen + No Histogram | FER2013 | 41.20 | 35.53 | 60.36 | 22.22 |
| Finetuned VGG16 + No Histogram | FER2013 | 68.80 | 69.29 | 81.34 | 52.37 |
| Finetuned VGG16 + Histogram | FER2013 | 69.65 | 69.65 | 80.75 | 51.83 |
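The weighted-F1 column above is the support-weighted mean of the per-class F1 scores; a small sketch (the class supports here are hypothetical, not the paper's test-set counts):

```python
def weighted_f1(f1_scores: list[float], supports: list[int]) -> float:
    """Average per-class F1, weighting each class by its number of true samples."""
    total = sum(supports)
    return sum(f * s for f, s in zip(f1_scores, supports)) / total
```

This is why the weighted F1 can differ noticeably from a plain (macro) average when the classes are imbalanced, as in FER2013.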
5.4. Comparison of Methods
6. Discussion
7. Conclusion
Author Contributions
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Ekman, P. Cross-cultural studies of facial expression. Darwin and facial expression: A century of research in review 1973, 169–222.
- Ramsay, R.W. Speech patterns and personality. Language and Speech 1968, 11, 54–63. [CrossRef]
- Fast, J. Body language; Vol. 82348, Simon and Schuster, 1970.
- Newmark, C. Charles Darwin: the expression of the emotions in man and animals. In Schlüsselwerke der Emotionssoziologie; Springer, 2022; pp. 111–115.
- Ragsdale, J.W.; Van Deusen, R.; Rubio, D.; Spagnoletti, C. Recognizing patients’ emotions: teaching health care providers to interpret facial expressions. Academic Medicine 2016, 91, 1270–1275. [CrossRef]
- Suhaimi, N.S.; Mountstephens, J.; Teo, J. EEG-Based Emotion Recognition: A State-of-the-Art Review of Current Trends and Opportunities. Computational Intelligence and Neuroscience 2020, 2020. [CrossRef]
- Fernández-Caballero, A.; Martínez-Rodrigo, A.; Pastor, J.M.; Castillo, J.C.; Lozano-Monasor, E.; López, M.T.; Zangróniz, R.; Latorre, J.M.; Fernández-Sotos, A. Smart environment architecture for emotion detection and regulation. Journal of biomedical informatics 2016, 64, 55–73. [CrossRef]
- Mattavelli, G.; Pisoni, A.; Casarotti, A.; Comi, A.; Sera, G.; Riva, M.; Bizzi, A.; Rossi, M.; Bello, L.; Papagno, C. Consequences of brain tumour resection on emotion recognition. Journal of Neuropsychology 2019, 13, 1–21. [CrossRef]
- Suja, P.; Tripathi, S.; others. Real-time emotion recognition from facial images using Raspberry Pi II. 2016 3rd International Conference on Signal Processing and Integrated Networks (SPIN). IEEE, 2016, pp. 666–670.
- Ojala, T.; Pietikäinen, M.; Harwood, D. A comparative study of texture measures with classification based on featured distributions. Pattern recognition 1996, 29, 51–59. [CrossRef]
- Jolliffe, I.T.; Cadima, J. Principal component analysis: a review and recent developments. Philosophical transactions of the royal society A: Mathematical, Physical and Engineering Sciences 2016, 374, 20150202. [CrossRef]
- Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. Journal of machine learning research 2008, 9.
- Cortes, C.; Vapnik, V. Support-vector networks. Machine learning 1995, 20, 273–297. [CrossRef]
- Breiman, L. Random forests. Machine learning 2001, 45, 5–32. [CrossRef]
- Payal, P.; Goyani, M.M. A comprehensive study on face recognition: methods and challenges. The Imaging Science Journal 2020, 68, 114–127. [CrossRef]
- O’shea, K.; Nash, R. An introduction to convolutional neural networks. arXiv preprint arXiv:1511.08458 2015.
- Hochreiter, S. The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 1998, 6, 107–116. [CrossRef]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 2014.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. CoRR 2015, abs/1512.03385.
- Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks, 2018, [arXiv:cs.CV/1608.06993].
- Lundqvist, D.; Flykt, A.; Öhman, A. Karolinska directed emotional faces. PsycTESTS Dataset 1998, 91, 630.
- Białek, C.; Matiolański, A.; Grega, M. An Efficient Approach to Face Emotion Recognition with Convolutional Neural Networks. Electronics 2023, 12, 2707. [CrossRef]
- Lucey, P.; Cohn, J.F.; Kanade, T.; Saragih, J.; Ambadar, Z.; Matthews, I. The extended cohn-kanade dataset (ck+): A complete dataset for action unit and emotion-specified expression. 2010 ieee computer society conference on computer vision and pattern recognition-workshops. IEEE, 2010, pp. 94–101.
- Xie, Y.; Ning, L.; Wang, M.; Li, C. Image Enhancement Based on Histogram Equalization. Journal of Physics: Conference Series 2019, 1314, 012161. [CrossRef]
- Gotmare, A.; Keskar, N.S.; Xiong, C.; Socher, R. A closer look at deep learning heuristics: Learning rate restarts, warmup and distillation. arXiv preprint arXiv:1810.13243 2018.
- Hanley, J.A.; McNeil, B.J. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology 1982, 143, 29–36. [CrossRef]
- Powers, D.M. Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation. arXiv preprint arXiv:2010.16061 2020.
- Xiao-Xu, Q.; Wei, J. Application of wavelet energy feature in facial expression recognition. 2007 International Workshop on Anti-Counterfeiting, Security and Identification (ASID). IEEE, 2007, pp. 169–174.
- Lyons, M.; Kamachi, M.; Gyoba, J. The Japanese Female Facial Expression (JAFFE) Dataset 1998.
- Tyagi, M. Hog (histogram of oriented gradients): An overview. Towards Data Science 2021, 4.
- Ahonen, T.; Rahtu, E.; Ojansivu, V.; Heikkila, J. Recognition of blurred faces using local phase quantization. 2008 19th international conference on pattern recognition. IEEE, 2008, pp. 1–4.
- Lloyd, S. Least squares quantization in PCM. IEEE transactions on information theory 1982, 28, 129–137. [CrossRef]
- Lee, H.; Kim, S. SSPNet: Learning spatiotemporal saliency prediction networks for visual tracking. Information Sciences 2021, 575, 399–416. [CrossRef]
- Yang, S.; Bhanu, B. Facial expression recognition using emotion avatar image. 2011 IEEE International Conference on Automatic Face & Gesture Recognition (FG). IEEE, 2011, pp. 866–871.
- Dhall, A.; Asthana, A.; Goecke, R.; Gedeon, T. Emotion recognition using PHOG and LPQ features. 2011 IEEE International Conference on Automatic Face & Gesture Recognition (FG). IEEE, 2011, pp. 878–883.
- Cootes, T.F.; Edwards, G.J.; Taylor, C.J. Active appearance models. Computer Vision—ECCV’98: 5th European Conference on Computer Vision Freiburg, Germany, June 2–6, 1998 Proceedings, Volume II 5. Springer, 1998, pp. 484–498.
- Sharmin, N.; Brad, R. Optimal filter estimation for Lucas-Kanade optical flow. Sensors 2012, 12, 12694–12709. [CrossRef]
- Pu, X.; Fan, K.; Chen, X.; Ji, L.; Zhou, Z. Facial expression recognition from image sequences using twofold random forest classifier. Neurocomputing 2015, 168, 1173–1180. [CrossRef]
- Golzadeh, H.; Faria, D.R.; Manso, L.J.; Ekárt, A.; Buckingham, C.D. Emotion recognition using spatiotemporal features from facial expression landmarks. 2018 International Conference on Intelligent Systems (IS). IEEE, 2018, pp. 789–794.
- Aifanti, N.; Papachristou, C.; Delopoulos, A. The MUG facial expression database. 11th International Workshop on Image Analysis for Multimedia Interactive Services WIAMIS 10. IEEE, 2010, pp. 1–4.
- Viola, P.; Jones, M. Rapid object detection using a boosted cascade of simple features. Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, 2001, Vol. 1, pp. I–I. [CrossRef]
- Freeman, W.T.; Roth, M. Orientation histograms for hand gesture recognition. International workshop on automatic face and gesture recognition. Citeseer, 1995, Vol. 12, pp. 296–301.
- Liew, C.F.; Yairi, T. Facial expression recognition and analysis: a comparison study of feature descriptors. IPSJ transactions on computer vision and applications 2015, 7, 104–120. [CrossRef]
- Chen, T.; Guestrin, C. Xgboost: A scalable tree boosting system. Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining, 2016, pp. 785–794.
- Thakare, C.; Chaurasia, N.K.; Rathod, D.; Joshi, G.; Gudadhe, S. Comparative analysis of emotion recognition system. Int. Res. J. Eng. Technol. 2019, 6, 380–384.
- Goodfellow, I.J.; Erhan, D.; Carrier, P.L.; Courville, A.; Mirza, M.; Hamner, B.; Cukierski, W.; Tang, Y.; Thaler, D.; Lee, D.H.; others. Challenges in representation learning: A report on three machine learning contests. Neural Information Processing: 20th International Conference, ICONIP 2013, Daegu, Korea, November 3-7, 2013. Proceedings, Part III 20. Springer, 2013, pp. 117–124.
- Jalal, A.; Tariq, U. The LFW-gender dataset. Computer Vision–ACCV 2016 Workshops: ACCV 2016 International Workshops, Taipei, Taiwan, November 20-24, 2016, Revised Selected Papers, Part III 13. Springer, 2017, pp. 531–540.
- Zhang, W.; He, X.; Lu, W. Exploring discriminative representations for image emotion recognition with CNNs. IEEE Transactions on Multimedia 2019, 22, 515–523. [CrossRef]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 2017.
- Badrulhisham, N.A.S.; Mangshor, N.N.A. Emotion Recognition Using Convolutional Neural Network (CNN). Journal of Physics: Conference Series 2021, 1962, 012040. [CrossRef]
- Puthanidam, R.V.; Moh, T.S. A hybrid approach for facial expression recognition. Proceedings of the 12th International Conference on Ubiquitous Information Management and Communication, 2018, pp. 1–8.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Communications of the ACM 2017, 60, 84–90. [CrossRef]
- Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size, 2016, [arXiv:cs.CV/1602.07360].
- Dhall, A.; Goecke, R.; Lucey, S.; Gedeon, T. Static facial expression analysis in tough conditions: Data, evaluation protocol and benchmark. 2011 IEEE international conference on computer vision workshops (ICCV workshops). IEEE, 2011, pp. 2106–2112.
- Sahoo, G.K.; Das, S.K.; Singh, P. Performance Comparison of Facial Emotion Recognition: A Transfer Learning-Based Driver Assistance Framework for In-Vehicle Applications. Circuits, Systems, and Signal Processing 2023, pp. 1–28.
- Chandrasekaran, G.; Antoanela, N.; Andrei, G.; Monica, C.; Hemanth, J. Visual sentiment analysis using deep learning models with social media data. Applied Sciences 2022, 12, 1030. [CrossRef]
- Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inception-v4, inception-ResNet and the impact of residual connections on learning. Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence. AAAI Press, 2017, AAAI’17, p. 4278–4284.
- Zagoruyko, S.; Komodakis, N. Wide residual networks. arXiv preprint arXiv:1605.07146 2016.
- Subudhiray, S.; Palo, H.K.; Das, N. Effective recognition of facial emotions using dual transfer learned feature vectors and support vector machine. International Journal of Information Technology 2023, 15, 301–313. [CrossRef]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. Mobilenetv2: Inverted residuals and linear bottlenecks. Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 4510–4520.
- Kaur, S.; Kulkarni, N. FERFM: An Enhanced Facial Emotion Recognition System Using Fine-tuned MobileNetV2 Architecture. IETE Journal of Research 2023, pp. 1–15.
- Mollahosseini, A.; Hasani, B.; Mahoor, M.H. Affectnet: A database for facial expression, valence, and arousal computing in the wild. IEEE Transactions on Affective Computing 2017, 10, 18–31. [CrossRef]
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 1–9.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
- Zavarez, M.V.; Berriel, R.F.; Oliveira-Santos, T. Cross-Database Facial Expression Recognition Based on Fine-Tuned Deep Convolutional Network. 2017 30th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), 2017, pp. 405–412. [CrossRef]
- Gonzalez, R.C.; Woods, R.E. Digital Image Processing (3rd Edition); Prentice-Hall, Inc.: USA, 2006.
- Zhang, C.; Shao, Y.; Sun, H.; Xing, L.; Zhao, Q.; Zhang, L. The WuC-Adam algorithm based on joint improvement of Warmup and cosine annealing algorithms. Mathematical Biosciences and Engineering 2024, 21, 1270–1285. [CrossRef]
- Hall, M.; Frank, E.; Holmes, G.; Pfahringer, B.; Reutemann, P.; Witten, I.H. The WEKA data mining software: an update. ACM SIGKDD explorations newsletter 2009, 11, 10–18. [CrossRef]
- Barhoumi, C.; Ayed, Y.B. Unlocking the Potential of Deep Learning and Filter Gabor for Facial Emotion Recognition. International Conference on Computational Collective Intelligence. Springer, 2023, pp. 97–110.
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 248–255. [CrossRef]
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. International Journal of Computer Vision 2019, 128, 336–359. [CrossRef]
- Chen, Y.; Liu, Z.; Wang, X.; Xue, S.; Yu, J.; Ju, Z. Combating Label Ambiguity with Smooth Learning for Facial Expression Recognition. International Conference on Intelligent Robotics and Applications. Springer, 2023, pp. 127–136.
- Liu, X.; Vijaya Kumar, B.; You, J.; Jia, P. Adaptive deep metric learning for identity-aware facial expression recognition. Proceedings of the IEEE conference on computer vision and pattern recognition workshops, 2017, pp. 20–29.
- Dar, T.; Javed, A.; Bourouis, S.; Hussein, H.S.; Alshazly, H. Efficient-SwishNet based system for facial emotion recognition. IEEE Access 2022, 10, 71311–71328. [CrossRef]
- Zahara, L.; Musa, P.; Wibowo, E.P.; Karim, I.; Musa, S.B. The facial emotion recognition (FER-2013) dataset for prediction system of micro-expressions face using the convolutional neural network (CNN) algorithm based Raspberry Pi. 2020 Fifth international conference on informatics and computing (ICIC). IEEE, 2020, pp. 1–9.
- Minaee, S.; Minaei, M.; Abdolrashidi, A. Deep-emotion: Facial expression recognition using attentional convolutional network. Sensors 2021, 21, 3046. [CrossRef]
- Fei, Z.; Yang, E.; Yu, L.; Li, X.; Zhou, H.; Zhou, W. A novel deep neural network-based emotion analysis system for automatic detection of mild cognitive impairment in the elderly. Neurocomputing 2022, 468, 306–316. [CrossRef]
- Mahesh, V.G.; Chen, C.; Rajangam, V.; Raj, A.N.J.; Krishnan, P.T. Shape and texture aware facial expression recognition using spatial pyramid Zernike moments and law’s textures feature set. IEEE Access 2021, 9, 52509–52522. [CrossRef]





| Dataset | Training Size | Testing Size | Validation Size |
|---|---|---|---|
| KDEF | 2350 | 294 | 294 |
| CK+ | 784 | 99 | 98 |
| Filtered FER2013 | 27310 | 3410 | 3420 |
| Layer (type) | Output Shape | Trainable Parameters | Non-Trainable Parameters |
|---|---|---|---|
| VGG16 Base Model⁎ | — | 7,079,424 | 7,635,264 |
| Global AveragePooling 2D | (None, 512) | 0 | 0 |
| Dropout | (None, 512) | 0 | 0 |
| (Dense+ReLU) | (None, 1024) | 525312 | 0 |
| Dropout | (None, 1024) | 0 | 0 |
| (Dense+Softmax) | (None, 7) | 7175 | 0 |
| Trainable Parameters | | 7,611,911 | |
| Non-trainable Parameters | | | 7,635,264 |
| Total Parameters | | 15,247,175 | |
| Layer (type) | Output Shape | Trainable Parameters | Non-Trainable Parameters |
|---|---|---|---|
| VGG19 Base Model⁎ | — | 9,439,232 | 10,585,152 |
| Global AveragePooling 2D | (None, 512) | 0 | 0 |
| Dropout | (None, 512) | 0 | 0 |
| (Dense+ReLU) | (None, 1024) | 525312 | 0 |
| Dropout | (None, 1024) | 0 | 0 |
| (Dense+Softmax) | (None, 7) | 7175 | 0 |
| Trainable Parameters | | 9,971,719 | |
| Non-trainable Parameters | | | 10,585,152 |
| Total Parameters | | 20,556,871 | |
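The dense-layer counts in the two tables above follow from the standard formula params = inputs × units + units (weights plus biases); a quick check of the shared classifier head:

```python
def dense_params(n_in: int, n_out: int) -> int:
    """Weights plus biases of a fully connected layer."""
    return n_in * n_out + n_out

# The head shared by both VGG variants: GAP output (512) -> Dense(1024) -> Dense(7).
head = dense_params(512, 1024) + dense_params(1024, 7)
print(head)  # 525,312 + 7,175 = 532,487
```

Global average pooling and dropout contribute no parameters, so each model's total is its convolutional base plus this 532,487-parameter head.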
| Hyperparameter | CK+ | KDEF | FER2013 |
|---|---|---|---|
| Input Size | (48,48,3), (144,144,3), (224,224,3) | (48,48,3), (144,144,3), (224,224,3) | (48,48,3), (144,144,3), (224,224,3) |
| Batch Size | 16, 32, 64 | 16, 32, 64 | 16, 32, 64 |
| Epochs | 300 | 300 | 30 |
| Learning Rate | 0.01, 0.001, 0.0001 | 0.01, 0.001, 0.0001 | 0.01, 0.001, 0.0001 |
| Early Stop | Monitor validation accuracy, patience = 5 | Monitor validation accuracy, patience = 5 | Monitor validation accuracy, patience = 5 |
| Learning Rate Scheduler | - | Monitor validation accuracy, patience = 3, factor = 0.5 | Cosine Annealing |
| Dropout Rate | 0.1, 0.3, 0.5 | 0.1, 0.3, 0.5 | 0.1, 0.3, 0.5 |
| L2 Regularization | - | 0.01, 0.1, 0.2 | 0.01, 0.1, 0.2 |
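The early-stopping rule in the table (monitor validation accuracy with patience 5) can be sketched as a small stateful check; this mirrors the behaviour of Keras's `EarlyStopping` callback but is not the authors' code:

```python
class EarlyStopper:
    """Stop training once validation accuracy fails to improve for `patience` epochs."""
    def __init__(self, patience: int = 5):
        self.patience = patience
        self.best = float("-inf")  # best validation accuracy seen so far
        self.stale = 0             # consecutive epochs without improvement

    def should_stop(self, val_acc: float) -> bool:
        if val_acc > self.best:
            self.best, self.stale = val_acc, 0
        else:
            self.stale += 1
        return self.stale >= self.patience
```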
| Class | Precision | Recall | F1-score |
|---|---|---|---|
| Angry | 0.95 | 0.93 | 0.94 |
| Disgust | 0.94 | 0.94 | 0.94 |
| Fear | 1.00 | 0.90 | 0.95 |
| Happy | 1.00 | 1.00 | 1.00 |
| Neutral | 0.95 | 0.95 | 0.95 |
| Sad | 0.95 | 1.00 | 0.97 |
| Surprise | 0.93 | 1.00 | 0.97 |
| Literature | Dataset | Type | Accuracy |
|---|---|---|---|
| Puthanidam [51] | KDEF | Hybrid CNN | 89.58% |
| Chen et al. [72] | KDEF, CK+, FER2013 | IACNN | 67%, 95%, 68% |
| Liu et al. [73] | KDEF, CK+, FER2013 | 2B(N+M) Softmax | 81%, 87%, 67% |
| Dar et al. [74] | KDEF, CK+, FER2013 | Efficient-SwishNet | 88.3%, 100%, 63.4% |
| Zahara et al. [75] | FER2013 | Xception | 65.97% |
| Minaee et al. [76] | CK+, FER2013 | CNN+Attention | 98%, 70.02% |
| Fei et al. [77] | KDEF, CK+, FER2013 | MobileNet + SVM | 86.4%, 89.8%, 51.7% |
| Mahesh et al. [78] | KDEF | Feed Forward Network | 88.7% |
| Sahoo et al. [55] | KDEF, CK+, FER2013 | Pre-trained VGG19 | 93%, 98.98%, 66.6% |
| Białek et al. [22] | FER2013 | 4-Model Ensemble | 75.06% |
| Proposed Model | KDEF, CK+, FER2013 | Histogram + Pretrained VGG16 | 95.92%**, 100%*, 69.65%* |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).