Article
Optimizing Few-Shot Learning based on Variational Autoencoders
Version 1: Received: 21 September 2021 / Approved: 22 September 2021 / Online: 22 September 2021 (16:04:22 CEST)
A peer-reviewed article of this Preprint also exists.
Wei, R.; Mahmood, A. Optimizing Few-Shot Learning Based on Variational Autoencoders. Entropy 2021, 23, 1390.
Abstract
Despite the importance of few-shot learning, the scarcity of labeled training data in the real world makes it extremely challenging for existing machine learning methods, because such a limited data set does not represent the data variance well. In this research, we propose a generative approach based on variational autoencoders (VAEs) that optimizes few-shot learning tasks by generating new samples with greater intra-class variation. The goal of our research is to enlarge the training data set with various augmentation methods and thereby improve the accuracy and robustness of few-shot face recognition. Specifically, we use the VAE generator to enlarge both the base and the novel training sets while using transfer learning as the backbone. Through extensive experiments, we analyze several data augmentation methods and observe how each affects face recognition accuracy. We conclude that the proposed face generation method can effectively raise the recognition accuracy to 96.47% when both the base and the novel sets are used.
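To make the augmentation idea in the abstract concrete, the sketch below shows one way a VAE generator could be used to produce extra samples for a class with only a few labeled images: encode the available images, perturb their latent codes, and decode the perturbed codes into new training samples. This is a minimal illustration, not the authors' implementation; PyTorch, the 64x64 input size, the fully connected architecture, the `noise_scale` parameter, and the helper names (`VAE`, `vae_loss`, `augment_class`) are all assumptions introduced here.

```python
# Minimal sketch of VAE-based data augmentation for few-shot learning.
# Assumptions (not from the paper): PyTorch, 64x64 grayscale inputs,
# a small fully connected VAE, and latent-space perturbation around the
# encoded few-shot examples to add intra-class variation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class VAE(nn.Module):
    def __init__(self, input_dim=64 * 64, hidden_dim=512, latent_dim=32):
        super().__init__()
        # Encoder maps a flattened image to the mean and log-variance
        # of a diagonal Gaussian in latent space.
        self.enc = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder maps a latent code back to pixel space.
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim), nn.Sigmoid(),
        )

    def encode(self, x):
        h = self.enc(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.dec(z), mu, logvar


def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the unit Gaussian prior.
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld


@torch.no_grad()
def augment_class(vae, class_images, n_new=20, noise_scale=0.5):
    """Generate extra samples for one class by perturbing its latent codes."""
    mu, logvar = vae.encode(class_images)
    std = torch.exp(0.5 * logvar)
    # Repeat the few available codes and add scaled Gaussian noise so the
    # decoded images stay close to the class while varying within it.
    idx = torch.randint(0, mu.size(0), (n_new,))
    z = mu[idx] + noise_scale * std[idx] * torch.randn(n_new, mu.size(1))
    return vae.dec(z)


if __name__ == "__main__":
    vae = VAE()
    # Stand-in for the handful of labeled images of one novel class.
    few_shots = torch.rand(5, 64 * 64)
    recon, mu, logvar = vae(few_shots)
    print("loss:", vae_loss(recon, few_shots, mu, logvar).item())
    new_samples = augment_class(vae, few_shots)
    print("augmented batch:", new_samples.shape)  # torch.Size([20, 4096])
```

In a few-shot pipeline of the kind the abstract describes, the generated samples would be added to the base and novel training sets before fine-tuning a transfer-learning backbone; the VAE itself would first be trained on the larger base set so that its latent space captures realistic face variation.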
Keywords
Deep learning; variational autoencoders (VAEs); data representation learning; generative models; unsupervised learning; few-shot learning; latent space; transfer learning
Subject
Engineering, Control and Systems Engineering
Copyright: This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.