Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Optimizing Few-Shot Learning based on Variational Autoencoders

Version 1 : Received: 21 September 2021 / Approved: 22 September 2021 / Online: 22 September 2021 (16:04:22 CEST)

A peer-reviewed article of this Preprint also exists.

Wei, R.; Mahmood, A. Optimizing Few-Shot Learning Based on Variational Autoencoders. Entropy 2021, 23, 1390.

Abstract

Despite the importance of few-shot learning, the lack of labeled training data in real-world settings makes it extremely challenging for existing machine learning methods, as such a limited data set does not represent the data variance well. In this research, we suggest employing a generative approach using variational autoencoders (VAEs), which can be used specifically to optimize few-shot learning tasks by generating new samples with more intra-class variation. The purpose of our research is to increase the size of the training data set using various methods in order to improve the accuracy and robustness of few-shot face recognition. Specifically, we employ the VAE generator to enlarge the training data set, including both the base and the novel sets, while utilizing transfer learning as the backend. Based on extensive experimental research, we analyze various data augmentation methods to observe how each method affects the accuracy of face recognition. We conclude that the proposed face generation method can effectively improve the recognition accuracy to 96.47% when using both the base and the novel sets.
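To illustrate the kind of augmentation the abstract describes, the sketch below shows a minimal VAE in PyTorch that generates extra variants of a few-shot sample by perturbing its latent code and decoding. The network sizes, the `noise_scale` parameter, and the `augment_few_shot` helper are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of VAE-based sample generation for few-shot augmentation.
# Dimensions, layer sizes, and noise_scale are assumed for illustration only.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, input_dim=512, latent_dim=64):
        super().__init__()
        # Encoder maps an input face embedding to a latent mean and log-variance.
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.fc_mu = nn.Linear(256, latent_dim)
        self.fc_logvar = nn.Linear(256, latent_dim)
        # Decoder reconstructs an embedding from a latent code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, input_dim)
        )

    def encode(self, x):
        h = self.encoder(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        # Reparameterization trick: z = mu + sigma * epsilon
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decoder(z), mu, logvar


def augment_few_shot(vae, x, n_new=10, noise_scale=0.5):
    """Generate n_new variants per few-shot sample by sampling around its
    latent code; noise_scale controls the amount of intra-class variation."""
    vae.eval()
    with torch.no_grad():
        mu, logvar = vae.encode(x)
        std = torch.exp(0.5 * logvar)
        samples = []
        for _ in range(n_new):
            z = mu + noise_scale * std * torch.randn_like(std)
            samples.append(vae.decoder(z))
    return torch.cat(samples, dim=0)
```

In a setup like the one described, the generated samples would be added to the original base and novel sets before training the downstream face-recognition classifier on top of a pretrained (transfer-learning) backbone.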

Keywords

Deep learning; Variational Autoencoders (VAEs); data representation learning; generative models; unsupervised learning; few-shot learning; latent space; transfer learning

Subject

Engineering, Control and Systems Engineering
