Article
Preserved in Portico. This version is not peer-reviewed.
Semi-supervised Adversarial Variational Autoencoder
Version 1: Received: 1 August 2020 / Approved: 2 August 2020 / Online: 2 August 2020 (18:11:27 CEST)
A peer-reviewed article of this Preprint also exists.
Zemouri, R. Semi-Supervised Adversarial Variational Autoencoder. Mach. Learn. Knowl. Extr. 2020, 2, 361-378.
Abstract
We present a method to improve the reconstruction and generation performance of a variational autoencoder (VAE) by injecting adversarial learning. In addition, instead of comparing the reconstructed data with the original data to compute the reconstruction loss, we use a deep feature consistency principle. The VAE training process is then divided into two steps: training the encoder, then training the decoder. This two-step learning process allows our method to be applied more widely, beyond image processing. While training the encoder, label information is integrated to better structure the latent space in a supervised way. The adversarial constraints allow the decoder to generate data that are more authentic and realistic than those of a conventional VAE. We present experimental results showing that our method performs better than the original VAE.
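The following is a minimal sketch, not the paper's code, of the two-step training described in the abstract: (1) train the encoder with label supervision and a KL term to structure the latent space, then (2) freeze it and train the decoder with an adversarial loss plus a deep-feature-consistency reconstruction loss. All module names, layer sizes, and loss weightings below are illustrative assumptions.

```python
# Hypothetical sketch of the two-step semi-supervised adversarial VAE training.
# Architectures, dimensions and loss weights are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT, INPUT, N_CLASSES = 16, 784, 10

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(INPUT, 256), nn.ReLU())
        self.mu = nn.Linear(256, LATENT)
        self.logvar = nn.Linear(256, LATENT)
        self.classifier = nn.Linear(LATENT, N_CLASSES)  # label information shapes the latent space

    def forward(self, x):
        h = self.body(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return z, mu, logvar, self.classifier(z)

encoder = Encoder()
decoder = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, INPUT), nn.Sigmoid())
discriminator = nn.Sequential(nn.Linear(INPUT, 128), nn.ReLU(), nn.Linear(128, 1))
feature_net = nn.Sequential(nn.Linear(INPUT, 128), nn.ReLU())  # stand-in for a fixed deep-feature extractor
for p in feature_net.parameters():
    p.requires_grad_(False)

def train_encoder_step(x, y, opt_enc):
    """Step 1: supervised classification + KL objective on the encoder only."""
    z, mu, logvar, logits = encoder(x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss = F.cross_entropy(logits, y) + kl
    opt_enc.zero_grad(); loss.backward(); opt_enc.step()
    return loss.item()

def train_decoder_step(x, opt_dec, opt_disc):
    """Step 2: encoder frozen; decoder trained with adversarial and
    deep-feature-consistency losses (equal weighting assumed here)."""
    with torch.no_grad():
        z, *_ = encoder(x)
    x_rec = decoder(z)
    ones, zeros = torch.ones(x.size(0), 1), torch.zeros(x.size(0), 1)
    # Discriminator update: distinguish real data from reconstructions.
    d_loss = (F.binary_cross_entropy_with_logits(discriminator(x), ones)
              + F.binary_cross_entropy_with_logits(discriminator(x_rec.detach()), zeros))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()
    # Decoder update: fool the discriminator and match deep features of the original.
    adv = F.binary_cross_entropy_with_logits(discriminator(x_rec), ones)
    dfc = F.mse_loss(feature_net(x_rec), feature_net(x))
    g_loss = adv + dfc
    opt_dec.zero_grad(); g_loss.backward(); opt_dec.step()
    return d_loss.item(), g_loss.item()
```

As a usage example, one could run `train_encoder_step` over labeled batches until the latent space is well structured, then run `train_decoder_step` over (possibly unlabeled) batches with separate optimizers for the decoder and the discriminator; because the reconstruction target is the feature extractor's activations rather than raw pixels, this sketch is not tied to image data.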
Keywords
Variational autoencoder; Adversarial learning; Deep feature consistent; Data generation
Subject
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright: This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.