Version 1
: Received: 29 July 2023 / Approved: 31 July 2023 / Online: 2 August 2023 (03:39:21 CEST)
Version 2
: Received: 16 April 2024 / Approved: 16 April 2024 / Online: 16 April 2024 (14:04:40 CEST)
How to cite:
Nguyen, L. Adversarial Variational Autoencoders to Extend and Improve Generative Model. Preprints 2023, 2023080131. https://doi.org/10.20944/preprints202308.0131.v1
APA Style
Nguyen, L. (2023). Adversarial Variational Autoencoders to Extend and Improve Generative Model. Preprints. https://doi.org/10.20944/preprints202308.0131.v1
Chicago/Turabian Style
Nguyen, L. 2023. "Adversarial Variational Autoencoders to Extend and Improve Generative Model." Preprints. https://doi.org/10.20944/preprints202308.0131.v1
Abstract
Generative artificial intelligence (GenAI) has been developing rapidly, with remarkable achievements such as ChatGPT and Bard. The deep generative model (DGM) is a branch of GenAI that excels at generating raster data such as images and sound, owing to the strengths of deep neural networks (DNNs) in inference and recognition. The built-in inference mechanism of a DNN, which simulates the synaptic plasticity of the human neural network, fosters the generation ability of a DGM, producing surprising results with the support of statistical flexibility. Two popular approaches in DGM are the Variational Autoencoder (VAE) and the Generative Adversarial Network (GAN). Each has its own strengths, although both rest on the same underlying statistical theory and share the considerable complexity of the hidden layers of a DNN, which serves as an effective encoding/decoding function without a concrete specification. In this research, I try to unify VAE and GAN into a consistent and consolidated model called the Adversarial Variational Autoencoder (AVA), in which VAE and GAN complement each other: the VAE is a good generator, encoding data via the excellent idea of Kullback-Leibler divergence, while the GAN contributes an important method for assessing whether data is realistic or fake. In other words, AVA aims to improve the accuracy of generative models; moreover, AVA extends the functionality of simple generative models. Methodologically, this research combines applied mathematical concepts with skillful computer-programming techniques in order to implement and solve complicated problems as simply as possible.
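The combination described in the abstract — a VAE objective (reconstruction error plus a Kullback-Leibler term) augmented with a GAN adversarial term from a discriminator that judges whether the reconstruction is realistic — can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's actual implementation: all function names, the mean-squared reconstruction error, the diagonal-Gaussian KL formula, and the non-saturating adversarial penalty are assumptions made for illustration.

```python
import math

def kl_divergence_gaussian(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, I) ) for a diagonal Gaussian, summed over dimensions."""
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv for m, lv in zip(mu, log_var))

def reconstruction_error(x, x_hat):
    """Mean squared error between the input and its reconstruction."""
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def ava_generator_loss(x, x_hat, mu, log_var, disc_score, adv_weight=1.0):
    """Illustrative AVA-style generator objective: the usual VAE loss
    (reconstruction + KL) plus an adversarial penalty that grows when the
    discriminator assigns a low 'realistic' probability to the reconstruction."""
    vae_loss = reconstruction_error(x, x_hat) + kl_divergence_gaussian(mu, log_var)
    adv_loss = -math.log(max(disc_score, 1e-12))  # non-saturating GAN generator term
    return vae_loss + adv_weight * adv_loss
```

With a perfect reconstruction, a standard-normal latent code, and a fully convinced discriminator (`disc_score = 1.0`), every term vanishes; a less convinced discriminator raises the loss, which is how the GAN part pushes the VAE toward more realistic outputs.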
Keywords
deep generative model (DGM); Variational Autoencoders (VAE); Generative Adversarial Network (GAN)
Subject
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright:
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.