Version 1
: Received: 15 February 2023 / Approved: 16 February 2023 / Online: 16 February 2023 (06:59:43 CET)
Version 2
: Received: 10 March 2023 / Approved: 16 March 2023 / Online: 16 March 2023 (02:21:19 CET)
How to cite:
Lu, W.; Ding, D.; Wu, F.; Yuan, G. An Efficient Gaussian Mixture Model and Its Application to Neural Network. Preprints 2023, 2023020275. https://doi.org/10.20944/preprints202302.0275.v2
APA Style
Lu, W., Ding, D., Wu, F., & Yuan, G. (2023). An Efficient Gaussian Mixture Model and Its Application to Neural Network. Preprints. https://doi.org/10.20944/preprints202302.0275.v2
Chicago/Turabian Style
Lu, Weiguo, D. Ding, Fengyan Wu, and Gangnan Yuan. 2023. "An Efficient Gaussian Mixture Model and Its Application to Neural Network." Preprints. https://doi.org/10.20944/preprints202302.0275.v2
Abstract
When modeling real-world data, uncertainty in both the data (aleatoric) and the model (epistemic) is not necessarily Gaussian; it may be multimodal or take other particular forms that need to be captured. Gaussian mixture models (GMMs) are powerful tools for capturing such features of an unknown density with a peculiar shape. Inspired by Fourier expansion, we propose a GMM structure to decompose arbitrary unknown densities and prove its convergence property. We introduce a simple method for learning GMMs from sampled datasets. We apply GMMs, together with our learning method, to two classic neural network applications: the first learns the encoder output of an autoencoder as a GMM to generate handwritten-digit images; the second learns Gram matrices as GMMs for a style-transfer algorithm. Compared with the classic expectation-maximization (EM) algorithm, our method does not involve complex formulations and achieves applicable accuracy.
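To illustrate the kind of density approximation the abstract describes, the sketch below fits a two-component GMM to a bimodal sample. Note this uses the standard EM updates as a baseline, not the paper's own learning method (which is not reproduced here); the sample parameters, component count, and initialization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Bimodal sample: mixture of N(-2, 0.5^2) and N(3, 1.0^2), 500 points each
x = np.concatenate([rng.normal(-2.0, 0.5, 500), rng.normal(3.0, 1.0, 500)])

# Minimal EM for a K-component 1-D GMM (baseline method, for illustration)
K = 2
w = np.full(K, 1.0 / K)      # mixture weights
mu = np.array([-1.0, 1.0])   # initial means
var = np.ones(K)             # initial variances

for _ in range(200):
    # E-step: per-point responsibilities of each component
    pdf = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = w * pdf
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

print(sorted(mu))  # recovered component means, near the true -2 and 3
```

The recovered means land close to the true mixture centers, showing how a GMM captures multimodality that a single Gaussian cannot.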
Keywords
Neural network, uncertainty, GMM, density approximation
Subject
Computer Science and Mathematics, Computer Science
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Commenter: Weiguo Lu
Commenter's Conflict of Interests: Author