Preprint Article Version 1 Preserved in Portico This version is not peer-reviewed

InvMap and Witness Simplicial Variational Auto-Encoders

Version 1 : Received: 30 December 2022 / Approved: 5 January 2023 / Online: 5 January 2023 (02:47:49 CET)

A peer-reviewed article of this Preprint also exists.

Medbouhi, A.A.; Polianskii, V.; Varava, A.; Kragic, D. InvMap and Witness Simplicial Variational Auto-Encoders. Mach. Learn. Knowl. Extr. 2023, 5, 199-236.

Abstract

Variational Auto-Encoders (VAEs) are deep generative models used for unsupervised learning; however, their standard version is not topology-aware in practice, since the topology of the data may not be taken into consideration. In this paper, we propose two approaches that aim to preserve the topological structure between the input space and the latent representation of a VAE. First, we introduce InvMap-VAE, a way to turn any dimensionality reduction technique, given the embedding it produces, into a generative model within a VAE framework that provides an inverse mapping back to the original space. Second, we propose the Witness Simplicial VAE, an extension of the Simplicial Auto-Encoder to the variational setup that uses a witness complex to compute the simplicial regularization, and we motivate this method theoretically with tools from algebraic topology. The Witness Simplicial VAE is independent of any dimensionality reduction technique and, together with its extension, the Isolandmarks Witness Simplicial VAE, preserves the persistent Betti numbers of a data set better than a standard VAE.
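The witness complex mentioned above is a standard construction in Topological Data Analysis: a small set of landmark points spans the simplices, and the remaining data points act as witnesses that decide which simplices are included. As an illustrative sketch only (assuming a simple nearest-landmark rule, not the authors' exact construction, and with `witness_edges` as a hypothetical helper name), the 1-skeleton of such a complex can be extracted as follows:

```python
import numpy as np

def witness_edges(witnesses, landmarks):
    """Return the edges of the 1-skeleton of a simple witness complex.

    An edge (i, j) between landmarks i and j is included whenever some
    witness point has those two landmarks as its two nearest landmarks.
    This is one common variant; the paper's construction may differ.
    """
    # Pairwise distances: witnesses (n, d) vs landmarks (m, d) -> (n, m).
    d = np.linalg.norm(witnesses[:, None, :] - landmarks[None, :, :], axis=-1)
    # Indices of each witness's two nearest landmarks.
    nearest2 = np.argsort(d, axis=1)[:, :2]
    # Collect the witnessed pairs as sorted tuples to deduplicate.
    return {tuple(sorted(pair)) for pair in map(tuple, nearest2)}

# Toy example: three landmarks, two witnesses lying between pairs of them.
landmarks = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
witnesses = np.array([[0.5, 0.0], [0.0, 0.5]])
print(witness_edges(witnesses, landmarks))  # {(0, 1), (0, 2)}
```

The appeal of this construction for regularization is its cost: distances are computed only against the landmarks, so the complex scales with the (small) landmark set rather than the full data set.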

Keywords

Variational Auto-Encoder; topological machine learning; nonlinear dimensionality reduction; Topological Data Analysis; data visualization; representation learning; Betti number; persistent homology; simplicial complex; simplicial regularization

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
