Preprint Article Version 1 Preserved in Portico This version is not peer-reviewed

Parallel Guiding Sparse Auto-Encoder with Wasserstein Regularization for Efficient Classification

Version 1 : Received: 28 April 2023 / Approved: 29 April 2023 / Online: 29 April 2023 (04:14:20 CEST)

A peer-reviewed article of this Preprint also exists.

Lee, H.; Hur, C.; Ibrokhimov, B.; Kang, S. Interactive Guiding Sparse Auto-Encoder with Wasserstein Regularization for Efficient Classification. Appl. Sci. 2023, 13, 7055.

Abstract

In the era of big data, feature engineering has proven effective and important for dimensionality reduction and for extracting useful information from the original features. Feature engineering can be framed as dimensionality reduction and is divided into two types of methods: feature selection and feature extraction. Each has its pros and cons, and many studies have sought to combine them. The sparse autoencoder (SAE) is a representative deep feature learning method that combines feature selection with feature extraction. However, existing SAEs do not consider feature importance during training, which causes them to extract irrelevant information. In this paper, we propose a parallel guiding sparse autoencoder (PGSAE) that guides the learned representation through two parallel guiding layers and sparsity constraints. The parallel guiding layers preserve the main distribution using the Wasserstein distance, a metric of the difference between distributions, and suppress the leverage of the guiding features to prevent overfitting. We evaluate the method on four datasets that differ in dimensionality and number of samples. The proposed PGSAE produces better classification performance than other dimensionality reduction methods.
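As an illustrative sketch only (not the authors' implementation), the objective described above combines three terms: a reconstruction error, a sparsity constraint, and a Wasserstein term that ties the hidden representation to the guiding features. The function names and the weighting parameters `rho`, `beta`, and `gamma` below are assumptions for illustration; the 1-D Wasserstein-1 distance is approximated through the quantile functions of the two empirical samples:

```python
import numpy as np

def wasserstein_1d(a, b, n_quantiles=200):
    """Approximate 1-D Wasserstein-1 distance between two empirical
    samples via their quantile functions: W1 = E_q |F_a^{-1}(q) - F_b^{-1}(q)|."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    return float(np.mean(np.abs(np.quantile(a, q) - np.quantile(b, q))))

def kl_sparsity(rho, hidden, eps=1e-8):
    """KL-divergence sparsity penalty driving the mean hidden activations
    (assumed in (0, 1), e.g. sigmoid outputs) toward the target rate rho."""
    rho_hat = np.clip(hidden.mean(axis=0), eps, 1.0 - eps)
    return float(np.sum(rho * np.log(rho / rho_hat)
                        + (1.0 - rho) * np.log((1.0 - rho) / (1.0 - rho_hat))))

def guided_sae_loss(x, x_recon, hidden, guide_feats,
                    rho=0.05, beta=1.0, gamma=0.1):
    """Toy composite objective (hypothetical weights beta, gamma):
    reconstruction MSE + sparsity penalty + Wasserstein guiding term."""
    recon = float(np.mean((x - x_recon) ** 2))
    return (recon
            + beta * kl_sparsity(rho, hidden)
            + gamma * wasserstein_1d(hidden.ravel(), guide_feats.ravel()))

# Toy usage: with perfect reconstruction and matched guiding features,
# only the sparsity penalty contributes to the loss.
rng = np.random.default_rng(0)
x = rng.normal(size=(32, 10))
hidden = rng.uniform(0.01, 0.99, size=(32, 4))
loss = guided_sae_loss(x, x, hidden, hidden)
```

The Wasserstein term is what distinguishes this from a plain sparse autoencoder loss: it penalizes distributional drift between the hidden activations and the guiding features rather than a pointwise mismatch.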

Keywords

dimensionality reduction; autoencoder; feature extraction; feature selection; guiding layer; regularization

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning

