Preprint
Article

This version is not peer-reviewed.

Accelerated Gradient Descent Using Instance Eliminating Back Propagation

Submitted: 06 August 2020

Posted: 07 August 2020


Abstract
Artificial Intelligence is dominated by Artificial Neural Networks (ANNs). Currently, Batch Gradient Descent (BGD) is the only solution for training ANN weights on large datasets. In this article, a modification to BGD is proposed that significantly reduces training time and improves convergence. The modification, called Instance Eliminating Back Propagation (IEBP), eliminates correctly predicted instances from the back propagation pass. The speedup comes from removing the unnecessary matrix multiplication operations that these instances would otherwise contribute to back propagation. The proposed modification adds no training hyperparameters to the existing ones and reduces memory consumption during training.
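The abstract gives only a verbal description of IEBP. As a rough sketch of what eliminating correctly predicted instances from back propagation could look like, the snippet below masks out correctly classified rows of the batch after the forward pass, so the backward-pass matrix multiplications run on fewer rows. The two-layer sigmoid network, the function names, and the learning rate are illustrative assumptions, not taken from the paper.

```python
# Minimal NumPy sketch of instance-eliminating back propagation (assumed
# implementation, not the authors' code): back-propagate only through
# instances the network currently misclassifies.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step_iebp(X, Y, W1, b1, W2, b2, lr=0.1):
    """One batch-gradient-descent step with instance elimination.

    X: (n, d) input batch, Y: (n, k) one-hot targets.
    """
    # Forward pass on the full batch (needed to find correct predictions).
    H = sigmoid(X @ W1 + b1)           # hidden activations, (n, h)
    O = sigmoid(H @ W2 + b2)           # outputs, (n, k)

    # Instances whose predicted class already matches the target are
    # eliminated from the backward pass.
    wrong = np.argmax(O, axis=1) != np.argmax(Y, axis=1)
    if not np.any(wrong):
        return W1, b1, W2, b2          # nothing left to back-propagate

    Xw, Yw, Hw, Ow = X[wrong], Y[wrong], H[wrong], O[wrong]

    # Standard back propagation, but only over the reduced batch, so the
    # matrix products below involve fewer rows than the full batch.
    dO = (Ow - Yw) * Ow * (1.0 - Ow)           # output-layer delta
    dH = (dO @ W2.T) * Hw * (1.0 - Hw)         # hidden-layer delta

    W2 -= lr * Hw.T @ dO / len(Xw)
    b2 -= lr * dO.mean(axis=0)
    W1 -= lr * Xw.T @ dH / len(Xw)
    b1 -= lr * dH.mean(axis=0)
    return W1, b1, W2, b2
```

In this reading, no new hyperparameter is introduced: the elimination criterion is simply whether the instance is already predicted correctly, and the reduced batch also shrinks the intermediate gradient matrices held in memory during the backward pass.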
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and the preprint are cited in any reuse.