Version 1: Received: 6 August 2020 / Approved: 7 August 2020 / Online: 7 August 2020 (09:29:54 CEST)
How to cite:
Hosseinali, F. Accelerated Gradient Descent Using Instance Eliminating Back Propagation. Preprints 2020, 2020080181. https://doi.org/10.20944/preprints202008.0181.v1
APA Style
Hosseinali, F. (2020). Accelerated Gradient Descent Using Instance Eliminating Back Propagation. Preprints. https://doi.org/10.20944/preprints202008.0181.v1
Chicago/Turabian Style
Hosseinali, F. 2020. "Accelerated Gradient Descent Using Instance Eliminating Back Propagation." Preprints. https://doi.org/10.20944/preprints202008.0181.v1
Abstract
Artificial Intelligence is dominated by Artificial Neural Networks (ANNs). Currently, Batch Gradient Descent (BGD) is the only solution for training ANN weights on large datasets. In this article, a modification to BGD is proposed which significantly reduces training time and improves convergence. The modification, called Instance Eliminating Back Propagation (IEBP), eliminates correctly predicted instances from the back-propagation pass. The speedup comes from removing unnecessary matrix multiplication operations from back propagation. The proposed modification adds no training hyperparameters to the existing ones and reduces memory consumption during training.
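To illustrate the idea described in the abstract, the following is a minimal NumPy sketch of one training step for a single-layer softmax classifier in which correctly predicted instances are dropped before the gradient computation. The model, variable names, and shapes are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Illustrative data and single-layer softmax model (not from the paper).
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 20))            # batch of 64 instances, 20 features
y = rng.integers(0, 5, size=64)          # integer class labels, 5 classes
W = rng.normal(scale=0.1, size=(20, 5))  # weight matrix
lr = 0.1

# Forward pass on the full batch.
logits = X @ W
logits -= logits.max(axis=1, keepdims=True)   # numerical stability
probs = np.exp(logits)
probs /= probs.sum(axis=1, keepdims=True)

# Instance elimination: retain only misclassified instances, so the
# backward-pass matrix multiplications skip correct predictions.
wrong = probs.argmax(axis=1) != y
if wrong.any():
    Xw, yw, pw = X[wrong], y[wrong], probs[wrong]

    # Cross-entropy gradient computed only over the retained instances.
    pw[np.arange(len(yw)), yw] -= 1.0
    grad_W = Xw.T @ pw / len(yw)

    W -= lr * grad_W
```

As training progresses and more instances are predicted correctly, the retained subset shrinks, so the backward-pass matrix products operate on progressively smaller arrays; this is the source of the speedup and reduced memory use claimed in the abstract.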
Subject: Computer Science and Mathematics, Computer Science
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.