Preprint · Article · This version is not peer-reviewed.

Optimization Methods for Solving Deep Learning Problems: A Case Study of Adaptive Learning Rate Optimizers

Submitted: 05 April 2026
Posted: 08 April 2026


Abstract
In this project, we study the impact of optimization methods on deep learning tasks, focusing in particular on adaptive learning rate optimizers (e.g., AdaGrad, RMSProp, and Adam). We describe each optimizer, stating its strengths, weaknesses, and the scenarios in which it excels or underperforms. We take an experimental approach to analyze their performance, generalization, computational efficiency, and hyperparameter sensitivity, and compare the adaptive optimizers against a traditional method (SGD) and a machine learning baseline that requires no learning-rate tuning (LDA). Our empirical results show that Adam performs best on both the training and test sets in terms of accuracy, speed, generalization, and computational efficiency.
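To make the comparison concrete, below is a minimal sketch (not the authors' experimental code) of how the optimizers named in the abstract could be instantiated and compared, assuming PyTorch. The toy data, network architecture, and learning rates are illustrative assumptions, not the study's actual experimental settings.

    # Minimal sketch: comparing SGD, AdaGrad, RMSProp, and Adam on a toy
    # classification task. Data, model, and learning rates are placeholders.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    X = torch.randn(512, 20)           # toy features
    y = (X[:, 0] > 0).long()           # toy binary labels

    def make_model():
        return nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))

    optimizers = {
        "SGD":     lambda p: torch.optim.SGD(p, lr=0.1),
        "AdaGrad": lambda p: torch.optim.Adagrad(p, lr=0.01),
        "RMSProp": lambda p: torch.optim.RMSprop(p, lr=0.001),
        "Adam":    lambda p: torch.optim.Adam(p, lr=0.001),
    }

    loss_fn = nn.CrossEntropyLoss()
    for name, make_opt in optimizers.items():
        model = make_model()
        opt = make_opt(model.parameters())
        for _ in range(100):           # short training loop
            opt.zero_grad()
            loss = loss_fn(model(X), y)
            loss.backward()
            opt.step()
        acc = (model(X).argmax(1) == y).float().mean().item()
        print(f"{name}: final loss {loss.item():.4f}, train acc {acc:.3f}")

In practice, each optimizer's best learning rate differs, which is why the study's hyperparameter-sensitivity analysis matters when reading such comparisons.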
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and the preprint are cited in any reuse.