MDPI and ACS Style
Okada, H. Evolutionary Reinforcement Learning of Neural Network Controller for Acrobot Task — Part 2: Genetic Algorithm. Preprints 2023, 2023100852. https://doi.org/10.20944/preprints202310.0852.v1
APA Style
Okada, H. (2023). Evolutionary Reinforcement Learning of Neural Network Controller for Acrobot Task — Part 2: Genetic Algorithm. Preprints. https://doi.org/10.20944/preprints202310.0852.v1
Chicago/Turabian Style
Okada, H. 2023. "Evolutionary Reinforcement Learning of Neural Network Controller for Acrobot Task — Part 2: Genetic Algorithm." Preprints. https://doi.org/10.20944/preprints202310.0852.v1
Abstract
Evolutionary algorithms are applicable to reinforcement learning of neural networks because they do not rely on gradient-based optimization. To train neural networks successfully with an evolutionary algorithm, the algorithm must be chosen carefully, since many algorithmic variations are available. The author previously reported an experimental evaluation of Evolution Strategy (ES) for reinforcement learning of neural networks on the Acrobot control task. In this study, Genetic Algorithm (GA) is adopted as another instance of the major evolutionary algorithms. Experimental results show no statistically significant difference between the performances of GA and ES, but the two algorithms prioritized generations and offspring differently: GA performed better with more generations, whereas ES performed better with more offspring. Eight hidden units performed best among the four variations tested (4, 8, 16, or 32 units), which aligns with the previous study using ES.
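To illustrate the approach described in the abstract, the sketch below shows a minimal real-coded GA of the kind commonly used for neuroevolution: each individual is a flat vector of neural-network weights, and selection, crossover, and mutation are applied per generation. This is an assumption-laden illustration, not the paper's actual implementation; its hyperparameters (population size, mutation scale, elite fraction) are placeholders, and the toy fitness function stands in for what would, in the paper's setting, be the episode return of the controller on the Acrobot task.

```python
import numpy as np

def ga_optimize(fitness, dim, pop_size=20, generations=50,
                mutation_std=0.1, elite_frac=0.25, seed=0):
    """Minimal real-coded GA: truncation selection, uniform crossover,
    Gaussian mutation. `fitness` maps a flat weight vector to a scalar
    score (higher is better). Hyperparameters are illustrative only."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(0.0, 1.0, size=(pop_size, dim))
    n_elite = max(1, int(elite_frac * pop_size))
    best, best_fit = None, -np.inf
    for _ in range(generations):
        fits = np.array([fitness(w) for w in pop])
        order = np.argsort(fits)[::-1]          # best first
        pop = pop[order]
        if fits[order[0]] > best_fit:
            best, best_fit = pop[0].copy(), fits[order[0]]
        parents = pop[:n_elite]
        children = [p.copy() for p in parents]   # elitism: keep parents
        while len(children) < pop_size:
            a, b = parents[rng.integers(n_elite, size=2)]
            mask = rng.random(dim) < 0.5         # uniform crossover
            child = np.where(mask, a, b)
            child = child + rng.normal(0.0, mutation_std, dim)
            children.append(child)
        pop = np.array(children)
    return best, best_fit

# Toy surrogate fitness standing in for an Acrobot episode return:
# maximize -||w - 1||^2, whose optimum is the all-ones vector.
if __name__ == "__main__":
    w, f = ga_optimize(lambda w: -np.sum((w - 1.0) ** 2), dim=8)
    print(f)
```

In the actual experiments, evaluating an individual would mean running the neural network controller (with the candidate weight vector) for one or more Acrobot episodes and using the accumulated reward as its fitness.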
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright:
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.