Preprint · Article · Version 1 · Preserved in Portico · This version is not peer-reviewed
Experimental Research on Avoidance Obstacle Control for Mobile Robots Using Q-Learning (QL) and Deep Q-Learning (DQL) Algorithms in Dynamic Environments
Ha, V.T.; Vinh, V.Q. Experimental Research on Avoidance Obstacle Control for Mobile Robots Using Q-Learning (QL) and Deep Q-Learning (DQL) Algorithms in Dynamic Environments. Actuators 2024, 13, 26.
Abstract
This paper presents experimental results on static and dynamic obstacle avoidance for a two-wheel mobile robot with independent drive control, using a deep Q-learning (DQL) reinforcement learning algorithm. DQL is an alternative method that combines the Q-learning (QL) algorithm with a neural network: the network in the DQL algorithm acts as an approximator of the Q-table for each (state, action) pair. The effectiveness of the proposed solution was verified through simulation, programming, and experimentation, and the DQL algorithm is compared against the QL algorithm. First, the mobile robot communicated with the control script through the Robot Operating System (ROS). The robot was programmed in Python on ROS, combined with the DQL controller in the Gazebo simulator. The mobile robot then performed experiments in a workshop under several different scenarios. The DQL controller improves computation time, convergence time, trajectory planning accuracy, and obstacle avoidance. Therefore, the DQL controller solves path optimization problems for mobile robots better than the QL controller.
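The core idea stated above can be sketched in a few lines: tabular QL updates one cell of a Q-matrix per step, while DQL replaces that matrix with a trainable function approximator updated toward the same temporal-difference target. The sketch below is illustrative only, not the authors' implementation; the function names, the linear approximator (standing in for the paper's neural network), and the learning-rate values are assumptions.

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Tabular Q-learning: update one (state, action) cell of the Q-matrix."""
    td_target = r + gamma * np.max(Q[s_next])   # bootstrap from the best next action
    Q[s, a] += alpha * (td_target - Q[s, a])    # move Q(s, a) toward the TD target
    return Q

def dql_step(W, x, a, r, x_next, alpha=0.01, gamma=0.9):
    """DQL-style update for a linear approximator Q(s, .) = W @ x,
    a minimal stand-in for the neural network that replaces the Q-table."""
    q = W @ x                                   # predicted Q-values for all actions
    td_target = r + gamma * np.max(W @ x_next)  # target computed from the approximator itself
    W[a] += alpha * (td_target - q[a]) * x      # one SGD step on the squared TD error
    return W
```

With a single reward of 1.0 and an all-zero table, `q_update(Q, 0, 1, 1.0, 2)` raises `Q[0, 1]` to `alpha * 1.0 = 0.1`, which is the same target the approximator in `dql_step` regresses toward; the difference is that DQL generalizes across states instead of storing one value per cell.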
Keywords
Autonomous Mobile Robot; ROS; DQL; QL
Subject
Engineering, Control and Systems Engineering
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.