Preprint
Article

Robot Navigation in Crowded Environments: a Reinforcement Learning Approach


A peer-reviewed article of this preprint also exists.

Submitted: 12 December 2022

Posted: 13 December 2022

Abstract
For a mobile robot, navigation in a densely crowded space can be a challenging and sometimes impossible task, especially with traditional techniques. In this paper, we present a framework to train neural controllers for differential drive mobile robots that must safely navigate a crowded environment while trying to reach a target location. To learn the robot's policy, we train a convolutional neural network using two reinforcement learning algorithms, Deep Q-Networks (DQN) and Asynchronous Advantage Actor-Critic (A3C), and develop a training pipeline that allows us to scale the process to several compute nodes. We show that the asynchronous training procedure in A3C can be leveraged to quickly train neural controllers and test them on a real robot in a crowded environment.
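The abstract's DQN component trains a network by repeatedly applying a temporal-difference update toward a bootstrapped target. The paper's actual architecture (a convolutional network over the robot's observations) is not detailed here, so the sketch below substitutes a linear Q-function over a hypothetical feature vector; the feature size, action set, and learning rate are all illustrative assumptions, but the single-transition Q-learning update itself is the standard one DQN builds on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the paper's CNN: a linear Q-function over hypothetical
# observation features. Sizes are arbitrary for illustration.
n_features, n_actions = 8, 3  # e.g. turn-left / go-straight / turn-right
W = rng.normal(scale=0.1, size=(n_features, n_actions))

def q_values(obs, W):
    """Q(s, a) for all actions a, given feature vector obs."""
    return obs @ W

def dqn_update(W, obs, action, reward, next_obs, done, gamma=0.99, lr=0.01):
    """One Q-learning step on a single transition (s, a, r, s')."""
    # Bootstrapped target: r + gamma * max_a' Q(s', a'), unless terminal.
    target = reward
    if not done:
        target += gamma * np.max(q_values(next_obs, W))
    td_error = target - q_values(obs, W)[action]
    # Gradient step on 0.5 * td_error^2 for the chosen action's weights.
    W = W.copy()
    W[:, action] += lr * td_error * obs
    return W, td_error

# One update on a synthetic transition.
obs = rng.normal(size=n_features)
next_obs = rng.normal(size=n_features)
W2, err = dqn_update(W, obs, action=1, reward=1.0, next_obs=next_obs, done=False)
```

In full DQN this update is applied to minibatches drawn from a replay buffer, with a separate target network supplying the bootstrap term; A3C instead runs several of these learners asynchronously against a shared policy/value network, which is what the paper's multi-node pipeline exploits.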
Keywords: 
Subject: Engineering - Control and Systems Engineering
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.

