Preprint Article · Version 2 · Preserved in Portico · This version is not peer-reviewed

Just Don’t Fall: An AI Agent’s Learning Journey Towards Posture Stabilisation

Version 1 : Received: 4 June 2020 / Approved: 5 June 2020 / Online: 5 June 2020 (13:58:07 CEST)
Version 2 : Received: 8 June 2020 / Approved: 8 June 2020 / Online: 8 June 2020 (10:25:54 CEST)

A peer-reviewed article of this Preprint also exists.

Hossny, M.; Iskander, J. Just Don’t Fall: An AI Agent’s Learning Journey Towards Posture Stabilisation. AI 2020, 1, 286-298.

Journal reference: AI 2020, 1
DOI: 10.3390/ai1020019

Abstract

Learning to maintain postural balance while standing requires significant fine coordination between the neuromuscular system and the sensory system. It is one of the key contributing factors towards fall prevention, especially in the older population. Using artificial intelligence (AI), we can similarly teach an agent to maintain a standing posture, and thus teach the agent not to fall. In this paper, we investigate the learning progress of an AI agent and how it maintains a stable standing posture through reinforcement learning. During training, the AI agent learnt three policies. First, it learnt to maintain the Centre-of-Gravity and Zero-Moment-Point in front of the body. Then, it learnt to shift the load of the entire body onto one leg while using the other leg for fine-tuning the balancing action. Finally, it started to learn the coordination between the two pre-trained policies. This study shows the potential of using deep reinforcement learning in human movement studies. The learnt AI behaviour also exhibited attempts to achieve an unplanned goal because it correlated with the set goal (e.g., walking in order to prevent falling). The failed attempts to maintain a standing posture are an interesting by-product which can enrich fall detection and prevention research efforts.
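The "just don't fall" objective described in the abstract can be sketched as a reward-shaping function for a reinforcement learning agent. This is a hedged illustration only, not the authors' implementation: the function name, thresholds, and weighting below are assumptions, chosen to show how falling can be penalised while keeping the Centre-of-Gravity near the support base.

```python
def balance_reward(pelvis_height, cog_offset,
                   fallen_height=0.6, upright_height=0.9, cog_weight=2.0):
    """Hypothetical per-step reward for a standing agent.

    pelvis_height: vertical pelvis position in metres.
    cog_offset: horizontal distance (m) between the Centre-of-Gravity
        and the centre of the support polygon.
    Returns (reward, done): the episode terminates when the agent falls.
    """
    if pelvis_height < fallen_height:
        # Falling ends the episode with a large penalty.
        return -1.0, True
    # Reward staying tall, penalise drifting the CoG away from the base.
    upright = min(pelvis_height / upright_height, 1.0)
    return upright - cog_weight * cog_offset, False
```

For example, `balance_reward(0.95, 0.02)` yields a positive reward while `balance_reward(0.3, 0.0)` terminates the episode; a policy-gradient learner maximising this signal is pushed towards the posture-maintenance behaviour the paper studies.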

Subject Areas

Postural Balance; Deep Reinforcement Learning; Postural Stabilisation; Biomechanics

Comments (1)

Comment 1
Received: 8 June 2020
Commenter: Mohammed Hossny
Commenter's Conflict of Interests: Author
Comment: 1. Updated Fig. 3 and abbreviation list (sorted).
2. Added more explanation on the scope of the paper (Section 1).
3. Added a paragraph on the NEAT algorithm as a biologically inspired network topology augmentation.
4. Title of Section 4.1 changed from "surprising behaviour" to "interesting behaviour".
5. Proofread and addressed typos.

All modifications are listed as {\color{black} ...}.


