Preprint Review, Version 1. Preserved in Portico. This version is not peer-reviewed.

RGB-D Data-based Action Recognition: A Review

Version 1 : Received: 18 January 2021 / Approved: 19 January 2021 / Online: 19 January 2021 (09:14:30 CET)

A peer-reviewed article of this Preprint also exists.

Shaikh, M. B.; Chai, D. RGB-D Data-Based Action Recognition: A Review. Sensors, 2021, 21, 4246.


Classification of human actions from uni-modal and multi-modal datasets is an ongoing research problem in computer vision. This review surveys the current literature on data-fusion and action-recognition techniques and identifies gaps and future research directions. Success in producing cost-effective and portable vision-based sensors has dramatically increased the number and size of available datasets. This growth in action-recognition datasets intersects with advances in deep-learning architectures and computational support, both of which offer significant research opportunities. Each action-data modality, such as RGB, depth, skeleton, and infrared, has distinct characteristics; it is therefore important to exploit the value of each modality for better action recognition. This article focuses on data-fusion and recognition techniques in the context of vision, from both uni-modal and multi-modal perspectives. We conclude by discussing research challenges, emerging trends, and possible future research directions.
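As a toy illustration of the decision-level (late) fusion of modalities that the review discusses, the sketch below averages per-class scores from two hypothetical streams, one RGB and one depth. The function name, the weighting scheme, and the example scores are illustrative assumptions, not a method from the reviewed literature.

```python
import numpy as np

def late_fusion(rgb_scores: np.ndarray, depth_scores: np.ndarray,
                rgb_weight: float = 0.5) -> int:
    """Fuse per-class action scores from two modalities by weighted
    averaging and return the index of the predicted action class.
    (Illustrative sketch; real systems may fuse features or decisions
    in many other ways.)"""
    fused = rgb_weight * rgb_scores + (1.0 - rgb_weight) * depth_scores
    return int(np.argmax(fused))

# Toy per-class scores (e.g. softmax outputs) for three action classes.
rgb = np.array([0.2, 0.5, 0.3])    # RGB stream favours class 1
depth = np.array([0.1, 0.3, 0.6])  # depth stream favours class 2
print(late_fusion(rgb, depth))     # equal weights: fused = [0.15, 0.40, 0.45]
```

With equal weights the fused scores favour class 2; raising `rgb_weight` toward 1.0 shifts the decision toward the RGB stream, which is one way the distinct characteristics of each modality can be traded off.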


Action Recognition; Deep Learning; Data Fusion


Computer Science and Mathematics, Artificial Intelligence and Machine Learning
