Preserved in Portico. This version is not peer-reviewed.
RGB-D Data-based Action Recognition: A Review
Received: 18 January 2021 / Approved: 19 January 2021 / Online: 19 January 2021 (09:14:30 CET)
A peer-reviewed article of this Preprint also exists.
Journal reference: Sensors 2021, 21, 4246
Classification of human actions from uni-modal and multi-modal datasets is an ongoing research problem in computer vision. This review aims to scope the current literature on data-fusion and action-recognition techniques and to identify gaps and future research directions. Success in producing cost-effective and portable vision-based sensors has dramatically increased the number and size of datasets. The rise in the number of action-recognition datasets intersects with advances in deep-learning architectures and computational support, both of which offer significant research opportunities. Naturally, each action-data modality, such as RGB, depth, skeleton, and infrared, has distinct characteristics; it is therefore important to exploit the value of each modality for better action recognition. In this article, we focus on data-fusion and recognition techniques in the context of vision, from both uni-modal and multi-modal perspectives. We conclude by discussing research challenges, emerging trends, and possible future research directions.
Action Recognition; Deep Learning; Data Fusion
MATHEMATICS & COMPUTER SCIENCE, Artificial Intelligence & Robotics
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.