Preprint Article

This version is not peer-reviewed.

Computer Vision for Elderly Care Based on Hand Gestures

A peer-reviewed article of this preprint also exists.

Submitted: 24 July 2020

Posted: 26 July 2020

Abstract
Hand gestures can play an important role in medical applications for elderly health care, where natural interaction allows different requests to be issued by making specific gestures. In this study we explored three scenarios using a Microsoft Kinect V2 depth sensor and evaluated the effectiveness of each. The first scenario used the default system embedded in the Kinect V2 sensor, whose depth metadata provides 11 parameters related to the tracked body, with five gestures for each hand. The second scenario combined the joint tracking provided by the Kinect depth metadata with a depth threshold to improve hand segmentation and efficiently recognize the number of extended fingers. The third scenario used a simple convolutional neural network together with joint tracking from the depth metadata to recognize five categories of gestures. In this study, deaf-mute elderly people execute five different hand gestures to indicate a specific request, such as needing water, a meal, the toilet, help, or medicine. The requests are then sent to the care provider's smartphone, since these elderly people cannot carry out such activities independently. The system transfers each request as a message over the Global System for Mobile Communications (GSM) using a microcontroller.
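The second scenario's idea of combining a tracked hand-joint depth with a depth threshold can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `segment_hand` and the band half-width of 80 mm are assumptions chosen for the example, and the frame is a tiny synthetic array standing in for a Kinect V2 depth frame (millimetre values).

```python
import numpy as np

def segment_hand(depth_frame, hand_depth_mm, band_mm=80):
    """Keep only pixels within a depth band around the tracked hand joint.

    depth_frame:   2-D array of depth values in millimetres (Kinect V2 style).
    hand_depth_mm: depth of the hand joint reported by the skeleton tracker.
    band_mm:       assumed half-width of the segmentation band (hypothetical).
    """
    near = hand_depth_mm - band_mm
    far = hand_depth_mm + band_mm
    # Boolean mask: True where the pixel plausibly belongs to the hand.
    return (depth_frame >= near) & (depth_frame <= far)

# Tiny synthetic frame: a 2x2 "hand" at ~900 mm, background at 2000 mm.
frame = np.full((4, 4), 2000, dtype=np.uint16)
frame[1:3, 1:3] = 900
mask = segment_hand(frame, 900)
print(int(mask.sum()))  # → 4 (only the hand-region pixels survive)
```

The resulting binary mask could then feed a contour- or convexity-based finger count, or be cropped and passed to the convolutional network of the third scenario.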
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.