Preprint Article Version 1 Preserved in Portico This version is not peer-reviewed

Heart Rate Estimation by Video-Based Reflectance Photoplethysmography

Version 1 : Received: 23 February 2023 / Approved: 24 February 2023 / Online: 24 February 2023 (02:52:59 CET)

How to cite: King, O. Heart Rate Estimation by Video-Based Reflectance Photoplethysmography. Preprints 2023, 2023020415. https://doi.org/10.20944/preprints202302.0415.v1

Abstract

Non-invasive heart rate (HR) monitoring is important in clinical settings, as it plays a critical role in diagnosing a range of health conditions and assessing well-being. Presently, the gold standards for HR measurement are all based on sensors that require skin contact. Apart from being inconvenient, contact sensors have proven problematic in certain scenarios: they cannot be used when mechanical isolation of the patient is imperative (burn victims, patients with shaky hands and feet), they cause skin damage to premature babies in the ICU, and they increase the risk of spreading infections. Non-contact HR monitoring using a camera has recently been shown to be a viable alternative. It is now possible to record cardiac-synchronous blood volume variations from facial videos of human subjects under ambient lighting. These variations produce corresponding changes in skin reflectance, which can be extracted as a raw reflectance photoplethysmography (rPPG) signal and processed to reveal HR. In this project, an algorithmic framework for webcam-based HR detection was successfully implemented in MATLAB. The investigation was based on 100 self-captured videos (dark-skinned subject) and 48 videos (from 12 subjects, all but one fair-skinned) obtained from COHFACE, an online database of facial videos and corresponding physiological signals. While the performance metrics (mean error, SNR) of the rPPG signals obtained from the self-captured videos were poor (best-case mean error of 22%), they were good enough to demonstrate the success of the implementation. The poor results were attributed primarily to skin tone, as rPPG SNR is known to be particularly low for dark tones. The results for the COHFACE videos were far superior, with mean error ranging from 3% to 15% (among 8 different rPPG signals) under ambient lighting and from 0% to 9% under dedicated lighting. This investigation sets the foundation for future research directed at optimizing rPPG performance metrics for dark-skinned subjects.
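The abstract describes a pipeline in which a raw rPPG trace, extracted from frame-by-frame skin reflectance, is processed to reveal HR. The final stage of such a pipeline is typically spectral peak picking within the plausible cardiac band. The sketch below illustrates that stage only; it is not the paper's MATLAB implementation (which is not reproduced here), and the synthetic input, sampling rate, and band limits are illustrative assumptions.

```python
import numpy as np

def estimate_hr_bpm(rppg, fs, lo_hz=0.7, hi_hz=4.0):
    """Estimate heart rate from a raw rPPG trace by locating the dominant
    spectral peak inside a plausible cardiac band (0.7-4 Hz ~ 42-240 bpm)."""
    x = rppg - np.mean(rppg)                       # remove the DC (baseline) level
    spectrum = np.abs(np.fft.rfft(x))              # one-sided magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= lo_hz) & (freqs <= hi_hz)     # restrict to the cardiac band
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_hz                          # convert Hz to beats per minute

# Synthetic example: a 75 bpm (1.25 Hz) pulse sampled at a webcam-like 30 fps
# for 10 s, with additive noise standing in for motion/illumination artifacts.
rng = np.random.default_rng(0)
fs = 30.0
t = np.arange(0, 10, 1 / fs)
rppg = 0.02 * np.sin(2 * np.pi * 1.25 * t) + 0.005 * rng.standard_normal(t.size)
print(round(estimate_hr_bpm(rppg, fs)))  # ≈ 75
```

With a 10 s window the frequency resolution is 0.1 Hz (6 bpm), which is why real rPPG systems trade window length against responsiveness to HR changes.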

Keywords

reflectance PPG; reflectance photoplethysmography; heart rate estimation; video

Subject

Biology and Life Sciences, Biology and Biotechnology

