Background:
Machine learning can analyze vast amounts of data and predict future events. Our group developed machine learning models that predict vital signs. To convey these predictions without numerical values and make them easily usable for human caregivers, we aimed to integrate them into the Philips Visual-Patient-avatar, an avatar-based visualization of patient monitoring.
Methods:
We conducted a computer-based simulation study with 70 participants in three European university hospitals. We validated the vital sign prediction visualizations by testing how well anesthesiologists and intensivists identified them. Each prediction visualization consisted of a condition (e.g., low blood pressure) and an urgency (a visual indication of the timespan within which the condition is expected to occur). To obtain qualitative user feedback, we also conducted standardized interviews and derived statements from them that participants later rated in an online survey.
Results:
The mixed logistic regression model showed 77.9% (95% CI 73.2-82.0%) correct identification of prediction visualizations (i.e., condition and urgency both correctly identified) and 93.8% (95% CI 93.7-93.8%) for conditions alone (i.e., without considering urgencies). Forty-nine of 70 participants completed the online survey. Survey participants agreed that the prediction visualizations were fun to use (32/49, 65.3%) and that they could imagine working with them in the future (30/49, 61.2%). They also agreed that identifying the urgencies was difficult (32/49, 65.3%).
Conclusions:
This study found that care providers correctly identified >90% of the conditions (i.e., without considering urgencies). Identification accuracy decreased when urgencies were considered in addition to conditions. Therefore, in future development of the technology, we will focus on either displaying conditions only (without urgencies) or improving the urgency visualizations to enhance usability for human users.