1. Introduction
As social creatures, humans engage in a multitude of daily interactions and communications through the exchange of information. Facial expressions, produced by specific facial muscle movements, play a pivotal role in sending and receiving messages during these interactions and act as powerful non-verbal cues, for example in conveying emotions. Face-to-face interactions begin in the earliest stages of life between parents and infants and are important for parents to evaluate the affective state of their infants, whose verbal communication channels have not yet developed [
1]. Often, observing the entire face is not necessary to understand the emotions conveyed through facial expressions; individual facial regions are adequate to detect emotions, for example, sad or angry eyes and a happy mouth [
2].
Facial expressions can be voluntary, where the communicator suppresses or conceals the actual emotion to display a different one, or involuntary, where the genuine emotion leaks through in the form of subtle micro-expressions [
3]. Studies suggest that facial expressions are a behavioral phenotype, which, under similar circumstances, activates the same facial muscles and facial movements [
This universal similarity of facial expressions has contributed vastly to social intelligence and efficient social interactions. However, the expression, as well as the perception and interpretation, of facial expressions can be altered under various disease conditions. Therefore, tracking deviations of facial expressions from baseline can be used as an objective tool in health assessment, especially in conditions where the patient loses the ability to verbally convey the disease or the pain associated with it.
This review focuses on the feasibility and current research on using facial expression cues and facial features in health assessment. It summarizes several widely used tools for quantifying facial expression frequencies and magnitudes, discusses a broad range of applications of monitoring facial expressions under several health impairments, and finally outlines the advantages and limitations of this approach.
2. Tools for Objective Measure of Facial Expressions
2.1. Facial Action Coding System (FACS)
Originally developed by psychologists Paul Ekman and Wallace Friesen in 1978 [
5], the Facial Action Coding System (FACS) identifies a standard set of facial muscle movements corresponding to different facial expressions. FACS describes 46 such movements, known as action units (AUs), defined by the activation and relaxation of the underlying facial muscles, and combines them to objectively measure changes in facial expressions. For example, AU6 and AU12 respectively represent the activation of the Orbicularis oculi (cheek raiser) and Zygomaticus major (lip corner puller) muscles, both of which are activated in happiness and joy. Alternatively, the combined activation of the Levator labii superioris alaeque nasi (nose wrinkler), Depressor anguli oris (lip corner depressor) and Depressor labii inferioris (lower lip depressor) muscles is associated with the facial expression of disgust. In FACS, the activation intensity of AUs is scored on a scale from 0 (complete relaxation) to 5 (maximum activation).
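To make the AU-to-expression mapping concrete, the short Python sketch below encodes the two prototype expressions mentioned above as AU sets and checks which prototypes are active for a given set of AU intensities. The prototype sets, intensity threshold, and function names are illustrative assumptions, not part of the FACS specification.

```python
# Illustrative sketch (not part of FACS itself): prototype AU sets for the two
# example expressions described above, given AU intensities on a 0-5 scale.
PROTOTYPES = {
    "happiness": {6, 12},    # cheek raiser + lip corner puller
    "disgust": {9, 15, 16},  # nose wrinkler + lip corner depressor + lower lip depressor
}

def active_prototypes(au_intensities, threshold=1.0):
    """Return expressions whose prototype AUs all exceed `threshold` intensity."""
    return [
        name for name, aus in PROTOTYPES.items()
        if all(au_intensities.get(au, 0.0) >= threshold for au in aus)
    ]

# Example: strong AU6 and AU12 activation is consistent with a happy expression.
print(active_prototypes({6: 3.5, 12: 4.0, 9: 0.2}))  # ['happiness']
```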
Due to its objective measurement of facial expressions and its applicability to high-temporal-resolution facial videos, which allows the study of subtle micro-expressions, FACS is widely used in fields such as human-computer interaction and psychology to analyze human behavior, social interactions, and emotions evoked in response to triggers and stimuli. Although the accuracy of FACS once depended on training experienced FACS coders and was subject to their inter-individual perceptual differences, modern machine learning-based automated FACS tools such as OpenFace [
6], Py-Feat [
7], Noldus FaceReader [
8] and Affdex [
9] provide efficient measurements of AU activations and facial expressions without manual intervention.
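As a concrete illustration of how such automated FACS output is typically consumed, the sketch below averages per-frame AU intensities from a CSV file in the format produced by OpenFace's FeatureExtraction tool; the column names, their leading-space handling, and the file path are assumptions that may differ between tool versions.

```python
# A minimal sketch of working with automated FACS output, assuming a CSV with
# per-frame AU intensity columns such as "AU06_r" and "AU12_r" (0-5 scale).
import pandas as pd

def mean_au_intensity(csv_path, aus=("AU06_r", "AU12_r")):
    """Average per-frame AU intensities over a recorded facial video."""
    df = pd.read_csv(csv_path)
    df.columns = [c.strip() for c in df.columns]  # some OpenFace versions pad column names
    return df[list(aus)].mean()

# Example (hypothetical file): higher mean AU06/AU12 suggests more frequent smiling.
# print(mean_au_intensity("openface_output/subject01.csv"))
```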
2.2. Facial Electromyography (EMG)
Facial electromyography (EMG) provides an objective method to record the activity of facial muscles with high temporal resolution and sensitivity, and is therefore capable of capturing the slightest muscle contractions, which may not be visible to the naked eye. The development of EMG is the result of the successive work of multiple neurophysiologists, from the concept of animal electricity (i.e., animal muscle contractions elicited by electrical stimulation) in 1771 to the invention of techniques to amplify bio-electrical signals at the beginning of the 20th century [
10]. EMG activity, which can be recorded using surface electrodes, consists of the electrical signals produced by muscle fiber depolarizations that evoke action potentials and cause muscle contractions [
11]. Relaxation and contraction of facial muscles during different facial expressions can therefore be captured using facial surface EMG electrodes.
Despite its many advantages, facial EMG carries several limitations, including the need for specialized recording equipment, high inter-subject signal variability and electrode placements that visually obscure the face, making it less applicable in social contexts. Nevertheless, the use of EMG to analyze facial expressions and emotions dates back to the 1970s. Based on facial EMG activity recorded from corrugator and zygomaticus muscle sites, researchers were able to differentiate and quantify the intensity of happy and sad emotions imagined by human subjects [
12,
13]. More recently, studies in EMG-based emotion recognition that apply machine learning techniques to facial EMG features have reached high accuracies in distinguishing between multiple emotions [
14,
15]. Hence, the latest virtual reality headsets have started integrating EMG sensors for emotion detection during VR simulations [
16].
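To illustrate the kind of pipeline used in the EMG-based emotion recognition studies cited above, the following sketch computes common time-domain features from windows of two-channel facial EMG and trains a support vector machine on them. The feature set, window size, channel layout, and synthetic data are illustrative assumptions rather than any specific study's protocol.

```python
# A minimal sketch of EMG feature extraction plus classification, assuming windows of
# two-channel surface EMG (e.g., corrugator and zygomaticus sites) and binary affect labels.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def emg_features(window):
    """Common time-domain EMG features for one channel window."""
    return [
        np.sqrt(np.mean(window ** 2)),    # root mean square (RMS)
        np.mean(np.abs(window)),          # mean absolute value (MAV)
        np.sum(np.abs(np.diff(window))),  # waveform length
    ]

# Synthetic stand-in data: 40 windows x 512 samples x 2 channels.
rng = np.random.default_rng(0)
windows = rng.normal(size=(40, 512, 2))
labels = rng.integers(0, 2, size=40)      # e.g., 0 = negative, 1 = positive affect

X = np.array([np.concatenate([emg_features(w[:, ch]) for ch in range(2)]) for w in windows])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, labels)
```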
2.3. Computer-Vision Based Techniques
Computer vision, a branch of artificial intelligence and computer science that trains computers to understand and derive meaningful information from visual inputs, including images and videos, has shown ground-breaking advances in recent years. It has introduced several techniques, including the face mesh and the Histogram of Oriented Gradients (HOG), that are used to extract facial features. A face mesh represents the three-dimensional geometry of the face as a collection of vertices interconnected in a mesh-like pattern that approximates the shape of the face [
17]. Recent advances in computer vision and machine learning have produced several software tools that automatically extract the face mesh from images and videos; for example, MediaPipe computes a face mesh consisting of 478 vertices located at key points on the human face [
18]. Tracking the face mesh can be used to capture facial expressions [
19], without capturing personal identity, hence protecting human privacy.
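A minimal sketch of face mesh extraction with MediaPipe's legacy `mp.solutions.face_mesh` interface is given below; with `refine_landmarks=True` the mesh contains 478 normalized 3-D vertices. The input file name is hypothetical, and the newer MediaPipe Tasks API uses a different interface.

```python
# A minimal sketch of face mesh extraction with MediaPipe's legacy solutions API.
import cv2
import mediapipe as mp

def extract_face_mesh(image_path):
    image = cv2.imread(image_path)
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    with mp.solutions.face_mesh.FaceMesh(
        static_image_mode=True, max_num_faces=1, refine_landmarks=True
    ) as face_mesh:
        results = face_mesh.process(rgb)
    if not results.multi_face_landmarks:
        return None
    # Each landmark holds x, y (normalized image coordinates) and z (relative depth).
    return [(lm.x, lm.y, lm.z) for lm in results.multi_face_landmarks[0].landmark]

# mesh = extract_face_mesh("frame_0001.png")  # hypothetical frame from a facial video
```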
Histogram of Oriented Gradients (HOG) has been widely used to detect objects in computer vision applications [
20]. It computes histograms of gradient orientations over small cells of an image, providing a compact representation of an object's shape. Hence, HOG can be used to model the shape of facial muscles through edge analysis. HOG features have been used effectively in multiple facial expression recognition studies in combination with machine learning techniques [
21]. HOG descriptors are relatively robust to illumination changes, which adds further value to this technique. Various automated software tools are available to compute HOG, including OpenFace [
6], which also simultaneously computes other facial features, including AUs, facial landmarks, head pose and gaze direction.
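As an illustration, the following sketch computes a HOG descriptor from a face crop using scikit-image; the crop size and the cell/block parameters (common Dalal-Triggs-style defaults) are assumptions, not values prescribed by the studies cited above.

```python
# A minimal sketch of computing HOG features from a face crop with scikit-image.
from skimage import io, transform
from skimage.feature import hog

def face_hog(image_path, size=(128, 128)):
    gray = transform.resize(io.imread(image_path, as_gray=True), size)
    return hog(
        gray,
        orientations=9,
        pixels_per_cell=(8, 8),
        cells_per_block=(2, 2),
        block_norm="L2-Hys",
    )

# features = face_hog("face_crop.png")  # 1-D descriptor usable as classifier input
```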
3. Facial Expressions in Health Assessment
Various health impairments, including cognitive decline, pain experience, stroke, inflammations, migraines, and non-psychotic disorders, can cause deviations in facial expressions of patients compared to healthy controls. Hence, recent years have witnessed an increasing interest and advancements in medical assessments using the aforementioned techniques and tools, especially in the initial screening phase and in remote settings including telemedicine applications. Multiple studies have focused on the application of facial expressions and facial features-based technologies for early detection of various health conditions. These will be discussed below in three main application sections: 1. detection of cognitive impairments, 2. detection of pain, and 3. detection of other health conditions.
3.1. Facial Expressions in Cognitive Impairments
Cognitive impairments, which can be mild or severe, affect a person's cognitive abilities and daily functioning, including thinking, memory, learning and decision making. These have become common conditions among older adults; for example, in Canada over 770,000 individuals are living with Alzheimer's disease (AD) or other forms of dementia in 2025, a number projected to reach 1.7 million by 2050. Dementia is a clinical diagnosis made on the basis of progressive cognitive decline, and it can arise from different pathophysiological processes, including AD (50-75%), vascular dementia (20%), dementia with Lewy bodies (5%), and frontotemporal lobar dementia (5%) [
22]. AD, which is the most prevalent type of dementia, is named after the German psychiatrist Alois Alzheimer, who observed massive loss of neurons in the cerebral cortex of his patient suffering from memory loss [
23]. The state between normal aging and dementia, known as mild cognitive impairment (MCI), progresses to dementia, and frequently to AD, at an annual rate of 8-15% [
24]. Therefore, early screening for cognitive decline has become increasingly important.
Conventional cognitive screening methods, such as brain scans (CT, MRI, PET), are widely used to study early structural and metabolic changes in multiple brain areas, including the hippocampus, entorhinal cortex and gray matter in the medial temporal lobe [
25]. However, brain imaging tends to be resource-intensive, costly, and stressful for patients. Apart from brain scans, multiple psychiatric tests are available for cognitive assessment, such as the Mini-Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA). The MMSE measures five areas of cognitive function: orientation, registration, attention and calculation, recall, and language [
26]. In comparison, the MoCA covers more domains, including visuospatial skills, attention, language, abstract reasoning, delayed recall, executive function and orientation, with higher specificity and sensitivity [
27]. However, these tests have psychometric limitations; for example, they may not be sensitive enough to detect mild cognitive decline and can show a ceiling effect in patients with high IQ [
28]. Therefore, developing efficient and accessible alternative approaches to cognitive assessment is crucial.
The study of facial expression deviations during cognitive decline has emerged as a promising approach for cognitive impairment detection. Previous research in this field can be broadly divided into two main areas: 1) exploratory studies that examine changes in facial expressions among cognitively impaired patients compared to healthy individuals, and 2) studies that assess facial expressions and features as a detection tool for distinguishing cognitively impaired patients from healthy controls.
3.1.1. Exploratory studies
A substantial body of research has studied the facial expressions of cognitively impaired patients, dating back to the 1990s. Asplund et al., 1991, who used FACS to study the ability of four patients with Alzheimer's type dementia to exhibit facial expressions, found that these patients showed fewer complex emotion-related facial expressions under both pleasant and unpleasant stimulus conditions [
29]. This phenomenon, known as hypomimia (i.e., reduced facial expressions and facial cues), was also reported in subsequent FACS-based studies of severely demented patients whose facial videos were recorded during caregiving activities [
30]. A separate study videotaped AD patients and Parkinson's disease (PD) patients during neutral and posed facial emotions and assessed them using the MDS-UPDRS item 3.2 score, a facial expression scale ranging from 0 (“normal facial mimic”) to 4 (“fixed facial expression with lips open most of the time when the mouth is still”) [
31]; it showed similar patterns of reduced facial expressions in AD and PD compared to healthy controls [
32].
Contrary to the concept of hypomimia in cognitive impairment, Seidl et al., 2012 showed that cognitive deficits are associated with an increased rate of total facial expressions. This was observed in facial videos of AD patients, recorded in response to emotion-eliciting or neutral images and analyzed using the emotional FACS (EMFACS) after controlling for apathy [
33]. A study of patients with dementia of the Alzheimer's type showed an increase in facial expressions associated with negative affect in reaction to sad vignette stimuli compared to the control group; these facial videos were also scored for facial expressions using FACS [
34]. Additionally, a facial EMG signal-based study conducted with AD subjects observed inverted patterns of zygomatic activity in comparison to healthy controls in response to emotion-eliciting images [
35].
The aforementioned studies demonstrate that no consensus has been reached on whether AD or dementia patients universally show reduced or increased facial expressions. Instead, the literature suggests a spectrum of changes: some individuals, in some contexts, show reduced emotional expressivity or flattened affect, while others show increased facial expressions, which can arise from various pathological causes such as reduced control over facial muscles, disinhibition, or emotional dysregulation, especially under stress, pain, or negative emotional stimuli.
3.1.2. Detection Tools
The differences in facial expressions between cognitively impaired patients and healthy subjects discovered in the aforementioned exploratory studies are exploited in detection studies to identify cognitive impairments. Various studies have classified cognitively impaired patients and healthy controls using facial features, with psychiatric test scores as ground truth (
Table 1). These studies have focused on detecting AD, MCI and dementia patient groups defined by MMSE, MoCA and other cognitive tests. By classifying extracted facial features or complete facial images from facial videos with machine learning techniques, these studies reported high predictive performance, demonstrating the feasibility of using facial expressions in the automated screening of cognitive impairments.
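The sketch below illustrates, in schematic form, the detection-study setup summarized in Table 1: per-subject facial feature vectors are classified against a label derived from a cognitive test score. The synthetic data, the feature dimensionality, and the MoCA cut-off of 26 are illustrative assumptions and do not reproduce any specific study.

```python
# A schematic sketch of a facial-feature-based cognitive impairment detection study:
# mean AU intensities per subject classified against a cognitive-test-derived label.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_subjects = 60
features = rng.normal(size=(n_subjects, 17))          # e.g., 17 mean AU intensities per subject
moca_scores = rng.integers(15, 31, size=n_subjects)   # synthetic MoCA totals (0-30)
labels = (moca_scores < 26).astype(int)               # 1 = cognitively impaired (illustrative cut-off)

clf = LogisticRegression(penalty="l1", solver="liblinear", max_iter=1000)
auc = cross_val_score(clf, features, labels, cv=5, scoring="roc_auc")
print("Cross-validated AUC:", auc.mean())
```

In the published studies, such cross-validated metrics correspond to the prediction performance column of Table 1.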
3.2. Facial Expressions in Pain Assessment
Pain is an unpleasant sensory and emotional experience associated with actual or potential tissue or nerve damage, and it is subjective: individuals feel pain in different ways, even if the sources of pain are the same [
46]. Depending on its duration and frequency, pain can follow three primary patterns: 1) short-duration ‘acute pain’, which can start and end suddenly; 2) ‘episodic pain’, which occurs from time to time at regular or irregular intervals; and 3) ‘chronic pain’, which lasts for longer durations of more than three months. Additionally, pain is categorized based on its source: 1) ‘nociceptive pain’, caused by tissue damage or inflammation; 2) ‘neuropathic pain’, caused by nerve damage due to injury or disease; and 3) ‘nociplastic pain’, caused by changes in how the nervous system processes pain [
47,
48].
Due to the subjective nature of pain perception between individuals, self-reporting is considered the primary method of pain assessment [
49]. These include the verbal rating scale (VRS), where subjects verbally rate their pain on a scale, the visual analog scale (VAS), where subjects mark their pain on a 100 mm visual scale [
50], and the faces scale for reporting pain in children [
51]. Despite their popularity and status as the ‘gold standard’ in pain assessment, these self-reported scales carry several limitations, including limited feasibility in cognitively impaired, unconscious and non-verbal subjects. Therefore, objective measurement of pain has become important.
Multiple behavioral changes occur during pain experiences, including posture changes [
52], vocalizations [
53], and facial expression changes [
54]. Pain evokes a ‘universal facial expression’ where a consistent set of facial AUs are activated across different painful stimuli [
55]. Due to the observed changes in facial expressions during pain experience, several studies have attempted to detect pain in subjects by studying their facial expressions, and these studies are reported below (
Table 2); they achieve varying accuracies using machine learning-based techniques for pain detection and pain intensity estimation. One notable outcome of this line of research is the Prkachin and Solomon pain intensity score (PSPI). PSPI is based on an additive scoring of four AU groups, i.e., brow lowering (AU4), orbit tightening (AU6 and AU7), upper-lip raising/nose wrinkling (AU9 and AU10), and eye closing (AU43) [
56], and has seen limited application in certain clinical settings.
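The PSPI can be computed directly from AU intensities, as in the short sketch below, which follows the Prkachin and Solomon definition: brow lowering plus the stronger of the two orbit-tightening AUs, plus the stronger of the nose-wrinkling/upper-lip-raising pair, plus eye closure. The dictionary-based interface is an illustrative choice.

```python
# A minimal sketch of the PSPI computation from AU intensities.
def pspi(au):
    """au: dict of AU intensities, e.g., from an automated FACS tool.
    AU4, AU6, AU7, AU9, AU10 on a 0-5 scale; AU43 (eye closure) coded 0 or 1."""
    return (au.get(4, 0)
            + max(au.get(6, 0), au.get(7, 0))
            + max(au.get(9, 0), au.get(10, 0))
            + au.get(43, 0))

# Example: moderate brow lowering and orbit tightening with eyes open.
print(pspi({4: 2, 6: 1, 7: 3, 9: 0, 10: 1, 43: 0}))  # 6
```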
The majority of the above work on pain detection and pain intensity estimation has used the publicly available UNBC-McMaster database [70], containing subjects with self-reported shoulder pain, and the MIntPAIN database [71], containing subjects experiencing pain evoked by electrical stimulation. Although most of these studies focused on cognitively intact patients capable of self-reporting their pain, several studies involving cognitively impaired patients have shown that their facial expressions reflect pain as effectively as, or even more clearly than, those of healthy controls [
72,
73]. These findings support the feasibility of automated objective pain assessment using facial expressions in healthy subjects as well as in cognitively impaired subjects. Consistent with this concept, a recent study has developed a commercial application called ‘PainChek’ [
74], which uses deep learning methods (i.e., automated facial recognition and analysis) to identify facial micro-expressions for detection of the presence of pain, particularly aiming for pain assessment in people living with dementia.
Apart from the above-reported studies focusing on pain versus no-pain detection, a few studies have focused on distinguishing genuine from faked pain using facial expressions. One such study conducted with healthy subjects, in which pain was evoked using a cold-pressor task (CPT), reported an accuracy of 85% in distinguishing genuine from faked pain using a support vector machine (SVM) classifier, compared to only 55% accuracy achieved by trained human observers [
75]. In a similar study involving 26 participants undergoing CPT-evoked pain, a machine learning-based classifier achieved an accuracy of 88% based on extracted AU features, whereas human performance reached only 49% [
76]. These studies demonstrate the capability of machine learning-based methods to outperform humans in faked versus genuine pain detection and in overall pain intensity estimation.
3.3. Facial Expressions in Other Health Assessments
3.3.1. Chest Pain and Cardiac Diseases
Beyond the changes in facial expressions associated with cognitive decline and pain perception, various other health conditions have been shown to evoke measurable alterations in facial expressions compared to baseline conditions. One such condition is chest pain, which can result from life-threatening events such as myocardial infarction; efficient and early detection of chest pain is therefore critical, as it can reduce the risk of severe or potentially fatal outcomes. Dalton et al., 1999 examined the facial expressions exhibited by 278 patients admitted to the emergency department complaining of chest pain who were given a possible diagnosis of acute ischemic heart disease (AIHD) [
77]. By analyzing the videotapes using FACS, they identified four AUs (brow lowering (AU4), lip parting (AU25), lip pressing (AU24) and turning the head left (AU51)) that showed a significant association with positive creatine kinase (CK) enzyme levels, a known biomarker and predictor of acute AIHD [
77]. Another study conducted in an emergency department with 50 patients presenting with dyspnoea and chest pain, classified into disease+ (patients with a serious cardiopulmonary diagnosis) and disease- (patients who were well at telephone follow-up with no serious diagnosis) groups, found that the disease+ group exhibited lower facial expression variability and less surprise affect than the disease- group when stimulus-evoked facial expressions were analyzed with FACS scores [
78]. A recent study adopted deep learning models (YOLO) with high accuracies (80-100%) to carry out real-time identification of chest pain conditions, to assist clinician–patient consultations and to reduce the extent of cardiac damage in patients [
79]. A recent study by Khedkar et al., 2024 further confirmed the feasibility of using facial features in detecting cardiac diseases. The researchers demonstrated that a machine learning-based model was able to achieve an accuracy of 88% in real-time prediction of coronary artery disease (CAD) from facial features extracted from facial images with characteristics commonly associated with CAD [
80].
3.3.2. Stroke
Stroke is a medical emergency that requires prompt detection and treatment to reduce damage to brain cells; it occurs when blood flow to part of the brain is interrupted (ischemic stroke) or when a blood vessel ruptures (hemorrhagic stroke). Facial changes, such as drooping of the face on one side, are among the most visible and important signs during a stroke. Therefore, tracking changes in facial features is an important method for detecting the onset of stroke, and several studies have explored automated approaches based on these characteristics. Focusing on the facial asymmetry and mouth skew of stroke patients, a previous study proposed asymmetry indices (area and distance ratios between the left and right sides of the eye and mouth) to classify stroke patients with high accuracies (100% with SVM, 95.45% with Random Forest and 100% with Bayes) [
81]. A similar study calculated facial features such as forehead wrinkles, eye movement, mouth drooping and cheek lines to identify early symptoms of stroke, using a dataset of 100 images, and achieved an accuracy of 91% [
82]. Using videos recorded from patients, Mohamed et al., 2025 reported an accuracy of 98.43% in detecting stroke patients; they carried out face detection using a YOLOv8 model, followed by facial feature extraction with an active appearance model (AAM), feature selection using binary booby bird optimization (B3O), and finally a Naive Bayes (NB) classifier [
83]. Another study involving 185 patients with acute ischemic stroke and 551 age- and sex-matched healthy controls reported an area under the curve (AUC) of 0.91 for stroke recognition based on facial images using an ensemble convolutional neural network (CNN)-based model [
84]. These studies demonstrated the applicability and feasibility of the facial expression-based approach for stroke detection.
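To illustrate the left-right asymmetry indices used in the stroke studies above, the sketch below computes a simple distance ratio from 2-D facial landmarks; the landmark names, coordinates, and the exact ratio definition are illustrative assumptions rather than the cited studies' formulas.

```python
# An illustrative left-right facial asymmetry index computed from 2-D landmarks.
import numpy as np

def distance_ratio(landmarks, left_pair, right_pair):
    """Ratio of a left-side landmark distance to the mirrored right-side distance.
    Values far from 1.0 indicate facial asymmetry (e.g., unilateral mouth droop)."""
    left = np.linalg.norm(np.subtract(landmarks[left_pair[0]], landmarks[left_pair[1]]))
    right = np.linalg.norm(np.subtract(landmarks[right_pair[0]], landmarks[right_pair[1]]))
    return left / right if right > 0 else np.inf

# Hypothetical landmark coordinates: mouth corners relative to the nose tip.
landmarks = {"nose_tip": (0.50, 0.55), "mouth_left": (0.42, 0.72), "mouth_right": (0.61, 0.78)}
print(distance_ratio(landmarks, ("mouth_left", "nose_tip"), ("mouth_right", "nose_tip")))
```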
3.3.3. Non-Psychotic Mental Disorders
Non-psychotic mental disorders, which are typically less severe than psychotic disorders, affect an individual’s emotions and behavior without causing psychotic symptoms such as delusions or hallucinations. Examples include depression, obsessive-compulsive disorder (OCD), autism spectrum disorder (ASD), and borderline personality disorder (BPD). Overall, multiple studies have shown that individuals with non-psychotic disorders exhibit attenuated facial expressions in response to emotional or sensory stimuli [
85].
OCD, which is characterized by recurrent thoughts or obsessions and compulsive activities, is often underdiagnosed despite its worldwide prevalence [
86]. A study of 10 OCD patients and 10 healthy controls that used FACS to analyze emotional expressions in response to film clips showed reduced congruent emotional expression in the OCD group compared to healthy controls [
87]. Similar results were observed in a study with both OCD and mild OCD patients in comparison to healthy controls [
88]. Depression is another non-psychotic disorder, characterized by persistent feelings of sadness and reduced interest; consequently, the expression of positive emotions is often impaired in depressed patients. This has been demonstrated in studies that presented emotional stimuli, such as pictures or film clips, and analyzed the responses using EMFACS; these studies consistently reported reduced positive expressions in terms of both frequency and intensity [
89,
90]. Similarly, BPD patients exhibited reduced positive expressions, and also diminished negative expressions compared to healthy controls when analyzed with EMFACS in response to film clips [
89]. In ASD patients, reduced positive and negative dynamic expressions were also observed in response to film clips analyzed using FACS [
91]. Collectively, these findings highlight that individuals with non-psychotic disorders display impaired emotional expression in response to stimuli, suggesting potential applications for these facial markers in both detection and diagnostic evaluation of such disorders.
3.3.4. Migraine and Infections
Migraine, which is characterized by recurrent painful headaches, globally affects more than one billion people. A recent study analyzed changes in facial activity in individuals with migraine under calm and resting conditions using camera-based recordings. The study included healthy subjects, patients with episodic migraine (EM) and chronic migraine (CM). They found that the lid-tightener action unit was a reliable indicator of headache intensity [
92], denoting the potential of facial feature-based approaches for migraine detection.
Although the human immune system has evolved to adapt to the threat of infection, the probability of becoming infected remains high and infections can cause serious complications; hence, the early detection of infections is important. Using an experimental model of sickness in which 22 volunteers were intravenously injected with either endotoxin (lipopolysaccharide; 2 ng/kg body weight) or placebo, a study found that the faces of subjects injected with endotoxin were, two hours after the injection, perceived as more sick and less healthy. These subjects also expressed more negative emotions (sadness and disgust) and less happiness and surprise, demonstrating the feasibility of using emotional expressions in detecting infectious individuals [
93].
4. Discussion
In the present review, we have explored various facial expression analysis tools and their applicability in health assessment. As discussed in the above sections, facial expressions can be used in the screening and detection of several health conditions, including cognitive impairment, pain, stroke, migraine and a multitude of other disorders. With the current advancements in computer vision and machine learning techniques, automated facial expression analysis has become efficient and fast in computation [
94], and its application in healthcare settings has therefore become highly feasible. In addition to computational efficiency, automated facial expression-based health monitoring carries several advantages due to its non-invasive nature. Unlike traditional brain scans or blood tests, which can be costly, resource-consuming and overwhelming for patients, these techniques utilize facial video data and extracted facial features, which can be easily captured during routine interactions and interviews with healthcare professionals.
Most studies discussed in this review have used facial images or videos captured under controlled, optimal conditions; in practice, however, lighting conditions, background complexity and image quality can affect the performance and accuracy of facial expression-based health assessment. Therefore, further research should evaluate performance under diverse backgrounds and facial video quality settings. Additionally, extra steps should be taken to protect patient privacy in recorded facial images and videos. Another common drawback of current studies lies in the selection of the ground truth used to train the computational models. For example, studies related to cognitive decline have used psychiatric tests as ground truth, which themselves carry several limitations (e.g., they may not be sensitive enough to detect mild cognitive decline and can show a ceiling effect in patients with high IQ), and pain assessment studies have used self-reported pain scores as the ground truth, which are subjective. Hence, selecting more objective parameters as the reference in model development could significantly improve the objectivity of these studies; such parameters could include vital signs (e.g., heart rate, blood pressure) and brain signals (e.g., EEG) that can potentially co-vary with facial expression changes in different health conditions. Incorporating such improvements into the proposed technologies can ensure their accuracy, usability and efficiency when applied in healthcare settings as an assisting tool for healthcare professionals.
Overall, because of their automated and computational nature, the technologies developed in the studies reviewed here can be readily integrated into the existing healthcare system to assist healthcare professionals in making accurate and objective assessments of health conditions. Most importantly, this approach can be deployed in telemedicine applications to serve patients in remote areas with limited access to healthcare services and professionals. The real-world value of these technologies lies in their capabilities for early health screening in both onsite and remote settings, thereby enhancing the quality of care for individuals while helping to alleviate the growing demand on the current healthcare system.
Author Contributions
JS: Conceptualization, Project administration, Validation, writing original draft, Writing review & editing. DJ: Conceptualization, Project administration, Supervision, Writing review & editing.
Funding
The author(s) declare financial support was received for the research, authorship, and/or publication of this article.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
No new data were created or analyzed in this study.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Rosenstein, D.; Oster, H. Differential facial responses to four basic tastes in newborns. Child Dev 1988, 59(6), 1555–68. [Google Scholar] [CrossRef]
- Beaudry, O. Featural processing in recognition of emotional facial expressions. Cognition and Emotion 2014, 28(3), 416–432. [Google Scholar] [CrossRef]
- Klingner, C.M.; Guntinas-Lichius, O. Facial expression and emotion. Laryngorhinootologie 2023, 102(S 01), S115–S125. [Google Scholar] [PubMed]
- Schmidt, K.L.; Cohn, J.F. Human facial expressions as adaptations: Evolutionary questions in facial expression research. American Journal of Physical Anthropology 2001, 116(S33), 3–24. [Google Scholar] [CrossRef] [PubMed]
- Ekman, P.; Friesen, W.V. Facial action coding system. In Environmental Psychology & Nonverbal Behavior; 1978. [Google Scholar]
- Baltrusaitis, T.; et al. OpenFace 2.0: Facial Behavior Analysis Toolkit. in 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), 2018. [Google Scholar]
- Cheong, J.H. Py-Feat: Python Facial Expression Analysis Toolbox. Affective Science 2023, 4(4), 781–796. [Google Scholar] [CrossRef] [PubMed]
- Den Uyl, M.J.; et al. The FaceReader: Online facial expression recognition; 2006. [Google Scholar]
- Bishay, M.; et al. Affdex 2.0: A real-time facial expression analysis toolkit. in 2023 IEEE 17th international conference on automatic face and gesture recognition (FG), 2023; IEEE. [Google Scholar]
- Kazamel, M.; Warren, P.P. History of electromyography and nerve conduction studies: A tribute to the founding fathers. Journal of Clinical Neuroscience 2017, 43, 54–60. [Google Scholar] [CrossRef]
- Hof, A.L. EMG and muscle force: An introduction. Human Movement Science 1984, 3(1), 119–153. [Google Scholar] [CrossRef]
- Cacioppo, J.T. Electromyographic activity over facial muscle regions can differentiate the valence and intensity of affective reactions. J Pers Soc Psychol 1986, 50(2), 260–8. [Google Scholar] [CrossRef]
- Schwartz, G.E. Facial expression and imagery in depression: an electromyographic study. Psychosom Med 1976, 38(5), 337–47. [Google Scholar] [CrossRef]
- Kołodziej, M.; Majkowski, A.; Jurczak, M. Acquisition and Analysis of Facial Electromyographic Signals for Emotion Recognition. Sensors 2024, 24(15), 4785. [Google Scholar] [CrossRef]
- Rutkowska, J.M. Optimal processing of surface facial EMG to identify emotional expressions: A data-driven approach. Behavior Research Methods 2024, 56(7), 7331–7344. [Google Scholar] [CrossRef] [PubMed]
- Gjoreski, M. Facial EMG sensing for monitoring affect using a wearable device. Scientific Reports 2022, 12(1), 16876. [Google Scholar] [CrossRef] [PubMed]
- Kartynnik, Y. Real-time Facial Surface Geometry from Monocular Video on Mobile GPUs. ArXiv 2019, abs/1907.06724. [Google Scholar]
- Lugaresi, C. MediaPipe: A framework for building perception pipelines. arXiv 2019, arXiv:1906.08172. [Google Scholar] [CrossRef]
- Ciraolo, D. Facial expression recognition based on emotional artificial intelligence for tele-rehabilitation. Biomedical Signal Processing and Control 2024, 92, 106096. [Google Scholar] [CrossRef]
- Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), 2005. [Google Scholar]
- Carcagnì, P. Facial expression recognition and histograms of oriented gradients: a comprehensive study. SpringerPlus 2015, 4(1), 645. [Google Scholar] [CrossRef]
- Cunningham, E.L. Dementia. Ulster Med J 2015, 84(2), 79–87. [Google Scholar]
- Breijyeh, Z.; Karaman, R. Comprehensive Review on Alzheimer’s Disease: Causes and Treatment. Molecules 2020, 25(24). [Google Scholar] [CrossRef]
- Petersen, R.C. Mild Cognitive Impairment. Continuum (Minneap Minn) 2016, 22(2 Dementia), 404–18. [Google Scholar] [CrossRef]
- Yin, C. Brain imaging of mild cognitive impairment and Alzheimer’s disease. Neural Regen Res 2013, 8(5), 435–44. [Google Scholar]
- Folstein, M.F.; Folstein, S.E.; McHugh, P.R. “Mini-mental state”: a practical method for grading the cognitive state of patients for the clinician. Journal of psychiatric research 1975, 12(3), 189–198. [Google Scholar] [CrossRef] [PubMed]
- Nasreddine, Z.S. The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment. J Am Geriatr Soc 2005, 53(4), 695–9. [Google Scholar] [CrossRef] [PubMed]
- Spencer, R.J. Psychometric limitations of the mini-mental state examination among nondemented older adults: an evaluation of neurocognitive and magnetic resonance imaging correlates. Exp Aging Res 2013, 39(4), 382–97. [Google Scholar] [CrossRef] [PubMed]
- Asplund, K. Facial expressions in severely demented patients—a stimulus–response study of four patients with dementia of the Alzheimer type. International Journal of Geriatric Psychiatry 1991, 6(8), 599–606. [Google Scholar] [CrossRef]
- Asplund, K.; Jansson, L.; Norberg, A. Facial expressions of patients with dementia: a comparison of two methods of interpretation. Int Psychogeriatr 1995, 7(4), 527–34. [Google Scholar] [CrossRef]
- Goetz, C.G. Movement Disorder Society-sponsored revision of the Unified Parkinson’s Disease Rating Scale (MDS-UPDRS): scale presentation and clinimetric testing results. Mov Disord 2008, 23(15), 2129–70. [Google Scholar] [CrossRef]
- Cannavacciuolo, A. Facial emotion expressivity in patients with Parkinson’s and Alzheimer’s disease. J Neural Transm (Vienna) 2024, 131(1), 31–41. [Google Scholar] [CrossRef]
- Seidl, U. Facial expression in Alzheimer’s disease: impact of cognitive deficits and neuropsychiatric symptoms. Am J Alzheimers Dis Other Demen 2012, 27(2), 100–6. [Google Scholar] [CrossRef]
- Smith, M.C. Facial expression in mild dementia of the Alzheimer type. Behavioural Neurology 1995, 8(3-4), 149–156. [Google Scholar] [CrossRef]
- Burton, K.W.; Kaszniak, A.W. Emotional experience and facial expression in Alzheimer’s disease. Neuropsychol Dev Cogn B Aging Neuropsychol Cogn 2006, 13(3-4), 636–51. [Google Scholar] [CrossRef]
- Tanaka, H. Detecting Dementia from Face in Human-Agent Interaction, in Adjunct of the 2019 International Conference on Multimodal Interaction; Association for Computing Machinery: Suzhou, China, 2019; p. Article 5. [Google Scholar]
- Umeda-Kameyama, Y. Screening of Alzheimer’s disease by facial complexion using artificial intelligence. Aging (Albany NY) 2021, 13(2), 1765–1772. [Google Scholar] [CrossRef] [PubMed]
- Jiang, Z. Automated analysis of facial emotions in subjects with cognitive impairment. PLoS One 2022, 17(1), e0262527. [Google Scholar]
- Jiang, Z. Classifying Major Depressive Disorder and Response to Deep Brain Stimulation Over Time by Analyzing Facial Expressions. IEEE Trans Biomed Eng 2021, 68(2), 664–672. [Google Scholar] [CrossRef] [PubMed]
- Fei, Z. A Novel deep neural network-based emotion analysis system for automatic detection of mild cognitive impairment in the elderly. Neurocomputing 2022, 468, 306–316. [Google Scholar] [CrossRef]
- Zheng, C. Detecting Dementia from Face-Related Features with Automated Computational Methods. Bioengineering (Basel) 2023, 10(7). [Google Scholar] [CrossRef]
- Alsuhaibani, M.; Dodge, H.H.; Mahoor, M.H. Mild cognitive impairment detection from facial video interviews by applying spatial-to-temporal attention module. Expert Syst Appl 2024. [Google Scholar] [CrossRef]
- Sun, J.; Dodge, H.H.; Mahoor, M.H. MC-ViViT: Multi-branch Classifier-ViViT to detect Mild Cognitive Impairment in older adults using facial videos. Expert Systems with Applications 2024, 238, 121929. [Google Scholar] [CrossRef]
- Takeshige-Amano, H. Digital detection of Alzheimer’s disease using smiles and conversations with a chatbot. Scientific Reports 2024, 14(1), 26309. [Google Scholar] [CrossRef]
- Okunishi, T. Dementia and MCI Detection Based on Comprehensive Facial Expression Analysis From Videos During Conversation. IEEE J Biomed Health Inform 2025, 29(5), 3537–3548. [Google Scholar] [CrossRef]
- Raja, S.N. The revised International Association for the Study of Pain definition of pain: concepts, challenges, and compromises. Pain 2020, 161(9), 1976–1982. [Google Scholar] [CrossRef]
- Abd-Elsayed, A.; Deer, T.R. Different Types of Pain. In Pain: A Review Guide; Abd-Elsayed, A., Ed.; Springer International Publishing: Cham, 2019; pp. 15–16. [Google Scholar]
- Nijs, J.; De Baets, L.; Hodges, P. Phenotyping nociceptive, neuropathic, and nociplastic pain: who, how, & why? Braz J Phys Ther 2023, 27(4), 100537. [Google Scholar]
- Kang, Y.; Demiris, G. Self-report pain assessment tools for cognitively intact older adults: Integrative review. Int J Older People Nurs 2018, 13(2), p. e12170. [Google Scholar] [CrossRef] [PubMed]
- Haefeli, M.; Elfering, A. Pain assessment. Eur Spine J 2006, 15 Suppl 1(Suppl 1), S17–24. [Google Scholar] [CrossRef] [PubMed]
- Tomlinson, D. A systematic review of faces scales for the self-report of pain intensity in children. Pediatrics 2010, 126(5), e1168–98. [Google Scholar] [CrossRef]
- Walsh, J.; Eccleston, C.; Keogh, E. Pain communication through body posture: The development and validation of a stimulus set. PAIN® 2014, 155(11), 2282–2290. [Google Scholar] [CrossRef]
- Helmer, L.M.L. Crying out in pain-A systematic review into the validity of vocalization as an indicator for pain. Eur J Pain 2020, 24(9), 1703–1715. [Google Scholar] [CrossRef]
- Prkachin, K.M. Assessing pain by facial expression: facial expression as nexus. Pain Res Manag 2009, 14(1), 53–8. [Google Scholar] [CrossRef]
- Prkachin, K.M. The consistency of facial expressions of pain: a comparison across modalities. PAIN 1992, 51(3). [Google Scholar] [CrossRef]
- Prkachin, K.M.; Solomon, P.E. The structure, reliability and validity of pain expression: evidence from patients with shoulder pain. Pain 2008, 139(2), 267–274. [Google Scholar] [CrossRef]
- Lucey, P. Automatically Detecting Pain in Video Through Facial Action Units. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 2011, 41(3), 664–674. [Google Scholar] [CrossRef]
- Rathee, N.; Ganotra, D. A novel approach for pain intensity detection based on facial feature deformations. Journal of Visual Communication and Image Representation 2015, 33, 247–254. [Google Scholar] [CrossRef]
- Rathee, N.; Ganotra, D. Multiview Distance Metric Learning on facial feature descriptors for automatic pain intensity detection. Computer Vision and Image Understanding 2016, 147, 77–86. [Google Scholar] [CrossRef]
- Bargshady, G. Enhanced deep learning algorithm development to detect pain intensity from facial expression images. Expert Systems with Applications 2020, 149, 113305. [Google Scholar] [CrossRef]
- Bargshady, G. The modeling of human facial pain intensity based on Temporal Convolutional Networks trained with video frames in HSV color space. Applied Soft Computing 2020, 97, 106805. [Google Scholar] [CrossRef]
- Bargshady, G. Ensemble neural network approach detecting pain intensity from facial expressions. Artificial Intelligence in Medicine 2020, 109, 101954. [Google Scholar] [CrossRef]
- Casti, P. Metrological Characterization of a Pain Detection System Based on Transfer Entropy of Facial Landmarks. IEEE Transactions on Instrumentation and Measurement 2021, 70, 1–8. [Google Scholar] [CrossRef]
- Barua, P.D. Automated detection of pain levels using deep feature extraction from shutter blinds-based dynamic-sized horizontal patches with facial images. Scientific Reports 2022, 12(1), 17297. [Google Scholar] [CrossRef]
- Rodriguez, P. Deep Pain: Exploiting Long Short-Term Memory Networks for Facial Expression Classification. IEEE Transactions on Cybernetics 2022, 52(5), 3314–3324. [Google Scholar] [CrossRef]
- Fontaine, D. Artificial intelligence to evaluate postoperative pain based on facial expression recognition. European Journal of Pain 2022, 26(6), 1282–1291. [Google Scholar] [CrossRef]
- Alghamdi, T.; Alaghband, G. Facial Expressions Based Automatic Pain Assessment System. Applied Sciences 2022, 12(13), 6423. [Google Scholar] [CrossRef]
- Alphonse, S.; Abinaya, S.; Kumar, N. Pain assessment from facial expression images utilizing Statistical Frei-Chen Mask (SFCM)-based features and DenseNet. Journal of Cloud Computing 2024, 13(1), p. 142. [Google Scholar] [CrossRef]
- Tan, C.W. Automated pain detection using facial expression in adult patients with a customized spatial temporal attention long short-term memory (STA-LSTM) network. Scientific Reports 2025, 15(1), 13429. [Google Scholar] [CrossRef] [PubMed]
- Lucey, P.; et al. Painful data: The UNBC-McMaster shoulder pain expression archive database. 2011 IEEE International Conference on Automatic Face & Gesture Recognition (FG), 2011. [Google Scholar]
- Haque, M.A.; et al. Deep Multimodal Pain Recognition: A Database and Comparison of Spatio-Temporal Visual Modalities. in 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), 2018. [Google Scholar]
- Chapman, C.R. Progress in pain assessment: the cognitively compromised patient. Curr Opin Anaesthesiol 2008, 21(5), 610–5. [Google Scholar] [CrossRef] [PubMed]
- Kunz, M. Observing Pain in Individuals with Cognitive Impairment: A Pilot Comparison Attempt across Countries and across Different Types of Cognitive Impairment. Brain Sci 2021, 11(11). [Google Scholar] [CrossRef]
- Atee, M.; Hoti, K.; Hughes, J.D. A Technical Note on the PainChek™ System: A Web Portal and Mobile Medical Device for Assessing Pain in People With Dementia. In Frontiers in Aging Neuroscience; 2018; Volume 10—2018. [Google Scholar]
- Bartlett, Marian S. Automatic Decoding of Facial Movements Reveals Deceptive Pain Expressions. Current Biology 2014, 24(7), 738–743. [Google Scholar] [CrossRef]
- Littlewort, G.C.; Bartlett, M.S.; Lee, K. Automatic coding of facial expressions displayed during posed and genuine pain. Image and Vision Computing 2009, 27(12), 1797–1803. [Google Scholar] [CrossRef]
- Dalton, J.A. An evaluation of facial expression displayed by patients with chest pain. Heart Lung 1999, 28(3), 168–74. [Google Scholar] [CrossRef]
- Kline, J.A. Decreased facial expression variability in patients with serious cardiopulmonary disease in the emergency care setting. Emerg Med J 2015, 32(1), 3–8. [Google Scholar] [CrossRef]
- Kao, H. The Potential for High-Priority Care Based on Pain Through Facial Expression Detection with Patients Experiencing Chest Pain. Diagnostics (Basel) 2024, 15(1). [Google Scholar] [CrossRef]
- Khedkar, R.; et al. Coronary Artery Disease Prediction Using Facial Features. 2024 OPJU International Technology Conference (OTCON) on Smart Computing for Innovation and Advancement in Industry 4.0, 2024. [Google Scholar]
- Chang, C.Y.; Cheng, M.J.; Ma, M.H.M. Application of Machine Learning for Facial Stroke Detection. 2018 IEEE 23rd International Conference on Digital Signal Processing (DSP), 2018. [Google Scholar]
- Umirzakova, S.; Whangbo, T.K. STUDY ON DETECT STROKE SYMPTOMS USING FACE FEATURES. 2018 International Conference on Information and Communication Technology Convergence (ICTC), 2018. [Google Scholar]
- Mohamed, A.M. Real time brain stroke identification using face images based on machine learning and booby bird optimization. Expert Systems with Applications 2025, 282, 127719. [Google Scholar] [CrossRef]
- Wang, Y. Prediagnosis recognition of acute ischemic stroke by artificial intelligence from facial images. Aging Cell 2024, 23(8), p. e14196. [Google Scholar] [CrossRef] [PubMed]
- Davies, H. Facial expression to emotional stimuli in non-psychotic disorders: A systematic review and meta-analysis. Neurosci Biobehav Rev 2016, 64, 252–71. [Google Scholar] [CrossRef] [PubMed]
- Singh, A.; Anjankar, V.P.; Sapkale, B. Obsessive-Compulsive Disorder (OCD): A Comprehensive Review of Diagnosis, Comorbidities, and Treatment Approaches. Cureus 2023, 15(11), p. e48960. [Google Scholar] [CrossRef]
- Bersani, G. Comparison of facial expression in patients with obsessive-compulsive disorder and schizophrenia using the Facial Action Coding System: a preliminary study. Neuropsychiatr Dis Treat 2012, 8, 537–47. [Google Scholar] [CrossRef] [PubMed]
- Valeriani, G. Generalized and specific emotion impairments as potential markers of severity in obsessive-Compulsive disorder: A preliminary study using Facial Action Coding System (FACS). Psychiatria Danubina 2015, 27(2), 159–167. [Google Scholar]
- Renneberg, B. Facial expression of emotions in borderline personality disorder and depression. Journal of Behavior Therapy and Experimental Psychiatry 2005, 36(3), 183–196. [Google Scholar] [CrossRef]
- Sloan, D.M.; Strauss, M.E.; Wisner, K.L. Diminished response to pleasant stimuli by depressed women. J Abnorm Psychol 2001, 110(3), 488–93. [Google Scholar] [CrossRef]
- Yoshimura, S. Impaired Overt Facial Mimicry in Response to Dynamic Facial Expressions in High-Functioning Autism Spectrum Disorders. Journal of Autism and Developmental Disorders 2015, 45(5), 1318–1328. [Google Scholar] [CrossRef]
- Chen, W.T. Decoding pain through facial expressions: a study of patients with migraine. J Headache Pain 2024, 25(1), p. 33. [Google Scholar] [CrossRef]
- Sarolidou, G. Emotional expressions of the sick face. Brain Behav Immun 2019, 80, 286–291. [Google Scholar] [CrossRef]
- Kopalidis, T. Advances in Facial Expression Recognition: A Survey of Methods, Benchmarks, Models, and Datasets. Information 2024, 15(3), 135. [Google Scholar] [CrossRef]
Table 1. Studies using facial expressions in detecting cognitive impairments.

| Citation | Subject population | Facial features | Ground truth | Classification method | Prediction performance |
|---|---|---|---|---|---|
| Tanaka et al., 2019 [36] | 12 dementia, 12 healthy subjects | 2D facial landmarks, face pose, gaze angles, AUs, lip movements | MMSE | L1-regularized logistic regression | AUC of ROC 0.82 |
| Umeda-Kameyama et al., 2021 [37] | 121 CI, 117 healthy subjects | Facial images | MMSE | Multiple deep learning models (e.g., Xception, SENet50, ResNet50, VGG16) | Accuracy 92.56%, AUC of ROC 0.9717 |
| Jiang et al., 2022 [38] | 256 CI, 237 healthy subjects | Facial images | MoCA | CNN framework proposed in Jiang et al., 2021 [39] | AUC of ROC 0.609 |
| Fei et al., 2022 [40] | 36 CI, 25 healthy subjects | Frames from facial videos | MoCA | DNN (MobileNet and SVM) | Accuracy 73.3% |
| Zheng et al., 2023 [41] | 117 total subjects (CI and healthy) | AUs, face mesh, HOG | MMSE | Deep learning-based system (SVM, LSTM) | Accuracy with AUs 71%, face mesh 66%, HOG 79% |
| Alsuhaibani et al., 2024 [42] | 68 total subjects (MCI and healthy) | Facial images from facial videos | Clinical diagnosis | Deep learning-based framework | Accuracy 88% |
| Sun et al., 2024 [43] | 100 MCI, 89 healthy subjects | Facial video clips | Clinical diagnosis | Transformer-based framework | Accuracy 90.63% |
| Takeshige-Amano et al., 2024 [44] | 93 AD, 99 healthy subjects | Smile, face orientation, eye opening and blink indices | MMSE, MoCA | Multiple machine learning classifiers (e.g., Random Forest, logistic regression) | Accuracy 0.72 (Random Forest) |
| Okunishi et al., 2025 [45] | 110 MCI, 144 dementia, 161 healthy subjects | AUs, emotion categories, valence–arousal, face embeddings | MMSE | Decision tree-based model (LightGBM) | AUC of ROC: dementia 0.933, MCI 0.889 |
Table 2. Studies using facial expressions in detecting pain and estimating pain intensity.

| Citation | Subject population | Facial features | Ground truth | Classification method | Prediction performance |
|---|---|---|---|---|---|
| Lucey et al., 2011 [57] | 25 subjects with shoulder pain ¹ | Facial video frames (active appearance model (AAM)-based feature extraction) | PSPI | SVM (pain vs. no pain) | Accuracy 80.9% |
| Rathee, Ganotra, 2015 [58] | 25 subjects with shoulder pain ¹ | Facial video frames | PSPI | Distance metric learning (DML) + SVM for 16-level pain intensity classification | Accuracy 96% |
| Rathee, Ganotra, 2016 [59] | 25 subjects with shoulder pain ¹ | Facial video frames (extracted Gabor, HOG and local binary pattern features) | PSPI | Multiview DML + SVM for pain detection and 4-level pain intensity classification | Accuracy: pain detection 89.59%, pain intensity 75% |
| Bargshady et al., 2020a [60] | 25 subjects with shoulder pain ¹ | Facial video frames | PSPI | Deep learning-based framework for 4-level pain intensity classification | Accuracy 85% |
| Bargshady et al., 2020c [61] | 25 subjects with shoulder pain ¹, 20 subjects with electrically evoked pain ² | Facial video frames in Hue, Saturation, Value (HSV) color space | PSPI and stimulus-based pain scale | Temporal CNN for 4-level (dataset 1) and 5-level (dataset 2) pain intensity classification | Accuracy: dataset 1 94.14%, dataset 2 89% |
| Bargshady et al., 2020b [62] | 25 subjects with shoulder pain ¹, 20 subjects with electrically evoked pain ² | Facial video frames | PSPI and stimulus-based pain scale | CNN-RNN for 5-level pain intensity classification (datasets 1 and 2) | Accuracy: dataset 1 86%, dataset 2 92.26% |
| Casti et al., 2021 [63] | 25 subjects with shoulder pain ¹ | Facial video frames | VAS | Linear discriminant analysis (LDA) for pain detection (VAS > 0 vs. VAS = 0) and pain intensity (VAS) estimation | Pain detection AUC 0.87, pain intensity estimation MAE 2.44 |
| Barua et al., 2022 [64] | 129 subjects with shoulder pain ¹ | Facial video frames (shutter blinds-based deep feature extraction) | PSPI | kNN for 4-level pain intensity classification | Accuracy 95.57% |
| Rodriguez et al., 2022 [65] | 25 subjects with shoulder pain ¹ | Facial video frames | PSPI | CNN-LSTM based method for pain detection and 6-level pain intensity estimation | Pain detection accuracy 83.1%, pain intensity estimation MAE 0.5 |
| Fontaine et al., 2022 [66] | 1189 patients before and after surgery | Facial images and AUs | NRS | CNN-based pain intensity estimation from facial images and SVM for AUs | Accuracy: CNN 53%, SVM 27.7% |
| Alghamdi, Alaghband, 2022 [67] | 24 subjects with shoulder pain ¹ | Facial video frames | PSPI | Transfer learning-based approach (InceptionV3 with SGD optimizer) for 4-level pain intensity classification | Accuracy 90.56% |
| Alphonse et al., 2024 [68] | 25 subjects with shoulder pain ¹ | Facial video frames (Statistical Frei-Chen Mask (SFCM)-based and DenseNet-based features) | PSPI | Radial basis function-based extreme learning machine (RBF-ELM) classifier for 4-level pain intensity estimation | Accuracy 98.58% |
| Tan et al., 2025 [69] | 200 patients undergoing surgery or interventional pain procedures | Facial video frames | NRS | Spatial temporal attention long short-term memory (STA-LSTM) deep learning network for 3-level pain intensity classification | Accuracy 86.6% |

¹ UNBC-McMaster shoulder pain database [70]; ² MIntPAIN database [71].