ARTICLE | doi:10.20944/preprints202104.0542.v1
Subject: Behavioral Sciences, Applied Psychology Keywords: action unit; aging; emotion; facial expression; facial recognition
Online: 20 April 2021 (12:47:43 CEST)
The ability to express and recognize emotion via facial expressions is well known to change with age. The present study investigated differences in facial recognition and facial expression between elderly (n = 57) and young (n = 115) participants and measured how each group uses different facial muscles for each emotion with the Facial Action Coding System (FACS). In the facial recognition task, the elderly did not recognize facial expressions better than young people and reported stronger feelings of fear and sadness from the photographs. In the facial expression task, the elderly rated all their facial expressions as stronger than the young did, but in fact produced stronger expressions only for fear and anger. Furthermore, the elderly used more muscles in the lower face when making facial expressions than younger people. These results help to better understand how facial recognition and expression change in the elderly, and show that the elderly do not effectively execute top-down processing of facial expression.
ARTICLE | doi:10.20944/preprints202206.0267.v1
Subject: Behavioral Sciences, Social Psychology Keywords: affective computing; empathy; facial mimicry; facial recognition technology; deep learning
Online: 20 June 2022 (10:08:13 CEST)
Facial expressions play a key role in interpersonal communication when it comes to negotiating our emotions and intentions, as well as interpreting those of others. Research has shown that we can connect to other people better when we exhibit signs of empathy and facial mimicry. However, the relationship between empathy and facial mimicry is still debated. Among the factors contributing to the difference in results across existing studies are the use of different instruments for measuring both empathy and facial mimicry, and the frequent neglect of differences across demographic groups. This study first looks at the differences in empathetic ability across demographic groups based on gender, ethnicity, and age. Empathetic ability is measured with the Empathy Quotient, which captures a balanced representation of both emotional and cognitive empathy. Using statistical and machine learning methods, the study then investigates the correlation between empathetic ability and the facial mimicry of subjects in response to images portraying different emotions displayed on a computer screen. Unlike existing studies measuring facial mimicry with electromyography, this study employs a technology that detects facial expressions based on video capture and deep learning. This choice was made in the context of increased online communication during and after the COVID-19 pandemic. The results of this study confirm the previously reported difference in empathetic ability between females and males. However, no significant difference in empathetic ability was found across age and ethnic groups. Furthermore, no strong correlation was found between empathy and facial reactions to faces portraying different emotions shown on a computer screen. Overall, the results of this study can inform the design of online communication technologies and of tools for empathy training of team leaders, educators, and social and health care providers.
ARTICLE | doi:10.20944/preprints201701.0102.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: facial expression recognition; fusion features; salient facial areas; hand-crafted features; feature correction
Online: 24 January 2017 (03:28:51 CET)
In the pattern recognition domain, deep architectures are now widely used and have achieved strong results. However, these architectures have demanding requirements, notably large datasets and GPUs. Aiming to obtain good results without deep networks, we propose a simplified algorithm framework using fusion features extracted from the salient areas of faces; the proposed algorithm achieves better results than some deep architectures. To extract more effective features, this paper first defines the salient areas of the face and normalizes salient areas at the same location to the same size, so that more similar features can be obtained from different subjects. LBP and HOG features are extracted from the salient areas, the dimensionality of the fused features is reduced by Principal Component Analysis (PCA), and softmax is applied to classify the six basic expressions at once. The paper proposes a salient-area definition method that compares peak expression frames with their neutral faces, and applies the idea of normalizing the salient areas to align the specific areas that express the different expressions, so that the salient areas found in different subjects have the same size. In addition, gamma correction is applied to the LBP features in our algorithm framework, which improves recognition rates significantly. With this framework, our approach achieves state-of-the-art performance on the CK+ and JAFFE databases.
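As an illustration of the texture descriptor this pipeline starts from (a sketch of the standard 3x3 LBP operator, not the authors' code), each pixel is compared with its eight neighbours and every neighbour at least as bright as the centre sets one bit of an 8-bit code:

```python
# Illustrative sketch of the basic 3x3 Local Binary Pattern (LBP) operator,
# one of the features fused in the paper's framework. Pure Python, no deps.

def lbp_code(patch):
    """LBP code of the centre pixel of a 3x3 patch (list of 3 lists of 3)."""
    center = patch[1][1]
    # Neighbour offsets in clockwise order starting at the top-left.
    neighbours = [(0, 0), (0, 1), (0, 2), (1, 2),
                  (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(neighbours):
        if patch[r][c] >= center:  # neighbour >= centre sets this bit
            code |= 1 << bit
    return code

def lbp_image(img):
    """Map an HxW grayscale image (lists of lists) to (H-2)x(W-2) LBP codes."""
    h, w = len(img), len(img[0])
    return [[lbp_code([row[c - 1:c + 2] for row in img[r - 1:r + 2]])
             for c in range(1, w - 1)]
            for r in range(1, h - 1)]
```

In a full pipeline of the kind described, a histogram of these codes per salient area would be concatenated with HOG features before PCA.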
ARTICLE | doi:10.20944/preprints202104.0424.v1
Subject: Behavioral Sciences, Applied Psychology Keywords: culture; facial expressions; emotion; posed; spontaneous
Online: 15 April 2021 (16:45:31 CEST)
There is a growing consensus that culture influences the perception of facial expressions of emotion. However, little is known about whether and how culture shapes the production of emotional facial expressions, and even less so about whether culture differentially shapes the production of posed versus spontaneous expressions. Drawing on prior work on cultural differences in emotional communication, we tested the prediction that people from the Netherlands (a historically heterogeneous culture where people are prone to low-context communication) produce facial expressions that are more distinct across emotions compared to people from China (a historically homogeneous culture where people are prone to high-context communication). Furthermore, we examined whether the degree of distinctiveness varies across posed and spontaneous expressions. Dutch and Chinese participants were instructed to either pose facial expressions of anger and disgust, or to share autobiographical events that elicited spontaneous expressions of anger or disgust. Using the complementary approaches of supervised machine learning and information-theoretic analysis of facial muscle movements, we show that posed and spontaneous facial expressions of anger and disgust were more distinct when produced by Dutch compared to Chinese participants. These findings shed new light on the role of culture in emotional communication by demonstrating, for the first time, effects on the distinctiveness of production of facial expressions.
ARTICLE | doi:10.20944/preprints202010.0537.v1
Subject: Physical Sciences, Acoustics Keywords: Cranial Traction Therapy; Correcting Facial Asymmetry
Online: 27 October 2020 (07:58:12 CET)
The purpose of this study is to develop a cranial traction therapy program to help correct facial asymmetry of the hard tissues through treatment of the soft tissues, a non-surgical therapeutic method for correcting facial asymmetry. We formed a group of experts who agreed to take part in the study. In the primary survey, open questions were used. In the second survey, the results of the first survey were summarized and the degree of agreement with the questions in each category was presented. In the third survey, we conducted a statistical analysis of the degree of agreement on each question item. All surveys were also conducted by email. The distribution was calculated using SPSS (ver. 23.0), and the mean differences and X² values were calculated. The significance level was set at p < .05. Most of the questions attained a certain level of expert consensus (average of 4.0 or higher), so most can be considered important and suitable questions. The results regarding the degree of importance of each evaluation point, as rated by the expert groups in both the second and third stages of the cranial traction therapy program, were verified using the content validity ratio (CVR 77). The ratio for the 13 cranial traction evaluation points was within the range 0.40 to 1.00, so the Delphi process verified that the content of the cranial traction therapy program was valid.
REVIEW | doi:10.20944/preprints202005.0470.v1
Subject: Medicine & Pharmacology, Other Keywords: coronavirus; COVID-19; facial protection; masks; PAPR
Online: 31 May 2020 (15:02:43 CEST)
We live in extraordinary times, in which the COVID-19 pandemic has brought the whole world to a screeching halt. Among the tensions and contradictions surrounding the pandemic-ridden world is the availability, or lack thereof, of various facial protection measures to mitigate viral spread. Here, we comprehensively explore the different types of facial protection, including masks, needed both by the public and by health care workers (HCW). We discuss the anatomy of masks, the critical issues of their disinfection and reusability, the alternative equipment available for protecting the facial region from airborne diseases, such as face shields and powered air purifying respirators (PAPR), and the skin-health impact of prolonged wearing of facial protection by HCW. Clearly, facial protection, whether in the form of masks or alternatives, appears to have mitigated the pandemic, as seen from the minimal COVID-19 spread in countries where public mask wearing is strictly enforced. At the same time, healthcare systems, which appear to have been unprepared for emergencies of this nature, should be appropriately geared to handle the imbalance of supply and demand of personal protective equipment, including face masks. These are two crucial lessons we can learn from this tragic experience.
REVIEW | doi:10.20944/preprints202009.0101.v1
Subject: Biology, Animal Sciences & Zoology Keywords: facial expressions; pain; grimace scales; mice; rat; rabbit
Online: 4 September 2020 (11:18:42 CEST)
Animals’ facial expressions have been widely used as a readout for emotion. Scientific interest in the facial expressions of laboratory animals has centered primarily on negative experiences, such as pain experienced as a result of scientific research procedures. Recent attempts to standardize the evaluation of facial expressions associated with pain in laboratory animals have culminated in the development of “grimace scales”. In the context of laboratory animals, these have been developed and evaluated for mice, rats, rabbits, sheep, and ferrets. The prevention or relief of pain in laboratory animals is a fundamental requirement for in vivo research to satisfy community expectations. However, to date it appears that the grimace scales have not seen widespread implementation as clinical pain assessment techniques in biomedical research. In this review, we discuss some of the barriers to implementation of the scales in clinical laboratory animal medicine, progress made in automating their collection, and suggest avenues for future research.
REVIEW | doi:10.20944/preprints202102.0400.v1
Subject: Medicine & Pharmacology, Dentistry Keywords: Temporomandibular joint disorders; Temporomandibular Joint; Facial Pain; Craniomandibular Disorders
Online: 17 February 2021 (16:07:10 CET)
Temporomandibular disorders (TMD) are a group of orofacial pain conditions and the most common non-dental pain complaint in the maxillofacial region. Due to the complexity of the etiology and the often cyclical nature of the disease, the diagnosis and management of TMD remain a challenge in which consensus is still lacking in many aspects. While clinical examination is considered the most important process in the diagnosis of TMD, imaging may serve as a valuable adjunct in selected cases. Depending on the type of TMD, many treatment modalities have been proposed, ranging from conservative options to open surgical procedures. In this review, the authors discuss present thinking on the etiology and classification of TMD, followed by the diagnostic approach and current trends and controversies in management.
REVIEW | doi:10.20944/preprints202007.0604.v1
Subject: Behavioral Sciences, Other Keywords: Parkinson's disease; Emotion; Facial Masking; Dysarthria; Stigma; Dehumanization; Loneliness
Online: 25 July 2020 (11:16:57 CEST)
Parkinson’s disease (PD) is typically well-recognized by its characteristic motor symptoms (e.g., bradykinesia, rigidity, and tremor). The cognitive symptoms of PD are increasingly being acknowledged by clinicians and researchers alike. However, PD also involves a host of emotional and communicative changes which can cause major disruptions to social functioning. These include problems producing emotional facial expressions (i.e., facial masking) and emotional speech (i.e., dysarthria), as well as difficulties recognizing the verbal and non-verbal emotional cues of others. These social symptoms of PD can result in severe negative social consequences, including stigma, dehumanization, and loneliness, which might affect quality of life to an even greater extent than more well-recognized motor or cognitive symptoms. It is therefore imperative that researchers and clinicians become aware of these potential social symptoms and their negative effects, in order to properly investigate and manage the socioemotional aspects of PD. The present review provides an examination of the current research surrounding some of the most common social symptoms of PD and their related social consequences, and argues that proactively and adequately addressing these issues might improve disease outcomes.
ARTICLE | doi:10.20944/preprints202105.0424.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Convolutional Neural Network (CNN); Emotion Recognition; Facial Expression; Classification; Accuracy
Online: 18 May 2021 (11:34:19 CEST)
Emotion recognition, defined as identifying human emotion, is directly related to fields such as human-computer interfaces, human emotional processing, irrational analysis, medical diagnostics, data-driven animation, human-robot communication, and many more. The purpose of this study is to propose a new facial emotion recognition model using a convolutional neural network. Our proposed model, “ConvNet”, detects seven specific emotions from image data: anger, disgust, fear, happiness, neutrality, sadness, and surprise. This research focuses on the model’s training accuracy within a small number of epochs, so that a real-time schema can be developed that easily fits the model and senses emotions. Furthermore, this work focuses on a person’s mental or emotional state as inferred from behavioral aspects. To train the CNN model, we use the FER2013 database, and we test the system’s success by identifying facial expressions in real time. ConvNet consists of four convolutional layers together with two fully connected layers. The experimental results show that ConvNet achieves 96% training accuracy, much better than current existing models, and a validation accuracy of 65% to 70% (across the different datasets used for the experiments), resulting in higher classification accuracy than other existing models. We have made all materials publicly accessible to the research community at: https://github.com/Tanoy004/Emotion-recognition-through-CNN.
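To make the two building blocks of such an architecture concrete, here is a minimal pure-Python sketch (ours, not taken from the ConvNet repository) of a valid 2D convolution and the softmax that turns the final layer's scores into probabilities over the seven emotion classes:

```python
# Hedged sketch of two CNN building blocks: a "valid" 2D convolution
# (no padding) and softmax over class scores. Toy code, no frameworks.
import math

def conv2d_valid(img, kernel):
    """Slide the kernel over the image; output is (H-kh+1) x (W-kw+1)."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    out = []
    for r in range(h - kh + 1):
        row = []
        for c in range(w - kw + 1):
            row.append(sum(img[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

def softmax(scores):
    """Numerically stable softmax: probabilities summing to 1."""
    m = max(scores)                      # subtract max to avoid overflow
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

A real model such as the one described would stack several such convolutions (with learned kernels and nonlinearities) before two fully connected layers ending in a 7-way softmax.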
ARTICLE | doi:10.20944/preprints201811.0416.v1
Subject: Medicine & Pharmacology, Dentistry Keywords: multilevel principal components analysis; shape and image texture; facial expression
Online: 19 November 2018 (04:57:36 CET)
Single-level Principal Components Analysis (PCA) and multi-level PCA (mPCA) methods are applied here to a set of (2D frontal) facial images from a group of 80 Finnish subjects (34 male; 46 female) with two different facial expressions (smiling and neutral) per subject. Inspection of eigenvalues gives insight into the importance of different factors affecting shapes, including: biological sex, facial expression (neutral versus smiling), and all other variations. Biological sex and facial expression are shown to be reflected in those components at appropriate levels of the mPCA model. Dynamic 3D shape data for all phases of a smile made up a second dataset sampled from 60 adult British subjects (31 male; 29 female). Modes of variation reflected the act of smiling at the correct level of the mPCA model. Seven phases of the dynamic smiles are identified: rest pre-smile, onset 1 (acceleration), onset 2 (deceleration), apex, offset 1 (acceleration), offset 2 (deceleration), and rest post-smile. A clear cycle is observed in standardized scores at an appropriate level for mPCA and in single-level PCA. mPCA can be used to study static shapes and images, as well as dynamic changes in shape. It gave us much insight into the question “what’s in a smile?”
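As a minimal illustration of the single-level PCA step underlying this analysis (a sketch under our own assumptions, not the authors' mPCA code), the dominant mode of variation is the leading eigenvector of the covariance matrix of the mean-centred shape vectors, which can be found by power iteration:

```python
# Sketch: dominant PCA mode via power iteration on the covariance matrix.
# `data` is a list of n samples, each a list of d coordinates.

def dominant_mode(data, iters=200):
    n, d = len(data), len(data[0])
    mean = [sum(col) / n for col in zip(*data)]
    centred = [[x - m for x, m in zip(row, mean)] for row in data]
    # Sample covariance matrix (d x d).
    cov = [[sum(centred[k][i] * centred[k][j] for k in range(n)) / (n - 1)
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        # Repeatedly apply cov and renormalize; converges to the
        # eigenvector with the largest eigenvalue.
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

Multilevel PCA extends this idea by fitting separate covariance models at each level (e.g. between-subject versus within-subject expression change).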
ARTICLE | doi:10.20944/preprints202204.0150.v1
Subject: Life Sciences, Cell & Developmental Biology Keywords: Ttc21a; Ttc21b; mouse knock-out; jaw; facial bone; viscerocranium; Hh signaling
Online: 16 April 2022 (03:36:54 CEST)
Ciliopathies are genetic syndromes that link skeletal dysplasias to dysfunction of primary cilia. Primary cilia are sensory organelles synthesized by intraflagellar transport (IFT) A and B complexes, which traffic protein cargo along a microtubular core. We have reported that deletion of the IFT-A gene Thm2, together with a null allele of its paralog Thm1, causes a small skeleton with a small mandible (micrognathia) in juvenile mice. Using micro-computed tomography, here we quantify the craniofacial defects of Thm2-/-;Thm1aln/- triple allele mutant mice. At postnatal day 14, triple allele mutant mice exhibit micrognathia, maxillary hypoplasia, and a decreased facial angle due to shortened maxilla, premaxilla, and nasal bones, reflecting altered development of facial anterior-posterior elements. In contrast, other ciliopathy-related craniofacial defects, such as cleft lip and/or palate, hypo-/hypertelorism, broad nasal bridge, craniosynostosis, and facial asymmetry, were not observed, suggesting that development of the facial transverse dimension is intact. Calvarial-derived osteoblasts of triple allele mutant mice showed reduced bone formation in vitro that was ameliorated by the Hedgehog agonist SAG. Together, these data indicate that Thm2 and Thm1 genetically interact to regulate bone formation and sculpting of the postnatal face. The triple allele mutant mice present a novel model to study craniofacial bone development.
CASE REPORT | doi:10.20944/preprints202208.0165.v1
Subject: Medicine & Pharmacology, Clinical Neurology Keywords: neuralgia; earache; facial pain; neuropathic pain; geniculate neuralgia; nervus intermedius; otalgia; gabapentin
Online: 9 August 2022 (03:20:29 CEST)
(1) Background: Painful nervus intermedius neuropathy (e.g., geniculate neuralgia) involves continuous or near-continuous pain affecting the distribution of the nervus intermedius. The diagnosis of this entity is challenging, particularly when the clinical and demographic features do not resemble the typical presentation of the condition. To the best of our knowledge, only three case reports have described nervus intermedius neuropathy in young patients. (2) Case Description: A 13-year-old female was referred to the Orofacial Pain clinic with a complaint of pain located deep in the right ear and mastoid area. The pain was described as constant, throbbing, and dull, with an intensity of 7/10 on a numerical rating scale, with superimposed brief paroxysms of severe sharp pain. Past treatments included ineffective pharmacological and irreversible surgical approaches. After a comprehensive evaluation, a diagnosis of idiopathic painful nervus intermedius neuropathy was given, which was successfully managed with gabapentin. (3) Conclusions and Practical Implications: The diagnosis and treatment of neuropathic pain affecting the nervus intermedius can be challenging due to the complex sensory innervation of the ear. The diagnosis can be even more challenging in cases with atypical clinical and demographic presentations, which in turn may result in unsuccessful, unnecessary, and irreversible treatments. Multidisciplinary teams and continual updating of knowledge are fundamental to providing good-quality care and to avoiding overlooking any relevant signs or symptoms.
ARTICLE | doi:10.20944/preprints202108.0405.v1
Subject: Biology, Anatomy & Morphology Keywords: animal welfare; pigs; deep learning; computer vision; stress detection; facial expression recognition
Online: 19 August 2021 (13:17:08 CEST)
Animal welfare is not only an ethically important consideration in good animal husbandry, but can also have a significant effect on an animal’s productivity. The aim of this paper is to show that a reduction in animal welfare, in the form of increased stress, can be identified in pigs from frontal images of the animals. We train a Convolutional Neural Network (CNN) using a leave-one-out design and show that it is able to discriminate between stressed and unstressed pigs with an accuracy of >90% in unseen animals. Grad-CAM is used to identify the animal regions used, and these support those used in manual assessments such as the Pig Grimace Scale. This innovative work paves the way for further work examining both positive and negative welfare states with a view to the development of an automated system that can be used in precision livestock farming to improve animal welfare.
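The Grad-CAM step used to identify which image regions drive the network's prediction can be sketched as follows (our illustrative pure-Python version of the standard combination rule, not the study's code): each activation channel is weighted by its spatially averaged gradient, the weighted maps are summed, and a ReLU keeps only positive evidence for the class:

```python
# Sketch of the Grad-CAM combination step: heatmap = ReLU(sum_k alpha_k * A_k),
# where alpha_k is the global average of the gradient of the class score
# with respect to activation map A_k.

def grad_cam(activations, gradients):
    """activations, gradients: equal-length lists of HxW maps (lists of lists)."""
    heat = None
    for act, grad in zip(activations, gradients):
        # alpha_k: global average pooling of the gradient map.
        cells = [g for row in grad for g in row]
        alpha = sum(cells) / len(cells)
        if heat is None:
            heat = [[0.0] * len(act[0]) for _ in act]
        for r, row in enumerate(act):
            for c, a in enumerate(row):
                heat[r][c] += alpha * a
    # ReLU: keep only regions with positive evidence for the class.
    return [[max(0.0, v) for v in row] for row in heat]
```

In the study's setting, high-valued regions of such a heatmap over a pig's frontal image can then be compared with the facial regions scored manually in the Pig Grimace Scale.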
ARTICLE | doi:10.20944/preprints202012.0113.v1
Subject: Medicine & Pharmacology, Allergology Keywords: parotidectomy; postoperative complications; perioperative complications; salivary gland tumor; facial paralysis; hospital stay
Online: 4 December 2020 (14:02:00 CET)
Background: Perioperative complications after parotidectomy are poorly studied and have a potential impact on length of hospital stay. The Clavien-Dindo classification of postoperative complications, used in visceral surgery, allows recording of all complications on a grading scale reflecting their severity. Methods: The cohort analyzed for perioperative complications comprised 436 parotidectomies, classified into three types, four groups, and three classes depending on the extent of parotid resection, inclusion of additional procedures, and pathology, respectively. Results: Using the Clavien-Dindo classification, complications were reported in 77% of the interventions. Of 438 complications, 430 (98.2%) were classified as minor (332 grade I and 98 grade II) and 8 (1.8%) as major (grade III). Independent variables affecting the risk of perioperative complications were duration of surgery (odds ratio = 1.007, p = 0.029) and extent of parotidectomy (odds ratio = 4.043, p = 0.007). Total/subtotal parotidectomy was associated with an increased risk of grade II-III complications (odds ratio = 2.866, 95% CI: 1.307-6.283, p = 0.009). Hospital stay was longer in patients with complications (p = 0.0064). Conclusions: The Clavien-Dindo classification shows that parotidectomy is followed by a high rate of perioperative complications. Longer hospital stay is observed in patients with perioperative complications. Almost all complications are minor and have limited consequences for hospital stay.
ARTICLE | doi:10.20944/preprints202209.0220.v1
Subject: Medicine & Pharmacology, Dentistry Keywords: TrueDepth; CBCT; Orthodontics; Face scan; Smartphone; Facial diagnostics; Smartphone-based sensors; Facially driven orthodontics
Online: 15 September 2022 (05:45:29 CEST)
The current paradigm shift in orthodontic treatment planning is based on facially driven diagnostics. This requires an affordable, convenient, and non-invasive solution for face scanning, so utilization of smartphones' TrueDepth sensors is very tempting. TrueDepth refers to the front-facing camera with a dot projector in Apple devices, which provides real-time depth data in addition to visual information. Several applications tout themselves as accurate solutions for 3D scanning of the face in dentistry, but their clinical accuracy has been uncertain. This study evaluates the accuracy of the Bellus3D Dental Pro app, which uses Apple's TrueDepth sensor. The app reconstructs a virtual, high-resolution version of the face, available for download as a 3D object. In this paper, sixty TrueDepth scans of the face were compared with sixty corresponding facial surfaces segmented from CBCT. Difference maps were created for each pair and evaluated in specific facial regions. The results confirmed statistically significant differences in some facial regions, with amplitudes greater than 3 mm, suggesting that the current technology has limited applicability for clinical use. Clinical utilization of facial scanning for orthodontic evaluation, which does not require accuracy below 3 mm in the lip region, can nevertheless be considered.
ARTICLE | doi:10.20944/preprints202201.0167.v1
Subject: Medicine & Pharmacology, Dentistry Keywords: rapid palatal expander; midpalatal suture; bone density; cone-beam computed tomography; facial patterns; skeletal growth pattern
Online: 12 January 2022 (13:42:31 CET)
The aim of this paper was to evaluate changes in the mean bone density values of the midpalatal suture (MPS) in 392 young patients treated with the Rapid Palatal Expander (RPE) appliance, according to sex and vertical and sagittal skeletal patterns. Materials and Methods: Evaluations were performed using low-dose protocol cone-beam computed tomography (CBCT) scans at t0 (preoperatively) and t1 (1 year after the beginning of therapy). A region of interest between the maxillary incisors was used to calculate bone density in Hounsfield units (HU). Results: CBCT data from 196 females and 196 males (mean age 11.7 years) showed homogeneous and similar density values of the MPS at t0 (547.59-565.85 HU) and t1 (542.31-554.20 HU). Skeletal Class III individuals showed significantly higher bone density than the Class II group at t0, but not at t1. Females showed significantly higher bone density than males at both t0 and t1. No significant differences were found between the other groups or between the two time points in terms of MPS bone density. Conclusions: Females and Class III individuals showed significantly higher bone density values than males and Class II individuals, respectively. No statistically significant differences were found from t0 to t1 in any group, suggesting that a similar rate of suture reorganization and bone deposition occurs along the MPS after use of the RPE.
ARTICLE | doi:10.20944/preprints202005.0451.v1
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Bilateral Line Local Binary Patterns; Facial matrix; Statistical subspace; Face recognition; Calibrated SVM model; Ensemble learning
Online: 27 May 2020 (12:07:19 CEST)
Local binary patterns are visual descriptors that can serve as a powerful feature extractor for texture classification. In this paper, a novel representation for face recognition is proposed, called Bilateral Line Local Binary Patterns (BL-LBP). This scheme is an extension of Line Local Binary Pattern descriptors in a statistical learning subspace. The bilateral descriptors are fused with an ensemble of calibrated SVM models. The performance of the scheme is evaluated on five standard face databases. It is found to be robust against illumination variation, diverse facial expressions, and head pose variations, and its recognition accuracy reaches 98%, running on a mobile device at 63 ms per face. The results suggest that our proposed method can be very useful for vision systems with limited resources, where computational cost is critical.
ARTICLE | doi:10.20944/preprints202107.0570.v1
Subject: Keywords: Face Detection; Euclidean Distance; Fast Fourier Transformation; Discrete Cosine Transformation; Facial Parts Detection; Frequency domain; Spatial domain
Online: 26 July 2021 (11:47:11 CEST)
Face detection is among the most important tasks in computer vision today. Due to chromosomal disorders, a human face sometimes exhibits abnormalities: one eye bigger than the other, a cleft face, differing chin length, variation in nose length, or lips of unusual length or width. Detecting normal and abnormal faces and facial parts from an input image is currently a challenging task for computer vision. In this research paper, a method is proposed that can detect normal or abnormal faces from a frontal input image. The method uses the Fast Fourier Transformation (FFT) and the Discrete Cosine Transformation (DCT), combining frequency-domain and spatial-domain analysis to detect such faces.
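As a small illustration of the frequency-domain tool named here (our own naive sketch, not the paper's implementation), the 1-D DCT-II concentrates a signal's energy in its low-frequency coefficients; a constant signal, for instance, maps entirely onto the first coefficient:

```python
# Naive 1-D DCT-II (unnormalized): X[m] = sum_k x[k] * cos(pi*(k+0.5)*m/n).
# O(n^2) and for illustration only; real pipelines use fast transforms.
import math

def dct2(x):
    n = len(x)
    return [sum(x[k] * math.cos(math.pi * (k + 0.5) * m / n) for k in range(n))
            for m in range(n)]
```

Applied row- and column-wise to an image patch, the same formula yields the 2-D DCT whose coefficient patterns can then be compared across facial parts.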
ARTICLE | doi:10.20944/preprints202004.0454.v1
Subject: Medicine & Pharmacology, Nutrition Keywords: casein hydrolysate; Val-Pro-Pro; Ile-Pro-Pro; brachial ankle pulse wave velocity; advanced glycation end products; facial pigmentation
Online: 25 April 2020 (02:42:35 CEST)
Casein hydrolysate improves arterial stiffness, as estimated by brachial ankle pulse wave velocity (baPWV), in untreated hypertensive subjects. Facial pigmentation is a useful biomarker for arterial stiffness. This trial evaluated whether casein hydrolysate improves facial pigmentation in association with changes in arterial stiffness. A randomized, double-blind, placebo-controlled trial was conducted in 80 non-hypertensive Japanese participants randomly assigned to receive either active tablets containing casein hydrolysate or placebo for 48 weeks. Facial pigmentation and baPWV were measured at baseline and at the end of the intervention. Other biochemical atherosclerosis-related parameters were also measured, including advanced glycation end products (AGEs). Changes in facial pigmentation showed a significant difference between the groups. Change in baPWV was significantly better in the active than in the placebo group. In contrast, no significant association was seen between changes in facial pigmentation and those in baPWV. Among other atherosclerosis-related factors, changes in advanced glycation products (AGEs) were significantly decreased in the active compared to the placebo group. Further, changes in facial pigmentation were positively correlated with those in AGEs. Changes in AGEs were independently associated with changes in facial pigmentation. Casein hydrolysate improves facial pigmentation in non-hypertensive participants. Casein hydrolysate may have beneficial effects on glycation stress.
ARTICLE | doi:10.20944/preprints202108.0329.v1
Subject: Materials Science, Surfaces, Coatings & Films Keywords: face shield; facial protective equipment; SARS-CoV-2; phi 6; MRSA; MRSE; polyethylene terephthalate; benzalkonium chloride; COVID-19; multidrug-resistant bacteria
Online: 16 August 2021 (11:38:49 CEST)
Transparent materials used in facial protective equipment provide protection against microbial infections caused by viruses and bacteria, including multidrug-resistant strains. However, the transparent materials used for this type of application possess no antimicrobial activity of their own; they merely prevent direct contact between the wearer and the biological agent. Healthy people can therefore become infected through contact with contaminated material surfaces, and this equipment constitutes a growing source of infectious biological waste. Furthermore, infected people can easily transmit microbial infections because the protective equipment does not inactivate the microbial load generated while breathing, sneezing, or coughing. In this regard, the goal of this work was to fabricate a transparent face shield with intrinsic antimicrobial activity that could provide extra protection against infectious agents and reduce the generation of infectious waste. Thus, a single-use transparent antimicrobial face shield composed of polyethylene terephthalate with an antimicrobial coating of benzalkonium chloride has been developed for the next generation of facial protective equipment. The antimicrobial coating was analyzed by atomic force microscopy and field emission scanning electron microscopy with elemental analysis. This is the first transparent facial protective material capable of inactivating enveloped viruses such as SARS-CoV-2 in less than one minute of contact, as well as methicillin-resistant Staphylococcus aureus and Staphylococcus epidermidis. Bacterial infections contribute to the severe pneumonia associated with SARS-CoV-2 infection, and their resistance to antibiotics is increasing. Our extra-protective, broad-spectrum antimicrobial composite material could also be applied to the fabrication of other facial protective tools such as goggles, helmets, plastic masks, and the space separation screens used at counters or in vehicles. This low-cost technology would be very useful in combating the current COVID-19 pandemic and protecting health care workers from multidrug-resistant infections in developed and underdeveloped countries.