Subject: Biology, Animal Sciences & Zoology Keywords: convolutional neural networks; horse emotion recognition; horse emotion
Online: 7 June 2021 (12:42:05 CEST)
Creating intelligent systems capable of recognizing emotions is a difficult task, especially when looking at emotions in animals. This paper describes the process of designing a “proof of concept” system to recognize emotions in horses. The system is formed by two elements, a detector and a model. The detector is a fast region-based convolutional neural network that detects horses in an image, and the model is a convolutional neural network that predicts the emotions of those horses. Both elements were trained on multiple images of horses until they achieved high accuracy in their tasks. A total of 400 images of horses were collected and labeled to train the detector and the model, while 80 were used to validate the system. Once the two components were validated, they were combined into a testable system that detects equine emotions based on established behavioral ethograms indicating emotional affect through head, neck, ear, muzzle, and eye position. The system showed an accuracy of between 69% and 74% on the validation set, demonstrating that it is possible to predict emotions in animals using autonomous intelligent systems. Such a system has multiple applications, including further studies in the growing field of animal emotions, as well as in the veterinary field to determine the physical welfare of horses or other livestock.
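The two-stage pipeline described above (detect horses in the image, then classify each detection's emotion) can be sketched as follows. The detector and classifier here are hypothetical stubs standing in for the trained networks, and the label set and confidence threshold are illustrative assumptions, not values from the paper:

```python
EMOTIONS = ["relaxed", "alert", "stressed"]  # hypothetical label set

def detect_horses(image):
    """Stub detector: return (box, confidence) pairs above a threshold.

    A real system would run a fast region-based CNN here; the proposals
    below are hard-coded placeholders.
    """
    proposals = [((10, 10, 120, 200), 0.94), ((300, 40, 80, 90), 0.32)]
    return [(box, conf) for box, conf in proposals if conf >= 0.5]

def classify_emotion(crop):
    """Stub classifier: map a cropped region to an emotion label."""
    return EMOTIONS[hash(crop) % len(EMOTIONS)]

def predict(image):
    """Run the detector, then classify the emotion of each detection."""
    results = []
    for box, conf in detect_horses(image):
        results.append({"box": box, "confidence": conf,
                        "emotion": classify_emotion(box)})
    return results
```

The point of the sketch is the control flow: only detections that survive the confidence filter are passed on to the emotion model.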
ARTICLE | doi:10.20944/preprints202007.0379.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Transfer Learning; Convolutional Neural Networks; Emotion Recognition
Online: 17 July 2020 (13:58:18 CEST)
This paper presents the first study on mouth-based Emotion Recognition (ER) adopting a Transfer Learning (TL) approach. Transfer Learning is paramount for mouth-based ER because only a few datasets are available, and most of them include emotional expressions simulated by actors rather than real-world examples. With TL we can use less training data than when training a whole network from scratch, fine-tuning the network with emotional data more efficiently and improving the convolutional neural network's accuracy in the desired domain. The proposed approach aims to improve Emotion Recognition dynamically, taking into account not only new scenarios but also situations modified with respect to the initial training phase, because an image of the mouth can be available even when the whole face is visible only from an unfavourable perspective. Typical applications include automated supervision of bedridden critical patients in a healthcare management environment, or portable applications supporting disabled users who have difficulty seeing or recognizing facial emotions. This work builds on previous preliminary work on mouth-based emotion recognition using CNN deep learning, and has the further benefit of testing and comparing a set of networks on large face-based emotion recognition datasets well known in the literature. The final result is not directly comparable with work on full-face ER, but it highlights the significance of the mouth in emotion recognition, obtaining consistent performance in the visual emotion recognition domain.
ARTICLE | doi:10.20944/preprints202105.0441.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: emotion recognition; MLP; SVM; RAVDESS
Online: 19 May 2021 (12:53:55 CEST)
Herein, we compared the performance of SVM and MLP in emotion recognition using the speech and song channels of the RAVDESS dataset. We extracted various audio features and identified the optimal scaling strategies and hyperparameters for our models. To increase the sample size, we performed audio data augmentation and addressed data imbalance using SMOTE. Our data indicate that the optimised SVM outperforms the MLP, with an accuracy of 82% compared to 75%. Following data augmentation, the performance of both algorithms was identical at ~79%; however, overfitting was evident for the SVM. Our final exploration indicated that SVM and MLP performed similarly, with both yielding lower accuracy on the speech channel than on the song channel. Our findings suggest that both SVM and MLP are powerful classifiers for emotion recognition in a vocal-channel-dependent manner.
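SMOTE, used above to address class imbalance, synthesizes new minority-class samples by interpolating between an existing sample and one of its nearest neighbours. A minimal pure-Python sketch of the idea (not the optimized implementation the authors would have used in practice):

```python
import random

def smote(minority, n_new, k=2, seed=0):
    """Minimal SMOTE sketch: each synthetic point lies on the segment
    between a random minority sample and one of its k nearest neighbours."""
    rng = random.Random(seed)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p != base),
                            key=lambda p: dist(base, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(b + gap * (n - b)
                               for b, n in zip(base, nb)))
    return synthetic
```

Because every synthetic point is an interpolation, it stays inside the convex hull of the minority class rather than duplicating existing samples.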
ARTICLE | doi:10.20944/preprints202104.0542.v1
Subject: Behavioral Sciences, Applied Psychology Keywords: action unit; aging; emotion; facial expression; facial recognition
Online: 20 April 2021 (12:47:43 CEST)
The ability to express and recognize emotion via facial expressions is well known to change with age. The present study investigated differences in facial recognition and facial expression between the elderly (n = 57) and the young (n = 115) and measured how each group uses different facial muscles for each emotion with the Facial Action Coding System (FACS). In the facial recognition task, the elderly did not recognize facial expressions better than the young, and they reported stronger feelings of fear and sadness from the photographs. In the facial expression task, the elderly rated all of their facial expressions as stronger than the young did, but in fact they produced stronger expressions only for fear and anger. Furthermore, the elderly used more muscles in the lower face when making facial expressions than the young. These results help us to better understand how facial recognition and expression change in the elderly, and show that the elderly do not effectively execute top-down processing of facial expressions.
ARTICLE | doi:10.20944/preprints202208.0109.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: speech emotion recognition; affective computing; data augmentations; wav2vec 2.0; SVM
Online: 4 August 2022 (14:09:21 CEST)
Data augmentation techniques have recently gained wider adoption in speech processing, including speech emotion recognition. Although more data tends to be more effective, there may be a trade-off point beyond which additional data no longer yields a better model. This paper reports experiments investigating the effects of data augmentation in speech emotion recognition, aiming to find the most useful type and number of data augmentations. The experiments are conducted on a Japanese Twitter-based emotional speech corpus. The results show that, for speaker-independent data, two augmentations (glottal source extraction and silence removal) exhibited the best performance, outperforming configurations with more augmentation techniques. For text-independent data (including speaker- and text-independent), more augmentations tend to improve speech emotion recognition performance. The results highlight the trade-off between the number of data augmentations and performance, showing the necessity of choosing a proper data augmentation technique for a specific application.
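Silence removal, one of the two augmentations that performed best above, can be illustrated with a minimal frame-energy sketch: split the signal into short frames and drop those whose mean absolute amplitude falls below a threshold. The frame length and threshold here are arbitrary illustrative values, not parameters from the paper:

```python
def remove_silence(samples, threshold=0.05, frame=4):
    """Drop low-energy frames from a list of amplitude samples.

    A frame is kept only if its mean absolute amplitude reaches the
    threshold; kept frames are concatenated in order.
    """
    kept = []
    for i in range(0, len(samples), frame):
        chunk = samples[i:i + frame]
        if sum(abs(s) for s in chunk) / len(chunk) >= threshold:
            kept.extend(chunk)
    return kept
```

Real implementations work on much longer frames of PCM audio and often smooth the energy envelope first; the logic, however, is the same keep/drop decision per frame.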
REVIEW | doi:10.20944/preprints202202.0074.v1
Subject: Behavioral Sciences, Behavioral Neuroscience Keywords: animal emotions; animal welfare; sensors; animal-based measures; affective states; emotion modelling
Online: 4 February 2022 (12:20:22 CET)
Recognition of emotions or affective states in farm animals is an underexplored research domain. Despite significant advances in animal welfare research, animal affective computing, that is, the development and application of devices and platforms that can not only recognize but also interpret and process emotions, is at a nascent stage. By capitalizing on the immense potential of biometric sensors, AI-enabled big data methods offer a substantial advancement of animal welfare standards and meet the urgent need of caretakers to respond effectively to maintain the wellbeing of their animals. Farm animals, numbering over 70 billion worldwide, are increasingly managed in large-scale, intensive farms. With both public awareness and scientific evidence growing that farm animals experience suffering, as well as affective states such as fear, frustration and distress, there is an urgent need to develop efficient and accurate methods for monitoring their welfare. At present, there are no scientifically validated ‘benchmarks’ for quantifying transient emotional (affective) states in farm animals, and no established measures of good welfare, only indicators of poor welfare, such as injury, pain and fear. Conventional approaches to monitoring livestock welfare are time-consuming, interrupt farming processes and involve subjective judgments. Biometric sensor data enabled by Artificial Intelligence are an emerging smart solution for unobtrusively monitoring livestock, but their potential for quantifying affective states, and groundbreaking solutions in their application, are yet to be realized. This review provides innovative methods for collecting big data on farm animal emotions, which can be used to train artificial intelligence models to classify, quantify and predict affective states in individual pigs and cows. Extending this to the group level, social network analysis can be applied to model emotional dynamics and contagion among animals. Finally, ‘digital twins’ of animals capable of simulating and predicting their affective states and behavior in real time are a near-term possibility.
ARTICLE | doi:10.20944/preprints202104.0651.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: speech processing; data augmentation; speech emotion recognition; generative adversarial networks
Online: 26 April 2021 (10:49:55 CEST)
With the increasing mechanization of life, speech processing has become crucial for interaction between humans and machines. Deep neural networks require a database with enough data for training: the more features are extracted from the speech signal, the more samples are needed to train these networks. Adequate training can be ensured only when sufficient and varied data are available for each class; if there are not enough data, data augmentation methods can be used to obtain a database with enough samples. One of the obstacles to developing speech emotion recognition systems is the data sparsity problem within each class for neural network training. The current study focuses on building a cycle generative adversarial network for data augmentation in a speech emotion recognition system. For each of the five emotions employed, an adversarial generative network is designed to generate data very similar to the main data of that class while remaining distinguishable from the other classes. These networks are trained adversarially to produce feature vectors resembling each class in the original feature space, which are then added to the existing training sets in the database to train the classifier network. Instead of the common cross-entropy error, Wasserstein divergence is used to train the generative adversarial networks, removing the vanishing gradient problem and producing high-quality artificial samples. The proposed network was tested for speech emotion recognition using EMODB as the training, testing, and evaluation set, and the quality of the artificial data was evaluated using Support Vector Machine (SVM) and Deep Neural Network (DNN) classifiers. By extracting and reproducing high-level features from acoustic features, speech emotion recognition separating the five primary emotions was achieved with acceptable accuracy.
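The Wasserstein divergence used above as the GAN training objective is grounded in the Wasserstein distance between distributions. As a simplified illustration of that distance (not the actual network loss, which operates on a critic's outputs), the empirical 1-D Wasserstein-1 distance between two equally sized samples reduces to sorting both and averaging the pairwise gaps:

```python
def wasserstein_1d(a, b):
    """Empirical Wasserstein-1 distance between equally sized 1-D samples.

    For sorted samples, the optimal transport plan pairs them in order,
    so the distance is the mean absolute gap between sorted values.
    """
    xs, ys = sorted(a), sorted(b)
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)
```

Unlike cross-entropy-style objectives, this distance stays meaningful (and its gradient informative) even when the two sample sets do not overlap, which is the intuition behind its use against vanishing gradients.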
ARTICLE | doi:10.20944/preprints202105.0424.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Convolutional Neural Network (CNN); Emotion Recognition; Facial Expression; Classification; Accuracy
Online: 18 May 2021 (11:34:19 CEST)
Emotion recognition is defined as identifying human emotion and is directly related to fields such as human-computer interfaces, human emotional processing, irrational analysis, medical diagnostics, data-driven animation, human-robot communication and many more. The purpose of this study is to propose a new facial emotion recognition model using a convolutional neural network. Our proposed model, “ConvNet”, detects seven specific emotions from image data: anger, disgust, fear, happiness, neutrality, sadness, and surprise. This research focuses on the model’s training accuracy within a small number of epochs, so that a real-time schema can be developed that easily fits the model and senses emotions. Furthermore, this work focuses on a person’s mental or emotional state as reflected in behavioral aspects. To train the CNN model we use the FER2013 database, and we test the system’s success by identifying facial expressions in real time. ConvNet consists of four convolution layers together with two fully connected layers. The experimental results show that ConvNet is able to achieve 96% training accuracy, which is much better than current existing models. ConvNet also achieved a validation accuracy of 65% to 70% (across the different datasets used in the experiments), resulting in higher classification accuracy compared to other existing models. We have also made all materials publicly accessible to the research community at: https://github.com/Tanoy004/Emotion-recognition-through-CNN.
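A four-convolution-layer design like ConvNet's can be sanity-checked with simple shape arithmetic. The walk-through below is hypothetical: it assumes 48x48 FER2013 inputs, 3x3 'same' convolutions, and 2x2 max pooling after each convolution; the actual ConvNet hyperparameters may differ:

```python
def conv2d_out(size, kernel, stride=1, padding=0):
    """Spatial output size of a convolution or pooling layer."""
    return (size + 2 * padding - kernel) // stride + 1

# Walk a 48x48 input through four conv+pool stages.
size = 48
for _ in range(4):
    size = conv2d_out(size, kernel=3, padding=1)  # 'same' convolution
    size = conv2d_out(size, kernel=2, stride=2)   # 2x2 max pooling
# size is now the side length of the feature map fed to the
# fully connected layers.
```

Each pooling stage halves the spatial resolution (48 -> 24 -> 12 -> 6 -> 3), so the flattened feature map entering the dense layers stays small.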
ARTICLE | doi:10.20944/preprints202109.0064.v1
Subject: Behavioral Sciences, Applied Psychology Keywords: callous-unemotional traits; eye-tracking; emotions; conduct problems; emotion recognition.
Online: 3 September 2021 (13:36:40 CEST)
The ability to efficiently recognize the emotions on others’ faces is something that most of us take for granted. Children with callous-unemotional (CU) traits and impulsivity/conduct problems (ICP), such as attention-deficit hyperactivity disorder, have previously been described as being “fear blind”. This is also associated with looking less at the eye regions of fearful faces, which are highly diagnostic. Previous attempts to intervene in emotion recognition strategies have not had lasting effects on participants’ fear recognition abilities. Here we present both (a) additional evidence that there is a two-part causal chain, from personality traits to face recognition strategies using the eyes, and then from strategies to rates of recognizing fear in others; and (b) a pilot intervention that had persistent effects for weeks after the end of instruction. Further, the intervention led to more change in those with the highest CU traits. This both clarifies the specific mechanisms linking personality to emotion recognition and shows that the process is fundamentally malleable. It is possible that such training could promote empathy and reduce rates of antisocial behavior in specific populations in the future.
ARTICLE | doi:10.20944/preprints202108.0433.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Speech emotion recognition; Feature extraction; Heterogeneous parallel network; Spectral features; Prosodic features; Multi-feature fusion
Online: 23 August 2021 (12:16:40 CEST)
Speech emotion recognition remains a challenging task in natural language processing, placing strict requirements on the effectiveness of both feature extraction and the acoustic model. With that in mind, a Heterogeneous Parallel Convolution Bi-LSTM model is proposed to address these challenges. It consists of two heterogeneous branches: the left one contains two dense layers and a Bi-LSTM layer, while the right one contains a dense layer, a convolution layer, and a Bi-LSTM layer. The model exploits spatiotemporal information more effectively, achieving 84.65%, 79.67%, and 56.50% unweighted average recall on the benchmark databases EMODB, CASIA, and SAVEE, respectively. Compared with previous research results, the proposed model consistently achieves better performance.
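Unweighted average recall (UAR), the metric reported above, is the mean of per-class recall, so every emotion class counts equally regardless of how many samples it has. A minimal sketch:

```python
def unweighted_average_recall(y_true, y_pred):
    """Mean of per-class recall over the classes present in y_true."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        hits = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(hits / len(idx))
    return sum(recalls) / len(classes)
```

UAR is preferred over plain accuracy for emotion corpora because class sizes are rarely balanced; a classifier that ignores a rare emotion is penalized in full for that class.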
ARTICLE | doi:10.20944/preprints202112.0134.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Human Robot Interaction (HRI); social robot; Speech Emotion Recognition (SER); Gender Recognition; affective states
Online: 8 December 2021 (14:31:07 CET)
The real challenge in Human Robot Interaction (HRI) is to build machines capable of perceiving human emotions so that robots can interact with humans in a proper manner. It is well known from the literature that emotion varies according to many factors; among these, gender is one of the most influential, so a gender-dependent emotion recognition system is recommended. In this paper, a two-level hierarchical Speech Emotion Recognition (SER) system is proposed: the first level is a Gender Recognition (GR) module that identifies the speaker’s gender; the second is a gender-specific SER block. In this work, attention was focused on optimising the first level of the proposed architecture. The system was designed to be installed on social robots for monitoring hospitalised elderly patients and those living at home; hence the importance of reducing the computational effort of the architecture while also minimizing hardware bulk, so that the system is suitable for social robots. The algorithm was executed on Raspberry Pi hardware. For training, the Italian emotional database EMOVO was used. Results show a GR accuracy of 97.8%, comparable with values found in the literature.
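The two-level hierarchy can be sketched as a dispatch: a GR stage routes each utterance to a gender-specific SER model. Every model, feature name, and threshold below is a hypothetical stub for illustration, not one of the trained classifiers from the paper:

```python
def recognise_gender(features):
    """Stub GR stage: a crude pitch threshold stands in for the
    trained gender classifier."""
    return "female" if features.get("mean_pitch_hz", 0) > 165 else "male"

# Gender-specific SER stubs; the real second level would be two
# separately trained emotion classifiers.
SER_MODELS = {
    "female": lambda f: "happy" if f.get("energy", 0) > 0.5 else "neutral",
    "male":   lambda f: "angry" if f.get("energy", 0) > 0.7 else "neutral",
}

def predict_emotion(features):
    """Level 1 picks the gender; level 2 runs that gender's SER model."""
    gender = recognise_gender(features)
    return gender, SER_MODELS[gender](features)
```

The design point is that an error at level 1 routes the utterance to the wrong level-2 model, which is why the paper concentrates on optimising GR accuracy first.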
ARTICLE | doi:10.20944/preprints202103.0189.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Flying Social Robot; Autonomous Unmanned Aerial Vehicle (UAV); Emotion Recognition; Convolution Neural Network (CNN); Virtual Reality (VR); Unity; MATLAB/Simulink; Python
Online: 5 March 2021 (11:52:50 CET)
This work is part of an ongoing research project to develop an unmanned flying social robot to monitor dependants at home, detect the person’s state, and bring the necessary assistance. To this end, this paper focuses on the description of a virtual reality (VR) simulation platform for the monitoring of an avatar in a virtual home by a rotary-wing autonomous unmanned aerial vehicle (UAV). The platform is based on a distributed architecture composed of three modules communicating through the Message Queue Telemetry Transport (MQTT) protocol: the UAV Simulator implemented in MATLAB/Simulink, the VR Visualiser developed in Unity, and the new emotion recognition (ER) System developed in Python. Using a face detection algorithm and a convolutional neural network (CNN), the ER System detects the person’s face in the image captured by the UAV’s on-board camera and classifies the emotion among seven possibilities (surprise, fear, happiness, sadness, disgust, anger or neutral expression). The experimental results demonstrate the correct integration of this new computer vision module within the VR platform, as well as the good performance of the designed CNN, with an F1-score (the harmonic mean of precision and recall) of around 85%. The developed emotion detection system can be used in the future implementation of the assistance UAV that monitors dependent people in a real environment, since the methodology is valid for images of real people.
ARTICLE | doi:10.20944/preprints202210.0301.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Emotion prediction; music; music emotion dataset; affective computing
Online: 20 October 2022 (08:33:49 CEST)
Music is capable of conveying many emotions; however, the level and type of emotion perceived by a listener is highly subjective. In this study, we present the Music Emotion Recognition with Profile information dataset (MERP). This database was collected through Amazon Mechanical Turk (MTurk) and features dynamic valence and arousal ratings of 54 selected full-length songs. The dataset contains music features as well as user profile information of the annotators. The songs were selected from the Free Music Archive using an innovative method (a Triple Neural Network with the OpenSmile toolkit) to identify 50 songs with the most distinctive emotions; specifically, the songs were chosen to fully cover the four quadrants of the valence-arousal space. Four additional songs were selected from DEAM to act as a benchmark and to filter out low-quality ratings. A total of 277 participants annotated the dataset, and their demographic information, listening preferences, and musical background were recorded. We offer an extensive analysis of the resulting dataset, together with baseline emotion prediction models based on a fully connected model and an LSTM model, for our newly proposed MERP dataset.
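Mapping ratings into the four quadrants of the valence-arousal space can be sketched as follows; the quadrant labels are common conventions in music emotion research, used here for illustration rather than taken from MERP itself:

```python
def va_quadrant(valence, arousal):
    """Map a (valence, arousal) pair in [-1, 1]^2 to one of the four
    quadrants of the valence-arousal space."""
    if valence >= 0 and arousal >= 0:
        return "Q1: happy/excited"       # positive valence, high arousal
    if valence < 0 and arousal >= 0:
        return "Q2: angry/tense"         # negative valence, high arousal
    if valence < 0:
        return "Q3: sad/depressed"       # negative valence, low arousal
    return "Q4: calm/relaxed"            # positive valence, low arousal
```

Covering all four quadrants when selecting songs, as described above, ensures the dataset spans both pleasant/unpleasant and energetic/subdued emotions.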
ARTICLE | doi:10.20944/preprints202205.0107.v2
Subject: Physical Sciences, Acoustics Keywords: Emotion recognition; Emotion cues; Pure tone; Frequency dependent relationship
Online: 1 June 2022 (07:44:13 CEST)
The recent advances in Human-Computer Interaction and Artificial Intelligence have significantly increased the importance of identifying human emotions from different sensory cues. Hence, understanding the underlying relationships between emotions and sensory cues has become a subject of study in many fields, including Acoustics, Psychology, Psychiatry, Neuroscience and Biochemistry. This work is a preliminary step towards investigating cues for human emotion on a fundamental level, aiming to establish relationships between tonal frequencies of sound and emotions. To that end, an online perception test was conducted in which participants were asked to rate the perceived emotions corresponding to each tone. The results show that a crossover point for four primary emotions lies in the frequency range of 417–440 Hz, consolidating the hypothesis that the frequency range of 432–440 Hz is neutral from a human emotion perspective. It is also observed that the frequency-dependent relationships between the emotion pairs Happy–Sad and Anger–Calm are approximately mirror-symmetric in nature.
ARTICLE | doi:10.20944/preprints201912.0053.v1
Subject: Behavioral Sciences, Cognitive & Experimental Psychology Keywords: mental imagery; fear reactivity; emotion recognition; emotion regulation; propriosensitive
Online: 4 December 2019 (12:37:18 CET)
This study investigated the associations of imageability with fear reactivity. Imageability ratings of four word classes (positive and negative (i) emotional and (ii) propriosensitive words, and neutral and negative (iii) theoretical and (iv) neutral concrete filler words) and fear reactivity scores (the degree of fearfulness towards different situations, TF score, and the total number of extreme fears and phobias, EF score) were obtained from 171 participants. Correlations between imageability, TF and EF scores were tested to analyze how word categories and their valence were associated with fear reactivity. Imageability ratings were submitted to recursive partitioning. Participants with high TF and EF scores had higher imageability for negative emotional and negative theoretical words. The correlations between imageability of negative emotional words and negative theoretical words for the EF score were significant. Males showed stronger correlations between imageability of negative emotional words and both EF and TF scores. High imageability for positive emotional words was associated with lower fear reactivity in females. These findings are discussed with regard to the negative attentional bias theory of anxiety, influence on emotional systems, and gender-specific coping styles. This study provides insight into the cognitive functions involved in mental imagery, semantic competence for mental imagery in relation to fear reactivity, and a potential psycholinguistic instrument for assessing fear tendency.
ARTICLE | doi:10.20944/preprints202012.0726.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: Affective computing; Artificial intelligence; Quantum emotion; Emotion fusion; Social robots
Online: 29 December 2020 (09:59:28 CET)
This study presents a modest attempt to interpret, formulate, and manipulate the emotions of robots within the precepts of quantum mechanics. Our proposed framework encodes emotion information as a superposition state, whilst unitary operators are used to manipulate the transitions between emotion states, which are recovered via appropriate quantum measurement operations. The framework provides essential steps towards exploiting the potency of quantum mechanics in a quantum affective computing paradigm. Further, the emotions of multiple robots in a specified communication scenario are fused using quantum entanglement, thereby reducing the number of qubits required to capture the emotion states of all the robots in the environment; fewer quantum gates are then needed to transform the emotions of all or some of the robots from one state to another. In addition to the mathematical rigour expected of the proposed framework, we present a few simulation-based demonstrations to illustrate its feasibility and effectiveness. This exposition is an important step in the transition of formulations of emotional intelligence to the quantum era.
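The idea of encoding an emotion as a superposition state and manipulating it with a unitary operator can be illustrated with a toy two-level system. The basis labels are illustrative assumptions, and this is a sketch of the general formalism, not the paper's specific framework:

```python
import math

# Toy "quantum emotion": amplitudes (alpha, beta) over two basis
# states, say |negative> and |positive> (hypothetical labels).

def rotate(state, theta):
    """Apply the 2x2 real rotation unitary R(theta) to (alpha, beta)."""
    a, b = state
    c, s = math.cos(theta), math.sin(theta)
    return (c * a - s * b, s * a + c * b)

def probabilities(state):
    """Born rule: measurement probabilities of the two basis states."""
    return tuple(abs(amp) ** 2 for amp in state)
```

Starting from the pure state (1, 0) and rotating by pi/4 yields an equal superposition, so a measurement recovers either basis emotion with probability 1/2; probabilities always sum to 1 because rotations are unitary.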
ARTICLE | doi:10.20944/preprints202007.0498.v2
Subject: Engineering, Automotive Engineering Keywords: emotion appraisal system; contextual emotion appraisal; cognitive robotics; sentential cognitive system; HRI
Online: 4 January 2021 (10:41:22 CET)
Emotion plays a powerful role in humans’ interaction with robots. In order to express more human-friendly emotions, robots need the capability of contextual appraisal, expressing the emotional relevance of various targets in a spatiotemporal situation. In this paper, an emotional appraisal methodology is proposed to cope with such contexts. Specifically, the Ortony, Clore, and Collins model is abstracted and simplified to approximate an emotional appraisal model in the form of a sentence-based cognitive system. Contextual emotion appraisal is modeled by formulating the emotional relationships among multiple targets and the emotional transitions as events occur and time passes. To verify the feasibility of the proposed robotic system, simulations were conducted for scenarios in which the robot interacts emotionally with humans manipulating liked or disliked objects on a table. The experiment demonstrated that the robot’s emotion can change over time in a human-like way, using the proposed formula for emotional valence, which is moderated by the emotional appraisal of occurring events.
REVIEW | doi:10.20944/preprints202205.0359.v1
Subject: Behavioral Sciences, Behavioral Neuroscience Keywords: emotion; conceptualization; interoception; afferent; efferent
Online: 26 May 2022 (08:57:02 CEST)
Empirical and theoretical advances in human neuroscience have led to a reappraisal of the relationships between mind, brain and body, the implications of which are particularly relevant to understanding emotions. Emotions are revealed to be embodied: on the one hand, they arise primarily from internal bodily states controlled by the interoceptive system; on the other, they give rise to physiological reactions and physical actions evoked by the autonomic nervous system. More specifically, when considering the ‘embodied mind’ (i.e., how mental processes are inescapably contextualized by their location within the body), the brain, instead of being the ‘master’ of the body, is increasingly revealed to function as its ‘servant’, with the primary goal of maintaining the body’s homeostatic integrity. This is achieved through the control of interoceptive information concerning the body’s physiological state, initially as ‘simple’ organ-level homeostatic reflexes and then through higher-order coordination across organ systems, allowing ‘allostatic policies’ to predict and maintain the future health of the integrated whole ‘biological self’. In this context, motivational and emotional feelings arise from interoceptive signals that accompany internally-directed physiological responses and externally-directed behaviors. Emotion concepts are thus the categorized embodied outcomes of bidirectional brain-body interactions and may arguably be differentiated into afferent interoceptive processes, i.e., from body to brain, and efferent/autonomic processes, i.e., from brain to body. When comparing emotion words used in the Chinese and English languages, afferent/interoceptive processes seem to dominate the conceptualization of embodied emotions in Chinese, while efferent processes feature more commonly in English.
The presence of distinct conceptual systems relating to emotions may, according to the linguistic relativity hypothesis as well as the theory of constructed emotion, significantly shape the distinct values and ‘national character’ of Chinese and English-speaking cultures. Correspondingly, it is argued that, in the expression of affective traits, Chinese-speaking people are biased towards being more receptive, reflective and adaptive, whereas native English speakers may tend to be more reactive, proactive and interactive. These patterns also encompass functions historically ascribed to bodily organs by traditional Chinese and ancient Greek medicine.
ARTICLE | doi:10.20944/preprints202008.0645.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Speech Emotion Recognition; Emotion AI; Self-Supervised Learning; Transfer Learning; Low Resource Training; wav2vec
Online: 28 August 2020 (15:05:37 CEST)
We propose a novel transfer learning method for speech emotion recognition that obtains promising results when only little training data is available. With as few as 125 examples per emotion class, we were able to reach higher accuracy than a strong baseline trained on 8 times more data. Our method leverages knowledge contained in pre-trained speech representations extracted from models trained on a more general self-supervised task that does not require human annotations, such as the wav2vec model. We provide detailed insights into the benefits of our approach by varying the training data size, which can help labeling teams work more efficiently. We compare performance with other popular methods on the IEMOCAP dataset, a well-benchmarked dataset in the Speech Emotion Recognition (SER) research community. Furthermore, we demonstrate that results can be greatly improved by combining acoustic and linguistic knowledge from transfer learning: we align acoustic pre-trained representations with semantic representations from the BERT model through an attention-based recurrent neural network. Performance improves significantly when combining both modalities and scales with the amount of data. When trained on the full IEMOCAP dataset, we reach a new state of the art of 73.9% unweighted accuracy (UA).
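The attention-based fusion of acoustic (wav2vec-style) and linguistic (BERT-style) representations can be illustrated in miniature: score each modality, softmax the scores into weights, and form the weighted sum of the modality vectors. This is a simplified sketch of the fusion step only, not the paper's recurrent architecture; the scores would normally be learned:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fuse(modalities, scores):
    """Weight same-dimension modality vectors by attention scores
    and sum them into a single fused representation."""
    weights = softmax(scores)
    dim = len(modalities[0])
    return [sum(w * vec[i] for w, vec in zip(weights, modalities))
            for i in range(dim)]
```

With equal scores the modalities contribute equally; a trained attention module would shift the weights towards whichever modality is more informative for the current utterance.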
Subject: Behavioral Sciences, Behavioral Neuroscience Keywords: emotion recognition; EEG signal decoding; brain anticipatory activity; machine learning; emotion related brain activity
Online: 31 December 2019 (10:05:27 CET)
Machine Learning (ML) approaches have been fruitfully applied to several classification problems involving neurophysiological activity. Given the relevance of emotion in human cognition and behaviour, ML has found an important application in emotion identification based on neurophysiological activity. Nonetheless, results in the literature vary widely depending on the neuronal activity measurement, the signal features and the classifier type. The present work aims to provide new methodological insight into ML applied to emotion identification based on electrophysiological brain activity. To this end, we recorded EEG activity while emotional stimuli of high and low arousal (auditory and visual) were presented to a group of healthy participants. Our target signal to classify was the pre-stimulus-onset brain activity. Classification performance of three different classifiers (LDA, SVM and kNN) was compared using both spectral and temporal features. Furthermore, we contrasted the classifiers’ performance with static and dynamic (time-evolving) features. The results show a clear increase in classification accuracy with temporal dynamic features; in particular, the SVM classifier with temporal features showed the best accuracy (63.8%) in classifying high vs. low arousal auditory stimuli.
ARTICLE | doi:10.20944/preprints202212.0387.v1
Subject: Behavioral Sciences, Behavioral Neuroscience Keywords: emotion discrimination; voice; frequency-tagging; EEG
Online: 21 December 2022 (06:07:12 CET)
Successfully engaging in social communication requires efficient processing of subtle socio-communicative cues. Voices convey a wealth of social information, such as gender, identity and the emotional state of the speaker. We tested whether our brain can systematically and automatically differentiate and track a periodic stream of emotional utterances among a series of neutral vocal utterances. We recorded frequency-tagged EEG responses of 20 neurotypical male adults while presenting streams of neutral utterances at 4 Hz base rate, interleaved with emotional utterances every third stimulus, hence at 1.333 Hz oddball frequency. Four emotions (happy, sad, angry, and fear) were presented as different conditions in different streams. To control the impact of low-level acoustic cues, we maximized variability among the stimuli and included a control condition with scrambled utterances. This scrambling preserves low-level acoustic characteristics but ensures that the emotional character is no longer recognizable. Results revealed significant oddball EEG responses for all conditions, indicating that every emotion category can be discriminated from the neutral stimuli, and every emotional oddball response was significantly higher than the response for the scrambled utterances. These findings demonstrate that emotion discrimination is fast, automatic, and is not merely driven by low-level perceptual features.
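The frequency-tagging logic (a 4 Hz base stream in which every third stimulus deviates, producing a 1.333 Hz oddball response) can be illustrated with simulated data. The amplitudes, noise level and SNR measure below are assumptions for the sake of the demonstration, not the study's analysis parameters.

```python
import numpy as np

fs, base, oddball = 256, 4.0, 4.0 / 3            # Hz; every 3rd stimulus deviates
dur = 30                                         # seconds of simulated EEG
t = np.arange(int(fs * dur)) / fs

rng = np.random.default_rng(2)
# Every stimulus evokes a 4 Hz component; every third (emotional) stimulus
# adds an extra response, creating a periodic 1.333 Hz rhythm.
eeg = (np.sin(2 * np.pi * base * t)
       + 0.5 * np.sin(2 * np.pi * oddball * t)   # emotion-discrimination response
       + 0.8 * rng.normal(size=t.size))          # background noise

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def snr_at(f, bins=10):
    """Peak amplitude relative to surrounding frequency bins
    (immediate neighbours excluded)."""
    i = int(np.argmin(np.abs(freqs - f)))
    neighbours = np.r_[spectrum[i - bins:i - 1], spectrum[i + 2:i + bins + 1]]
    return float(spectrum[i] / neighbours.mean())

print(f"SNR at base {base} Hz: {snr_at(base):.1f}")
print(f"SNR at oddball {oddball:.3f} Hz: {snr_at(oddball):.1f}")
```

A distinct spectral peak at exactly the oddball frequency is the signature the study relies on: it can only arise if the brain systematically differentiates the emotional from the neutral utterances.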
ARTICLE | doi:10.20944/preprints202104.0424.v1
Subject: Behavioral Sciences, Applied Psychology Keywords: culture; facial expressions; emotion; posed; spontaneous
Online: 15 April 2021 (16:45:31 CEST)
There is a growing consensus that culture influences the perception of facial expressions of emotion. However, little is known about whether and how culture shapes the production of emotional facial expressions, and even less so about whether culture differentially shapes the production of posed versus spontaneous expressions. Drawing on prior work on cultural differences in emotional communication, we tested the prediction that people from the Netherlands (a historically heterogeneous culture where people are prone to low-context communication) produce facial expressions that are more distinct across emotions compared to people from China (a historically homogeneous culture where people are prone to high-context communication). Furthermore, we examined whether the degree of distinctiveness varies across posed and spontaneous expressions. Dutch and Chinese participants were instructed to either pose facial expressions of anger and disgust, or to share autobiographical events that elicited spontaneous expressions of anger or disgust. Using the complementary approaches of supervised machine learning and information-theoretic analysis of facial muscle movements, we show that posed and spontaneous facial expressions of anger and disgust were more distinct when produced by Dutch compared to Chinese participants. These findings shed new light on the role of culture in emotional communication by demonstrating, for the first time, effects on the distinctiveness of production of facial expressions.
REVIEW | doi:10.20944/preprints201707.0070.v3
Subject: Behavioral Sciences, Social Psychology Keywords: guilt; shame; emotion; functionalist; social-adaptive; test of self-conscious affect; TOSCA
Online: 7 December 2017 (05:50:39 CET)
Within the field of guilt and shame, two competing perspectives have been advanced. The first, the social-adaptive perspective, proposes that guilt is an inherently adaptive emotion and shame is an inherently maladaptive emotion; thus, those interested in moral character development and psychopathology should work to increase an individual’s guilt-proneness and decrease an individual’s shame-proneness. The functionalist perspective, in contrast, argues that both guilt and shame can serve a person adaptively or maladaptively—depending on the situational appropriateness, duration, intensity, and so forth. This paper reviews the research conducted supporting both positions, critiques some issues with the most widely used guilt- and shame-proneness measure in the social-adaptive research (the TOSCA), and discusses the differences in results found when assessing guilt and shame at the state versus trait level. The conclusion drawn is that although there is broad support for the functionalist perspective across a wide variety of state and trait guilt/shame studies, the functionalist perspective does not yet have the wealth of data supporting it that has been generated by the social-adaptive perspective using the TOSCA. Thus, before a dominant perspective can be identified, researchers need to (1) do more research assessing how the social-adaptive perspective compares to the functionalist perspective at the state level, and (2) do more trait research within the functionalist perspective to compare functionalist guilt- and shame-proneness measures with the TOSCA.
ARTICLE | doi:10.20944/preprints202207.0084.v1
Subject: Arts & Humanities, Philosophy Keywords: philosophy of emotion; science of emotion; meta-semantic pluralism; embodied cognition; mind; mind-body problem; perception; cognition; emotion; cultural evolution; dual-inheritance theory; evolutionary norm psychology
Online: 6 July 2022 (03:55:00 CEST)
In this paper, I give readers an idea of what some scholars are interested in, what I found interesting, and what may be of future interest in the philosophy of emotion. I begin with a brief overview of the general topics of interest in the philosophy of emotion. I then discuss what I believe to be some of the most interesting topics in the contemporary discourse, including questions about how philosophy can inform the science of emotion, conceptions of the mind and the mind-body problem, concerns about perception, cognition, and emotion, along with questions about the place of 4E approaches and meta-semantic pluralist approaches in the embodied cognitive tradition. Finally, I discuss the emerging field of cultural evolution and the import of dual-inheritance theory in this emerging field, and I propose a possible way to integrate the frameworks of dual-inheritance theory and meta-semantic pluralism to demonstrate at least one way in which the philosophy of emotion can contribute to the emerging field of cultural evolution.
REVIEW | doi:10.20944/preprints202209.0006.v1
Subject: Behavioral Sciences, Developmental Psychology Keywords: well-being; evolution; emotion; learning; language; motivation
Online: 1 September 2022 (04:29:47 CEST)
Evolutionary perspectives have generated many questions and some answers in the study of human health and disease. The field of evolutionary medicine and the related analytic frameworks of evolutionary psychiatry and evolutionary psychology have extended and expanded the way health disorders are viewed by searching for why humans, as a species, are vulnerable to certain pathological conditions. The search is organized into four domains that apply proximate and evolutionary explanations to human traits and developmental sequences. This framework opens inquiry into the ontogeny, phylogeny, mechanism, and adaptive significance of human health conditions. In this paper I argue that evolutionary medicine seems to parallel biomedicine in its primarily pathogenic focus. That is, conditions of pain, suffering, and disorder have received the most attention. Some work has used the architecture of evolutionary medicine to take a salutogenic approach, evaluating the proximate and evolutionary explanations of human well-being. I propose that an evolutionary understanding of human well-being requires a survey of emotions and their relationship with neurobiology, language, and culture. My anthropology-based, multidisciplinary review of biopsychosocial processes reveals the way evolution has shaped modern human understanding of well-being through sociolinguistic learning processes, and thereby our individual experiences of well-being. These insights have the power to contextualize human suffering and flourishing as we progress toward the goal of attenuating the former and expanding the latter.
ARTICLE | doi:10.20944/preprints202006.0271.v1
Subject: Behavioral Sciences, Behavioral Neuroscience Keywords: ERPs; Autism Spectrum Quotient; face perception; emotion
Online: 21 June 2020 (14:04:53 CEST)
This study explored the electrocortical correlates of conscious and nonconscious perception of emotionally laden faces in neurotypical adult women with varying levels of autistic-like traits (Autism Spectrum Quotient - AQ). Event-related potentials (ERPs) were recorded during the viewing of backward-masked images of happy, neutral, and sad faces presented either below (16 ms - subliminal) or above the level of visual conscious awareness (167 ms - supraliminal). Sad compared to happy faces elicited larger frontal-central N1, N2, and occipital P3 waves. We observed larger N1 amplitudes to sad faces than to happy and neutral faces in High-AQ (but not Low-AQ) scorers. Additionally, High-AQ scorers had a relatively larger P3 at the occipital region to sad faces. Regardless of the AQ score, subliminally perceived emotional faces elicited shorter N1, N2, and P3 latencies than supraliminal faces. Happy and sad faces had shorter N170 latency in the supraliminal than the subliminal condition. High-AQ participants had a longer N1 latency over the occipital region than Low-AQ ones. In Low-AQ individuals (but not in High-AQ ones), emotional recognition with female faces produced a longer N170 latency than with male faces. N4 latency was shorter to female faces than male faces. These findings are discussed in view of their clinical implications and extension to autism.
ARTICLE | doi:10.20944/preprints201808.0312.v1
Subject: Engineering, Other Keywords: affordance; empathy; HRI; emotion; multimodal; allocentric; libraries
Online: 17 August 2018 (13:45:09 CEST)
Affordances are an important concept in cognition which can be applied to robots in order to achieve successful human-robot interaction (HRI). In this paper we explore and discuss the idea of emotional affordances and propose a viable model for implementation into HRI. We consider “two-way” affordances: a perceived object triggering an emotion, and a perceived human emotional expression triggering an action. In order to make the implementation generic, the proposed model includes a library that can be customised depending on the specific robot and application scenario. We present the AAA (Affordance-Appraisal-Arousal) model, which incorporates Plutchik’s Wheel of Emotions, and show some examples of simulation and possible scenarios.
ARTICLE | doi:10.20944/preprints202105.0303.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Emotion detection; CNN; VGG16; Education; Transfer learning; Engagement
Online: 13 May 2021 (13:59:44 CEST)
There is a crucial need for advancement in the online educational system due to the unexpected, forced migration of classroom activities to a fully remote format caused by the coronavirus pandemic. Moreover, online education is the future, and its infrastructure needs to be improved for an effective teaching-learning process. One of the major concerns with the current video call-based online classroom system is student engagement analysis. Teachers are often concerned about whether the students can perceive the teachings in a novel format. Such analysis was done involuntarily in the offline mode but is difficult in an online environment. This research presents an autonomous system for analyzing students' engagement in class by detecting the emotions they exhibit. This is done by capturing the video feed of the students and passing the detected faces to an emotion detection model. The emotion detection model in the proposed architecture was designed by fine-tuning the VGG16 pre-trained image classifier. Lastly, the average student engagement index is calculated. The system achieved considerable performance, supporting its reliable real-time use and indicating future scope for this research.
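The final step, turning per-frame emotion labels into an average engagement index, might look like the sketch below. The emotion-to-engagement weights are entirely hypothetical (the abstract does not specify the mapping), as are the function names.

```python
# Hypothetical mapping from detected facial emotions to an engagement score;
# the weights below are illustrative, not taken from the paper.
ENGAGEMENT_WEIGHT = {
    "happy": 0.9, "surprise": 0.8, "neutral": 0.6,
    "sad": 0.3, "angry": 0.25, "fear": 0.3, "disgust": 0.2,
}

def engagement_index(frame_emotions):
    """Average engagement over one student's detected frames."""
    if not frame_emotions:
        return 0.0
    return sum(ENGAGEMENT_WEIGHT[e] for e in frame_emotions) / len(frame_emotions)

def class_engagement(per_student):
    """Average engagement index across all students in the session."""
    scores = [engagement_index(frames) for frames in per_student.values()]
    return sum(scores) / len(scores)

session = {
    "student_a": ["happy", "neutral", "neutral", "surprise"],
    "student_b": ["sad", "neutral", "sad"],
}
print(f"class engagement index: {class_engagement(session):.2f}")
```

Averaging per student first, then across students, keeps a student with many detected frames from dominating the class-level index.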
ARTICLE | doi:10.20944/preprints202012.0139.v3
Subject: Social Sciences, Accounting Keywords: Early literacy pedagogy; neuroscience; predictive processing; perception; emotion
Online: 6 April 2021 (13:26:50 CEST)
Significant challenges exist globally regarding literacy teaching and learning, particularly in poor socio-economic settings in countries of the Global South. In this paper we argue that to address these challenges, major features of how the brain works that are currently ignored in the educational literature should be taken into account. First, perception is an active process based in detection of errors in hierarchical predictions of sensory data and action outcomes. Reading is a particular case. Second, emotions play a key role in underlying cognitive functioning. Innate affective systems underlie and shape all brain functioning, including oral and written forms of language and sign. Third, there is not the fundamental difference between listening/speaking and reading/writing often alleged on the basis of evolutionary arguments. Both are socio-cultural practices driven and learnt by the communication imperative of the social brain. Fourth, like listening, reading is not a linear, bottom-up process. Both are non-linear contextually shaped psycho-social processes of understanding, shaped by current knowledge and cultural contexts and practices. Reductionist neuroscience studies which focus on decontextualized parts of reading cannot access all the relevant processes. An integrated view of brain function reflecting this non-linear nature implies that an ongoing focus on personal meaning and understanding provides positive conditions for all aspects of literacy learning. Assessment of literacy teaching at all its stages should include indicators that take into account these foundational features relating reading and writing to neuroscience.
ARTICLE | doi:10.20944/preprints201911.0174.v1
Subject: Behavioral Sciences, Behavioral Neuroscience Keywords: heart rate variability; dyads; physiological synchrony; relationship; emotion
Online: 15 November 2019 (06:09:04 CET)
The mere co-presence of another person synchronizes physiological signals, but no study has systematically investigated effects of type of emotional context and type of relationship in eliciting dyadic physiological synchrony. In this study, we investigated the synchrony of pairs of strangers, companions, and romantic partners while watching a series of video clips designed to elicit different emotions. Maximal cross-correlation of heart rate variability (HRV) was used to quantify dyadic synchrony. The findings suggest that an existing social relationship might reduce the predisposition to conform one's autonomic responses to a friend or romantic partner during social situations that do not require direct interaction.
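Maximal cross-correlation of two HRV series, the synchrony measure named in the abstract, can be sketched as follows; the lag range, series length and simulated "partners" are illustrative assumptions, not the study's recording parameters.

```python
import numpy as np

def max_cross_correlation(a, b, max_lag=10):
    """Maximal normalized cross-correlation between two HRV series,
    searched over lags from -max_lag to +max_lag samples."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:], b[:len(b) - lag]
        else:
            x, y = a[:len(a) + lag], b[-lag:]
        best = max(best, float(np.mean(x * y)))
    return best

rng = np.random.default_rng(3)
shared = rng.normal(size=120)                          # common emotional driver
hrv_1 = shared + 0.3 * rng.normal(size=120)            # partner 1
hrv_2 = np.roll(shared, 2) + 0.3 * rng.normal(size=120)  # partner 2, lagged by 2
hrv_3 = rng.normal(size=120)                           # unrelated stranger

print(f"synchronized pair: {max_cross_correlation(hrv_1, hrv_2):.2f}")
print(f"unrelated pair:    {max_cross_correlation(hrv_1, hrv_3):.2f}")
```

Searching over lags matters because two people's autonomic responses can track each other with a delay; taking the maximum over a small lag window captures such shifted synchrony.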
ARTICLE | doi:10.20944/preprints201706.0094.v3
Subject: Behavioral Sciences, Cognitive & Experimental Psychology Keywords: cue-approach; decision making; behavioral change; preferences; emotion
Online: 23 October 2017 (03:42:21 CEST)
Recent findings show that preferences for food items can be modified without external reinforcement using the cue-approach task. In the task, the mere association of food item images with a neutral auditory cue and a speeded button press resulted in enhanced preferences for the associated stimuli. Here, in a series of 10 independent samples with a total of 255 participants, we show that we can enhance preferences using this non-reinforced method for faces, fractals and affective images as well as snack foods, using auditory, visual and even aversive cues. This change was highly durable in follow-up sessions performed one to six months after training. Preferences were successfully enhanced for all conditions except for negative valence items. These findings promote our understanding of non-reinforced change, suggest a boundary condition for the effect and lay the foundation for the development of novel applications.
ARTICLE | doi:10.20944/preprints202210.0424.v1
Subject: Arts & Humanities, Linguistics Keywords: emotional speech processing; communication channel; emotion category; task type
Online: 27 October 2022 (08:04:59 CEST)
How language mediates emotional perception and experience is poorly understood. The present event-related potential (ERP) study examined the explicit and implicit processing of emotional speech to differentiate the relative influences of communication channel, emotion category and task type in the prosodic salience effect. Thirty participants (15 women) were presented with spoken words denoting happiness, sadness and neutrality in either the prosodic or semantic channel. They were asked to judge the emotional content (explicit task) and speakers’ gender (implicit task) of the stimuli. Results indicated that emotional prosody (relative to semantics) triggered larger N100 and P200 amplitudes with greater delta, theta and alpha inter-trial phase coherence (ITPC) values in the corresponding early time windows, and continued to produce larger LPC amplitudes and faster responses during late stages of higher-order cognitive processing. The relative salience of prosody and semantics was modulated by emotion and task, though such modulatory effects varied across different processing stages. The prosodic salience effect was reduced for sadness processing and in the implicit task during early auditory processing and decision-making, but reduced for happiness processing in the explicit task during conscious emotion processing. Additionally, across-trial synchronization of the delta, theta and alpha bands predicted the ERP components, with higher ITPC values significantly associated with stronger N100, P200 and LPC enhancement. These findings reveal the neurocognitive dynamics of emotional speech processing, with prosodic salience tied to stage-dependent emotion- and task-specific effects, offering insights for research reconciling language and emotion processing from cross-linguistic/cultural and clinical perspectives.
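Inter-trial phase coherence (ITPC), the across-trial synchronization measure used here, is the magnitude of the mean unit phase vector across trials at a given frequency: near 1 when trials are phase-locked, near 0 when phases are random. A minimal sketch with simulated rather than real EEG epochs (the single-bin FFT phase estimate is a simplifying assumption):

```python
import numpy as np

def itpc(trials, fs, freq):
    """Inter-trial phase coherence at one frequency:
    the magnitude of the mean unit phase vector across trials."""
    n = trials.shape[1]
    k = int(round(freq * n / fs))                 # FFT bin closest to `freq`
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, k])
    return float(np.abs(np.mean(np.exp(1j * phases))))

rng = np.random.default_rng(4)
fs, n, n_trials = 500, 500, 40                    # forty 1-second epochs
t = np.arange(n) / fs

# Phase-locked condition: a 6 Hz (theta) response with a fixed phase.
locked = np.stack([np.sin(2 * np.pi * 6 * t) + rng.normal(size=n)
                   for _ in range(n_trials)])
# Non-locked condition: the same rhythm with a random phase on each trial.
jittered = np.stack([np.sin(2 * np.pi * 6 * t + rng.uniform(0, 2 * np.pi))
                     + rng.normal(size=n) for _ in range(n_trials)])

print(f"ITPC, phase-locked: {itpc(locked, fs, 6):.2f}")
print(f"ITPC, random phase: {itpc(jittered, fs, 6):.2f}")
```

Because ITPC discards amplitude and keeps only phase consistency, it can detect stimulus-locked oscillatory activity even when single-trial amplitudes vary.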
ARTICLE | doi:10.20944/preprints202111.0379.v1
Subject: Behavioral Sciences, Cognitive & Experimental Psychology Keywords: core affect; emotion; semantics; process cycle; quantum cognition; qubit
Online: 22 November 2021 (11:04:58 CET)
The paper describes a model of human affect based on a quantum theory of semantics. The model considers emotion as a subjective representation of a behavioral context relative to a basis binary choice, organized by a cyclical process structure and an orthogonal evaluation axis. The resulting spherical space, generalizing well-known circumplex models, accommodates basic emotions in specific angular domains. The predicted process-semantic structure of affect is observed in the word2vec data, as well as in previously obtained spaces of emotion concepts. The established quantum-theoretic structure of affective space connects emotion science with quantum models of cognition and behavior, opening a perspective for synergetic progress in these fields.
ARTICLE | doi:10.20944/preprints202105.0279.v1
Subject: Engineering, Automotive Engineering Keywords: self-disclosure; social robots; diary; emotion theory; relevance; valence
Online: 13 May 2021 (10:00:08 CEST)
Social robots may become an innovative means to improve the well-being of individuals. Earlier research showed that people easily self-disclose to a social robot, even in cases where that was unintended by the designers. We report on an experiment of self-disclosing in a diary journal or to a social robot after negative mood induction. The off-the-shelf robot was complemented with our in-house developed AI chatbot and could talk about ‘hot topics’ after having been trained with thousands of entries on a complaint website. We found that people who felt strong negativity after being exposed to shocking video footage benefited the most from talking to our robot rather than writing down their feelings. For people less affected by the treatment, a confidential robot chat and writing a journal page did not differ significantly. We discuss emotion theory in relation to robotics and possibilities for an application in design (the emoji-enriched ‘talking stress ball’). We also underline the importance of otherwise disregarded outliers in a data set of a therapeutic nature.
ARTICLE | doi:10.20944/preprints202011.0369.v1
Online: 13 November 2020 (10:05:52 CET)
Social media giants like Facebook are struggling to keep up with fake news, given that disinformation diffuses at lightning speed. For example, the COVID-19 (i.e. coronavirus) pandemic is testing citizens' ability to distinguish real news from falsified facts (i.e. disinformation). Cyber-criminals take advantage of this inability to cope with fake news diffusion on social media platforms. Fake news is crafted as a means to manipulate readers into performing various malicious IT activities. However, no previous study has investigated the strategies used to create fake news on social media. Therefore, we analysed five data sets containing online news articles (i.e. both fake and legitimate news) to investigate the strategies used to craft fake news on social media platforms. Our findings reveal a threat model for understanding the strategies used to craft fake news that is highly likely to diffuse on social media platforms.
REVIEW | doi:10.20944/preprints202010.0065.v1
Subject: Biology, Anatomy & Morphology Keywords: Altruism; Basal ganglia; Dopamine; Emotion; Evolution; Plasticity; Social bonding
Online: 5 October 2020 (10:54:09 CEST)
Over millions of years, under the pressure of natural selection, hominins acquired vocal learning, music, language, and intense cooperation, thanks in part to the efficacy of music in enhancing sociality. Thus, early in human evolution music became part of human life, a relevant activity which required sophisticated perceptual and motor skills. It contributed to developing cultures and history and to social bonding, and from the beginning of life it strengthens the mother-baby relation, even within the mother’s womb. Music exists in all known human cultures, although it varies in rhythmic and melodic complexity. It is an art made of sounds capable of arousing emotions; it evokes memories, engages multiple cognitive functions, promotes attention and concentration, and stimulates the imagination, creativity, and harmony of movement. Music and language share the same complex neural network. Music changes the chemistry of the brain, activating the reward and prosocial systems and altruism, which allows its use in therapy. This review explores "what" music is and illustrates the neural circuits that allow the production of music and language, as well as those that transduce the sounds perceived by the ear, localize and archive them, and allow them to be recalled. Interestingly, songbirds share many commonalities with human music, including common neural pathways that shape vocal learning and how they make sounds.
REVIEW | doi:10.20944/preprints202007.0604.v1
Subject: Behavioral Sciences, Other Keywords: Parkinson's disease; Emotion; Facial Masking; Dysarthria; Stigma; Dehumanization; Loneliness
Online: 25 July 2020 (11:16:57 CEST)
Parkinson’s disease (PD) is typically well-recognized by its characteristic motor symptoms (e.g., bradykinesia, rigidity, and tremor). The cognitive symptoms of PD are increasingly being acknowledged by clinicians and researchers alike. However, PD also involves a host of emotional and communicative changes which can cause major disruptions to social functioning. These include problems producing emotional facial expressions (i.e., facial masking) and emotional speech (i.e., dysarthria), as well as difficulties recognizing the verbal and non-verbal emotional cues of others. These social symptoms of PD can result in severe negative social consequences, including stigma, dehumanization, and loneliness, which might affect quality of life to an even greater extent than more well-recognized motor or cognitive symptoms. It is therefore imperative that researchers and clinicians become aware of these potential social symptoms and their negative effects, in order to properly investigate and manage the socioemotional aspects of PD. The present review provides an examination of the current research surrounding some of the most common social symptoms of PD and their related social consequences, and argues that proactively and adequately addressing these issues might improve disease outcomes.
ARTICLE | doi:10.20944/preprints202112.0364.v1
Subject: Social Sciences, Education Studies Keywords: Teachers; Mindfulness; Emotion regulation; COVID-19; Work engagement; Emotional distress
Online: 22 December 2021 (12:28:23 CET)
The COVID-19 pandemic has dramatically affected the mental health and work environment of many labor sectors, including the educational sector. Our primary aim was to investigate preschool teachers’ psychological distress and work engagement during the early stages of the COVID-19 outbreak, while examining the possible protective role of participating in a mindfulness-based intervention (C2C-IT) and of emotion regulation. The prevalence of emotional distress, work engagement and COVID-19 concerns was evaluated among 165 preschool teachers in the early stages of the COVID-19 outbreak in Israel, using self-report questionnaires. Findings show that preschool teachers experienced increased emotional distress. Teachers who had participated in the C2C-IT intervention six months before the pandemic outbreak (N = 41) reported lower emotional distress, higher use of adaptive emotion regulation strategies and higher work engagement, compared to their counterparts who had not participated in the mindfulness training (N = 124). Emotion regulation strategies mediated the link between participating in the C2C-IT intervention and both emotional distress and work engagement. Teaching is a highly demanding occupation, especially during a pandemic, so it is important to invest resources in empowering this population. According to the findings of the current study, implementation of a mindfulness-based intervention during the school year may benefit teachers’ well-being, even during stressful events such as the COVID-19 pandemic.
ARTICLE | doi:10.20944/preprints201908.0228.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: EEG; luminance; brightness; IAPS; STFT; feature extraction; visual processing; emotion
Online: 22 August 2019 (03:43:25 CEST)
The aim of this study was to examine the effect of brightness, a perceptual property of visual stimuli, on brain responses obtained during visual processing of those stimuli. For this purpose, responses of the brain to changes in brightness were explored comparatively using different emotional images (pleasant, unpleasant and neutral) with different luminance levels. Electroencephalography recordings from 12 electrode sites of 31 healthy participants were used. The power spectra obtained from the analysis of the recordings using the short-time Fourier transform were analyzed, and a statistical analysis was performed on features extracted from these power spectra. Statistical findings obtained from the electrophysiological data were compared with those obtained from the behavioral data. The results showed that the brightness of visual stimuli affected the power of brain responses depending on frequency, time and location. According to the statistically verified findings, the distinctive effect of brightness occurred in the parietal and occipital regions for all three types of stimuli. Accordingly, an increase in the brightness of pleasant and neutral images increased the average power of responses in the parietal and occipital regions, whereas an increase in the brightness of unpleasant images decreased the average power of responses in these regions. However, an increase in brightness for all three types of stimuli reduced the average power of frontal and central region responses (except for the 100-300 ms time window for unpleasant stimuli). The statistical results obtained for unpleasant images were found to be in accordance with the behavioral data. The results also revealed that the brightness of visual stimuli could be represented by changes in the activity power of the brain cortex.
The main contribution of this research was to comprehensively examine the brightness effect on brain activity for images with different emotional content, across different frequency bands, at different time windows of visual processing, and for different brain regions. The findings emphasize that the brightness of visual stimuli should be viewed as an important parameter in studies using emotional image techniques such as image classification, emotion evaluation and neuro-marketing.
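The STFT-based band-power features described above can be sketched as follows; the window length, hop size, band choice and simulated signal are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def stft_band_power(signal, fs, win=128, hop=64, band=(8, 13)):
    """Hand-rolled STFT: Hann-windowed FFTs over sliding frames, then
    the average spectral power inside one frequency band per frame."""
    window = np.hanning(win)
    freqs = np.fft.rfftfreq(win, 1 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    powers = []
    for start in range(0, len(signal) - win + 1, hop):
        frame = signal[start:start + win] * window
        spec = np.abs(np.fft.rfft(frame)) ** 2
        powers.append(spec[in_band].mean())
    return np.array(powers)                       # one feature per time window

rng = np.random.default_rng(5)
fs = 256
t = np.arange(2 * fs) / fs                        # 2 s of simulated EEG
# Alpha-band (10 Hz) activity appears only in the second half of the epoch.
eeg = 0.3 * rng.normal(size=t.size)
eeg[t >= 1.0] += np.sin(2 * np.pi * 10 * t[t >= 1.0])

alpha = stft_band_power(eeg, fs, band=(8, 13))
early, late = alpha[: len(alpha) // 2].mean(), alpha[len(alpha) // 2:].mean()
print(f"alpha power, first half: {early:.2f}  second half: {late:.2f}")
```

Because the STFT keeps a time axis, such features can localize a power change jointly in frequency, time and (with multiple channels) scalp location, which is exactly the kind of dependence the study reports.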
ARTICLE | doi:10.20944/preprints202008.0251.v1
Subject: Life Sciences, Other Keywords: mental health assessment; vitality; mental activity; voice index; emotion analysis; noninvasiveness
Online: 11 August 2020 (05:33:57 CEST)
In many developed countries, mental health disorders have become problematic, and the economic loss due to treatment costs and interference with work is immeasurable. Therefore, we developed a method to assess individuals’ mental health using emotional components contained in their voice. We propose two indices of mental health: vitality, a short-term index, and mental activity, a long-term index capturing trends in vitality. To evaluate our method, we used the voices of healthy individuals (n = 14) and patients with major depression (n = 30). The patients were also assessed by specialists using the Hamilton Rating Scale for Depression (HAM-D). A significant negative correlation existed between the vitality extracted from the voices and HAM-D scores (r = -0.33, p < .05). We could discriminate the voice data of healthy individuals and patients with depression with high accuracy using the vitality index (p = .0085, area under the curve = 0.76). Further, we developed a method to estimate stress through emotion rather than analyzing stress directly from voice data. By daily monitoring of vitality using smartphones, we can encourage hospital visits before people become depressed or during the early stages of depression, to prevent adverse consequences of depression.
ARTICLE | doi:10.20944/preprints201808.0522.v1
Subject: Behavioral Sciences, Cognitive & Experimental Psychology Keywords: speech-to-song illusion, auditory illusion, perception, pace, emotion, language tonality
Online: 30 August 2018 (10:37:13 CEST)
The speech-to-song illusion is an auditory illusion in which the repetition of part of a sentence shifts listeners’ perception from speech-like to song-like. The study aims to examine how pace, emotion, and language tonality affect people’s experience of the speech-to-song illusion. It uses a between-subject (pace: fast, normal, vs. slow) and within-subject (emotion: positive, negative, vs. neutral; language tonality: tonal vs. non-tonal language) design. Sixty Hong Kong college students were randomly assigned to one of the three pace conditions. They listened to 12 audio stimuli, each with repetitions of a short excerpt, and rated on a five-point Likert scale whether the presented phrase sounded like speech or like a song. Paired-sample t-tests and repeated-measures ANOVAs were used to analyze the data. The findings reveal that a faster speech pace can strengthen the speech-to-song illusion. Neither emotion nor language tonality showed a statistically significant influence on the illusion. This study suggests that the perception of sound lies on a continuum, and it furthers the understanding of song production, in which speech can turn into music through repeated phrases played at a relatively fast pace.
ARTICLE | doi:10.20944/preprints202301.0403.v1
Subject: Behavioral Sciences, Cognitive & Experimental Psychology Keywords: visual perception; emotion; emoji; emoticon; sex differences; anger; fear; emotional communication; texting
Online: 23 January 2023 (08:43:06 CET)
Emojis are colorful ideograms resembling stylized faces, commonly used for expressing emotions in instant messaging, on social network sites and in email communication. Notwithstanding their increasing and pervasive use in electronic communication, they have not been much investigated in terms of their psychological properties and communicative efficacy. Here we presented 112 different human facial expressions and emojis (expressing neutrality, joy, surprise, sadness, anger, fear and disgust) to a group of 96 female and male university students engaged in recognizing their emotional meaning. Both analysis of variance and Wilcoxon tests showed that men were significantly better than women at recognizing emojis (especially negative ones), while women were better than men at recognizing human facial expressions. Quite interestingly, men were better at recognizing emojis than human facial expressions per se. These findings are in line with more recent evidence suggesting that men may be more competent at and inclined towards using emojis to express their emotions in messaging (especially sarcasm, tease and love) than previously thought. Finally, the data indicate that emojis are less ambiguous than facial expressions (except for the neutral and surprise emotions), possibly because of their limited number of fine-grained details and their lack of morphological features conveying facial identity.
ARTICLE | doi:10.20944/preprints201912.0292.v1
Subject: Social Sciences, Geography Keywords: cultural differences; spatial interaction patterns; emotion analysis; Zhihu topic data; cultural geography
Online: 22 December 2019 (10:05:48 CET)
As an important research topic in cultural geography, the exploration and analysis of the laws of regional cultural differences has great significance for the discovery of distinctive cultures, the protection of regional cultures and an in-depth understanding of cultural differences. In recent years, with the "spatial turn" of sociology, scholars have paid more and more attention to the implicit spatial information in social media data and the various social phenomena and laws it reflects. One important aspect is to grasp social cultural phenomena and their spatial distribution characteristics through text. Using machine learning methods such as popular natural language processing (NLP) techniques, this paper not only extracts hotspot cultural elements from text data but also accurately detects the spatial interaction patterns of some specific cultures and the characteristics of emotions towards non-native cultures. Taking the 6,128 answers to the question “what are the differences between South and North China that you never know” on the Zhihu Q&A Platform as an example, with the help of NLP, this paper explores the cultural differences between South and North China in people’s minds. It probes into people’s feelings and cognition of the cultural differences between South and North China from three aspects: spatial interaction patterns of hotspot cultural elements, components of hotspot culture, and emotional characteristics under the influence of the cultural differences between North and South. The study reveals that (1) people from North and South China have great differences in recognizing each other’s culture; (2) food culture is the most popular among the many cultural differences; and (3) people tend to show a negative attitude towards food cultures different from their own. All these findings shed light on the understanding of regional cultural differences and on addressing cultural conflicts. In addition, this paper also provides an effective solution for studying the topic from a macro perspective, which has been difficult for new cultural geography.
ARTICLE | doi:10.20944/preprints201704.0145.v2
Subject: Medicine & Pharmacology, Psychiatry & Mental Health Studies Keywords: emotion and will; music therapy; five phases; five phases music therapy; psychology
Online: 29 November 2017 (09:54:48 CET)
Music therapy has served as complementary and alternative medicine for various neurological disorders. Five Phases Music Therapy (FPMT) employs the theory of the five phases and five music scales or tones (宫 Gong (do), 商 Shang (re), 角 Jue (mi), 徵 Zhi (so) and 羽 Yu (la)) to analyze and treat mind-body illness. In Chinese Medicine (CM), the five music scales are used to connect the human body and the universe, interpret personality and constitution, and analyze the influences of climatic changes on health. FPMT has a self-contained theory and a routine of practical application. A large number of clinical and fundamental reports are available and clinical benefits have been obtained. However, more systematic clinical research, especially evidence-based randomized controlled trials, must be performed to validate and optimize its routines, and its biological and neurological mechanisms must be further explored. It is reasonable to believe that this effective music therapy will attract more attention from the world outside China with the introduction of FPMT.
ARTICLE | doi:10.20944/preprints202012.0560.v1
Subject: Medicine & Pharmacology, Nursing & Health Studies Keywords: hemodialysis; indirect forest therapy; emotion; fatigue; stress; heart rate variability; natural killer cells
Online: 22 December 2020 (12:42:59 CET)
(1) Background: Most hemodialysis patients experience physiological and psychological stress. Exposure to nature has previously been reported to reduce measures of psychological and physiological stress and to improve immune function. This study aimed to investigate the psychological and physiological effects of integrated indirect forest therapy on chronic renal failure patients undergoing hemodialysis. (2) Methods: As a quasi-experiment, this study employed a nonequivalent control group, repeated measurements, and a non-synchronized design. A total of 54 participants were included: 26 and 28 in the experimental and control groups, respectively. During hemodialysis, five types of forest therapy stimuli (visual, auditory, olfactory, tactile, and motor) were applied three times per week for 4 weeks in 15-minute sessions. (3) Results: Positive, but not negative, emotion measures differed between the groups after the intervention. Fatigue and physiological stress levels were significantly reduced in the experimental group, whereas no significant difference was found between the groups on measures of psychological stress. Activation of both the parasympathetic and sympathetic nervous systems was similar in both groups, as was the number of natural killer cells. (4) Conclusion: Integrated indirect forest therapy may help increase positive emotions and reduce fatigue and stress levels during hemodialysis in patients with chronic renal failure.
ARTICLE | doi:10.20944/preprints202301.0156.v1
Subject: Engineering, Biomedical & Chemical Engineering Keywords: Online Learning; Emotion Classification; AMIGOS dataset; Wearable-EEG (Muse and Neurosity Crown); Psychopy Experiments
Online: 9 January 2023 (09:09:08 CET)
Emotions are indicators of affective states and play a significant role in human daily life, behavior, and interactions. Giving emotional intelligence to machines could, for instance, facilitate early detection and prediction of (mental) diseases and symptoms. Electroencephalography (EEG)-based emotion recognition is widely applied because it measures electrical correlates directly from the brain rather than indirectly through other physiological responses initiated by the brain. The recent development of non-invasive and portable EEG sensors makes it possible to use them in real-time applications. Therefore, this paper presents a real-time emotion classification pipeline, which trains different binary classifiers for the dimensions of Valence and Arousal from an incoming EEG data stream. After achieving a 23.9% (Arousal) and 25.8% (Valence) higher f1-score on the state-of-the-art AMIGOS dataset, this pipeline was applied to a dataset obtained with an emotion elicitation experimental framework developed within the scope of this thesis. Following two different protocols, 15 participants were recorded using two different consumer-grade EEG devices while watching 16 short emotional videos in a controlled environment. For an immediate-label setting, mean f1-scores of 87% and 82% were achieved for Arousal and Valence, respectively. In a live scenario, while continuously being updated on the incoming data stream with delayed labels, the pipeline proved fast enough to produce predictions in real time. However, the significant discrepancy in classification scores compared with the readily available labels motivates future work incorporating more data with frequent delayed labels in live settings.
ARTICLE | doi:10.20944/preprints202205.0331.v1
Subject: Behavioral Sciences, Clinical Psychology Keywords: Robot-based activities; hospitalized children; psychological health; well-being; CoderBot; positive emotion; single-setting
Online: 24 May 2022 (10:16:05 CEST)
Being hospitalized is a threatening and stressful experience for many children. From a psychological point of view, children may experience increased feelings of anxiety and fear that can negatively interfere with behavioral, cognitive, and emotional outcomes. To limit these impacts on children's well-being and mental health, interventions that could contribute to protecting the emotional domain of hospitalized children are welcomed. The present research reported a single-setting case study intervention to evaluate the impact of educational play-based activity with a CoderBot robot in a pediatric short-term recovery ward (N=61). The methodology included multiple sources of data (i.e., children, parents, nurses), observations on the field, and a sequential (quantitative-qualitative) mixed-method approach to data analysis. Results supported the idea that robot-based activities were associated with increased participant well-being (particularly positive emotions). The conclusions of this pilot study discuss the strengths, limitations, and further developments of using robots with hospitalized children.
ARTICLE | doi:10.20944/preprints202204.0201.v1
Subject: Medicine & Pharmacology, Nursing & Health Studies Keywords: COVID-19; emotion-focused coping; infection control practices; perceived stress; relationship-focused coping
Online: 21 April 2022 (10:06:24 CEST)
Background: COVID-19 has placed tremendous pressure on the global public health system and has changed daily life. Aim: To examine the relationships between perceived threat, perceived stress, coping responses and infection control practices towards the COVID-19 pandemic among university students in China. Methods: Using a cross-sectional survey, 4,392 students were recruited from six universities in two regions of China. Data were collected via an online platform using self-reported questionnaires, and hierarchical multiple regression analyses were performed to predict COVID-19 infection control practices. Results: Pearson correlation coefficients showed a significant negative relationship between perceived stress and COVID-19 infection control practices. A significant positive relationship was observed between wishful thinking and empathetic responding and infection control practices. Hierarchical multiple regression analyses revealed that gender, geographical location, perceived stress, and emotion-focused and relationship-focused coping responses were predictors of COVID-19 infection control practices. Conclusions: The findings suggest that university students displayed moderate levels of stress, using wishful thinking and empathetic responding as coping strategies. Counselling services should therefore emphasise reassurance and empathy. Male university students tended to be less compliant with social distancing. Both counselling and public health measures should recognise the importance of gender differences. Nurses should integrate these findings into future health program planning and interventions.
REVIEW | doi:10.20944/preprints202005.0348.v2
Subject: Behavioral Sciences, Applied Psychology Keywords: emotion; visual thalamus; initial evaluation; lateral geniculate nucleus; thalamic reticular nucleus; pulvinar; superior colliculus
Online: 11 May 2021 (10:32:30 CEST)
Current proposals on the temporal sequence of processing of emotional visual stimuli are partially incompatible with growing empirical data. In most of them, the initial evaluation structures (IES), postulated to be in charge of the earliest detection of emotional stimuli (i.e., those salient for the individual), are high-order structures (i.e., structures receiving visual inputs after several synapses). Thus, their response latency cannot account for the first visual cortex response to emotional stimuli (peaking at 80 ms in humans). Additionally, these proposed structures lack the necessary infrastructure to locally analyze the visual features of the stimulus (shape, color, motion, etc.) that define it as emotional. In particular, the amygdala has been defended as the cornerstone IES in humans as well, and cortical areas such as the ventral prefrontal cortex or the insula have also been proposed to intervene in this initial evaluation process. The present review describes several first-order brain structures (i.e., those receiving visual inputs after one synapse), and complementary second-order structures (two synapses), that meet both prerequisites: response latencies compatible with the observed activity at the visual cortex, and the necessary architecture to rudimentarily analyze in situ relevant features of the visual stimulation. The visual thalamus, and particularly the lateral geniculate nucleus (LGN), a first-order thalamic nucleus that actively processes visual information, is a good candidate to be the core IES, with the complementary action of the thalamic reticular nucleus (TRN). This LGN-TRN tandem could be supported, also in an ascending, initial evaluation phase, by the pulvinar, a second-order thalamic structure, and by first-order extra-thalamic nuclei (the superior colliculus and certain nuclei of the pretectum and the accessory optic system). 
In sum, the visual thalamus, scarcely studied in relation to emotional processing, is a serious candidate to be the missing link in early emotional evaluation and, in any case, is worth exploring in future research.
Subject: Medicine & Pharmacology, Oncology & Oncogenics Keywords: cancer; surgery; chemotherapy; radiotherapy; health optimization; exercises; diet; emotion; chronic stress; kinetics; lethal factors
Online: 24 November 2019 (04:40:48 CET)
After reviewing cancer theories, cancer treatment development histories, randomized clinical trials' performance, cancer treatment strategies, and trial follow-up times, and conducting numerous simulations using existing data, the authors found: (1) medical treatments come with three to four lethal factors: treatment side-effects, emotional distress and chronic stress, lack of exercise or physical inactivity, and excessive nutrition in some cases; (2) clinical trials exaggerate the benefits of fast-acting treatments and underestimate slowly developing adverse side effects as a result of statistical averaging, interfering effects of personal lifestyle factors, and insufficient follow-up times; (3) the benefits of medical treatments are limited by chain comparison, where surgery sets a negative standard relative to the best way of resolving cancer; (4) the strategy of destroying the tumor is unworkable; (5) medical treatments can turn the natural cancer growth curve into an approximately doubly exponential curve; (6) multiple-factor non-medical measures are much more powerful than medical treatments in controlling cancer growth and metastasis rates; and (7) early cancer diagnosis and over-treatment are bad strategies that have great adverse impacts on cancer patients. Based on huge increases in cancer growth rate constants, substantial loss of organ functional capacity, and severe systemic aging-like cellular damage, the authors concluded that medical treatments promote cancer growth and metastasis rates and shorten patient lives in most cases, and that the claimed benefits are caused by triple biases of clinical trials. The authors believe that the better strategy for ending the global cancer epidemic is abandoning clinical trials as the research model and changing the cancer treatment strategy from killing cancer cells to slowing down cancer growth rates by using a multiple-factor optimization approach in personalized medicine.
ARTICLE | doi:10.20944/preprints202106.0208.v1
Subject: Arts & Humanities, Linguistics Keywords: insubordinated constructions; the expressive function of language; language and emotion; ni que construction in Spanish
Online: 8 June 2021 (10:34:07 CEST)
Authors such as Schnoebelen (2012: 12) suggest that in some languages (cf. Navajo) certain dependent clauses are frequently used independently to “mark emotional evaluation and background information”. Evans (2007) uses the term insubordination to refer to this phenomenon. Our study focuses on a particular insubordinate construction introduced by the sequence ni que in Spanish, as in the example [¡Una carta cada día!] Ni que yo fuese Umbral. (CORPES 100), used as an independent clause with a sociopragmatic meaning which is different from that of its subordinate counterpart (cf. No escribiría una carta cada día ni que yo fuese Umbral). Our research questions ask about the potential for ni que to be used as a discourse marker fulfilling an expressive function when it introduces this type of construction, and the derived hypothesis is then oriented to test whether Schnoebelen's (2012) observation about insubordinate constructions applies also to this Spanish construction. In order to test this hypothesis, we performed a functional-discourse analysis of more than 2000 concordances (and their extended contexts) in Mark Davies' Corpus del Español and the Real Academia CORPES XXI. Our findings show that the insubordinate construction differs in function and meaning from its subordinate counterpart, the former fulfilling a stronger emotive function, often combined with other discourse-pragmatic functions, such as evaluation or the organization of discourse.
ARTICLE | doi:10.20944/preprints201908.0019.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: emotion classification; machine learning classifiers; ISEAR dataset; data mining; performance evaluation; data science; opinion-mining
Online: 2 August 2019 (08:49:27 CEST)
Emotion detection from text is an important and challenging problem in text analytics. Opinion-mining experts are focusing on the development of emotion detection applications, as these have received considerable attention from the online community, including users and business organizations, for collecting and interpreting public emotions. However, most existing work on emotion detection has used less efficient machine learning classifiers with limited datasets, resulting in performance degradation. To overcome this issue, this work evaluates the performance of different machine learning classifiers on a benchmark emotion dataset. The experimental results show the performance of the different machine learning classifiers in terms of evaluation metrics such as precision, recall and f-measure. Finally, the classifier with the best performance is recommended for emotion classification.
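An evaluation of this kind can be sketched with scikit-learn. The tiny hand-made sentences below are a hypothetical stand-in for a real emotion corpus such as ISEAR, and the three classifiers are merely common choices, not necessarily the ones the paper compares:

```python
# Minimal sketch: compare text-emotion classifiers by macro-averaged
# precision, recall and f-measure on toy data (not the ISEAR corpus).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

# Hypothetical labelled sentences standing in for an emotion dataset.
texts = ["i am so happy today", "this is wonderful news", "what a joyful day",
         "i feel sad and lonely", "this loss hurts deeply", "i am crying again",
         "you make me furious", "i am so angry at this", "this is outrageous"] * 10
labels = (["joy"] * 3 + ["sadness"] * 3 + ["anger"] * 3) * 10

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.3, random_state=0, stratify=labels)

vec = TfidfVectorizer()
Xtr, Xte = vec.fit_transform(X_train), vec.transform(X_test)

results = {}
for name, clf in [("naive_bayes", MultinomialNB()),
                  ("log_reg", LogisticRegression(max_iter=1000)),
                  ("linear_svm", LinearSVC())]:
    clf.fit(Xtr, y_train)
    p, r, f, _ = precision_recall_fscore_support(
        y_test, clf.predict(Xte), average="macro", zero_division=0)
    results[name] = round(f, 2)  # keep the f-measure per classifier

best = max(results, key=results.get)  # classifier to recommend
```

The same loop extends directly to further classifiers and per-class (rather than macro-averaged) metrics.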
REVIEW | doi:10.20944/preprints202107.0326.v1
Subject: Life Sciences, Biochemistry Keywords: Deepfake; Animal Welfare; Animal Emotions; Artificial Intelligence; Digital Farming; Animal Based Measures; Emotion Modeling; Livestock Health
Online: 14 July 2021 (11:49:38 CEST)
Deepfake technologies are known for the creation of forged celebrity pornography, face and voice swaps, and other fake media content. Despite the negative connotations the technology bears, the underlying machine learning algorithms have a huge potential that could be applied to not just digital media, but also to medicine, biology, affective science, and agriculture, just to name a few. Due to the ability to generate big datasets based on real data distributions, deepfake could also be used to positively impact non-human animals such as livestock. Generated data using Generative Adversarial Networks, one of the algorithms that deepfake is based on, could be used to train models to accurately identify and monitor animal health and emotions. Through data augmentation, using digital twins, and maybe even displaying digital conspecifics where social interactions are enhanced, deepfake technologies have the potential to increase animal health, emotionality, sociality, animal-human and animal-computer interactions and thereby animal welfare, productivity, and sustainability of the farming industry.
ARTICLE | doi:10.20944/preprints202008.0342.v1
Subject: Medicine & Pharmacology, Psychiatry & Mental Health Studies Keywords: COVID-19 pandemic; virus’ transmission; fear of contagion; breathing difficulty; healthy adolescents; emotion awareness; anxiety-state
Online: 15 August 2020 (08:25:22 CEST)
COVID-19 appears as a catastrophic health risk with psychological, emotional, social and relational implications. From the early stages of the virus's spread, the elderly population was identified as the most vulnerable, and the health authorities rightly focused on this frailest population. Conversely, less attention was paid to the emotional and psychological dimension of children and adolescents. Although they were less at risk quoad vitam or quoad valetudinem, they nevertheless had to face a reality of anxiety, fears and uncertainties. The current study investigated state anxiety and emotion awareness in a healthy sample of older adolescents, 84 females and 64 males, aged 17 to 19, during the pandemic lockdown, using the Self-rating Anxiety Scale and the Italian Emotion Awareness Questionnaire. An unexpected anxious phenomenology, impacting the ideo-affective domain of anxiety, was found, while the somatic symptomatology appeared to be less severe. The most prominent anxiety symptom was breathing difficulty. These findings support the hypothesis that the COVID-19 pandemic may be a risk condition for increased state anxiety in older adolescents and suggest the need to provide (1) an effective, empathic communication system with the direct participation of older adolescents, and (2) a psychological counseling service for adolescents' stress management.
ARTICLE | doi:10.20944/preprints201810.0537.v1
Subject: Social Sciences, Marketing Keywords: emotion; commitment; brand loyalty; willingness to pay more; coffee quality; service quality; physical environment quality; price fairness
Online: 23 October 2018 (11:58:03 CEST)
Following the phenomenal growth of and competition among coffee chain retailers, the coffee chain market has expanded substantially thanks to rising income levels, an increasing young population, and rapidly changing lifestyles. Attracting consumers’ attention and enhancing their loyalty behaviors have become very difficult for coffee chain retailers. This study seeks to understand the mechanisms through which emotions and the dedication-constraint model lead to brand loyalty and willingness to pay more to certain coffee chain retailers. Emotions and the dedication-constraint model are major factors in this research, but few studies have combined them to examine the formation of loyalty behaviors. This study synthesizes emotional responses and the dedication-constraint model to develop a theoretical model. Based on the ambivalent view of emotions, it also examines how positive and negative emotions affect the combination of brand loyalty and willingness to pay more to certain coffee chain retailers. Moreover, it identifies the antecedents of affective and calculative commitment in the context of coffee chain retailers. Our findings indicate that loyalty behaviors (dedication- and constraint-based mechanisms involving brand loyalty and willingness to pay more to certain coffee chain retailers), emotional responses, and affective and calculative commitments significantly affect brand loyalty, directly and indirectly, through both positive and negative emotions. Furthermore, service quality, physical environment quality, and price fairness significantly affect affective commitment, while price fairness significantly affects both affective and calculative commitment. Finally, affective and calculative commitments significantly affect willingness to pay more, both directly and indirectly through positive emotions, and directly through negative emotions. The results’ theoretical and managerial implications and possible future research directions are discussed.
ARTICLE | doi:10.20944/preprints201706.0003.v2
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: human activity analysis; human intention understanding; affective computing; data visualisation; depth data; head pose estimation; emotion recognition
Online: 10 July 2017 (08:26:31 CEST)
ARTICLE | doi:10.20944/preprints202008.0221.v1
Subject: Life Sciences, Other Keywords: arousal level; emotion; major depression severity; voice index; Hurst exponent; zero-crossing rate; Hamilton Rating Scale for Depression
Online: 9 August 2020 (21:15:01 CEST)
Recently, the relationship between emotional arousal and depression has been studied. Focusing on this relationship, we first developed an arousal level voice index (ALVI) to measure arousal levels using the Interactive Emotional Dyadic Motion Capture database. Then, we calculated the ALVI from the voices of depressed patients from two hospitals (Ginza Taimei Clinic [GTC] and National Defense Medical College hospital [NDMC]) and compared it with the severity of depression as measured by the Hamilton Rating Scale for Depression (HAM-D). Depending on the HAM-D score, the datasets were classified into a no-depression group (HAM-D < 8) and a depression group (HAM-D ≥ 8) for each hospital. A comparison of the mean ALVI between the groups was performed using the Wilcoxon rank-sum test, and a significant difference was found at the 10% level (p = 0.094) at GTC and at the 1% level (p = 0.0038) at NDMC. The area under the curve (AUC) of the receiver operating characteristic was 0.66 when categorizing the two groups for GTC, and 0.70 for NDMC. The relationship between arousal level and depression severity was thus indirectly suggested via the ALVI.
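The two analyses used above, a Wilcoxon rank-sum test on a voice index between groups and the ROC AUC for separating them, can be sketched on simulated scores. The group sizes, means and spreads here are all hypothetical, chosen only to illustrate the computation:

```python
# Sketch with simulated index values, not the clinical data: a Wilcoxon
# rank-sum test comparing a voice index between two groups, and the ROC
# AUC quantifying how well the index separates them.
import numpy as np
from scipy.stats import ranksums
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Hypothetical arousal-level voice index values for the two groups.
no_depression = rng.normal(0.60, 0.15, size=40)  # HAM-D < 8
depression = rng.normal(0.45, 0.15, size=40)     # HAM-D >= 8

# Nonparametric comparison of the two distributions.
stat, p = ranksums(no_depression, depression)

# AUC: probability the index ranks a no-depression case above a
# depression case (label 1 = no depression).
scores = np.concatenate([no_depression, depression])
y = np.concatenate([np.ones(40), np.zeros(40)])
auc = roc_auc_score(y, scores)
```

The rank-sum test makes no normality assumption, which suits a voice index whose distribution is unknown; the AUC then summarizes discriminative power independently of any decision threshold.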
Subject: Medicine & Pharmacology, Pathology & Pathobiology Keywords: coronavirus; COVID-19; viral reproduction; immune response; low temperature injury; lung damage; cold; flu; influenza; deep breathing exercises; diet; emotional stress; lifestyle
Online: 4 March 2020 (05:30:58 CET)
To understand the great disparities in disease outcomes between COVID-19 patients, we explore infection and host responses in kinetic terms. From existing data, we deduced a model in which the lungs are damaged by rapidly rising flow resistance resulting from the retention of white blood cells in lung tissues. This retention is initially triggered by viral infection but aggravated by injuries caused by low temperature. The lungs are initially damaged by fluid leakage, rapidly followed by extrusion of blood into alveolar spaces; this blood-extrusion step is predicted to take place in a very short time. Our simulations show that retention of as little as 0.1% of white blood cells in the lungs can lead to their failure in 5 to 10 days. Such a small degree of imbalance implies that it could be corrected by a large number of factors known to reduce flow resistance. The model implies that the top priority is maintaining blood micro-circulation and preserving organ functions throughout the entire disease course, especially after the virus has spread through the whole lungs. From exploring a large number of hypothetical infection modes, we propose preventive, mitigating and treatment strategies for ultimately ending the pandemic. The first strategy is avoiding exposures that could result in widespread damage to the lungs and taking post-exposure mitigating measures that would reduce disease severity. The second strategy is reducing the death rate and disability rate for infected patients to one tenth of the current levels by using a multiple-factor health optimization method. These double reduction strategies are expected to generate a series of chain reactions that favor mitigating or ending the pandemic, including a large reduction in the amount of viral discharge from infected patients into the air, and the avoidance of panic, chronic stress, emotional distress, and the cross-infections expected in quarantines. 
The double reductions would have a final effect of ending the pandemic.
Subject: Medicine & Pharmacology, Pathology & Pathobiology Keywords: coronavirus; SARS; MERS; viral reproduction; immune response; lung infections; lung damage; cold; flu; influenza; deep breathing exercises; diet; emotional stress; lifestyle
Online: 9 February 2020 (17:43:00 CET)
We conducted many model simulations to understand the causes of the damage coronaviruses inflict on lung tissues, and constructed a diagram showing viral development, immune response and damage accumulation curves. We found that the main causes are (1) the phase lag between the viral reproduction process and a delayed immune response, (2) the direct viral damage and the massive collateral damage mainly caused by belated immune responses, and (3) further tissue damage triggered by accumulated wastes in the lungs. From these causes we deduced that the key strategies for preventing lung damage include avoiding direct lung infection, altering host-virus interactions, promoting immune responses, diluting virus concentrations in lung tissues by promoting viral migration to the rest of the body, maintaining waste removal balance, protecting heart and renal function, avoiding other infections, reducing allergic reactions and other inflammation, etc. We finally discussed how dietary, medical, emotional, lifestyle, environmental, mechanical and other factors can be used to alter disease outcomes. We show why the true benefits of those factors cannot be determined by randomized controlled trials, and why the multiple-factor optimization approach can be highly effective, by examining organ usable capacity in the cause of death. This treatment protocol, using water, air, salt, sound, temperature, emotion, exercise, etc., can be the most powerful cure for viral and non-viral lung infections because it does not depend on molecular specificity and is freely available to anyone.
Subject: Medicine & Pharmacology, Oncology & Oncogenics Keywords: incurable disease; chronic disease; cancer theory; central nervous system; state memory; mutation; tissue ecosystem; emotion; stress; exercise; baseline biochemical and cellular processes
Online: 24 November 2019 (14:50:49 CET)
We examined the special roles of the Central Nervous System (CNS) in an attempt to resolve the puzzle that chronic diseases cannot be cured by medicine. By exploring a skill-learning model, we found that the CNS is able to remember certain information reflecting biochemical and cellular (B&C) processes in the body. From the ability to use skills, we found that the CNS is able to control the basic B&C processes that drive and power a skill. From the ability to adjust the forces and force direction of a physical act, we found that the CNS is able to adjust the B&C processes that drive the physical act. From this adjustment capability, we further inferred that the CNS must also store information on the baseline B&C processes. As a whole, we found that the CNS can maintain information on baseline B&C processes, up-regulate or down-regulate those processes, and make comparisons in performing its regulatory functions. We found that chronic diseases are the results of deviated baseline B&C processes. Per this hypothesis, the CNS maintains the deviated baseline B&C processes and thus protects the body states of fully developed diseases. We then used the three CNS roles to explain why cancer progresses with increasing malignancy, quickly returns after surgery, repopulates after chemotherapy and radiotherapy, and inevitably develops drug resistance, why immune cells rebound after suppression, why the benefits of cancer drugs such as beta-blockers are generally poor, etc. We further showed that long-term exercise generally pushes most, if not all, baseline B&C processes in directions diametrically opposed to the diseased B&C processes, implying that exercise plays a unique role in reversing chronic diseases. Finally, we proposed several strategic approaches to resetting the CNS's state memory as the essential condition for curing chronic diseases.