Preprint
Article

This version is not peer-reviewed.

Innovative Research on Identifying the Potential Association Between Emotional Abnormalities and Physiological Illnesses in Young Children: Construction of an Artificial Intelligence-Based Recognition Model

Submitted:

03 December 2025

Posted:

05 December 2025


Abstract
Background/Objectives: Health care in preschools has gained increasing attention, particularly in the post-pandemic era, as educators face dual challenges in detecting emotional and physiological abnormalities among young children. Observation-based assessments are subjective and lack real-time data, often delaying the identification of potential health risks. This study aimed to construct an artificial intelligence (AI)-based model capable of recognizing the potential association between emotional abnormalities and physiological illnesses in preschool children. Methods: A mixed-method design was employed, integrating a literature review and the Delphi method. The literature review identified trends and feasibility in AI-assisted child health monitoring. Nine interdisciplinary experts in pediatrics, AI sensing, and early childhood education participated in three Delphi rounds to establish consensus on key physiological and behavioral indicators. Results: Experts reached consensus on five primary indicators—facial expression, speech prosody, heart rate variability (HRV), galvanic skin response (GSR), and skin temperature—and recommended using noninvasive wearable devices. A real-time risk alert system using red, yellow, and green levels was proposed. The final AI model included four modules: sensor input, data pre-processing, AI integration and analysis, and feedback interface. Conclusions: The developed AI-based recognition model demonstrates strong potential for early detection of emotional and physiological abnormalities in preschoolers. It provides timely, objective, and science-based support for caregivers, facilitating early intervention and individualized care. This model may serve as a practical framework for advancing digital transformation in preschool health care.

1. Introduction

Early childhood represents a crucial stage in human physical and psychological development, during which emotional expression, neural maturation, and physiological functioning are closely interconnected. Abnormalities in either emotional or physiological domains may indicate potential health problems such as autonomic imbalance, sleep disorders, or developmental delays. However, due to immature language abilities, young children are often unable to articulate their internal states clearly, making it difficult for parents and caregivers to detect signs of discomfort in time. As a result, opportunities for early intervention or preventive action are frequently missed. Traditional health assessments in early childhood rely heavily on observational records and subjective judgments, lacking real-time and objective physiological data as a foundation for decision-making.
With the rapid advancement of artificial intelligence (AI) and wearable sensing technologies, research on emotion and health monitoring has gradually expanded from adult populations to young children. Recent studies have shown that AI integrated with multimodal data can effectively analyze facial expressions, vocal tone, heart rate variability (HRV), electrodermal activity (EDA), and motion signals to identify emotional states and potential health risks [1,2,3]. For instance, Tanabe et al. [1] proposed an AI-based emotion recognition system that integrates physiological and motion signals to assess the emotional states of children with profound intellectual and multiple disabilities, demonstrating its feasibility for special-needs populations. Sacrey et al. [2] further indicated that physiological signals serve as crucial indicators of emotional regulation and neural development from infancy through preschool years. Ask et al. [3] found that although HRV was not significantly correlated with neurodevelopmental scores in healthy infants, it still holds value as a potential indicator for early developmental monitoring.
Nevertheless, most existing AI models are trained using adult or clinical datasets, making their application to preschool populations relatively limited. Because emotional expression and physiological responses in young children vary greatly depending on developmental stage, algorithms not specifically adapted for pediatric characteristics may exhibit reduced accuracy. Furthermore, preschool health care spans three major domains—education, medicine, and information technology. Without cross-disciplinary integration and ethical guidelines, the practical implementation of AI in this field remains challenging [4]. The World Health Organization also emphasizes that AI development in health care must adhere to ethical governance principles to ensure data security, transparency, and fairness [5]. Therefore, establishing an AI-based health recognition model that balances technical feasibility, ethical soundness, and the practical needs of educational environments has become a pressing and valuable research objective.
In summary, this study aims to integrate AI with multimodal physiological signal analysis to construct an AI recognition model capable of identifying the potential association between emotional abnormalities and physiological illnesses in preschool children. The Delphi expert consensus method was employed, inviting professionals from the fields of medicine, early childhood education, and information technology to participate in multiple rounds of consultation. Through this process, key monitoring indicators and evaluation dimensions were identified, leading to the development of a practical AI-based multimodal recognition framework. The ultimate goal is to apply the results of this study to preschool health care settings, providing caregivers with real-time, objective, and scientifically grounded decision support to achieve early prevention and precision care.

2. Materials and Methods

2.1. Research Design Overview

This study adopted a mixed-methods design that combined literature analysis and the Delphi method to develop an AI-based early recognition model for detecting emotional abnormalities and potential physiological disorders in young children. The research was conducted in two main phases. The first phase involved a thematic literature review to synthesize trends and feasibility of AI applications in child health monitoring and emotion recognition, forming the theoretical foundation for the preliminary model. The second phase employed the Delphi method to gather expert opinions, achieve consensus, and construct the initial recognition framework.

2.2. Literature Analysis and Preliminary Model Construction

2.2.1. Literature Collection and Screening

A topic-oriented literature review was conducted using major databases including PubMed, IEEE Xplore, Scopus, Web of Science, and ScienceDirect. Search keywords included “artificial intelligence,” “emotion recognition,” “physiological signals,” “children,” “early detection,” and “health monitoring.” The literature search was limited to English-language, peer-reviewed journal articles published between 2015 and 2025. After reviewing titles, abstracts, and full texts, core studies meeting the following inclusion criteria were selected:
  • The research population included children, particularly preschoolers;
  • The study investigated AI applications in emotion recognition, physiological signal analysis, or health monitoring;
  • The research design was empirically based and methodologically sound.

2.2.2. Data Extraction and Preliminary Model Formation

The selected core studies were subjected to data extraction and content analysis focusing on key indicator types, data sources (e.g., visual, auditory, and physiological signals), and application contexts of AI technology in child populations. Based on the synthesized findings, a conceptual AI-assisted model for identifying emotional and physiological abnormalities in young children was proposed as the preliminary framework for subsequent expert review and refinement.

2.3. Delphi Study Design

2.3.1. Expert Selection

In accordance with Delphi methodology, the composition of the expert panel is crucial to ensuring the reliability of results. Nine experts were invited to participate, each possessing substantial experience in one or more of the following fields: child and developmental psychology, pediatric and child health care, AI and machine learning, educational technology, and early childhood education. Selection criteria included academic background, practical experience, and familiarity with the research topic.

2.3.2. Questionnaire Design and Procedure

The Delphi process consisted of three iterative survey rounds. In the first round, preliminary emotional and physiological indicators identified from the literature were presented to experts, who evaluated their importance and feasibility. In the second round, indicators were revised based on feedback from the first round, and experts were asked to reassess them. The third round focused on confirming consensus regarding the final set of indicators and their roles in model construction.
A five-point Likert scale was used to assess each indicator’s importance (1 = very unimportant, 5 = very important) and feasibility (1 = very infeasible, 5 = very feasible). Between rounds, the research team summarized expert feedback and distributed a report to guide subsequent evaluations.

2.3.3. Data Analysis and Consensus Formation

Each Delphi round was analyzed statistically using descriptive indicators such as mean, median, standard deviation, and interquartile range (IQR) to evaluate the consistency of expert opinions. Consensus was defined as at least 70% of experts rating an item as 4 or 5 on the Likert scale, with an IQR less than 1. Indicators failing to meet this threshold were re-evaluated in subsequent rounds [6].
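As a concrete illustration, the consensus rule described above can be sketched in Python. The function name and the sample ratings are hypothetical; only the thresholds (at least 70% of ratings at 4 or 5, with an IQR below 1) come from the study's stated criteria.

```python
import statistics

def meets_consensus(ratings, agree_share=0.70, iqr_limit=1.0):
    """Apply the study's Delphi consensus rule to one indicator:
    at least 70% of experts rate it 4 or 5 on the five-point scale,
    and the interquartile range (IQR) of ratings is below 1."""
    agree = sum(1 for r in ratings if r >= 4) / len(ratings)
    q1, _, q3 = statistics.quantiles(ratings, n=4)  # quartiles
    return agree >= agree_share and (q3 - q1) < iqr_limit

# Hypothetical ratings from nine experts for a single indicator
print(meets_consensus([4, 4, 4, 4, 5, 4, 4, 5, 4]))  # consensus reached
print(meets_consensus([5, 5, 5, 3, 3, 3, 4, 4, 2]))  # no consensus
```

Indicators that fail either condition would, per the Methods, be carried into the next round for re-evaluation.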

2.4. Model Construction

After completion of the Delphi process, the research team developed a preliminary AI-based recognition model based on the consensus indicators. The model incorporates mechanisms for integrating emotional and physiological indicators, structured workflows for data collection and analysis, and the design of an early warning system to assist in real-time monitoring and decision-making.

3. Results

This study integrated the results of the literature analysis and three rounds of Delphi questionnaires to construct an AI-based early recognition model for identifying emotional abnormalities and potential physiological disorders in young children.

3.1. Literature Review Findings

3.1.1. Emotion Recognition Indicators

The literature review revealed several mature and widely adopted indicators in AI-based emotion recognition:
1. Facial Expression
Convolutional neural networks (CNNs) and optical flow features have been widely applied in facial emotion analysis, showing strong potential in pediatric populations. Zimmer et al. developed a hybrid model that significantly improved the accuracy of children’s facial emotion recognition [7].
2. Speech Prosody
Long short-term memory (LSTM) networks have been effectively used for sequential analysis of speech signals, capturing variations in speech rate, pitch, and tone as indicators of emotional states [8].
3. Personalized Models
Compared with general models, personalized models can more accurately detect individual emotional differences, which is particularly valuable for children with high emotional variability [9].
4. Multimodal Fusion
Integrating facial imagery, vocal signals, and physiological data within a multimodal fusion framework markedly enhances both accuracy and stability in emotion recognition [10].

3.1.2. Physiological Signal Indicators

In terms of physiological analysis, AI applications have primarily focused on the following indicators:
1. Heart Rate Variability (HRV)
HRV is considered a sensitive biomarker for stress and emotional changes and has been incorporated into multiple AI-based systems for detecting autism, anxiety, and stress [11].
2. Galvanic Skin Response (GSR)
GSR reflects autonomic nervous system activity and is well-suited for real-time monitoring through wearable devices, showing a significant correlation with emotional states [11].
3. Skin Temperature
Minor fluctuations in local skin temperature are associated with stress responses and immune system variations, serving as an auxiliary indicator [10].
4. Wearable Device Integration
Wearable devices can integrate multiple physiological signals—such as heart rate, EDA, and motion data—to detect seizures and stress variations in real time. When combined with AI-based predictive models, these systems not only support clinical monitoring but also demonstrate strong potential for application in child care and early health management [12].
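To make the HRV indicator concrete, the sketch below computes two standard time-domain HRV features from a series of RR intervals. It is an illustrative calculation under simple assumptions, not the processing pipeline of any system cited above, and the interval values are made up.

```python
import math

def hrv_time_domain(rr_ms):
    """Two standard time-domain HRV features from RR intervals (ms):
    SDNN (overall variability) and RMSSD (short-term, vagally
    mediated variability often linked to emotional regulation)."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    # SDNN: sample standard deviation of all RR intervals
    sdnn = math.sqrt(sum((r - mean_rr) ** 2 for r in rr_ms) / (n - 1))
    # RMSSD: root mean square of successive interval differences
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd}

features = hrv_time_domain([800, 810, 790, 805, 795])  # made-up intervals
```

Lower RMSSD and SDNN values are generally read as reduced vagal activity, which is why such features recur in the stress-detection systems reviewed above.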

3.2. Delphi Expert Survey Results

To develop an AI-based “Recognition Model for Emotional and Physiological Abnormalities in Preschool Children,” this study conducted a three-round Delphi process involving nine interdisciplinary experts with professional backgrounds in pediatric medicine and nursing, AI sensing technology, and early childhood education practice.

3.2.1. First-Round Analysis

In the first Delphi round, 12 preliminary recognition indicators were identified based on the literature review. Experts were asked to rate the importance and feasibility of each indicator using a five-point Likert scale (1 = very unimportant/infeasible, 5 = very important/feasible). Open-ended questions were also provided to allow experts to propose additional indicators.
According to the analysis results (Table 1), seven indicators (A1, A2, A3, B1, B2, B3, and B4) achieved mean scores ≥ 4.0 on both importance and feasibility, an IQR ≤ 1.0, and at least 75% of ratings ≥ 4, thereby meeting the consensus criteria [6]. These indicators were retained for the second round of evaluation. Experts additionally suggested two new indicators:
  • C1: Daily emotional trend records (linked with a parent-side app);
  • C2: Timing of abnormal pre-illness behaviors (occurring before or after school).

3.2.2. Second-Round Analysis

In the second Delphi round, experts re-evaluated the seven indicators that had reached preliminary consensus in the first round, along with two new indicators (C1 and C2) proposed by experts. The consensus thresholds were defined as: (1) a mean score ≥ 4.0, (2) an IQR ≤ 1.0, and (3) at least 75% of ratings ≥ 4. The results indicated that all seven original indicators maintained expert consensus on their importance and feasibility. The newly added indicator C1 also achieved consensus and was incorporated into the system design. However, C2, despite its potential practical value, did not meet the feasibility threshold and was therefore excluded from further development. In summary, by the end of the second round, eight indicators were confirmed for inclusion in the subsequent phase of model construction.

3.2.3. Third-Round Analysis and Model Confirmation

The purpose of the third Delphi round was to confirm expert consensus regarding the prioritization of the eight final indicators, the integration of data sources, and the feasibility of the preliminary AI framework. Experts generally agreed that integrating three sensing modalities—visual (image), auditory (sound), and physiological signals—was the most appropriate approach. They also recommended adopting a decision-level fusion strategy for risk prediction.
In addition, several experts proposed incorporating a parent-side mobile app to facilitate daily emotional data sharing, enabling continuous model learning and adaptive updates (Table 2). The final consensus confirmed that the proposed AI multimodal recognition framework demonstrated strong practical feasibility and readiness for future implementation and system expansion.

3.3. Preliminary AI Early Detection Model Design

Based on the findings of the literature review and the expert consensus achieved through the Delphi method, this study developed a Multimodal AI Early Detection Model designed to identify the association between emotional abnormalities and potential physiological illnesses in preschool children. The model integrates three major data sources—visual, auditory, and physiological signals—and applies deep learning techniques for data fusion and risk prediction. The analysis results are presented through an interactive caregiver interface to support daily child health monitoring and response.

3.3.1. Model Architecture Overview

The proposed model consists of four primary modules, as summarized in Table 3.

3.3.2. Multimodal Data Integration Design

The AI model processes three heterogeneous data types following the workflow and procedures outlined below:
  • Visual input: Real-time video streams are used to capture children’s facial expressions and body movements.
  • Audio input: Microphones record vocal features such as crying, speech rate, and tone variations for emotion recognition.
  • Physiological input: Wearable devices collect HRV, GSR, and skin temperature to detect changes in physical condition.
After standardization in the preprocessing stage, these three data streams are processed by the AI fusion module through the following steps:
  • CNNs extract and recognize facial expression features from image data;
  • LSTM models analyze sequential emotional trends in audio data;
  • HRV frequency-domain and statistical features are combined for physiological state analysis;
  • The results from all modalities are integrated through decision-level fusion, generating a final risk score ranging from 0 to 1.
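The decision-level fusion described in the last step can be sketched as a weighted average of per-modality risk probabilities. The modality names and weights below are illustrative assumptions, not values fixed by the study:

```python
def decision_level_fusion(modality_scores, weights=None):
    """Combine per-modality risk probabilities (each in [0, 1]) into a
    single risk score via a weighted average (decision-level fusion)."""
    if weights is None:
        weights = {m: 1.0 for m in modality_scores}  # equal weights by default
    total = sum(weights[m] for m in modality_scores)
    fused = sum(p * weights[m] for m, p in modality_scores.items()) / total
    return min(max(fused, 0.0), 1.0)  # keep the score in the 0-1 range

# Hypothetical outputs of the visual (CNN), audio (LSTM), and
# physiological (HRV) modules for one observation window
risk = decision_level_fusion(
    {"visual": 0.7, "audio": 0.4, "physio": 0.6},
    weights={"visual": 0.4, "audio": 0.2, "physio": 0.4},
)
```

Fusing at the decision level, rather than concatenating raw features, lets each modality keep its own model and degrade gracefully when one sensor stream is missing.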
Table 3. Architecture of the AI Recognition Model for the Potential Association between Emotional Abnormalities and Physiological Illnesses in Preschool Children.
Module | Description | Data Source
A. Sensor Input Module | Collects multimodal data from children, including visual, auditory, and physiological signals | Cameras, microphones, wearable devices
B. Data Preprocessing Module | Performs noise reduction, normalization, and feature extraction on raw data | OpenCV, Librosa, HRV analysis tools
C. AI Recognition and Integration Module | Employs CNNs and LSTM models for multimodal fusion and prediction | TensorFlow, PyTorch frameworks
D. Feedback and Interface Module | Visualizes results using a red–yellow–green alert system displayed on caregiver dashboards | Web/APP interface with synchronized health logs

3.3.3. System Output and Feedback Mechanism

The AI model’s prediction results are categorized into three alert levels, as shown in Table 4.
Additionally, the system automatically logs abnormal events in the health journal, synchronizes alerts to the parent-side mobile app, and generates a visualized seven-day trend graph of emotional and physiological abnormalities to assist caregivers and parents in monitoring the child’s health status.
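The three-level output can be sketched as a simple threshold mapping on the fused risk score. The cut-off values here are illustrative assumptions only; the actual level definitions are those given in Table 4.

```python
def alert_level(risk_score, yellow=0.4, red=0.7):
    """Map the fused risk score (0-1) to the red-yellow-green alert
    levels shown on the caregiver dashboard. Thresholds are
    illustrative, not the study's calibrated values."""
    if risk_score >= red:
        return "red"      # abnormality likely: notify caregiver and parents
    if risk_score >= yellow:
        return "yellow"   # borderline: observe and log
    return "green"        # within normal range

level = alert_level(0.85)  # hypothetical high-risk score
```

In deployment, such thresholds would need calibration against child-specific data, since the same score can carry different risk depending on age and baseline physiology.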

3.3.4. Model Advantages and Limitations

As illustrated in Figure 1, the proposed AI recognition model presents three major advantages:
  • It employs noninvasive sensing technologies, enhancing children’s compliance and comfort;
  • It integrates multimodal data sources, improving recognition accuracy and robustness;
  • It provides real-time graphical alerts, assisting caregivers in timely and informed decision-making.
However, several limitations remain. First, the availability of child-specific annotated data for model training is limited. Second, some children may resist wearing devices, affecting data continuity. Finally, when processing personal and sensitive data, the system must incorporate comprehensive privacy protection mechanisms in compliance with the Personal Data Protection Act and the General Data Protection Regulation (GDPR).

4. Discussion

4.1. Major Findings and Theoretical Implications

Through literature analysis and the Delphi method, this study constructed an AI–based recognition model integrating facial expressions, speech prosody, and physiological signals to explore the potential associations between emotional abnormalities and physiological illnesses in preschool children. The expert consensus indicated that multimodal data fusion holds great potential for improving the precision and timeliness of early health monitoring in early childhood care.
From a theoretical perspective, the findings support Porges’ Polyvagal Theory, which posits that vagal activity reflects the emotional regulation functions of the autonomic nervous system and is closely related to stress responses and social behaviors [13]. Moreover, the Neurovisceral Integration Model proposed by Thayer and Lane also confirms the relationship between HRV and the prefrontal cortex in emotional regulation, suggesting that physiological signals serve as key quantifiable indicators of emotional states [14].
At the technical level, the results of this study align with recent systematic reviews on multimodal emotion recognition. Multimodal data—including speech, facial images, and physiological signals—achieves higher accuracy in identifying complex emotions than single-modality approaches and demonstrates lower sensitivity to lighting, cultural, and individual differences [15,16]. Notably, Takir et al. (2023) showed that multimodal models integrating facial and physiological signals outperform unimodal models when applied to children with hearing impairments [15]. Similarly, Coşkun et al. (2023) established a physiological signal database for children with special needs, demonstrating that children’s stress-response features differ significantly from those of adults, thus reinforcing the feasibility of applying multimodal AI architectures in early childhood health monitoring [17].
Overall, the proposed AI-based recognition model aligns with psychophysiological theories and demonstrates high interpretability and operational feasibility. The red–yellow–green alert system design enables caregivers to rapidly interpret abnormal signals, aligning with the principle of explainable AI (XAI) and enhancing human–AI collaborative decision-making.

4.2. Comparison with Existing Literature

When compared with existing research, this study demonstrates clear innovation and extended value.
Sacrey et al. (2021), in their systematic review of emotional measurement from infancy to the preschool stage, reported that HR and HRV/RSA were the most commonly used physiological indicators. A few studies also used EDA or respiratory parameters; however, their frequency of use and empirical support were weaker than those of HR and HRV. These indicators effectively reflect emotional and stress responses, particularly showing trends of increased HR and decreased HRV/RSA during emotion-eliciting tasks [2]. The present study further extends these applications to health-care contexts by constructing a practically applicable monitoring model.
In addition, Aritzeta et al. (2022) demonstrated that HRV-based biofeedback training significantly enhanced emotional self-regulation in elementary school students [18], indirectly supporting the link between physiological signals and emotional regulation during early childhood. Conversely, Ask et al. (2019) found no significant association between HRV and neurodevelopmental scales in healthy infants [3], suggesting that the relationship between emotion and physiology may vary depending on age and contextual factors.
These contrasts highlight the contribution of the present research: unlike prior studies that primarily focused on school-age children or specific clinical populations, this study centers on early childhood (ages 0–6) and integrates AI-driven recognition with expert consensus, thereby enhancing the operability and real-time applicability of physiological data in preschool health-care monitoring.

4.3. Methodological Reflection

This study employed the Delphi method to establish expert consensus, which subsequently served as the foundation for developing the recognition model framework. This approach demonstrates high adaptability and suitability for interdisciplinary research.
In terms of design, the study deliberately selected a diverse panel of experts—including pediatricians, nurses, early childhood educators, and AI specialists—in accordance with the multidisciplinary panel principle proposed by Hasson, Keeney, and McKenna (2000) [19]. The three-round Delphi process, progressing from open-ended idea collection to indicator ranking and final revision, aligns with the reporting standards for Delphi studies recommended by Diamond et al. (2014) [20].
For consensus determination, the study adopted a mean threshold of ≥ 4.0 and an IQR ≤ 1, standards commonly used in Delphi research within health-care fields [20]. Nevertheless, it should be acknowledged that the number of experts (n = 9) was relatively limited, which may affect representativeness. Future studies are encouraged to expand the sample size and incorporate international panels to enhance cultural and professional diversity.
Regarding the AI modeling component, this study integrated multimodal data to enhance recognition accuracy; however, it remains constrained by the limited dataset available for the preschool population. Tanabe et al. (2024) emphasized that AI-based emotion recognition for children with severe multiple disabilities often suffers from data scarcity and therefore requires transfer learning and feature enhancement techniques to address the issue [1]. Consequently, future research should aim to establish open, privacy-preserving physiological databases for children to improve both the breadth and generalizability of AI model training.

4.4. Practical and Educational Implications

The integration of AI is transforming the landscape of early childhood health care. AI technologies can rapidly synthesize physiological and behavioral signals, providing real-time decision support for early warning and prevention. This aligns with Topol’s (2019) concept of Augmented Intelligence, which emphasizes that AI is not a replacement but an assistant that enhances clinical and educational efficiency while preserving human compassion [10].
In practical settings, kindergartens and childcare centers can adopt wearable devices and real-time monitoring systems as daily health screening tools. When the system indicates a “yellow” or “red” alert, caregivers can follow standardized operating procedures to adjust the environment, provide emotional comfort, or inform parents.
From the perspective of nursing and educational training, Forman et al. (2020) identified that clinical information literacy and AI comprehension have become core competencies in next-generation nursing education and recommended the inclusion of AI applications in curriculum design [21]. Furthermore, Özçevik Subaşi et al. (2025) found that more than 70% of pediatric nurses hold a positive attitude toward AI, perceiving it as a tool that improves care quality and communication with parents [22]. These findings highlight the potential of the AI recognition model proposed in this study as an assistive tool for early childhood caregivers.
At the same time, this study underscores the importance of ethical principles. According to Floridi et al. (2018) and the AI4People framework, AI should adhere to four fundamental principles: beneficence, non-maleficence, autonomy, and justice [23]. In child-centered applications, particular attention must be paid to data anonymization, informed parental consent, bias mitigation, and algorithmic transparency. If these principles are properly implemented, the proposed AI recognition framework can be applied in practice within an ethically sound and secure environment.

4.5. Limitations and Future Directions

The limitations of this study are as follows:
  • Limited sample size – The number of experts in the panel was relatively small, which may not comprehensively represent all clinical and educational domains;
  • Lack of empirical validation – The AI model has not yet been tested in real-world kindergarten or healthcare environments;
  • Cultural constraints – Most of the experts were from a single cultural context, and cross-cultural applicability was not examined;
  • Ethical challenges – Issues such as data bias, privacy risks, and the imbalance of power between parents and children require further consideration.
Future research is recommended to:
  • Establish a large-scale, cross-regional multimodal database for preschool children to facilitate AI model training and validation;
  • Conduct pilot studies in childcare centers to assess system usability, accuracy, and parental acceptance;
  • Explore the adaptability of recognition models for children with special needs, such as autism or developmental delay;
  • Apply the Federated Learning framework proposed by Rieke et al. (2020) to enable multi-institutional collaboration while safeguarding data privacy [24];
  • Strengthen ethical governance and professional training for AI in healthcare to ensure compliance with principles of child rights and safety.

4.6. Theoretical and Practical Contributions

Theoretically, this study integrates psychophysiological theories with AI technology and proposes a practical emotion–physiology integrated recognition framework, expanding the theoretical boundaries of early childhood health care research. Practically, the findings assist caregivers and nurses in identifying potential health risks in children, thereby promoting early detection and timely intervention.
The innovations of this study include:
  • Combining the Delphi method with AI modeling, thereby bridging theoretical and practical dimensions;
  • Establishing emotion–physiology recognition indicators applicable in childcare and educational settings;
  • Strengthening the ethical and human-centered acceptance of AI applications in early childhood care.
Ultimately, this research provides a direction that balances technological innovation with ethical reflection, fostering the safe, sustainable, and human-centered application of AI in pediatric and early childhood health domains.

5. Conclusions

This study focused on the potential applications of AI in preschool health care, particularly regarding the feasibility of early detection of emotional abnormalities and potential physiological illnesses. Through a combination of literature analysis and the Delphi expert survey, the study developed a preliminary AI recognition model integrating multimodal input data—including facial expressions, speech prosody, and physiological signals. The findings revealed that AI has already demonstrated significant applications in domains such as autism screening, stress evaluation, and real-time health monitoring. Extending these technologies to preschool settings could effectively compensate for the limitations of traditional observation-based approaches to health assessment. Expert consensus further indicated that multimodal integration enhances recognition accuracy and responsiveness, strongly supporting noninvasive sensing technologies and real-time color-coded alert systems. The proposed model prioritizes practical feasibility while balancing on-site operational needs and information privacy, showing strong potential for real-world implementation and sustainable development.
The study recommends that academic and research institutions continue developing multimodal datasets tailored to preschool populations to mitigate biases arising from AI models trained primarily on adult data and to strengthen the empirical foundation of AI applications in children’s emotional and physiological recognition. Furthermore, it encourages interdisciplinary collaboration among healthcare, education, and information engineering sectors to jointly advance technological development and conduct pilot studies in applied settings. From an ethical and privacy standpoint, future research should adhere to regulations such as the Personal Data Protection Act and the GDPR to establish AI frameworks that ensure data security and informed parental consent. At the policy and practical levels, it is recommended that governments or higher education institutions implement pilot AI health alert systems in demonstration preschools while concurrently promoting AI health literacy training for early childhood educators. Additionally, providing funding for equipment upgrades and digital infrastructure would facilitate the practical integration and sustainable advancement of AI-driven health care in early childhood education environments.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was reviewed by the Antai Medical Care Corporation Antai Tian-Sheng Memorial Hospital Institutional Review Board and received an exemption determination (IRB No. 25-110-C).

Informed Consent Statement

All expert participants provided electronic informed consent prior to participating in the Delphi questionnaires. No identifiable personal information was collected.

Data Availability Statement

The data supporting the findings of this study, including the anonymized Delphi questionnaire responses and aggregated expert consensus results, are available from the corresponding author upon reasonable request. Due to ethical and confidentiality considerations, individual expert responses and raw data are not publicly available. The study did not use any pre-existing datasets or publicly archived materials.

Acknowledgments

The author sincerely thanks the nine interdisciplinary experts in pediatrics, nursing, AI, and early childhood education who participated in the Delphi survey and provided valuable insights for building the AI recognition model. Special appreciation is extended to Meiho University, Taiwan, for institutional and administrative support throughout this research project. The conceptual diagram in Figure 1 was designed with the assistance of a generative AI tool under the author’s supervision. The author has reviewed and edited the output and takes full responsibility for the content of this publication. No external funding was received for this study.

Conflicts of Interest

The author declares no conflict of interest. The AI tool mentioned in this study (ChatGPT, OpenAI, San Francisco, CA, USA) was used only for language editing and figure conceptualization under the author’s supervision and did not influence the research design, analysis, or interpretation of results.

Abbreviations

The following abbreviations are used in this manuscript:
AI Artificial Intelligence
HRV Heart Rate Variability
GSR Galvanic Skin Response
EDA Electrodermal Activity
IQR Interquartile Range
CNNs Convolutional Neural Networks
LSTM Long Short-Term Memory
GDPR General Data Protection Regulation

Figure 1. AI-based model for identifying the potential association between emotional abnormalities and physiological illnesses in preschool children.
Table 1. Expert Evaluation of the Importance and Feasibility of Recognition Indicators.
| Code | Indicator | Description | Mean Importance | Mean Feasibility | Consensus |
|------|-----------|-------------|-----------------|------------------|-----------|
| A1 | Facial expression changes | Emotional cues: eyebrows, mouth corners, gaze | 4.78 | 4.56 | Yes |
| A2 | Speech prosody changes | Emotional tone, rate, pitch variation | 4.56 | 4.44 | Yes |
| A3 | Activity level abnormalities | Restlessness, reduced movement | 4.33 | 4.22 | Yes |
| B1 | HRV | Stress and disease risk predictor | 4.67 | 4.78 | Yes |
| B2 | GSR | Indicators of tension or pain | 4.44 | 4.33 | Yes |
| B3 | Temperature anomalies | Early fever detection | 4.78 | 4.89 | Yes |
| B4 | Respiratory rhythm changes | Rapid or shallow breathing | 4.33 | 4.00 | Yes |
| A4 | Changes in gaze behavior | Avoidance of eye contact | 3.89 | 3.78 | No |
| A5 | Speech regression | Reduced speech or phrase reversal | 3.67 | 3.56 | No |
| B5 | Sleep quality variation | Night waking or prolonged sleep onset | 3.89 | 3.78 | No |
| A6 | Eating behavior changes | Appetite loss or picky eating | 3.22 | 3.00 | No |
| A7 | Abnormal nap-time twitching | Physiological excitability | 3.11 | 2.89 | No |
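The consensus column in Table 1 is consistent with a simple decision rule: an indicator is retained when both its mean importance and mean feasibility reach 4.0. This threshold is inferred from the table (e.g., B4 at 4.33/4.00 is "Yes" while A4 at 3.89/3.78 is "No"), not stated explicitly by the study; the sketch below only reproduces the table's flags under that assumption.

```python
# Hypothetical sketch reproducing Table 1's consensus flags under the
# assumed rule: mean importance AND mean feasibility >= 4.0.
CONSENSUS_THRESHOLD = 4.0

# (mean importance, mean feasibility) per indicator code, from Table 1
indicators = {
    "A1": (4.78, 4.56), "A2": (4.56, 4.44), "A3": (4.33, 4.22),
    "B1": (4.67, 4.78), "B2": (4.44, 4.33), "B3": (4.78, 4.89),
    "B4": (4.33, 4.00), "A4": (3.89, 3.78), "A5": (3.67, 3.56),
    "B5": (3.89, 3.78), "A6": (3.22, 3.00), "A7": (3.11, 2.89),
}

def reaches_consensus(importance: float, feasibility: float) -> bool:
    """Both means must meet or exceed the assumed 4.0 cut-off."""
    return importance >= CONSENSUS_THRESHOLD and feasibility >= CONSENSUS_THRESHOLD

retained = [code for code, (imp, fea) in indicators.items()
            if reaches_consensus(imp, fea)]
print(retained)  # the seven indicators flagged "Yes" in Table 1
```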
Table 2. Expert Consensus on the Overall Applicability of Recognition Indicators.
| Component | Expert Consensus¹ |
|-----------|-------------------|
| Integration of image (facial + activity), audio, and wearable data analysis | 100% |
| Priority use of noninvasive wearable devices | 89% |
| Presentation of prediction results via color-coded alerts (red–yellow–green) | 89% |
| Synchronization with preschool daily health records | 100% |

¹ Percentage of experts assigning the item a rating of 5.
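The agreement figures in Table 2 are consistent with the share of the nine Delphi experts who gave an item the top rating of 5, rounded to a whole percentage (8 of 9 yields the reported 89%). The function name and the illustrative per-expert ratings below are assumptions; only the panel size and the statistic's definition follow the study.

```python
# Hypothetical sketch of the agreement statistic behind Table 2: the share
# of the nine experts who rated an item 5, as a rounded whole percentage.
N_EXPERTS = 9

def percent_rating_five(ratings: list[int]) -> int:
    """Percentage of experts giving the top rating of 5, rounded."""
    return round(100 * sum(1 for r in ratings if r == 5) / len(ratings))

# Illustrative rating vectors: 8/9 experts at 5 gives the reported 89%,
# and unanimous top ratings give 100%.
print(percent_rating_five([5] * 8 + [4]))  # -> 89
print(percent_rating_five([5] * 9))        # -> 100
```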
Table 4. Output and Feedback Mechanism of the AI Recognition System.
| Score Range | Display Color | Interpretation |
|-------------|---------------|----------------|
| 0.0–0.3 | Green | Normal condition; continue observation |
| 0.31–0.7 | Yellow | Suspected emotional or physiological abnormality; requires attention |
| 0.71–1.0 | Red | High-risk abnormality; immediate parental notification and medical evaluation recommended |
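The color-coded feedback mechanism in Table 4 reduces to a small threshold function. The score bands (0.0–0.3, 0.31–0.7, 0.71–1.0) follow the table; the function name and the handling of out-of-range scores are assumptions for the sketch.

```python
# Minimal sketch of Table 4's score-to-alert mapping. Bands follow the
# table; name and out-of-range handling are illustrative assumptions.
def alert_level(risk_score: float) -> str:
    """Map a normalized risk score in [0.0, 1.0] to a traffic-light alert."""
    if not 0.0 <= risk_score <= 1.0:
        raise ValueError("risk score must lie in [0.0, 1.0]")
    if risk_score <= 0.3:
        return "green"   # normal condition; continue observation
    if risk_score <= 0.7:
        return "yellow"  # suspected abnormality; requires attention
    return "red"         # high risk; notify parents, recommend evaluation

print(alert_level(0.12))  # -> green
print(alert_level(0.55))  # -> yellow
print(alert_level(0.85))  # -> red
```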
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.