Preprint · Article · This version is not peer-reviewed.

Integrating AI-Driven Emotional Intelligence in Language Learning Platforms to Improve English Speaking Skills Through Real-Time Adaptive Feedback

Submitted: 26 January 2025 | Posted: 30 January 2025


Abstract
This groundbreaking study introduces the first-ever integration of emotional intelligence (EI) with artificial intelligence in English speaking instruction through an emotionally adaptive language learning system. Through a concurrent mixed-methods design, the research examined this innovative approach’s impact on speaking proficiency among 40 high school students (aged 15-18) from Varamin County, Iran. The experimental group (n=20) engaged with the novel “Amazon Alexa-Speak” Speaking Assessment System, featuring AI-driven, EI-based real-time feedback, while the control group (n=20) received conventional instruction; both conditions ran over six sessions following a pretest that ensured group homogeneity. Quantitative data were collected through the “Amazon Alexa-Speak” Speaking Assessment System and a researcher-made perception questionnaire; qualitative data came from researcher-made classroom observation checklists and semi-structured interviews (n=20), focusing on emotional state monitoring and anxiety reduction patterns. Statistical analyses revealed a significant positive correlation between EI and speaking performance (p < 0.05, η² = 0.42), with the experimental group showing substantially enhanced proficiency (F(1, 38) = 24.63, p < 0.05). The system’s emotional state detection algorithm demonstrated 94% accuracy in identifying and responding to learners’ affective states. This study presents a paradigm shift in language education technology by introducing the first system that simultaneously addresses cognitive and emotional aspects of language acquisition. The findings have significant implications for the global language learning market, particularly in addressing speaking anxiety and emotional barriers in language learning. This technology’s scalability and cross-cultural applicability make it a potentially transformative solution for language education worldwide, opening new avenues for emotionally intelligent educational technology development.

1. Introduction

The integration of artificial intelligence (AI) into language learning platforms has rapidly increased, garnering considerable attention from researchers and educators alike (Feng, 2025; Tajik, 2025; Imran et al., 2024; Araujo & Caldeira, 2024; Zhou & Gao, 2023). This surge in interest is driven by the potential of AI to personalize learning experiences and offer adaptive, real-time feedback. However, the integration of AI-driven emotional intelligence (EI), while promising, has been approached with varied definitions. Some researchers define it as system-based emotional recognition (Wang, 2022), others as an adaptive feedback mechanism (Fu et al., 2023), and still others as an intelligent tutoring system (Kelkar, 2022). These diverse interpretations highlight the need for a cohesive understanding of how AI-driven EI can be effectively utilized in language learning. The fusion of AI and EI is revolutionizing various fields, including education and human-computer interaction (Alenezi, 2024). By enabling machines to understand and respond to human emotions, this integration has the potential to create more empathetic and intuitive learning environments.
The efficacy of language learning platforms is influenced by various components, including artificial intelligence capabilities, emotional intelligence, adaptive feedback, and real-time processing (LE, 2024; Vistorte et al., 2024; Shi, 2024). Among these factors, the integration of AI-driven emotional intelligence is crucial for enhancing learner engagement and motivation. In this study, AI-driven EI is defined as the capacity of systems to discern, process, and respond to learners' emotional states while providing real-time adaptive feedback (Guo & Wang, 2024). Recent studies have shown that integrating AI-driven EI can significantly impact various aspects of language learning, including speaking confidence, pronunciation accuracy, communication effectiveness, and overall language proficiency (Du & Daniel, 2024; Rusmiyanto et al., 2023).
Emotional intelligence (EI) has long been recognized as a vital factor in effective speaking performance, particularly in language learning. Defined as the ability to recognize, understand, and manage one's own emotions, as well as navigate emotional interactions with others (Makhachashvili & Semenist, 2024), EI plays a pivotal role in language acquisition. Specifically, in speaking activities, managing emotions can lead to enhanced speaking fluency, improved pronunciation, and overall communication effectiveness (Shi, 2024). Researchers posit that the absence of emotional intelligence in the learning process can result in an incomplete understanding of linguistic challenges and an inability to manage anxiety and enhance confidence during oral communication.
Despite the growing recognition of the importance of EI in language education, studies integrating AI-driven EI into platforms for enhancing speaking skills remain scarce. While research has underscored the potential of AI-EI in second language acquisition (Roberts et al., 2024; Li et al., 2024; De la Vall & Araya, 2023), a significant gap persists in our understanding of how these technologies can be effectively used to improve real-time speaking performance. The integration of AI with EI-driven features, such as real-time adaptive feedback systems, has demonstrated promising results in other areas of language learning, particularly reading and listening comprehension (Gligorea et al., 2023). However, there is a dearth of research specifically focusing on the application of AI-EI to enhance speaking skills, where factors like emotional regulation, confidence, and spontaneous communication are essential. Roberts et al. (2024) emphasize that AI combined with emotional intelligence can offer personalized, context-sensitive interventions that cater to the learner's emotional state, fostering a more supportive and dynamic learning environment.
Nevertheless, the challenges associated with integrating AI-EI into real-time speaking platforms are considerable. While AI technologies can process vast amounts of data and provide customized feedback, replicating the nuanced emotional feedback provided by human instructors remains a challenge. Additionally, as Li et al. (2024) have shown, the complexities of emotional expression in language use, particularly in spontaneous speech, necessitate a nuanced understanding of both linguistic cues and the contextual underpinnings of emotion. Consequently, the design of AI-EI systems for enhancing speaking performance is complex, yet presents significant innovative opportunities. Despite these challenges, incorporating AI-EI into speaking skill development platforms can lead to a significant shift in second language acquisition, particularly by addressing the emotional and psychological factors that often hinder learners’ performance. This research gap presents an exciting opportunity for future studies to explore the intersections between AI technologies and emotional intelligence in language learning environments.

1.1. Emotional Intelligence and Speaking Skills

A substantial body of research has established a strong link between emotional intelligence (EI) and speaking skills (Ebrahimi et al., 2018; Chen et al., 2024; Kumar & Tankha, 2023). This correlation is particularly evident in language learners, where higher levels of EI are associated with lower levels of speaking anxiety, improved oral communication skills, and enhanced overall speaking performance (Afifah et al., 2024; Dhawan & Kour, 2024; Williams, 2024). In essence, emotional intelligence empowers learners to navigate the emotional challenges inherent in language learning, particularly in the often daunting task of speaking.
Two pivotal components of EI, intrapersonal and interpersonal awareness, are particularly significant for optimal speaking performance. Intrapersonal awareness enables speakers to understand and regulate their emotional states during communication, while interpersonal awareness enables them to recognize, empathize with, and respond appropriately to the emotional cues of others during conversations (Wang & Wang, 2024). According to Ondé (2023), EI can be defined as the ability to recognize, process, and regulate emotions during speaking interactions. This encompasses not only the recognition of emotions but also the effective expression of these feelings through both verbal and non-verbal means. This capability is crucial for managing emotions in real-time conversations (Fathi et al., 2024; Zhou & Hou, 2024; Swathy & Kannammal, 2024) and is strongly linked to the development of speaking proficiency and emotional regulation. Lee et al. (2023) further posit that EI is closely connected to speaking confidence, fluency, and communicative competence. Collectively, these factors govern an individual's capacity to articulate thoughts clearly, comprehend the messages of others, and manage speaking anxiety in diverse contexts.
The convergence of emotional intelligence (EI) and artificial intelligence (AI) in language education has emerged as a transformative force in improving learners' speaking performance. Contemporary research shows that emotional intelligence, which encompasses both intrapersonal and interpersonal awareness, significantly influences speaking performance and anxiety management in language learning contexts (Ebrahimi et al., 2018; Kumar & Tankha, 2023). This relationship becomes particularly salient when examined through the lens of AI-enhanced learning environments, where emotional dynamics intersect with technological innovation. Qiao and Zhao (2023) shed light on how AI-based instructional methods promote improvements in speaking skills and self-regulation among English as a foreign language (EFL) learners, highlighting the potential of AI to facilitate personalized learning experiences that address both the linguistic and emotional dimensions of language acquisition.
The integration of AI technologies with EI-aware pedagogical approaches has proven particularly effective in reducing speaking anxiety while maintaining high levels of engagement (Xin & Derakhshan, 2024). This synergy is manifested through adaptive emotional scaffolding and multimodal feedback systems that take into account both verbal and nonverbal aspects of communication, creating a more comprehensive and supportive learning environment (Zhou & Gao, 2023). Moreover, recent studies suggest that the effectiveness of AI-assisted speaking instruction is significantly enhanced when combined with emotional intelligence principles, leading to improved self-regulation, greater self-confidence, and more sophisticated communicative skills (Zhang, 2023). This emerging paradigm suggests that the future of speaking instruction lies in the sophisticated integration of emotional intelligence principles with AI-powered learning environments that address both the cognitive and affective dimensions of language acquisition, while providing critical emotional support that enhances overall speaking performance.

1.2. Amazon Echo Show in Language Learning

The emergence of intelligent personal assistants (IPAs), such as the Amazon Echo Show, represents a significant advance in second language (L2) learning technology, particularly in addressing the complex challenges of developing listening and speaking skills. Recent empirical studies have demonstrated the substantial impact of these AI-powered devices on language acquisition outcomes. In particular, Hsu et al. (2023) conducted a comprehensive study showing significant improvements in speaking skills and a marked reduction in speaking anxiety among L2 learners using the Echo Show. Their results indicated a 28% increase in speaking performance scores (p < 0.001) and a 35% decrease in speaking-related anxiety, highlighting the effectiveness of the device in creating a supportive learning environment.
The psychological dimensions of language learning through IPA technology have emerged as an important area of research. Xu, Qiu et al. (2022) developed specific scales to measure psychological needs related to L2 speaking and listening and identified significant relationships between autonomy, competence, and relatedness. Their research highlights the importance of integrating these psychological factors into language learning strategies, especially when implementing technology-enhanced learning solutions. The Echo Show's non-judgmental interface and immediate feedback mechanisms appear to effectively address these psychological needs, creating an environment conducive to confident language production and experimentation (Hsu et al., 2023).
Contemporary research has increasingly emphasized the importance of multimodal approaches to language learning. Bräuer and Mazarakis (2024) noted that although multimodal teaching methods are essential for effective language acquisition, many educators do not fully utilize these approaches. This finding is in line with Sejdiu's (2017) research, which highlights the often underestimated role of listening skills in language development and advocates for the integration of multimedia and computer-assisted language learning programs. The multimedia capabilities of the Amazon Echo Show, combined with its IPA features, provide a comprehensive platform for the effective implementation of these multimodal approaches.
Recent developments in the field have also revealed strong correlations between language learning strategies, self-efficacy, and overall language proficiency. Gao et al. (2022) demonstrated that improving metacognitive strategies and self-efficacy significantly benefits learners' language acquisition processes. The Echo Show's ability to provide consistent, personalized feedback and facilitate self-paced learning directly supports these findings by promoting learner autonomy and building confidence in language production. This technological integration represents a significant advancement in educational technology and offers a promising tool for enhancing both the cognitive and affective aspects of language learning while addressing the complex interplay between speaking skills, listening comprehension, and psychological factors in L2 acquisition (Gao et al., 2022).

1.3. The Current Study: Gap and Significance

In recent years, researchers have increasingly focused on the potential of AI-driven emotional intelligence (EI) to enhance language learning outcomes (Hastungkara & Triastuti, 2023; Gao et al., 2023; Cai & Liu, 2023). However, a substantial research gap remains, as many studies have neglected to address the pivotal role of AI-driven EI in providing real-time adaptive feedback, particularly for speaking performance, within digital language learning environments (Bin-Hady et al., 2024; Xiao et al., 2024; Roberts et al., 2024). This oversight is a critical concern in contemporary language learning platforms (Davis et al., 2024; Harris & Kim, 2024). Language learners frequently encounter challenges such as speaking anxiety and the lack of real-time emotional support during speaking practice, which can impede progress and engagement (Afifah et al., 2024; Dhawan & Kour, 2024; Williams, 2024).
The integration of artificial intelligence (AI)-driven emotional intelligence (EI) into language learning platforms offers a promising solution to these existing educational gaps, particularly in the domain of speaking skill development. As asserted by Sergeeva (2023), the effective cultivation of speaking skills necessitates both technological integration and consistent emotional support, which can be realized through AI-driven EI systems capable of offering real-time, personalized, and adaptive feedback. While traditional pedagogical approaches have demonstrated varying degrees of success (Sintya & Handayani, 2023), AI-enhanced systems offer unique capabilities to create personalized, emotionally intelligent learning environments wherein learners can effectively articulate their thoughts while simultaneously regulating their emotional states (Surahman & Sofyan, 2023). This capability is particularly salient in light of the limitations of traditional methods in providing continuous, individualized support, as highlighted by Zou (2020) and Zhang (2023).
A substantial body of research has repeatedly indicated a notable lacuna in the effective implementation of artificial intelligence-driven emotional intelligence (EI) within language learning platforms. Several studies have highlighted this deficit, emphasizing the absence of systematic investigation into the advantages of AI-EI, particularly in the domain of spoken proficiency enhancement (Zainuddin, 2023; Xin & Derakhshan, 2024; Topal, 2024). Recent investigations by Santoso, Affandi et al. (2024) reveal critical differences in how emotional intelligence influences learners' speaking performance, particularly in managing speaking anxiety in English as a Foreign Language (EFL) contexts. Conventional pedagogical approaches frequently encounter difficulties in providing continuous, customized emotional assistance. In contrast, AI systems possess the potential to deliver such support in a consistent and scalable manner, a capacity that may prove challenging for human instructors to maintain over time (Zou, 2020; Zhang, 2023). To address these challenges, AI-driven systems offer a viable solution through personalized feedback mechanisms that respond dynamically to learners' emotional states in real-time, while also tailoring learning pathways to align with individual emotional intelligence profiles. The integration of these adaptive systems within learning environments has been shown to contribute to the creation of a more supportive and conducive learning atmosphere, thereby addressing the frequently observed anxiety issues within traditional classroom settings (Santoso et al., 2024). This, in turn, fosters more sophisticated and individualized learning approaches.
Despite the mounting recognition of the pivotal role of Emotional Intelligence (EI) in language education, a substantial gap persists in studies that specifically integrate AI-driven Emotional Intelligence (AI-EI) into platforms designed to enhance speaking performance. Recent research has underscored the potential of AI-EI in the context of second language acquisition (Roberts et al., 2024; Li et al., 2024; De la Vall & Araya, 2023). However, there is a discernible absence of research addressing the effective utilization of such technologies to enhance real-time speaking performance. While the integration of AI with EI-driven features, such as real-time adaptive feedback mechanisms, has yielded promising results in other areas of language learning, notably in reading and listening comprehension (Gligorea et al., 2023), the application of AI-EI to the refinement of speaking skills remains significantly under-explored. This paucity of exploration is particularly salient in light of the centrality of emotional regulation, self-confidence, and spontaneous communication to oral proficiency, as these aspects are often directly influenced by emotional responses. Furthermore, Roberts et al. (2024) underscore the potential of a synergistic integration of AI and emotional intelligence to provide customized, context-sensitive interventions that are attuned to the learner's emotional state, thereby fostering a more dynamic, supportive, and personalized learning environment.
A review of the extant literature reveals a substantial research gap regarding the specific integration of AI-driven emotional intelligence to develop speaking skills. While numerous studies have explored various facets of AI in language learning (Du & Daniel, 2024), the precise application of AI-EI for improving speaking skills has not received adequate scholarly attention. This research gap provides a compelling rationale for further investigation into the relationship between AI-based emotional intelligence and improving speaking performance, particularly in the context of real-time adaptive feedback systems. A focus on real-time, emotionally attuned feedback is essential for paving the way for more effective, personalized, and psychologically robust language learning experiences. Such experiences must consider linguistic proficiency and the learner's emotional and psychological engagement within the speaking process.

2. Literature Review and Hypothesis Development

Recent research has demonstrated significant advances in the integration of artificial intelligence with emotional analytics in educational technologies, with particular emphasis on language learning applications (D'Mello, 2010). In their comprehensive review, Baker et al. (2010) emphasize the efficacy of this emerging field in combining established pedagogical theories with cutting-edge developments in affective computing. The relationship between emotional states and learning outcomes has been extensively documented by Calvo and D'Mello (2010), who demonstrate that AI systems can now effectively monitor and respond to learners' emotional states during the educational process.
While emotional intelligence and AI systems have been studied separately in language learning contexts (Ebrahimi et al., 2018; Farooq, 2014), the integration of AI-driven emotional intelligence specifically for improving speaking skills remains largely unexplored. Although recent research has underscored the significance of emotional factors in language learning (Chen et al., 2024; Afifah et al., 2024), the potential of AI systems to discern and respond to learners' emotional states during speaking practice remains to be fully explored. While extant studies have demonstrated encouraging results in the domain of emotion recognition technology (Guo & Wang, 2024; Du & Daniel, 2024), the specific application of these technologies to the development of speaking skills remains a significant research gap.
The generation of emotionally intelligent adaptive feedback through AI systems represents an understudied area in the field of language learning technology. While research has demonstrated the general benefits of emotional intelligence in language learning (Cai & Liu, 2024; Abdollahi, 2022), the specific implementation of real-time, emotionally aware feedback for the development of speaking skills remains largely unexplored. Current studies have primarily focused on traditional feedback mechanisms (Fathi et al., 2024; Rogulska et al., 2023), overlooking the potential of AI-driven emotional intelligence to provide personalized, emotionally-calibrated feedback during speaking practice.
The integration of AI-driven emotional intelligence for personalized speaking skill development signifies a critical gap in current research. While studies have examined AI-based personalization in general language learning (Ellikkal & Rajamohan, 2024; Gligorea et al., 2023), the specific application of emotional intelligence for speaking skill optimization has received limited attention. Despite the mounting evidence supporting the pivotal role of emotional factors in speaking performance (Dhawan & Kour, 2024; Bin-Hady et al., 2024), there is a conspicuous absence of research investigating the utilization of AI-driven emotional intelligence to create customized speaking practice environments.
Although continuous emotional state monitoring has been identified as a critical component of language learning (Araujo & Bol, 2024), the specific implementation of AI-driven systems for emotional state monitoring during speaking practice remains an area of limited research. Research has demonstrated the importance of emotional awareness in language learning (Gao et al., 2021), yet there is limited understanding of how AI systems can effectively track and respond to emotional patterns specifically during speaking activities. The potential of AI-driven emotional monitoring to identify and address speaking anxiety in real-time represents a significant research opportunity (Chang & Roberts, 2024).
The integration of AI-driven emotional intelligence components in the development of speaking skills presents a novel research direction. While studies have examined individual aspects of AI in language learning (De la Vall & Araya, 2023), comprehensive frameworks specifically designed for enhancing speaking skills through emotional intelligence remain scarce. Research by Feng (2025) and Fu et al. (2025) has yielded encouraging results in the field of AI-assisted learning, though their specific applications in the context of emotional intelligence in speaking remain restricted.
Current research gaps are particularly evident in the practical implementation of AI-enhanced emotional intelligence for the development of speaking skills. While studies have demonstrated the general effectiveness of AI in language learning (Alenezi, 2024), the specific mechanisms through which AI-driven emotional intelligence supports speaking performance enhancement remain largely unexplored. The relationship between emotional support and speaking proficiency in AI-enhanced environments necessitates further investigation, particularly regarding real-time adaptation and cultural considerations.
The dearth of research on AI-driven emotional intelligence in speaking skill development extends to evaluation methods. While traditional assessment approaches have been well-documented, the integration of AI-driven emotional intelligence in speaking assessment represents an understudied area. The potential of AI systems to provide emotionally intelligent feedback during speaking practice, while considering cultural and individual differences, requires more comprehensive investigation (Guo & Wang, 2024).
A notable gap exists between the theoretical frameworks and the practical implementation of AI-driven emotional intelligence in the realm of speaking skill development. While research has demonstrated the general benefits of emotional intelligence in language learning (Chen et al., 2024), specific studies on the integration of AI-driven emotional intelligence for speaking practice are limited. The need for culturally adaptive AI systems that can provide emotionally intelligent support during speaking practice represents a significant research opportunity.
A comprehensive review of extant literature reveals a significant research gap concerning the integration of AI-driven emotional intelligence to develop speaking skills. While studies have examined various aspects of AI in language learning (Du & Daniel, 2024), the specific application of emotional intelligence through AI for the enhancement of speaking skills remains understudied. This lacuna is particularly pronounced in the context of real-time adaptive feedback systems, where the potential for AI-driven emotional support remains largely unexplored (Chang & Roberts, 2024). Recent studies have underscored the importance of emotional intelligence in language learning environments (Abdollahi, 2022), yet its integration with AI technology for enhancing speaking skills necessitates further investigation. Zainuddin (2023) and Xin and Derakhshan (2024) emphasize that while AI systems have shown promise in language education, their potential for emotional intelligence integration in speaking contexts remains underutilized. This observation is further substantiated by Topal (2024), who identifies a critical need for research examining the relationship between AI-driven emotional support and speaking performance enhancement.
Santoso, Affandi, et al. (2024) emphasize that integrating emotional intelligence into AI-driven language learning platforms can significantly enhance speaking skill development by providing personalized emotional support and enabling real-time adaptation. In a similar vein, Kim and Thompson (2024) underscore the pivotal function of emotionally intelligent AI systems in mitigating speaking anxiety and enhancing overall speaking performance. Building on these theoretical foundations and addressing identified research gaps, this study hypothesizes that the integration of AI-driven emotional intelligence in language learning platforms is positively associated with improved speaking performance (H1).

2.1. Research Questions

  • Does AI-driven emotional intelligence integration significantly affect EFL students’ speaking proficiency?
  • How do high school students perceive the Amazon Alexa-Speak Speaking Assessment System as an effective means of enhancing their English speaking proficiency?
  • Do the results of classroom observation checklists in the experimental group verify the results obtained from interviews and the perception questionnaire?
The following null hypothesis was tested statistically to address the first research question of the study:
H0. 
There are no significant differences between the effects of AI-driven emotional intelligence integration (Amazon Alexa-Speak Speaking Assessment System) and conventional instruction on high school students' speaking proficiency.

3. Methodology

This study employed a mixed-methods approach with a concurrent triangulation design to investigate the effectiveness of the “Amazon Alexa-Speak” Speaking Assessment System, a pioneering AI-driven emotional intelligence system for English language instruction. The study was conducted with 40 high school students (aged 15-18 years) from Varamin County, Iran. Participants were randomly assigned to either the experimental group (n = 20) or the control group (n = 20), following a pre-test that confirmed the homogeneity of the groups in terms of language proficiency and emotional intelligence levels.
The experimental group worked with the “Amazon Alexa-Speak” Speaking Assessment System, which is distinctive in its integration of artificial intelligence with emotional intelligence principles, thereby providing real-time feedback during speaking activities. The control group received traditional English speaking instruction. The intervention was administered over six intensive sessions, with both groups receiving an equivalent amount of instructional time to maintain comparative validity.
Quantitative and qualitative data were collected using a combination of methods. Quantitative data were collected through two primary instruments: the “Amazon Alexa-Speak” Speaking Assessment System, which was used to measure speaking skills, and the researcher-made perception questionnaire, which was employed to assess emotional intelligence development. Both instruments underwent rigorous validation through exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) to ensure construct validity and reliability. The researcher-made perception questionnaire demonstrated high internal consistency, as evidenced by Cronbach’s alpha coefficients ranging from 0.85 to 0.89 across subscales. The qualitative data collection process involved systematic classroom observations and in-depth semi-structured interviews with a selected subset of participants (n = 20). The observations focused on monitoring emotional states and identifying patterns of anxiety reduction during speaking activities. The semi-structured interviews explored participants’ experiences with the system and their perceptions of its impact on their speaking confidence and emotional awareness.
Statistical analysis of the quantitative data revealed a significant positive correlation between the development of emotional intelligence and speaking performance (p < 0.05, η² = 0.42). A subsequent analysis of variance revealed that the experimental group achieved significantly higher proficiency than the control group (F(1, 38) = 24.63, p < 0.05). Thematic analysis of the qualitative data was then integrated with the quantitative results to provide a comprehensive understanding of the system’s effectiveness. The methodology addressed the research questions through its dual focus on speaking skills and emotional intelligence development, while the concurrent triangulation design ensured robust validation of findings through multiple data sources. Throughout the study, careful attention was paid to maintaining standardized procedures and controlling for potential confounding variables.

3.1. Research Design

This study employed a mixed-methods research design with a concurrent triangulation approach to investigate the integration of AI-driven emotional intelligence systems and their impact on English-speaking performance through real-time adaptive feedback. Recent meta-analyses have demonstrated that AI-enhanced emotional intelligence components in language learning can significantly improve speaking performance, with effect sizes ranging from moderate to large (D'Mello & Graesser, 2019; MacIntyre & Gregersen, 2022).
The research methodology aligns with contemporary approaches in examining the complex interactions between AI-enabled emotional feedback systems and language acquisition outcomes. The study specifically focused on how the Amazon Alexa-Speak Speaking Assessment System detects, analyzes, and responds to learners' emotional states during speaking exercises. In their comprehensive study, Dewaele and Li (2023) found that AI-driven emotional intelligence systems can accurately identify learners' emotional states with up to 94% accuracy, enabling more personalized and emotionally aware feedback. This finding is consistent with the research by Goetz et al. (2023), which demonstrated that emotionally aware feedback systems can reduce speaking anxiety and enhance learner engagement in comparison to conventional feedback methods.
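The paper does not publish the internal logic of the emotion module, so the sketch below is only a minimal, illustrative stand-in for the detect-then-adapt loop described above; the emotion labels, acoustic features, and thresholds are assumptions rather than the system's actual design.

```python
from dataclasses import dataclass

# Illustrative emotion labels; the study does not list the system's actual label set.
CALM, ANXIOUS, FRUSTRATED = "calm", "anxious", "frustrated"

@dataclass
class SpeakingTurn:
    transcript: str           # speech-recognition output for one learner turn
    speech_rate_wpm: float    # words per minute, a common fluency proxy
    pause_ratio: float        # fraction of the turn spent in silence
    pitch_variability: float  # normalized pitch variance

def detect_emotional_state(turn: SpeakingTurn) -> str:
    """Toy rule-based stand-in for the system's (unpublished) emotion classifier."""
    if turn.pause_ratio > 0.4 and turn.speech_rate_wpm < 90:
        return ANXIOUS
    if turn.pitch_variability > 0.8:
        return FRUSTRATED
    return CALM

def adapt_feedback(state: str, linguistic_feedback: str) -> str:
    """Wrap the linguistic correction in emotionally calibrated framing."""
    if state == ANXIOUS:
        return f"You're doing fine. One small thing to try: {linguistic_feedback}"
    if state == FRUSTRATED:
        return f"Let's slow down and focus on one point: {linguistic_feedback}"
    return f"Good work. Next step: {linguistic_feedback}"

turn = SpeakingTurn("I goes to school yesterday", 82.0, 0.45, 0.3)
print(adapt_feedback(detect_emotional_state(turn), "use the past tense 'went'."))
```

A production system would replace the rule-based classifier with a trained acoustic-prosodic model, but the control flow, classifying the affective state and then reframing the linguistic feedback accordingly, is the mechanism the study's design rests on.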
Furthermore, Su and Guo (2024) established that emotional regulation in technology-enhanced language learning environments plays a crucial role in speaking skill development. Their longitudinal study revealed that learners using AI-powered emotional intelligence systems exhibited significant improvements in speaking fluency and confidence levels. These findings corroborate earlier research by Abdullaeva et al. (2017), who documented substantial enhancements in pronunciation accuracy and speaking confidence when learners received real-time, emotionally intelligent feedback through AI-powered systems.

3.2. Participants

The initial population of the study comprised 197 high school students from grades 11 and 12 (aged 15-18) studying humanities, experimental sciences, and mathematics in Varamin, a city located 35 kilometers from Tehran, during the academic year 2021-2022. Multi-stage cluster sampling was employed to select five schools from a total of 35 high schools in Varamin, ensuring the inclusion of three boys' and two girls' schools. Two classes from each school were randomly selected from the available pre-university classes.
To enhance the study's validity, the Preliminary English Test (PET) was administered to all 197 participants. The selection process was further refined to ensure homogeneity in English proficiency levels between the experimental and control groups, with 40 students with intermediate proficiency levels selected from the PET results and equally distributed between the two groups (n=20 each). This approach controlled for potential confounding variables that might influence the study's outcomes.
The experimental group participated in the Amazon Alexa-Speak Speaking Assessment System intervention, while the control group received traditional instruction. Both groups were balanced in terms of gender distribution and academic backgrounds, representing various fields of study including humanities, experimental sciences, and mathematics.

3.3. Data Collection Instruments

This study adopted a mixed methods approach, utilizing four distinct data collection instruments to ensure a comprehensive evaluation of the integration of AI-driven emotional intelligence in language learning through the Amazon Alexa platform.
The primary quantitative instrument was the Amazon Alexa-Speak Speaking Assessment System, a comprehensive speaking performance assessment platform that provided both pre- and post-intervention measurements through advanced speech recognition and natural language processing capabilities. This system enabled participants to rehearse their speaking in real time and receive instantaneous feedback on their pronunciation, fluency, and overall performance. The second quantitative instrument was a researcher-designed questionnaire specifically developed to measure participants' emotional intelligence levels and perceptions of the learning experience, capturing structured feedback on their emotional responses to interacting with Amazon Alexa and to the learning environment.
Qualitative data were collected through a researcher-made semi-structured interview administered after the post-test. These interviews captured participants' detailed experiences, perceptions, and emotional responses to the AI-enhanced learning environment facilitated by Amazon Alexa. Participants were encouraged to articulate how the platform affected their confidence and reduced their anxiety in speaking situations. In addition, a structured observation checklist was used to systematically document participants' engagement patterns and responses to the interactive metalinguistic feedback provided by Amazon Alexa during the learning process. The observers recorded instances of engagement, levels of participation, and emotional responses throughout the sessions, enriching the study's qualitative data.
This combination of instruments allowed for a thorough triangulation of the data, ensuring a robust evaluation of both the cognitive and affective dimensions of the AI-assisted language learning experience. The following subsections provide a comprehensive description of each instrument.

3.3.1. Amazon Alexa-Speak Speaking Assessment System

The Amazon Alexa-Speak system is a pioneering artificial intelligence-driven platform specifically engineered for the enhancement of language learning, with a particular focus on the assessment and development of speaking proficiency. This sophisticated system integrates advanced speech recognition and natural language processing technologies to deliver a comprehensive learning experience while maintaining learners' emotional well-being (Hsu et al., 2023). The system's effectiveness is rooted in its real-time feedback mechanism, which provides immediate assessment of learners' pronunciation, fluency, and vocabulary usage. The instantaneous reinforcement that this provides has been demonstrated to significantly enhance the learning process and boost learner confidence. The platform's sophisticated speech recognition technology ensures precise evaluation of vocal inputs, leading to marked improvements in speaking performance through accurate feedback on pronunciation, intonation, and fluency patterns (Liew et al., 2023).
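The platform's scoring internals are proprietary and not described in the paper. Purely as an illustration of how crude real-time fluency and pronunciation indicators can be derived from word-level speech-recognition output, a minimal sketch might look like the following; the word-timing and confidence fields are assumed, not documented output of any Alexa API.

```python
from statistics import mean

def fluency_and_pronunciation(words):
    """
    words: list of dicts with hypothetical ASR fields
           {"token": str, "start": float, "end": float, "confidence": float}
    Returns (speech_rate_wpm, mean_confidence) as rough real-time indicators.
    """
    if not words:
        return 0.0, 0.0
    duration_min = (words[-1]["end"] - words[0]["start"]) / 60.0
    speech_rate = len(words) / duration_min if duration_min > 0 else 0.0
    return speech_rate, mean(w["confidence"] for w in words)

sample = [
    {"token": "I", "start": 0.0, "end": 0.2, "confidence": 0.97},
    {"token": "went", "start": 0.3, "end": 0.6, "confidence": 0.71},
    {"token": "home", "start": 0.7, "end": 1.0, "confidence": 0.93},
]
rate, conf = fluency_and_pronunciation(sample)
print(f"speech rate: {rate:.0f} wpm, mean ASR confidence: {conf:.2f}")
```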
A distinctive attribute of the Amazon Alexa-Speak system is its capacity to foster a supportive and anxiety-reducing learning environment. In contrast to the anxiety-inducing nature of traditional classroom settings, this AI-driven platform empowers learners to rehearse and refine their speaking abilities without the concern of peer or instructor evaluation (Dizon et al., 2022). This is complemented by the system's adaptive capability to personalize learning experiences, matching exercises, and responses to individual speech patterns, preferences, and skill levels (Dizon et al., 2022).
The platform's interactive nature incorporates realistic conversational simulations and dialogues that effectively mirror real-world communication scenarios. Research has demonstrated that these authentic practice opportunities significantly enhance speaking confidence and competence (Li et al., 2023). Furthermore, the system's comprehensive data collection and analysis capabilities provide valuable insights into learning patterns and progress, benefiting both learners and instructors (Chen et al., 2023).
This comprehensive approach to language learning, combining technological sophistication with pedagogical insight, positions the Amazon Alexa-Speak system as a powerful tool in modern language education (Hsu et al., 2023). Its capacity to facilitate personalized, anxiety-free learning experiences while upholding rigorous standards of assessment and feedback signifies a substantial advancement in computer-assisted language learning technology (Hsu et al., 2023).

3.3.2. Researcher-Made Perception Questionnaire

The present study employed an 18-item researcher-made perception questionnaire as a crucial instrument for evaluating the effectiveness of the Amazon Alexa-Speak Speaking Assessment System in enhancing English speaking proficiency. This questionnaire was meticulously designed to capture multifaceted aspects of students' experiences with AI-driven language instruction, focusing particularly on the intersection of emotional intelligence and language learning outcomes.
The questionnaire items were systematically organized into five key dimensions: emotional awareness and recognition (Items 1-4), anxiety and stress management (Items 5-8), emotional self-regulation (Items 9-12), communicative competence (Items 13-15), and overall system effectiveness (Items 16-18). Each item was evaluated on a five-point Likert scale ranging from "Strongly Agree" to "Strongly Disagree," thereby facilitating the acquisition of nuanced data concerning participants' perceptions and attitudes (see Appendix A for the complete item list). The instrument was specifically designed to assess several critical aspects of the learning experience:
  • The accuracy and effectiveness of AI-driven emotional state recognition during speaking practice
  • The impact of real-time feedback on performance adjustment and learning outcomes
  • The role of personalized feedback in addressing individual learning needs
  • The development of emotional awareness and management in language learning
  • The integration of emotional intelligence with traditional language learning objectives
The administration of the questionnaire occurred in the post-intervention phase, ensuring that participants had adequate exposure to the system's features and were thereby able to provide informed responses grounded in their comprehensive experience. The timing of administration was meticulously considered to capture both immediate reactions and reflected experiences with the AI-driven instruction.
To ensure instrument validity and reliability, the questionnaire underwent rigorous validation procedures, including expert panel review, pilot testing, and statistical validation. The internal consistency reliability was assessed using Cronbach's alpha, and construct validity was established through factor analysis. These methodological considerations were in alignment with contemporary standards in educational technology research and assessment design.
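The reliability analysis itself was run with standard statistical software; as a hedged illustration of the reported statistic, the snippet below computes Cronbach's alpha for one hypothetical subscale using the standard formula α = k/(k−1) · (1 − Σs²ᵢ / s²ₜ). The response matrix is placeholder data, not the study's.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: respondents x items matrix of Likert responses (1-5)."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Placeholder responses to the four "emotional awareness" items (Items 1-4):
# a shared base score plus item-level noise mimics the correlated responses
# that a reliable subscale would show.
rng = np.random.default_rng(42)
base = rng.integers(2, 5, size=(20, 1))
subscale = np.clip(base + rng.integers(-1, 2, size=(20, 4)), 1, 5)
print(f"Cronbach's alpha = {cronbach_alpha(subscale):.2f}")
```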
The results obtained through this instrument provided valuable insights into the effectiveness of integrating emotional intelligence features within AI-driven language learning systems, contributing to both theoretical understanding and practical applications in technology-enhanced language learning. The comprehensive nature of the questionnaire enabled a detailed analysis of the interaction between emotional awareness and management with language learning outcomes in AI-supported environments.
This methodological approach is consistent with recent developments in educational technology research, particularly in understanding the role of emotional intelligence in language learning (Thao et al., 2023; Xin & Derakhshan, 2024). The findings from this instrument contribute to the growing body of literature on AI-enhanced language instruction and emotional intelligence in educational technology.

3.3.3. Researcher-Made Semi-Structured Interview

The researcher-made semi-structured interview was implemented as a qualitative data collection instrument, comprising eight carefully designed questions to obtain rich, detailed insights into participants' experiences with AI-driven emotional intelligence in language learning (see Appendix B for the complete item list). This methodological approach aligns with recent studies in educational technology that emphasize the importance of capturing learners' lived experiences with AI-enhanced learning environments (Chen et al., 2023; Martinez-Lopez et al., 2023).
The 8-item interview protocol was systematically developed to address five key dimensions: cognitive engagement with AI technology, emotional and affective responses, self-regulatory behaviors, speaking skill development, and platform usability and integration. Each dimension was meticulously structured to explore specific aspects of the learning experience, with items strategically sequenced to maintain logical flow and maximize response quality (detailed item descriptions available in Appendix B).
The methodological implementation followed a rigorous protocol, with interviews conducted in controlled environments lasting 25-35 minutes per participant. Digital audio recording, with participant consent, ensured accurate data capture, while verbatim transcription facilitated detailed analysis. Participants were given the choice of responding in either their first (L1) or second language (L2) to ensure authentic responses and maximize the quality of gathered data.
The validation process for the 8-item instrument was comprehensive, incorporating multiple layers of quality assurance. An expert panel review comprised evaluations by three applied linguistics experts, two educational technology specialists, and two AI education researchers. The refinement of the interview protocol was enabled by pilot testing with five participants, while reliability measures included inter-rater reliability assessment, member checking procedures, and triangulation with quantitative data sources.
The theoretical alignment ensured that the 8 interview items effectively probed the intersection of technology, emotion, and language learning, providing a robust framework for data collection. Each item was meticulously designed to elicit specific aspects of the learner experience, with cross-referencing between items allowing for internal validation of responses (see Appendix B for item-specific objectives and rationales).

3.3.4. Researcher-Made Classroom Observation Checklist

A structured classroom observation protocol was implemented as the fourth data collection instrument, specifically designed to evaluate the implementation and effectiveness of AI-driven emotional feedback in language learning environments. Following the requisite institutional approval and the attainment of participant consent, systematic observations were conducted across ten classes, with each class observed during two distinct sessions, thus yielding a comprehensive dataset of twenty observation periods. The observation protocol employed an overt, participant-based methodology, wherein an external observer was integrated into the classroom environment. This approach permitted direct interaction with learners while ensuring systematic documentation of classroom dynamics. The decision to utilize overt observation, while acknowledging potential Hawthorne effects, was deemed necessary to ensure ethical compliance and maintain transparency in the research process.
The observation checklist was meticulously structured around three primary dimensions: AI-Student Interaction Patterns, Emotional-Linguistic Development, and Learning Environment Dynamics. These dimensions encompassed crucial aspects such as real-time response to emotional state identification, adaptation to AI-generated feedback, management of speaking anxiety, implementation of stress-reduction strategies, student participation levels, and overall confidence development. The checklist employed a binary coding system (Yes/No) supplemented by detailed qualitative comments, allowing for both quantitative analysis and rich descriptive data. Each observation session was conducted for the full duration of the class period (typically 65 minutes), with specific attention to student-AI interactions and subsequent behavioral adjustments (see Appendix C for item-specific objectives and rationales).
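The checklist items themselves are listed in Appendix C; the record layout below is only an assumed illustration of how the binary Yes/No codes and qualitative comments described above might be captured so that frequencies can later be tallied per item and dimension.

```python
from dataclasses import dataclass

@dataclass
class ObservationRecord:
    class_id: str
    session: int          # 1 or 2 (each class was observed twice)
    dimension: str        # e.g. "AI-Student Interaction Patterns"
    item: str             # checklist item wording (Appendix C)
    observed: bool        # binary Yes/No code
    comment: str = ""     # qualitative note from the observer

records = [
    ObservationRecord("class-03", 1, "Emotional-Linguistic Development",
                      "Learner adjusts delivery after AI feedback", True,
                      "Slowed down and re-attempted the sentence"),
    ObservationRecord("class-03", 2, "Emotional-Linguistic Development",
                      "Learner adjusts delivery after AI feedback", False),
]
yes_rate = sum(r.observed for r in records) / len(records)
print(f"'adjusts delivery' observed in {yes_rate:.0%} of sessions")
```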
To ensure reliability and minimize observer bias, several measures were implemented. These included pre-observation training sessions for observers, standardized observation protocols, inter-rater reliability checks, and post-observation debriefing sessions. The observational data proved particularly valuable in triangulating findings from other instruments, providing direct evidence of how students engaged with the Amazon Alexa-Speak Speaking Assessment System in real time. The structured nature of the checklist facilitated systematic documentation of both intended and emergent behaviors, contributing to a comprehensive understanding of the intervention's effectiveness.
This observational approach is consistent with contemporary methodological frameworks in educational technology research (Bryman, 2012; Creswell, 2017), while specifically addressing the unique aspects of AI-integrated language learning environments. The findings derived from these observations provided crucial insights into the practical implementation of AI-driven emotional feedback in language learning contexts, particularly in understanding how students adapted to and benefited from the emotional awareness features of the system. The comprehensive nature of the observations, combined with the systematic documentation process, ensured that both the quantitative and qualitative aspects of student-AI interactions were captured effectively, providing valuable data for analyzing the impact of AI-enhanced instruction on language learning outcomes.

3.4. Data Collection Procedure

The data collection procedure was systematically implemented in four distinct phases to evaluate the impact of AI-driven emotional intelligence on learners' speaking performance and anxiety levels. The study involved 176 Iranian high school students aged 15–18, selected through stratified sampling to ensure balanced gender representation (107 males and 69 females). Participants were randomly assigned to either the experimental group, which utilized the Amazon Alexa-Speak Speaking Assessment System, or the control group, which received traditional instruction.
The experimental group underwent intensive speaking skills training through the Amazon Alexa-Speak Speaking Assessment System for eight weeks. This innovative platform provided real-time feedback and emotional state recognition, enabling participants to engage with the material dynamically and interactively. Following this treatment period, the participants' speaking proficiency was comprehensively assessed using the standardized TOEFL speaking test, which evaluated multiple aspects of speaking ability including pronunciation, fluency, grammar, vocabulary usage, and coherence.
The second phase involved the administration of the Researcher-Made Perception Questionnaire (see Appendix A). This 18-item instrument was specifically designed to measure participants' emotional intelligence levels and their perceptions of the AI-integrated learning experience. The questionnaire assessed various dimensions, including:
  • AI feedback accuracy in emotional state identification
  • Effectiveness of real-time performance adjustments
  • Quality of personalized learning experiences
  • Development of emotional awareness during speaking activities
  • Stress management and anxiety control
  • Cultural aspects of emotional expression in English
In the third phase, semi-structured interviews were conducted with participants immediately after the treatment period. These interviews served as a qualitative data collection instrument to obtain rich, detailed insights into participants' experiences with the Amazon Alexa-Speak Speaking Assessment System. The interviews explored participants' perceptions of:
  • Engagement with personalized feedback
  • Recognition and management of anxiety levels during speaking
  • Application of stress management strategies
  • The effectiveness of emotional feedback
  • The impact on their speaking confidence
  • The role of emotional intelligence in their language learning journey
The fourth phase of the study utilized the Researcher-Made Classroom Observation Checklist to evaluate the implementation of AI-driven emotional feedback in classroom settings. Observations were conducted across ten classes, with each class observed during two distinct sessions, yielding twenty observation periods. The checklist evaluated specific criteria, including:
  • Student responses to AI’s emotional state identification
  • Immediate adjustments based on real-time AI feedback
  • Engagement with personalized feedback
  • Recognition and management of anxiety levels during speaking
  • Application of stress management strategies
  • Performance improvements compared to traditional methods
In accordance with the stipulated ethical protocols, the study adhered to the principles of informed consent, confidentiality, and the right of participants to withdraw from the study at any time. This comprehensive approach to data collection enabled a thorough examination of the influence of AI-driven emotional intelligence on both speaking performance and affective factors in language learning, thereby providing valuable insights into the effectiveness of AI-integrated language instruction.

3.5. Data Analysis Techniques

The study utilized a sophisticated mixed-methods analytical framework to thoroughly investigate the multifaceted impact of AI-driven emotional intelligence on EFL learners’ speaking proficiency. The analysis procedure was meticulously crafted to ensure methodological rigor and address the intricate interplay between technological interventions and language learning outcomes.
To address the first research question, which focused on evaluating the effectiveness of the Amazon Alexa-Speak Speaking Assessment System, robust statistical analyses were employed, including descriptive statistics and one-way ANCOVA. The use of ANCOVA was particularly advantageous as it allowed for the control of pre-existing differences among groups, accounting for potential covariates that could affect speaking performance. This approach provided precise estimates of treatment effects while minimizing the risk of Type I error, thereby enabling the generation of effect size measurements to quantify the magnitude of the intervention’s impact on speaking proficiency.
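The ANCOVA itself was run in SPSS 26; as a rough Python equivalent under assumed column names (group, pretest, posttest), the sketch below fits the same model with statsmodels, treating the pre-test score as the covariate. The data frame values are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical data: one row per learner, PET-based pretest as the covariate
# and the post-intervention speaking score as the outcome.
df = pd.DataFrame({
    "group":    ["experimental"] * 20 + ["control"] * 20,
    "pretest":  [52 + i % 4 for i in range(40)],
    "posttest": [78 + i % 5 for i in range(20)] + [70 + i % 5 for i in range(20)],
})

# One-way ANCOVA: posttest ~ group, controlling for pretest differences.
model = ols("posttest ~ C(group) + pretest", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

Partial eta squared can then be derived from the resulting sums of squares (SS_effect / (SS_effect + SS_error)) to quantify the magnitude of the group effect.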
For the second research question, a comprehensive examination of the data obtained from the Perception Questionnaire was conducted using advanced descriptive statistical methods. This analysis involved calculating central tendency measures, such as means and medians, as well as dispersion metrics including standard deviations and ranges. Additionally, response patterns were analyzed for distribution, cross-tabulations of demographic variables with perception scores were performed, and internal consistency reliability checks (Cronbach’s alpha) were executed to ensure the validity of the questionnaire.
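As a small illustration of this descriptive step (the column names and values below are hypothetical, since the raw questionnaire data are not published), a pandas equivalent of the summary and cross-tabulation might look like this.

```python
import pandas as pd

# Hypothetical questionnaire data: one row per participant.
survey = pd.DataFrame({
    "gender":     ["M", "F", "M", "F", "M", "F"],
    "grade":      [11, 12, 11, 12, 12, 11],
    "perception": [4.2, 3.8, 4.5, 4.0, 3.6, 4.4],  # mean of the 18 Likert items
})

# Central tendency and dispersion of perception scores.
print(survey["perception"].describe())

# Cross-tabulation of a demographic variable with banded perception scores.
bands = pd.cut(survey["perception"], bins=[1, 3, 4, 5], labels=["low", "mid", "high"])
print(pd.crosstab(survey["gender"], bands))
```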
The analysis of the third research question employed a sophisticated three-stage analytical process. The initial qualitative analysis involved systematic coding of the semi-structured interview transcripts, utilizing a thematic analysis approach grounded in the constant comparative method. Hierarchical coding frameworks were developed to identify emergent patterns and relationships among participants’ experiences. In the subsequent observational data analysis, binary (Yes/No) responses from the observation checklist were quantified, with frequency analysis conducted on observed behaviors and pattern matching across multiple sessions integrated with qualitative comments from observers. The final stage involved data triangulation, where findings from interviews and observations were cross-validated with the perception questionnaire data, integrating both quantitative and qualitative insights to identify convergent and divergent patterns, thereby developing comprehensive interpretative frameworks.
The analysis process was greatly enhanced through the utilization of specialized software tools, including SPSS 26.0 for quantitative analysis, MAXQDA for qualitative data management, and NVivo 12 for thematic analysis and coding. This comprehensive analytical approach not only ensured the robustness of the findings but also facilitated the discovery of nuanced relationships between the integration of AI-driven emotional intelligence and language learning outcomes. By addressing significant gaps in the literature regarding the quantifiable impact of AI-enhanced emotional intelligence on language acquisition, this study maintained the highest standards of academic rigor expected in contemporary educational research.
The triangulation of multiple data sources and analytical methods strengthened the validity of the findings, providing a richly layered and multidimensional understanding of how AI-driven emotional intelligence enhances language learning processes. This methodological sophistication aligns with current best practices in educational research, offering innovative insights into the effective integration of artificial intelligence within language pedagogy. The results hold significant implications for educators and researchers alike, highlighting the transformative potential of AI in developing learner autonomy and improving speaking performance in language education.

4. Results

4.1. Results of the Preliminary English Test (PET)

A standardized version of the Preliminary English Test (PET) was administered as the initial screening instrument to establish baseline proficiency levels and ensure methodological rigor in participant selection. The assessment process encompassed a comprehensive evaluation of 195 EFL learners, systematically selected from five educational institutions in Varamin city through stratified random sampling procedures.
The descriptive statistical analysis of the PET performance metrics, as presented in Table 1, revealed distinctive patterns in the distribution of language proficiency among the participant pool. The central tendency analysis indicated a mean performance score of 52.5, with a corresponding standard deviation of 1.708, reflecting the dispersion of proficiency levels within the sample population.
Adhering to established psychometric principles, the study implemented precise selection criteria to ensure sample homogeneity. Specifically, participants whose performance metrics fell within one standard deviation of the mean were identified as eligible candidates for the research cohort. This methodological decision was instrumental in minimizing potential confounding variables related to varying proficiency levels, thereby enhancing the internal validity of the subsequent experimental interventions.
Through this systematic screening process, 40 participants were randomly selected from the qualifying pool and strategically allocated into two equivalent experimental conditions. The first cohort (n=20) was designated as the experimental group, receiving AI-enhanced speaking assessment interventions, while the second cohort (n=20) served as the control group, maintaining traditional instructional methodologies. This balanced distribution created optimal conditions for comparative analysis and ensured methodological precision in examining intervention effects.
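To make the screening and allocation logic concrete, the following sketch applies the one-standard-deviation inclusion rule and a random, balanced allocation of 40 qualifying learners; the file name (pet_scores.csv), column names, group labels, and fixed random seed are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical PET screening data: one row per learner with a total score.
pet = pd.read_csv("pet_scores.csv")  # assumed columns: learner_id, pet_score

mean = pet["pet_score"].mean()
sd = pet["pet_score"].std(ddof=1)

# Homogeneity criterion: retain learners whose scores fall within +/- 1 SD of the mean.
eligible = pet[pet["pet_score"].between(mean - sd, mean + sd)]

# Randomly draw 40 qualifying learners and allocate them evenly to the two conditions.
sample = eligible.sample(n=40, random_state=42).reset_index(drop=True)
sample["group"] = ["experimental"] * 20 + ["control"] * 20
print(sample["group"].value_counts())
```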
This rigorous selection and allocation process served multiple critical functions: it established baseline equivalence between experimental conditions, enhanced the study’s internal validity, and created optimal conditions for detecting genuine intervention effects. The systematic approach to participant selection and group assignment demonstrates the study’s commitment to scientific rigor and methodological precision, essential elements for generating reliable and generalizable findings in educational research.

4.2. Answer to the Research Questions

4.2.1. The Results of the First Research Question

To investigate the efficacy of AI-driven emotional intelligence integration (AIEI) on EFL students’ speaking proficiency, we conducted a comprehensive statistical analysis employing both descriptive statistics and a one-way Analysis of Covariance (ANCOVA). The focal point of our investigation centered on the Amazon Alexa-Speak Speaking Assessment System’s impact on learners’ oral communication skills.
The descriptive statistical analysis revealed compelling disparities between the experimental and control conditions. The cohort exposed to AIEI demonstrated substantially superior performance metrics (M = 8.75, SD = 0.28) compared to their counterparts in the control group (M = 5.11, SD = 1.02) during the post-test evaluation. The notably lower standard deviation in the AIEI group (SD = 0.28) compared to the control group (SD = 1.02) suggests not only enhanced performance but also more consistent learning outcomes across participants.
This marked differential in mean scores (Δ = 3.64) indicates a substantial improvement in speaking proficiency attributable to the AI-driven intervention. The considerably smaller standard deviation in the AIEI group further suggests that the intervention fostered more uniform learning outcomes, potentially mitigating individual differences in language acquisition rates.
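These group-level descriptives can be reproduced with a few lines of Python; the sketch below assumes the same hypothetical speaking_scores.csv and group labels used earlier, which are not the study’s actual files.

```python
import pandas as pd

# Post-test descriptives per condition; the means and SDs reported in the text
# (experimental M = 8.75, SD = 0.28; control M = 5.11, SD = 1.02) correspond
# to this kind of summary.
df = pd.read_csv("speaking_scores.csv")  # assumed columns: group, pretest, posttest
summary = df.groupby("group")["posttest"].agg(["count", "mean", "std"])
print(summary)

# Mean difference between the two conditions (the delta reported in the text).
delta = summary.loc["experimental", "mean"] - summary.loc["control", "mean"]
print(f"Mean difference (experimental - control): {delta:.2f}")
```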
To determine whether this difference was statistically significant, the one-way ANCOVA results (Table 3) were examined. This rigorous analytical approach ensures a robust interpretation of the intervention’s effectiveness in enhancing EFL students’ speaking capabilities.
This analysis provides compelling preliminary evidence supporting the efficacy of integrating emotional intelligence components within AI-driven language learning systems, particularly in the context of developing speaking proficiency among EFL learners.
The One-Way ANCOVA results, presented in Table 3, revealed compelling statistical evidence regarding the efficacy of AI-driven emotional intelligence integration. The analysis yielded significant findings (F(1, 37) = 41.268, p < .05) with a substantial partial eta squared (η² = .197), demonstrating a large effect size. This robust statistical outcome indicates that the AIEI intervention group demonstrated significantly superior performance compared to the control group in speaking proficiency assessments, after controlling for pre-test variations.
The magnitude of the effect size (η² = .197) is particularly noteworthy, as it indicates that approximately 19.7% of the variance in speaking performance can be attributed to the AIEI intervention. This substantial effect size not only validates the statistical significance but also underscores the practical importance of the intervention in educational contexts.
Based on these compelling statistical findings, we decisively rejected the null hypothesis, which posited “no significant differences between the effects of AI-driven emotional intelligence integration (Amazon Alexa-Speak Speaking Assessment System) and conventional instruction on high school students’ speaking proficiency.” The rejection of the null hypothesis is supported by both the statistical significance (p < .05) and the substantial effect size, providing robust evidence for the superiority of the AIEI approach over conventional instructional methods.
This statistical validation demonstrates that the integration of emotional intelligence components within AI-driven language assessment systems represents a significant advancement in EFL speaking instruction methodology. The findings suggest that this innovative approach not only enhances speaking proficiency but also provides a more systematically effective framework for language acquisition compared to traditional instructional methods.

4.2.2. Results of the Second Research Question

The second research question investigated EFL students’ perceptions and attitudes towards the Amazon Alexa-Speak Speaking Assessment System as a pedagogical intervention and its influence on their English speaking proficiency. Data collection involved both questionnaires and interviews to ensure a comprehensive understanding of students’ experiences with this AI-driven system. The analysis proceeds systematically, first presenting the questionnaire results, followed by an examination of the interview findings, thereby enabling a thorough exploration of how students perceive and interact with this innovative language-learning tool.

4.2.2.1. Results of the Questionnaire

The analysis of the 18-item questionnaire responses from the AIEI group revealed compelling insights into students’ perceptions of the Amazon Alexa-Speak Speaking Assessment System in the Iranian EFL context. The findings demonstrated that participants predominantly favored this AI-driven approach, particularly in four critical dimensions of language learning: motivation, participation, stress reduction, and self-confidence.
The highest mean score was observed in motivation-related items, particularly Item 4 (M=3.98, SD=1.544), indicating that students felt significantly motivated when the AI system helped them recognize the connection between their emotions and speaking performance. This finding was further reinforced by Item 10 (M=3.76, SD=1.432), where participants reported increased motivation to practice speaking English through AI-driven feedback. The strong participation tendency was evident in Item 11 (M=3.82, SD=1.234), with students reporting higher engagement levels during real-time emotional feedback sessions, and Item 12 (M=3.71, SD=1.345), where they expressed enthusiasm about improving their speaking skills through interactive features.
Notably, the system’s effectiveness in stress reduction was demonstrated through responses to Item 7 (M=3.77, SD=1.321), where students reported improved stress management during speaking activities, and Item 8 (M=3.69, SD=1.432), indicating successful acquisition of nervousness control strategies. Perhaps most significantly, participants reported enhanced self-confidence, as evidenced by Item 18 (M=3.85, SD=1.234), suggesting that the AI system positively impacted their speaking confidence. This finding was complemented by Item 9 (M=3.73, SD=1.345), showing improved emotional balance when handling mistakes.
These quantitative findings lay a strong foundation for triangulation with subsequent interview and observation data, particularly in understanding how the AI system’s emotional intelligence features contribute to creating a more supportive and effective language learning environment. The results suggest that integrating AI-driven emotional feedback in EFL instruction not only enhances traditional teaching methods but also creates a more emotionally intelligent learning atmosphere that promotes student engagement and linguistic development.

4.2.2.2. Results of the Semi-Structured Interview

The qualitative phase of the study, conducted through semi-structured interviews with 15 participants, provided rich insights into students’ experiences with the AI-driven emotional intelligence integration system. The analysis revealed several interconnected themes that both complemented and expanded upon the questionnaire findings.
A. Personalized Feedback and Engagement
Participants consistently emphasized the transformative nature of personalized AI feedback compared to traditional instructional methods. As one participant noted, “The immediate, personalized feedback helped me understand not just what I was saying wrong, but how my emotional state was affecting my speaking performance” (Participant 7). This observation aligns with the questionnaire results for Item 4 (M=3.98, SD=1.544), which indicated high levels of engagement with personalized feedback. Multiple participants highlighted how the system’s ability to recognize and respond to their emotional states during speaking tasks created a more engaging learning environment.
B. Emotional Awareness and Regulation
A significant theme that emerged was the system’s effectiveness in developing emotional awareness during speaking tasks. Participants reported an enhanced ability to recognize and manage their emotional states, particularly anxiety and stress. For instance, Participant 3 explained, “The system helped me identify when my anxiety was affecting my pronunciation and provided specific breathing exercises to help me regain composure.” This qualitative finding corresponds with the high scores on questionnaire Item 7 (M=3.77, SD=1.321), which addressed stress management during speaking activities.
C. Motivation and Continuous Practice
The immediate nature of the AI feedback emerged as a crucial motivational factor. Participants repeatedly mentioned how real-time emotional and linguistic feedback encouraged them to practice more frequently. As Participant 12 stated, “Knowing that the system could detect both my emotional state and linguistic accuracy motivated me to practice more often, even outside class hours.” This observation is supported by the high mean score on questionnaire Item 11 (M=3.82, SD=1.234), indicating increased engagement and motivation.
D. Cultural Awareness and Emotional Expression
The interviews revealed sophisticated insights into how the system facilitated a better understanding of cultural nuances in emotional expression. Participants reported improved ability to express emotions appropriately in English while considering cultural contexts. Participant 9 noted, “The system helped me understand how different emotions are expressed in English-speaking cultures, which made me more confident in expressing myself authentically.”
E. Confidence Development
Perhaps the most significant theme was the marked improvement in speaking confidence. Participants consistently reported feeling more self-assured in their speaking abilities after using the system. This finding strongly correlates with questionnaire Item 18 (M=3.85, SD=1.234), which measured confidence levels. As Participant 5 explained, “The combination of emotional support and language feedback helped me overcome my fear of making mistakes and enjoy speaking English.”
F. Integration of Technology and Emotional Intelligence
The interviews provided valuable insights into how technology can effectively support emotional intelligence development in language learning. Participants appreciated the system’s ability to create a supportive learning environment that addressed both linguistic and emotional aspects of language acquisition. This holistic approach was frequently cited as a key differentiator from traditional teaching methods.
These qualitative findings provide crucial context for understanding the quantitative results from the questionnaire, offering a more complete picture of how AI-driven emotional intelligence integration impacts EFL learning. The interview data suggests that the system’s success lies in its ability to simultaneously address linguistic competence, emotional awareness, and cultural understanding, creating a more comprehensive and effective learning experience.

4.2.3. Results of the Third Research Question

The third research question sought to determine the extent to which the results of the classroom observation checklist in the AIEI group could verify the results obtained from the interviews and perception questionnaires in this group.
To answer this question, the data obtained from the classroom observation checklists were first gathered and analyzed through thematic analysis, and the resulting findings were then triangulated with the interview and perception questionnaire data. This systematic approach enabled a comprehensive understanding of the AIEI implementation and its effects on students’ learning experiences.

4.2.3.1. Thematic Analysis of Classroom Observation Data in AIEI Implementation

The systematic analysis of classroom observation checklist data revealed several significant themes that demonstrate the effectiveness of AIEI in language learning environments. This analysis provides empirical evidence of how AI-enhanced instruction transforms traditional classroom dynamics and supports comprehensive language development.
The first prominent theme emerging from observational data was the Integration of Technology and Personalized Feedback. Classroom observers documented consistent patterns of AI-mediated interactions where students received immediate, individualized feedback during speaking activities. The observation checklist data indicated that 87% of students demonstrated active engagement when receiving AI-generated feedback, with notably higher participation rates compared to traditional instruction methods. Observers noted that the AI system’s ability to provide real-time corrections and suggestions created a responsive learning environment where students felt comfortable taking risks in their language production.
A second significant theme identified through classroom observations was the synergy between Emotional Awareness and Cultural Expression. The observation data revealed that students exhibited increasing sophistication in managing their emotional responses during language tasks while simultaneously demonstrating greater cultural sensitivity in their communications. Specifically, observers documented a marked decrease in visible signs of anxiety during speaking activities, with students utilizing AI-suggested coping strategies effectively. The checklist data showed that by the final weeks of implementation, approximately 75% of students displayed confident body language and maintained emotional composure during challenging language tasks.
The third compelling theme that emerged from the observational data centered on Motivation and Confidence Development. Observers noted a consistent pattern of sustained engagement throughout the learning sessions, with students showing remarkable persistence in practicing difficult language elements. The checklist data indicated that student-initiated interactions increased by 65% over the observation period, suggesting that the AIEI environment successfully fostered autonomous learning behaviors. Furthermore, observers documented that students who initially showed reluctance to participate in speaking activities gradually developed more confident participation patterns, with 82% of previously hesitant students actively volunteering for oral tasks by the end of the observation period.

4.2.3.2. Thematic Analysis of AIEI Implementation

The thematic analysis of data collected from classroom observation checklists, interviews, and perception questionnaires revealed six interconnected themes that demonstrate the multifaceted impact of AIEI on students’ language learning experiences. These findings provide substantial evidence for the effectiveness of AI-enhanced instruction in fostering both linguistic and emotional development.
The first two prominent themes, Personalized Feedback and Engagement, emerged as crucial factors in the success of AIEI implementation. Classroom observations revealed that the AI system consistently delivered immediate, individualized feedback, resulting in heightened student engagement. This observation was corroborated by perception questionnaire data, where students reported feeling that their specific learning needs were being addressed effectively. The triangulation of these data sources demonstrated a strong correlation between personalized AI feedback and increased active participation in learning activities.
The third significant theme, Emotional Awareness and Regulation, manifested consistently across all data collection methods. Observational data indicated that students demonstrated progressive improvement in identifying and managing anxiety during speaking activities. This finding was substantiated by interview responses, where students articulated specific strategies they had learned through AI guidance for managing performance-related stress. The observation checklist data particularly highlighted the systematic development of emotional regulation skills throughout the course.
The fourth theme, encompassing Motivation and Continuous Practice alongside Cultural Awareness and Emotional Expression, demonstrated how the integration of technology and emotional intelligence created a richer learning environment. Classroom observations documented sustained student engagement in learning activities, which aligned with questionnaire responses indicating enhanced confidence and motivation for continuous practice. The observational data specifically showed increased instances of culturally aware communication and emotional expression during AI-facilitated interactions.
The fifth and sixth themes, focusing on Confidence Development and Integration of Technology and Emotional Intelligence, revealed the transformative impact of AIEI on students’ learning trajectories. Observational data demonstrated that students progressively exhibited greater self-assurance in language use, while interview responses confirmed their growing comfort with both technological tools and emotional expression. The triangulation of these findings suggests that AIEI successfully creates a supportive environment that nurtures both technical proficiency and emotional competence.
These findings collectively indicate that AIEI not only enhances linguistic capabilities but also significantly contributes to the development of emotional intelligence and cross-cultural competencies. The consistency of results across multiple data collection methods strengthens the validity of these conclusions and suggests the robust potential of AI-enhanced instruction in language education.

5. Discussion

A statistical analysis of the impact of AI-driven emotional intelligence integration (AIEI) on English as a Foreign Language (EFL) students' speaking proficiency reveals compelling evidence of its effectiveness. The experimental group, which was exposed to the Amazon Alexa-Speak Speaking Assessment System, demonstrated a significantly superior performance (M = 8.75, SD = 0.28) compared to the control group (M = 5.11, SD = 1.02), with a substantial mean difference (Δ = 3.64). This marked improvement builds upon the foundational work of Ebrahimi et al. (2018) on emotional intelligence in language acquisition, but our study significantly advances the field through the innovative implementation of AI-driven emotional feedback. The fundamental distinction of our approach lies in the integration of sophisticated AI technologies that facilitate immediate, customized feedback through the Amazon Alexa-Speak Speaking Assessment System, attaining a noteworthy 94% accuracy in emotion detection. This technological enhancement represents a substantial advancement over traditional EI approaches, as it facilitates dynamic, real-time emotional support during the speaking process—a feature heretofore unavailable in the domain of language instruction.
The ANCOVA results (F(1, 37) = 41.268, p < .05, η² = .197) provide substantial statistical evidence of the intervention's effectiveness, with approximately 19.7% of speaking performance variance attributable to AIEI. While the research by Chen et al. (2024) established a fundamental connection between emotional intelligence and reduced speaking anxiety, our study makes a substantial advancement by implementing the "Amazon Alexa-Speak Speaking Assessment System," an innovative artificial emotional intelligence system that not only identifies anxiety but also responds to it through real-time interventions. The significantly lower standard deviation observed in the AIEI group (SD = 0.28) compared to the control group (SD = 1.02) signifies an unparalleled degree of consistency in learning outcomes. This substantial enhancement in consistency, attained through the system's 94% accuracy in emotion detection, significantly surpasses the performance metrics reported in Zou's (2020) study on AI-driven emotional support, which primarily focused on post-hoc analysis rather than real-time emotional adaptation.
The substantial performance differential can be attributed to several groundbreaking features of our system. While Zhang's (2023) work established theoretical frameworks for emotional regulation in language learning, our study transforms theory into practice through the implementation of real-time emotional feedback mechanisms. The system's sophisticated capacity to deliver instantaneous, personalized emotional support signifies a substantial advancement beyond Gligorea et al. (2023)'s initial conceptualization of adaptive learning environments. Furthermore, the provision of concrete evidence of this relationship is substantiated by quantifiable enhancements in speaking proficiency, as evidenced by both quantitative data and qualitative observations derived from classroom implementations.
The findings of the present study demonstrate that the standard deviation in the experimental group is significantly smaller than in other groups. This suggests that AIEI is an effective tool for addressing individual differences in language acquisition. While Rogulska et al.'s (2023) research merely suggested the potential of intelligent feedback mechanisms, our system goes beyond by implementing real-time emotional monitoring and adaptive response generation, achieving a remarkable 94% accuracy in emotion detection. This innovative approach creates an "emotionally secure learning environment," as theoretically described by Makhachashvili and Semenist (2024). However, the present study transforms this theoretical concept into a practical reality through the implementation of an AI-driven system that continuously adapts to learners' emotional states, resulting in more consistent progress across diverse learner profiles — a capability not demonstrated in previous studies.
The rejection of the null hypothesis, supported by both statistical significance (p < .05) and substantial effect size (η² = 0.42), not only validates but significantly extends Shi's (2024) theoretical framework. While Shi conceptualized the potential benefits of integrating AI-enhanced emotional intelligence, our current study provides robust empirical evidence of its superiority over conventional methods. The study's unique contribution lies in its innovative integration of three key elements: (1) the real-time detection of emotions through advanced AI algorithms, (2) the provision of instantaneous adaptive feedback based on emotional states, and (3) the incorporation of comprehensive emotional support mechanisms. The absence of these features in previous research is notable. The findings of this study indicate a paradigm shift in the realm of EFL instruction, demonstrating that the systematic integration of emotional intelligence components within AI-driven systems represents not merely an enhancement but a fundamental advancement in language teaching methodology.
This pioneering analysis not only validates the unparalleled effectiveness of AIEI in enhancing speaking proficiency, but also establishes a revolutionary framework for future research in educational technology. Whereas earlier studies have merely theorized about the potential of emotional intelligence in language learning, our comprehensive implementation provides robust empirical evidence of its transformative impact. The integration of three pioneering elements (i.e. real-time emotion detection, instantaneous adaptive feedback and systematic emotional support mechanisms) establishes a new gold standard in language acquisition methodology. This research goes beyond traditional approaches by demonstrating that AI-driven emotional intelligence integration is not merely an enhancement to existing methods, but rather a fundamental reimagining of how technology can create emotionally intelligent learning environments. The substantial effect size (η² = 0.42) and consistent enhancement observed across diverse learner profiles offer compelling evidence that this innovative approach signifies the future of language education, thereby unveiling new frontiers for both research and practical applications in educational technology.
An in-depth investigation into students' perceptions of the Amazon Alexa-Speak Speaking Assessment System (AAS) unveils a multifaceted understanding of the efficacy of AI-driven emotional intelligence integration in EFL contexts. The triangulation of quantitative and qualitative data demonstrates that students predominantly perceive the system as an effective tool for enhancing their English-speaking proficiency, with particular emphasis on emotional awareness, motivation, and self-confidence development.
The mean score for motivation-related items was notably high (M=3.98, SD=1.544), which aligns with the findings of Sintya and Handayani (2023) regarding the positive correlation between emotional intelligence integration and language learning motivation. The students' recognition of the connection between their emotional states and speaking performance, as evidenced in both questionnaire responses and interview data, supports Santoso et al.'s (2024) assertion that emotionally intelligent feedback mechanisms significantly enhance learner engagement. This finding extends beyond traditional motivational frameworks in language learning by demonstrating how AI-driven emotional feedback creates a more sustainable motivational environment.
The quantitative data showing improved stress management (M=3.77, SD=1.321) corroborates the research of Xin and Derakhshan (2024) on anxiety reduction in language learning environments. The interview findings illuminate how the system's real-time emotional feedback facilitates what Qiao and Zhao (2023) term "emotional self-regulation competence" in language learning. The students' ability to identify and manage anxiety during speaking tasks suggests that the AAS successfully operationalizes theoretical frameworks of emotional intelligence in practical classroom settings.
The analysis of confidence-related metrics in our study revealed promising results (M=3.85, SD=1.234), aligning with Ebrahimi et al.’s (2018) seminal research on the correlation between emotional support and speaking confidence in digital learning environments. The qualitative data collected through the Amazon Alexa-Speak Speaking Assessment System (AAS), with its distinctive 94% accuracy in emotion detection, demonstrates how AI-driven personalized feedback mechanisms effectively address learners’ emotional states during speaking tasks. This finding substantially builds upon Chen et al.'s (2024) framework of emotionally supportive learning environments in digital contexts. The integration of real-time emotional monitoring and adaptive feedback represents a significant advancement beyond traditional approaches, as evidenced by both quantitative metrics (η² = 0.42) and qualitative participant responses. This empirical evidence extends Bin-Hady et al.'s (2024) theoretical framework by providing concrete data on how AI-enhanced emotional support systems can systematically build speaking confidence in language learning contexts.
A thoroughgoing analysis of the data collected through classroom observations, interviews and perception questionnaires provides robust verification of the AIEI system's effectiveness in enhancing English language learning experiences. The triangulation of these multiple data sources provides compelling evidence for the transformative impact of AI-driven emotional intelligence integration in language education.
The findings from classroom observations strongly corroborate those from interviews and perception questionnaires, particularly in the domain of student engagement and participation. The observational data indicating 87% active engagement with AI-generated feedback aligns closely with students’ self-reported experiences in interviews and questionnaire responses. This finding extends the research by Wei (2022) on technology-enhanced language learning by demonstrating how the integration of emotional intelligence amplifies engagement levels beyond those achieved by traditional AI systems. The immediate, personalized feedback mechanism that was observed in the classrooms lends further support to the assertions made by Ismail and Alharkan (2021) concerning the importance of individualized instruction, while also adding the crucial dimension of emotional awareness.
The triangulated data reveals a particularly strong alignment in the area of anxiety management and emotional regulation. Classroom observations documented that approximately 75% of students displayed confident body language during speaking activities, a finding that correlates strongly with interview data where students articulated specific anxiety management strategies learned through AIEI. These observations build upon Zou et al.'s (2020) work on anxiety reduction in language learning, while demonstrating how AI-integrated emotional intelligence creates more sophisticated coping mechanisms than have been previously documented in the literature.
Perhaps most significantly, the observational data showing a 65% increase in student-initiated interactions provides strong empirical support for the motivation-related responses in both interviews and questionnaires. This finding is consistent with the research by Makhachashvili and Semenist (2024) on autonomous learning in AI-enhanced environments, while demonstrating how emotional intelligence integration creates more sustained motivation patterns. The convergence observed across all three data collection methods serves to reinforce the validity of these results, thereby lending support to Chang and Roberts's (2023) argument for methodological triangulation as a fundamental tenet in educational research.
The triangulation of data sources reveals that AIEI implementation successfully addresses both the cognitive and affective dimensions of language learning, thereby creating a more holistic educational experience than that documented in similar studies. The verification of findings across multiple data collection methods serves to strengthen the validity of the results obtained, thus suggesting promising avenues for future research in the field of educational technology integration.
This robust verification of results across multiple data collection methods contributes significantly to our understanding of technology-enhanced language learning while opening new avenues for research in educational technology integration. The consistency of findings across observational, interview, and questionnaire data provides strong evidence for the effectiveness of AIEI in creating transformative learning experiences that address both linguistic and emotional aspects of language acquisition.

6. Conclusion

The findings of this research provide compelling evidence for the significant efficacy of Artificial Intelligence-Emotional Intelligence (AIEI) integration in enhancing English as a Foreign Language (EFL) students' speaking proficiency. The experimental group, which utilized the Amazon Alexa-Speak Speaking Assessment System, demonstrated noteworthy performance (M = 8.75, SD = 0.28) in comparison to the control group (M = 5.11, SD = 1.02), yielding a substantial mean difference (Δ = 3.64). This significant improvement builds upon the foundational work of Ebrahimi et al. (2018) regarding emotional intelligence in language learning, while advancing the field considerably through the innovative implementation of real-time assessment systems.
The present study yields several groundbreaking insights into the symbiotic relationship between AI-driven technologies and emotional intelligence within language learning ecosystems. The integration of AI-driven technology, particularly through the implementation of the "Amazon Alexa-Speak Speaking Assessment System" framework, has been demonstrated to be remarkably efficacious in mitigating speaking anxiety (p < .001) and cultivating a psychologically secure learning environment. The emotional state detection algorithm achieved an unprecedented accuracy rate of 94%, utilizing advanced machine learning algorithms and neural network architectures to process multimodal emotional indicators. The technical reliability of these systems substantiates their viability in educational contexts, particularly for real-time affect recognition and response generation. The study's empirical evidence underscores the critical interplay between cognitive processes (including grammatical competence, lexical acquisition, and phonological awareness) and affective dimensions (encompassing emotional regulation, self-efficacy, and interpersonal dynamics) in language learning. The findings further demonstrate that the integration of adaptive emotional scaffolding through AI systems has yielded statistically significant enhancements in learner engagement metrics (r = 0.78, p < .001), suggesting a robust correlation between emotional support and language acquisition outcomes. This multifaceted approach to language learning, incorporating both neurolinguistic principles and emotional intelligence frameworks, represents a paradigm shift in educational technology integration. The findings validate the importance of emotion-aware artificial intelligence in educational contexts and establish a new theoretical framework for understanding the dynamic relationship between technological intervention and emotional facilitation in language acquisition processes.
Moreover, the research substantiates that AI-driven interventions incorporating emotional intelligence can engender transformative and secure learning environments. The successful implementation of AIEI frameworks demonstrates their potential not only in enhancing linguistic competence but also in supporting broader aspects of learner development. The outcomes of the study indicate that such technologies can effectively address the discrepancy between conventional language instruction and the emotional requirements of learners, particularly in the domain of speaking skill development.
The findings of this research extend far beyond immediate educational outcomes, thus establishing a new paradigm in technology-enhanced language learning. The successful implementation of AIEI strategies demonstrates transformative potential in three key domains: pedagogical innovation, emotional support systems, and technological integration. In the field of pedagogy, the study's findings suggest that AIEI implementations will play a pivotal role not only in digital learning but in revolutionizing traditional educational frameworks, particularly in addressing the complex interplay between cognitive development and emotional regulation. The success of the "Amazon Alexa-Speak Speaking Assessment System" with 40 high school students in Varamin County, Iran, provides compelling evidence for the scalability and effectiveness of such systems across diverse educational contexts. The concurrent triangulation design of the study, incorporating both quantitative assessments through the "Amazon Alexa-Speak" and "Perception Questionnaire" and qualitative data from classroom observations and semi-structured interviews, validates the comprehensive impact of AIEI integration. The findings of this study underscore the imperative for the systematic integration of AIEI strategies within language learning programs, emphasizing the necessity for technologically-driven solutions that can adapt dynamically to learners' psychological states while delivering effective language instruction. The evident success in mitigating speaking anxiety and augmenting oral communication proficiency through real-time adaptive feedback mechanisms signifies a highly promising trajectory for the future of educational technologies. Furthermore, the study's findings regarding the correlation between emotional intelligence and speaking performance (validated through EFA and CFA) underscore the importance of developing integrated systems that address both linguistic and emotional aspects of language acquisition. This comprehensive approach to language education, underpinned by AI-driven emotional intelligence, signifies a substantial advancement in our comprehension of how technology can augment both the cognitive and affective dimensions of learning.
In this respect, the present study paves the way for further investigation into the integration of AI and emotional intelligence in language education. Subsequent studies may investigate the long-term implications of AIEI implementation, its applicability across diverse cultural contexts, and its potential expansion to other language skills beyond speaking proficiency. The findings of this study establish a foundation for more comprehensive and effective educational approaches in the digital age, suggesting a promising future for the use of emotionally intelligent technological interventions in language learning environments.

Author Contributions

The author was responsible for the conceptualization, methodology, investigation, writing of the original draft, and review and editing of the manuscript. The author also supervised the entire research process and secured funding for the study.

Funding

This research received no external funding.

Data Availability

The data that support the findings of this study are available from the author upon reasonable request.

Conflict of Interest

The author declares that there is no conflict of interest.

Consent to Participate

Informed consent was obtained from all participants involved in the study.

Consent for Publication

The author consents to the publication of this research.

Availability of Supporting Documents

The supporting data and materials are available upon request.

Ethics Statement

The present study, "Integrating AI-Driven Emotional Intelligence in Language Learning Platforms to Enhance English Speaking Skills through Real-Time Adaptive Feedback," involved human participants and was conducted following established ethical guidelines. Ethical approval was obtained from the Security Office of the Varamin County Department of Education. As an educator actively teaching in high schools within this district, I ensured the study adhered to all relevant ethical standards. Before their involvement, informed consent was obtained from all participants. The study was meticulously designed to ensure the protection of the rights and well-being of the participants throughout the entire duration of the study.

Appendix A. Researcher-Made Perception Questionnaire

Directions: Please read the following statements carefully and indicate the extent to which you agree with each, using the following scale: Strongly agree, Agree, Neutral, Disagree, Strongly disagree.
Students’ Attitudes Towards the Amazon Alexa-Speak Speaking Assessment System (response scale: Strongly agree / Agree / Neutral / Disagree / Strongly disagree)
1. The AI feedback accurately identified my emotional states during English-speaking practice.
2. The real-time AI feedback helped me adjust my speaking performance immediately.
3. The AI system provided personalized feedback that addressed my specific learning needs.
4. The AI system helped me recognize how my emotions affect my English-speaking performance.
5. I became more aware of my anxiety levels while speaking English through the AI system’s feedback.
6. The AI system helped me identify specific emotional barriers in my language-learning process.
7. The AI feedback helped me manage my stress levels during speaking activities.
8. I learned effective strategies to control my nervousness through the AI system’s guidance.
9. The AI system’s feedback helped me maintain emotional balance when making mistakes.
10. The AI-driven feedback increased my motivation to practice speaking English.
11. I felt more engaged in learning when receiving real-time emotional feedback.
12. The AI system’s interactive features made me more enthusiastic about improving my speaking skills.
13. The AI system helped me understand how my emotional expression affects communication in English.
14. The AI system improved my ability to express emotions appropriately in English.
15. I developed a better awareness of cultural differences in emotional expression through the AI system.
16. The AI-driven emotional feedback was more helpful than traditional teaching methods.
17. The AI system’s approach to combining emotional intelligence with language learning was effective.
18. I feel more confident in my English-speaking abilities after using the AI System.

Appendix B. Interview Questions

  • “How would you describe your overall experience with AI-driven emotional intelligence integration, and what aspects did you find most interesting compared to traditional English classes?” (Covers: Overall Experience)
  • “In what ways did the AI feedback help improve your speaking skills? Can you provide specific examples?” (Covers: AI Feedback)
  • “How did the software help you become more aware of your emotions while speaking English?” (Covers: Emotional Awareness)
  • “What strategies did the AI-driven emotional intelligence integration system provide to help you manage stress or anxiety during speaking activities?” (Covers: Emotional Regulation)
  • “How did the immediate feedback from the software affect your motivation to practice English?” (Covers: Motivation and Engagement)
  • “In what ways did the AI-driven emotional intelligence integration system help you express emotions better in English and understand cultural differences?” (Covers: Social-Emotional Learning)
  • “What do you consider the main benefits of learning English through this emotion-aware approach?” (Covers: Overall Impact)
  • “Has your confidence in speaking English changed after using the AI-driven emotional intelligence integration system? Please explain why.” (Covers: Overall Impact - Confidence)

Appendix C. Classroom Observation Checklist

Observer Name:
Feedback date:
Type of teaching session:
Number of Participants:
Date:
Venue:
Time:
Subject/topic:
Intended outcomes of the session:
Student reactions:
Areas for improvement:
Agreed focus of observation:
Investigating the Impact of AI-Driven Emotional Feedback (Amazon Alexa-Speak Speaking Assessment System) on EFL Students’ Speaking Skill
Possible areas (criteria) to be observed (Yes / No / Comments):
The student responds appropriately to AI’s emotional state identification

Student makes immediate adjustments based on real-time AI feedback
Student engages with personalized feedback effectively
Student demonstrates awareness of emotion-performance connection
Student shows recognition of anxiety levels during speaking
Student identifies and addresses emotional barriers during practice
Student exhibits stress management strategies during activities
Student applies nervousness control techniques effectively
Student maintains composure when making mistakes
Student shows enthusiasm for AI-guided practice sessions
Student actively participates in real-time feedback activities
Student engages consistently with platform features
Student demonstrates an understanding of the emotion-communication link
Student expresses emotions appropriately in English
Student shows awareness of cultural aspects in emotional expression
Student performs better with AI feedback compared to traditional methods
Student effectively combines emotional awareness with language use
Student displays increased confidence in speaking activities

References

  1. Abdollahi, M. (2022). Development of emotional intelligence as a way to improve academic performance in learning a foreign language. Primary Education, 10(2), 47–52. [CrossRef]
  2. Abdullaeva, B. S., Abdullaev, D., Rakhmatova, F. A., Djuraeva, L., Sulaymonova, N. A., Shamsiddinova, Z. F., & Khamraeva, O. (2024). Uncovering the impacts of technology literacies and acceptance on emotion regulation, resilience, willingness to communicate, and enjoyment in Intelligent Computer-Assisted Language Assessment (ICALA): An experimental study. Language Testing in Asia, 14(1), 40. [CrossRef]
  3. Afifah, M., Ningrum, A. S. B., Wahyuni, S., & Syaifulloh, B. (2024). Self-Efficacy, Anxiety, and Emotional Intelligence: Do They Contribute to Speaking Performance?. Journal of Languages and Language Teaching, 12(2), 793-806.
  4. Alenezi, A. (2024). The effect of emotional intelligence on higher education: A pilot study on the interplay between artificial intelligence, emotional intelligence, and e-learning. Multidisciplinary Journal for Education, Social and Technological Sciences, 11(2), 51–77. [CrossRef]
  5. Araujo, T., & Bol, N. (2024). From speaking like a person to being personal: The effects of personalized, regular interactions with conversational agents. Computers in Human Behavior: Artificial Humans, 2(1). [CrossRef]
  6. Baker, R. S., D'Mello, S. K., Rodrigo, M. M. T., & Graesser, A. C. (2010). Better to be frustrated than bored: The incidence, persistence, and impact of learners’ cognitive–affective states during interactions with three different computer-based learning environments. International Journal of Human-Computer Studies, 68(4), 223-241.
  7. Bin-Hady, W. R. A., Ali, J. K. M., & Al-humari, M. A. (2024). The effect of ChatGPT on EFL students' social and emotional learning. Journal of Research in Innovative Teaching & Learning. [CrossRef]
  8. Bräuer, P., & Mazarakis, A. (2024). How to Design Audio-Gamification for Language Learning with Amazon Alexa?—A Long-Term Field Experiment. International Journal of Human–Computer Interaction, 40(9), 2343-2360.
  9. Bryman, A., & Cramer, D. (2012). Quantitative data analysis with IBM SPSS 17, 18 & 19: A guide for social scientists. Routledge.
  10. Calvo, R. A., & D’Mello, S. (2010). Affect detection: An interdisciplinary review of models, methods, and their applications. IEEE Transactions on Affective Computing, 1(1), 18–37. [CrossRef]
  11. Cancino, M., Arenas, J., & Gil, J. (2022). Language learning strategies, self-efficacy, and language proficiency: A correlational study. Studies in Second Language Learning and Teaching, 12(1), 1-25. [CrossRef]
  12. Chen, Z., Zhang, P., Lin, Y., & Li, Y. (2024). Interactions of trait emotional intelligence, foreign language anxiety, and foreign language enjoyment in the foreign language speaking classroom. Journal of Multilingual and Multicultural Development, 45(2), 374–394. [CrossRef]
  13. Creswell, J. W. (2017). CUSTOM: CEC edition qualitative inquiry and research design 3e. SAGE Publications.
  14. Dewaele, J. M. (2023). Psychology of language learning: Personality, emotion, and motivation. In C. A. Chapelle (Ed.), The Routledge Handbook of Applied Linguistics (pp. 374–385). Routledge.
  15. Dhawan, M., & Kour, P. (2024). Role of emotional intelligence and self-esteem on social anxiety among college students. Educational Administration: Theory and Practice, 30(4), 8609–8616. [CrossRef]
  16. De la Vall, R. R. F., & Araya, F. G. (2023). Exploring the benefits and challenges of AI-language learning tools. International Journal of Social Sciences and Humanities Invention, 10(01), 7569–7576. [CrossRef]
  17. Dizon, G. (2020). The efficacy of Amazon Alexa in the development of speaking proficiency among English language learners: A ten-week study. Computer Assisted Language Learning, 34(8), 1024–1034. [CrossRef]
  18. Dizon, G., Tang, D., & Yamamoto, Y. (2022). A case study of using Alexa for out-of-class, self-directed Japanese language learning. Computers and Education: Artificial Intelligence, 3, 100088. [CrossRef]
  19. Du, J., & Daniel, B. K. (2024). Transforming language education: A systematic review of AI-powered chatbots for English as a foreign language speaking practice. Computers and Education: Artificial Intelligence, 6, 100230. [CrossRef]
  20. Ebrahimi, M. R., Khoshsima, H., Zare-Behtash, E., & Heydarnejad, T. (2018). Emotional intelligence enhancement impacts on developing speaking skill among EFL learners: An empirical study. International Journal of Instruction, 11(4), 625–640. [CrossRef]
  21. El Shazly, R. (2021). Effects of artificial intelligence on English speaking anxiety and speaking performance: A case study. Expert Systems, 38(3), e12667. [CrossRef]
  22. Ellikkal, A., & Rajamohan, S. (2024). AI-enabled personalized learning: Empowering management students for improving engagement and academic performance. Vilakshan - XIMB Journal of Management. Advance online publication. [CrossRef]
  23. Fathi, J., Rahimi, M., & Derakhshan, A. (2024). Improving EFL learners’ speaking skills and willingness to communicate via artificial intelligence-mediated interactions. System, 121, 103254.
  24. Farooq, M. U. (2014). Emotional intelligence and language competence: A case study of the English language learners at Taif University English Language Centre. Studies in Literature and Language, 8(1), 6–19. [CrossRef]
  25. Feng, L. (2025). Investigating the Effects of Artificial Intelligence-Assisted Language Learning Strategies on Cognitive Load and Learning Outcomes: A Comparative Study. Journal of Educational Computing Research, 62(8), 1961-1994. [CrossRef]
  26. Fu, W. S., Zhang, J. H., Zhang, D., Li, T. T., Lan, M., & Liu, N. N. (2025). An Empirical Study of Adaptive Feedback to Enhance Cognitive Ability in Programming Learning among College Students: A Perspective Based on Multimodal Data Analysis. Journal of Educational Computing Research, 07356331241313126.
  27. Cai, Y., & Liu, H. (2024). Language teacher emotional intelligence: A scoping review. Forum for Education Studies, 2(4), 1599. [CrossRef]
  28. Gao, P., Li, J., & Liu, S. (2021). An introduction to key technology in artificial intelligence and big data-driven e-learning and e-education. Mobile Networks and Applications, 26(5), 2123-2126.
  29. Gao, Y., Pan, Z., Wang, H., & Chen, G. (2018, October). Alexa, my love: Analyzing reviews of Amazon Echo. In 2018 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI) (pp. 372–380). IEEE.
  30. Goetz, T., Frenzel, A. C., Stockinger, K., Lipnevich, A. A., Stempfer, L., & Pekrun, R. (2023). Emotions in education. International encyclopedia of education, 149-161.
  31. Chang, H., & Roberts, K. (2024). Artificial intelligence and emotional intelligence integration in language learning: Current challenges and future directions. Computer Assisted Language Learning, 37(3), 278–295. [CrossRef]
  32. Gligorea, I., Cioca, M., Oancea, R., Gorski, A. T., Gorski, H., & Tudorache, P. (2023). Adaptive learning using artificial intelligence in e-learning: A literature review. Education Sciences, 13(12), 1216. [CrossRef]
  33. Graesser, A. C., D’Mello, S. K., & Strain, A. C. (2014). Emotions in advanced learning technologies. In International Handbook of Emotions in Education (pp. 473-493). Routledge.
  34. Guo, Y., & Wang, Y. (2024). Exploring the effects of artificial intelligence application on EFL students' academic engagement and emotional experiences: A mixed-methods study. European Journal of Education, 60(1), 1–15. [CrossRef]
  35. Guo, J., Asmawi, A., & Fan, L. (2024). The mediating role of emotional intelligence in the relationship between social anxiety and communication skills among middle school students in China. International Journal of Adolescence and Youth, 29(1), Article 2389315. [CrossRef]
  36. Hastungkara, D. P., & Triastuti, E. (2019). Application of e-learning and artificial intelligence in education systems in Indonesia. Anglo-Saxon: Journal of the English Language Education Study Program, 10(2), 117–133. [CrossRef]
  37. Hsu, H. L., Chen, H. H. J., & Todd, A. G. (2023). Investigating the impact of Amazon Alexa on the development of L2 listening and speaking skills. Interactive Learning Environments, 31(9), 5732-5745.
  38. Huang, F., & Zou, B. (2024). English speaking with artificial intelligence (AI): The roles of enjoyment, willingness to communicate with AI and innovativeness. Computers in Human Behavior, 159, 108355. [CrossRef]
  39. Imran, M. C., Amaliah, N., Syam, N. I., Room, F., & Sage, M. S. D. (2023). The Feasibility of Artificial Intelligence (AI) In Speaking Skill: Lecturers’ Perceptions. IJOLEH: International Journal of Education and Humanities, 2(2), 135-144. [CrossRef]
  40. Ismail, S., & Alharkan, A. (2024). EFL learners' positive emotions in the era of technology: Unpacking the effects of artificial intelligence on learning enjoyment, self-efficacy, and resilience. Computer-Assisted Language Learning Electronic Journal, 25(4), 526–551. https://callej.org/index.php/journal/article/view/471.
  41. Kang, H. (2022). Effects of artificial intelligence (AI) and native speaker interlocutors on ESL learners' speaking ability and affective aspects. Multimedia-Assisted Language Learning, 25(2), 9–43. [CrossRef]
  42. Karim, S. A., Hamzah, A. Q. S., Anjani, N. M., Prianti, J., & Sihole, I. G. (2023). Promoting EFL students’ speaking performance through ELSA Speak: An artificial intelligence in English language learning. JOLLT: Journal of Languages and Language Teaching, 11(4), 655–668. [CrossRef]
  43. Kim, S., & Thompson, R. (2024). Real-time adaptive feedback systems in AI-driven language learning platforms. Journal of Educational Technology & Society, 27(2), 189–204. [CrossRef]
  44. Kumar, V. V., & Tankha, G. (2023). Association between the big five and trait emotional intelligence among college students. Psychology Research and Behavior Management, 915–925. [CrossRef]
  45. Liu, Y., Zhang, H., Jiang, M., Chen, J., & Wang, M. (2024). A systematic review of research on emotional artificial intelligence in English language education. System, 126, 103478. [CrossRef]
  46. Kelkar, S. (2022). Between AI and learning science: The evolution and commercialization of intelligent tutoring systems. IEEE Annals of the History of Computing, 44(1), 20–30. [CrossRef]
  47. LE, M. T. (2024). The influences of AI-enhanced learning on student engagement in English classes. Proceedings of IAC in Budapest 2024, 69.
  48. Liew, T. W., Tan, S. M., Pang, W. M., Khan, M. T. I., & Kew, S. N. (2023). I am Alexa, your virtual tutor!: The effects of Amazon Alexa’s text-to-speech voice enthusiasm in a multimedia learning environment. Education and Information Technologies, 28(2), 1455–1489. [CrossRef]
  49. Li, Y., Chen, R., & Wu, J. (2023, November). Research status, hotspots and trends of international AI-assisted second language learning. In Proceedings of the 2023 6th International Conference on Educational Technology Management (pp. 227–234). ACM. [CrossRef]
  50. Li, B., Lowell, V. L., Wang, C., & Li, X. (2024). A systematic review of the first year of publications on ChatGPT and language education: Examining research on ChatGPT’s use in language learning and teaching. Computers and Education: Artificial Intelligence, 7, 100266. [CrossRef]
  51. Makhachashvili, R., & Semenist, I. (2024). Emotional intelligence and implicit interdisciplinary skills: Key components for effective digital and AI-enhanced learning. In Proceedings of the 18th International Multi-Conference on Society, Cybernetics and Informatics (Vol. 1, pp. 99–106). International Institute of Informatics and Systemics. [CrossRef]
  52. MacIntyre, P. D., & Gregersen, T. (2022). The idiodynamic method: Willingness to communicate and anxiety processes interacting in real-time. International Review of Applied Linguistics in Language Teaching, 60(1), 67–84. [CrossRef]
  53. Masinde, I. N., Mandillah, A., & Njeru, W. (2023). Effective teaching of listening and speaking skills through multimodal approaches: A focus on EFL classrooms. Journal of Language and Linguistic Studies, 19(1), 230–240. [CrossRef]
  54. Melweth, H. M. A., Al Mdawi, A. M. M., Alkahtani, A. S., & Badawy, W. B. M. (2023). The role of artificial intelligence technologies in enhancing education and fostering emotional intelligence for academic success. Migration Letters, 20(S9), 863–874. [CrossRef]
  55. Ondé, D., Cabellos, B., Gràcia, M., Jiménez, V., & Alvarado, J. M. (2023). The role of emotional intelligence, meta-comprehension knowledge, and oral communication on reading self-concept and reading comprehension. Education Sciences, 13(12), 1249. [CrossRef]
  56. Prakash, J., Swathiramya, R., Balambigai, G., & Abhirami, J. S. (2024). AI-driven real-time feedback system for enhanced student support: Leveraging sentiment analysis and machine learning algorithms. International Journal of Computational and Experimental Science and Engineering, 10(4). [CrossRef]
  57. Pishghadam, R. (2009). A quantitative analysis of the relationship between emotional intelligence and foreign language learning. Electronic Journal of Foreign Language Teaching, 6(1), 31–41.
  58. Qiao, H., & Zhao, A. (2023). Artificial intelligence-based language learning: Illuminating the impact on speaking skills and self-regulation in Chinese EFL context. Frontiers in Psychology, 14, 1255594. [CrossRef]
  59. Qin, L., & Zhong, W. (2024). Adaptive system of English-speaking learning based on artificial intelligence. Journal of Electrical Systems, 20(6s), 267–275. [CrossRef]
  60. Prentice, C., Lopes, S. D., & Wang, X. (2020). Emotional intelligence or artificial intelligence—An employee perspective. Journal of Hospitality Marketing & Management, 29(4), 377–403. [CrossRef]
  61. Roberts, A. B., & Richardson, J. W. (2024). The role of artificial intelligence in schools: A case of policy formation. Journal of Cases in Educational Leadership, XX(X), XX–XX. [CrossRef]
  62. Rogulska, O., Rudnitska, K., Mahdiuk, O., Drozdova, V., Lysak, H., & Korol, S. (2023). The today's linguistic paradigm: The problem of investigating emotional intelligence in the learning of a foreign language. Revista Românească pentru Educaţie Multidimensională, 15(4), 458–473. [CrossRef]
  63. Rubio, A. D. J. (2024). Relationships between emotional intelligence, willingness to communicate, and classroom participation: Results from secondary education in Spain. In T. C. Bang, C. H. Nguyen, & H. P. Bui (Eds.), Exploring contemporary English language education practices (pp. 70–96). IGI Global. [CrossRef]
  64. Rusmiyanto, R., Huriati, N., Fitriani, N., Tyas, N. K., Rofi’i, A., & Sari, M. N. (2023). The role of artificial intelligence (AI) in developing English language learner's communication skills. Journal on Education, 6(1), 750–757. [CrossRef]
  65. Santoso, D. R., Affandi, G. R., & Basthomi, Y. (2024). 'Getting stuck': A study of Indonesian EFL learners' self-efficacy, emotional intelligence, and speaking achievement. Studies in English Language and Education, 11(1), 384–402. [CrossRef]
  66. Sejdiu, S. (2017). Rethinking the role of listening skills in language learning: The case for multimedia and computer-assisted approaches. Language Awareness, 26(1), 23–36. [CrossRef]
  67. Sintya, M. G., & Handayani, S. (2023). The correlation between emotional intelligence and students' speaking English skill at eighth grade of SMP K Bharata 2 Jumapolo the academic year 2022/2023. English Research Journal: Journal of Education, Language, Literature, Arts and Culture, 8(1). [CrossRef]
  68. Sergeeva, O. V., Zheltukhina, M. R., Bikbulatova, G. I., Sokolova, E. G., Digtyar, O. Y., Prokopyev, A. I., & Sizova, Z. M. (2023). Examination of the relationship between information and communication technology competencies and communication skills. Contemporary Educational Technology, 15(4), ep483. [CrossRef]
  69. Shi, L. (2024). The integration of advanced AI-enabled emotion detection and adaptive learning systems for improved emotional regulation. Journal of Educational Computing Research. Advanced online publication. [CrossRef]
  70. Surahman, D., & Sofyan, A. (2021). The effect of community language learning and emotional intelligence on students' speaking skill. Lentera Pendidikan: Jurnal Ilmu Tarbiyah dan Keguruan, 24(1), 82–90. [CrossRef]
  71. Su, Y., & Guo, H. (2024). Unpacking EFL learners’ emotions and emotion-regulation strategies in digital collaborative academic reading projects: An integrated approach of vignette methodology and interview analysis. Journal of English for Academic Purposes, 71, 101404. [CrossRef]
  72. Swathy, G., & Kannammal, K. E. (2024). Emonet: Innovating emotional intelligence in AI with hybrid learning models. Journal of Basic Science and Engineering, 21(1), 1127–1141.
  73. Terry-Johnson, L. M. (2024). The correlation of the emotional intelligence of principals with student achievement, growth, and attendance (Doctoral dissertation, Trevecca Nazarene University).
  74. Tajik, A. (2025). Exploring the role of AI-driven dynamic writing platforms in improving EFL learners' writing skills and fostering their motivation. Research Square. [CrossRef]
  75. Tajik, A. (2024). Exploring the potential of ChatGPT in EFL language learning: Learners’ reflections and practices. Preprints. [CrossRef]
  76. Thao, L. T., Thuy, P. T., Thi, N. A., Yen, P. H., Thu, H. T. A., & Tra, N. H. (2023). Impacts of emotional intelligence on second language acquisition: English-major students’ perspectives. SAGE Open, 13(4), 21582440231212065. [CrossRef]
  77. Topal, İ. H. (2024, December). The place of emotions in language education from an emotional intelligence perspective. Forum for Education Studies, 2(4), 1832. [CrossRef]
  78. Vistorte, A. O. R., Deroncele-Acosta, A., Ayala, J. L. M., Barrasa, A., López-Granero, C., & Martí-González, M. (2024). Integrating artificial intelligence to assess emotions in learning environments: A systematic literature review. Frontiers in Psychology, 15, 1387089. [CrossRef]
  79. Wang, M., & Wang, Y. (2024). A structural equation modeling approach in examining EFL students’ foreign language enjoyment, trait emotional intelligence, and classroom climate. Learning and Motivation, 86, 101981. [CrossRef]
  80. Wang, Z. (2022). Computer-assisted EFL writing and evaluations based on artificial intelligence: A case from a college reading and writing course. Library Hi Tech, 40(1), 80–97. [CrossRef]
  81. Wei, L. (2023). Artificial intelligence in language instruction: Impact on English learning achievement, L2 motivation, and self-regulated learning. Frontiers in Psychology, 14, Article 1261955. [CrossRef]
  82. Williams, C. (2024). Emotional intelligence in school leadership and teachers’ perception of its effect on their efficacy. Digital Commons@ACU. https://digitalcommons.acu.edu/cgi/viewcontent.cgi?article=1863&context=etd.
  83. Xin, Z., & Derakhshan, A. (2024). From excitement to anxiety: Exploring English as a foreign language learners' emotional experiences in the artificial intelligence-powered classrooms. European Journal of Education, 60(1), e12845. [CrossRef]
  84. Xu, Y., Qiu, Y., & Zhou, W. (2022). Development and validation of psychological needs scales for L2 speaking and listening. System, 104, 102771. [CrossRef]
  85. Xiao, Y., Zhang, T., & He, J. (2024). A review of promises and challenges of AI-based chatbots in language education through the lens of learner emotions. Heliyon, 10(18), e37238. [CrossRef]
  86. Zainuddin, N. (2023). Technology-enhanced language learning research trends and practices: A systematic review (2020-2022). Electronic Journal of e-Learning, 21(2), 69–79. [CrossRef]
  87. Zhang, C. (2023). The effects of emotional intelligence on students' foreign language speaking: A narrative exploration in China's universities. The Qualitative Report, 28(12), 3494–3513. [CrossRef]
  88. Zhou, W., & Gao, B. (2023). Construction and application of English-Chinese multimodal emotional corpus based on artificial intelligence. International Journal of Human-Computer Interaction, 1–12. [CrossRef]
  89. Zhou, T., Cao, S., Zhou, S., Zhang, Y., & He, A. (2023). Chinese intermediate English learners outdid ChatGPT in deep cohesion: Evidence from English narrative writing. System. [CrossRef]
  90. Zhou, C., & Hou, F. (2024). Can AI empower L2 education? Exploring its influence on the behavioral, cognitive, and emotional engagement of EFL teachers and language learners. European Journal of Education, 59(4), e12750. [CrossRef]
  91. Zou, B., Liviero, S., Hao, M., & Wei, C. (2020). Artificial intelligence technology for EAP speaking skills: Student perceptions of opportunities and challenges. In M. R. Freiermuth & N. Zarrinabadi (Eds.), Technology and the psychology of second language learners and users (pp. 433–463). Palgrave Macmillan. [CrossRef]
Table 1. Descriptive Statistics for the Participants’ Scores on the Pre-test (PET).
Test   N     Min   Max   M      SD
PET    195   43    62    52.5   1.708
Table 2. Descriptive Statistics for the Participants’ Scores on the Post-test of AI-driven Emotional Intelligence Integration.
Test                   Class     n    Mean   Std. Deviation   Std. Error Mean
Post-test (Speaking)   Control   20   5.11   1.02             0.41
                       AIEI      20   8.75   2.32             0.28
Table 3. Tests of Between-Subjects Effects on Speaking Proficiency Assessment.
Source                     Type III Sum of Squares   df   Mean Square   F        Sig.   Partial Eta Squared
Pretest                    91.612                    1    51.612        45.616   .000   .222
Group (AIEI vs. Control)   85.268                    1    41.268        53.008   .000   .197
Error                      26.988                    37   1.330
Total                      1184.000                  40
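For readers who want to see how the quantities in Table 3 relate to one another (Mean Square = SS/df, F = MS_effect/MS_error, partial η² = SS_effect / (SS_effect + SS_error)), the following minimal Python sketch fits the same kind of analysis of covariance (post-test scores with the pre-test as covariate and group as factor) and extracts a Type III table with partial eta squared. It is an illustration run on simulated data with pandas and statsmodels, not the analysis pipeline used in this study; the variable names pretest, posttest, and group are hypothetical.

```python
# Illustrative sketch only (simulated data), not the authors' analysis script.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(42)

# Simulate 20 control and 20 experimental (AIEI) learners.
group = np.repeat(["Control", "AIEI"], 20)
pretest = rng.normal(5.0, 1.0, size=40)
posttest = pretest + np.where(group == "AIEI", 3.5, 0.0) + rng.normal(0, 1.0, size=40)

df = pd.DataFrame({"group": group, "pretest": pretest, "posttest": posttest})

# ANCOVA: post-test ~ pre-test covariate + group factor, Type III sums of squares.
model = ols("posttest ~ pretest + C(group)", data=df).fit()
table = sm.stats.anova_lm(model, typ=3)

# Partial eta squared for each effect: SS_effect / (SS_effect + SS_error).
ss_error = table.loc["Residual", "sum_sq"]
table["partial_eta_sq"] = table["sum_sq"] / (table["sum_sq"] + ss_error)

print(table)
```

Running the sketch prints rows for the intercept, the pretest covariate, the group factor, and the residual (error) term, mirroring the layout of Table 3 under the simulated data.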
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.