Preprint (this version is not peer-reviewed; a peer-reviewed article of this preprint also exists)

Effectiveness of Artificial Intelligence Practices in Teaching of Social Sciences: A Multi-Complementary Approach Research on Pre-School Education

Submitted: 06 February 2025
Posted: 08 February 2025

Abstract
The aim of this study is to evaluate artificial intelligence applications at the preschool education level within the framework of the Multi-complementary Approach (McA). McA is designed as a comprehensive approach that encompasses multiple analysis methods. In the first phase of the study, the pre-complementary knowledge process, meta-analysis and meta-thematic analysis methods were used; in the post-complementary knowledge process, a single-group pre-test/post-test experimental design was applied. Finally, in the complementary knowledge phase, the findings of the first two phases were combined, allowing the effectiveness of artificial intelligence applications in preschool education to be evaluated from a broader and more comprehensive perspective. The study first introduces McA; the methodological process and findings are then presented step by step within this framework. A literature review based on document analysis of artificial intelligence applications in social sciences teaching and preschool education showed that artificial intelligence has positive and significant effects both on student performance and on various variables that support teaching. The complementary results favoring artificial intelligence applications encourage the increased, more widespread, and more systematic use of such technologies in preschool teaching environments.
Subject: Social Sciences - Education

1. Introduction

Social change and transformation continue to accelerate, with new technologies emerging every day and significant developments occurring in existing ones. Influenced by these advancements, artificial intelligence has come to be recognized as one of the fundamental technologies of today's society. Artificial intelligence (AI) is defined as any technique that enables a computer to mimic human behavior and reproduce the human decision-making process to solve complex tasks with minimal or no human intervention (Russell and Norvig, 2021). AI is also described as computer systems that understand, behave, reason, and act rationally in human-like ways (Howard, 2019), and as machines or computers that demonstrate human-like capabilities such as learning from information, adapting, synthesizing, self-correcting, and processing data in complex tasks (Chatterjee and Bhattacharjee, 2020; Popenici and Kerr, 2017).
Social sciences, while continuing to study human behaviors and social structures through traditional methods, increasingly recognize the importance of artificial intelligence due to current requirements such as big data analysis and the modeling of social dynamics (Turgut, 2024). In education, artificial intelligence benefits educators by providing more effective and efficient administrative functions and by improving students' experiences and learning quality (Chen et al., 2020); it also offers personalized and enhanced learning experiences for students and increased teacher productivity (Tambuskar, 2022). Applications of artificial intelligence in preschool education have created social impacts in areas such as adaptive learning environments, machine learning, computer vision, speech processing, neuroscience, health, natural language understanding, and the Internet of Things (Zhou et al., 2017). It helps improve instructional practices by providing personalized learning experiences (Liao, 2002; Nan, 2020). Artificial intelligence-supported teaching systems benefit both students and teachers by increasing the accuracy and efficiency of data acquisition in preschool education (Sun, 2022).

1.1. Artificial Intelligence in Social Sciences

Artificial intelligence is becoming both a participant in and a mediator of human interaction (Rezaev & Tregubova, 2018). In recent years, with the advancement of digital technologies, AI applications have been widely used in various fields. While artificial intelligence has primarily been the focus of engineers and information technology specialists, it has now expanded to include the social sciences as well (Jarek & Mazurek, 2019). AI, which provides an interdisciplinary and multifaceted field of study, is utilized not only in computer science, linguistics, and mathematics but also in fields such as art, psychology, and design (Rezk, 2023).
One of the fundamental and rapidly evolving components of artificial intelligence is machine learning (ML), which refers to the process of defining specific algorithms and continuously improving a data model based on data analysis (Rebala et al., 2019; Russell & Norvig, 2010). The growing interest in big data, machine learning, and other data science techniques in recent years has sparked a dynamic debate about how and in which areas these methods can be applied in research (Agarwal & Dhar, 2014). As a rapidly advancing technology, machine learning has been widely adopted across a broad spectrum that includes the social sciences, such as sociology (Chen et al., 2021; Liu, 2021), medicine (Mercaldo et al., 2017; Waljee & Higgins, 2010), economics (Mullainathan & Spiess, 2017), and finance (Cavalcante et al., 2016). Deep learning (DL) is a subfield of machine learning, which itself is a subfield of artificial intelligence, following the developmental progression of AI > ML > DL (Agarwal & Murty, 2021). Deep learning is particularly useful for high-dimensional and large datasets and outperforms ML algorithms in many applications that require processing deep neural networks, images, text, speech, and audio data (Lecun et al., 2015). The resurgence of artificial intelligence can be attributed, at least in part, to deep learning algorithms (Zhang & Feng, 2021). It is noted that AI, particularly in the context of machine learning and deep learning, is a dominant factor in emerging technologies as well as in social and economic sciences (Apostolopoulos & Groumpos, 2023).
Machine learning is increasingly applied to large-scale social datasets related to humans (Lazer et al., 2009). The use of big data applications like machine learning in the social sciences offers a promising new statistical modeling culture for social scientists beyond merely data availability (Veltri, 2017). With the internet enhancing the capacity to generate, analyze, and store big data, a new era known as the digital age or the digital information age has begun. These developments have paved the way for new research areas and methodologies in the social sciences (Castells, 2010). To obtain this data, social scientists use either qualitative or quantitative methods. In quantitative research, AI can predict relationships between specific variables using machine learning techniques such as regression, decision trees, random forests, and support vector machines (Chen et al., 2020; Probst et al., 2019; Sun et al., 2020). Unlike quantitative research, qualitative research primarily deals with non-numerical data such as text, images, audio, and video (Longo, 2020). Therefore, AI is utilized in a different manner in this type of research.
Natural Language Processing (NLP), defined as a component of artificial intelligence that enables computers to interpret human language (Kumar & Mishra, 2020), facilitates analyses such as sentiment analysis in the social sciences (Aşkun, 2023). NLP has become increasingly relevant to social science research due to its ability to provide reliable tools for analyzing textual data, including interview transcripts, social media posts, and news articles (Boumans & Trilling, 2018). Through artificial intelligence, audio and video data can be automatically converted into text, leading to cost savings (Abram et al., 2020) as well as reductions in labor and time (Mahmood et al., 2023). AI-powered tools such as NVivo and ATLAS.ti allow researchers to conduct thematic analysis by automatically identifying recurring themes, patterns, and key concepts within texts (Paulus & Marone, 2024; Rietz & Maedche, 2021). AI tools are also effectively utilized in data visualization, enabling the presentation of data in visual formats such as graphs and charts (Çelik, 2019; Isichei et al., 2022; Owan et al., 2023).

1.2. Artificial Intelligence in Pre-School Education

With the increasing demand for education, the integration of artificial intelligence (AI) and education has led to the emergence of a new research field, resulting in a growing body of literature on educational AI (Song & Wang, 2020). AI has been found to have a profound impact on various fields, including education. According to Chen et al. (2020), AI in education is seen as a synthesis of three fundamental areas: knowledge, education, and computational science. Baker and Smith (2019) categorize AI-powered educational tools into three main types: those aimed at students, teachers, and the educational system. In today's society, children prefer learning resources to be at their fingertips and to learn at their own pace, making AI a promising tool to support students (Chassignol et al., 2018; Mondal, 2019). AI-driven applications such as personalized learning systems (Chan & Zary, 2019; Miao et al., 2021; Mintz et al., 2023), chatbots (Smutny, 2020), robots (Nilsson, 2010; Su et al., 2023; Woo et al., 2021), online games (Young et al., 2012), and simulations (Chen et al., 2020) are increasingly being utilized in education.
Early childhood is considered an ideal time to spark children's interest in AI (Su, 2022). AI is becoming more integrated into various educational settings, including preschool education. The use of AI-powered tools in early childhood classrooms has gained attention in recent years (Ding, 2021), with research highlighting the potential of AI-driven smart learning structures, chatbots, and digital assistants in improving the quality of early childhood education (Ma, 2021). Several studies have shown promising effects of AI in early childhood education, with an increasing focus on its role in enhancing learning and development (Lin et al., 2020; Nan, 2020; Tseng et al., 2021; Vartiainen et al., 2020). AI technologies are being employed to improve the quality of preschool education (Weiwei, 2022).
The inclusion of AI in early childhood classrooms is believed to have positive effects (Amershi et al., 2014). AI technologies can increase children's enthusiasm for learning (Jin, 2019; Nan, 2020; Tian & Cui, 2022), while intelligent tutoring systems provide real-time feedback to enhance the effectiveness of preschool education (Jiang, 2022; Sun, 2022). AI can also support the development of computational thinking skills (Lee, 2020), contribute to efficient educational content by enabling personalized learning experiences (Liao & Gu, 2022), and allow AI-driven robots to interact with children to create tailored learning experiences (Dongming, 2020; Williams et al., 2019). Additionally, AI-powered tools such as augmented reality and virtual reality can make learning more engaging and enhance children's understanding of information (Tian & Cui, 2022).

1.3. Purpose and Significance of the Study

The purpose of this study is to examine the impact of artificial intelligence (AI) applications on the learning environment and academic performance in early childhood education from a holistic perspective. The literature indicates that while there are numerous studies on AI across various fields, the number of studies specifically addressing its use in the social sciences is limited (Bail, 2024; Grossman et al., 2023). Considering this gap, and recognizing the scarcity of studies that integrate both qualitative and quantitative research findings, this study aims to contribute to the literature through an experimental design conducted at the early childhood education level. By comprehensively presenting AI's role in educational processes, the study seeks to bridge this gap, combining qualitative and quantitative findings to obtain broader and more comprehensive results. Thus, the impact of AI applications at the preschool level on learning environments and academic performance is examined using the Multi-complementary Approach (McA) (Batdı, 2016), as detailed in the methodology section. Within this framework, the study is expected to answer the following research questions regarding the effectiveness of AI applications in the field of social sciences:

1.4. Research Questions

Pre-Complementary Knowledge Phase
  • What is the overall effect size (g) of AI applications in the field of social sciences according to the meta-analysis findings?
  • What are the participant perspectives on AI applications in social sciences based on the meta-thematic analysis?
Post-Complementary Knowledge Phase
  • At the end of the experimental process in preschool education, is there a statistically significant difference between the experimental group's pre-test and post-test scores in favor of the post-test?
  • What are the observer perspectives on AI applications in preschool education from a qualitative standpoint?
Complementary Knowledge Phase
  • Do the comprehensive results obtained by integrating pre- and post-complementary knowledge on AI applications complement and reinforce each other?
  • Based on the findings, what recommendations can be made on the topic?
This study seeks to provide a comprehensive understanding of the role of AI in early childhood education, contributing valuable insights to both academia and educational practice.

2. Method

This study aims to determine the impact of artificial intelligence (AI) applications on the field of social sciences. Through a literature review, it evaluates AI applications from both quantitative and qualitative perspectives, analyzing and interpreting the findings of the current experimental study. In this context, the study also compares the results of the experiment with similar studies in the literature to identify commonalities and differences. To provide a broader perspective, the study was conducted within the framework of the Multi-complementary Approach (McA), a method that integrates findings obtained through various analytical tools, extensive data sources, and a holistic perspective (Batdı, 2016, 2017, 2018).

2.1. What is the Multi-Complementary Approach?

McA is an approach that enables the holistic evaluation of qualitative and quantitative data collected through different analytical programs. This approach allows a subject to be examined from multiple perspectives, identifying gaps in the literature and contributing uniquely to research through various data collection and analysis methods (Batdı, 2016). Accordingly, McA consists of three main stages that facilitate access to pre-complementary knowledge, post-complementary knowledge, and complementary knowledge (Batdı, 2016, 2017, 2018).

2.2. Pre-Complementary Knowledge Phase

As shown in Figure 1, the pre-complementary knowledge phase is the first stage of McA. In this phase, a literature review is conducted to examine studies on AI applications in the field of social sciences. The literature is analyzed through meta-analysis and meta-thematic analysis processes, aiming to identify existing gaps in this field.
In this context, it can be stated that a detailed table was created by reviewing studies on artificial intelligence applications using the document analysis method in both types of analysis. These two types of analysis are defined as follows:
Glass (1976) described meta-analysis as the "analysis of analyses," while Petitti (1994) defined it as a quantitative approach that systematically analyzes the findings of individual studies to derive a general conclusion. Meta-analysis is a research synthesis that combines the statistical results of quantitative studies (Borenstein et al., 2009); its primary goal is to summarize and interpret all available evidence on a specific topic or research question (Lipsey & Wilson, 2001). The other method used in the pre-complementary knowledge phase of the research is meta-thematic analysis, a process based on document analysis that aims to access qualitative research on a specific topic and re-analyze the raw data from these studies to develop themes and codes (Batdı, 2019). This method involves collecting detailed notes through observation, interviews, and document analysis, then classifying the extensive raw data into significant themes, categories, and explanatory case examples to transform them into readable narratives (Patton, 2014). Therefore, the most distinctive feature of the meta-thematic analysis process is its ability to access raw qualitative data and generate reliable research findings (Batdı, 2020).

2.2.1. Literature Review and Inclusion Criteria

To develop the information in the pre-complementary phase, a comprehensive literature review was conducted on artificial intelligence applications in the field of social sciences. The study utilized Web of Science, Science Direct, Taylor and Francis Online, ProQuest Dissertations & Theses Global, Google Scholar, and YÖK databases to access relevant studies.
Searches were conducted in both Turkish and English using the keywords:
  • "yapay zeka" (artificial intelligence)
  • "sosyal bilimler ve yapay zeka" (social sciences and artificial intelligence)
  • "artificial intelligence"
  • "social sciences and artificial intelligence"
To refine the search, additional terms such as “effectiveness of,” “effect of,” “the impact of,” and “the effects” were included to focus on the effectiveness of AI applications.

2.2.2. Meta-Analysis Inclusion Criteria

The criteria for including studies in the meta-analysis were as follows:
  • Conducted between 2005 and 2025,
  • Implemented AI applications in the experimental group,
  • Contained pre-test and post-test data on AI applications,
  • Focused on the impact of AI applications on students' academic achievement,
  • Included descriptive statistics required for effect size calculations within the McA framework, such as sample size (n), arithmetic mean (X̅), and standard deviation (SD).
In addition to the above five criteria, the meta-thematic analysis included studies that:
  • Examined the impact of AI applications on academic success,
  • Used qualitative research methods and included participants' perspectives,
  • Were conducted between 2005 and 2025 and were retrieved from the same databases used in the meta-analysis.
A total of 248 studies on AI applications in social sciences were identified. However, only six studies met the meta-analysis criteria, and five studies met the meta-thematic analysis criteria. The remaining studies were excluded from the analysis due to irrelevance, failure to meet inclusion criteria, or duplication across multiple databases.
The number of studies included in the meta-analysis and meta-thematic analysis is presented in Figure 2 using the PRISMA flow diagram, developed by Moher et al. (2009).
In the pre-complementary knowledge phase of the study, the data obtained through document analysis were transferred to Microsoft Office Excel and Word. During the meta-analysis process, the statistical software Comprehensive Meta-Analysis (CMA) 2.0 was used (Borenstein et al., 2009; Rosenberg et al., 2000). For the document analysis of studies conducted within the meta-thematic framework, the MAXQDA 11 software was utilized. The collected data were analyzed using content analysis, a qualitative research technique that involves classifying research findings based on specific themes and codes and interpreting them systematically (Cohen et al., 2007). Accordingly, participant opinions obtained from study documents were analyzed, and themes and codes were developed. The pre-complementary knowledge phase analysis was conducted in two separate processes; the meta-analysis process is detailed below.

2.2.3. Effect Size and Model Selection

The value calculated to provide information about the magnitude and direction of the relationship between two groups or variables is defined as effect size (Borenstein et al., 2009). In this study, Hedges' g, a standardized measure of the difference between means, was preferred for effect size estimation (Hedges, 1981).
For interpreting effect size values, the classification by Thalheimer and Cook (2002) was taken into account. According to their classification:
  • If -0.15 ≤ g < 0.15, the effect size is considered negligible.
  • If 0.15 ≤ g < 0.40, it is small.
  • If 0.40 ≤ g < 0.75, it is moderate.
  • If 0.75 ≤ g < 1.10, it is large.
  • If 1.10 ≤ g < 1.45, it is very large.
  • If g ≥ 1.45, it is considered excellent (Thalheimer & Cook, 2002).
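As an illustration, Hedges' g and the Thalheimer and Cook (2002) bands above can be computed as follows. This is a minimal sketch with made-up example values, not data from this study:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: standardized mean difference with small-sample correction."""
    # Pooled standard deviation of the two groups
    sp = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                   # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)  # correction factor J (Hedges, 1981)
    return j * d

def classify(g):
    """Thalheimer & Cook (2002) labels for g >= -0.15."""
    if g < 0.15:
        return "negligible"
    if g < 0.40:
        return "small"
    if g < 0.75:
        return "moderate"
    if g < 1.10:
        return "large"
    if g < 1.45:
        return "very large"
    return "excellent"

# Hypothetical post-test scores for an experimental and a comparison group
g = hedges_g(m1=80, sd1=10, n1=14, m2=70, sd2=10, n2=14)
print(round(g, 3), classify(g))  # 0.971 large
```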
In meta-analytic calculations, effect sizes are generally estimated using either the fixed-effect model (FEM) or the random-effects model (REM) (Ried, 2006). According to Schmidt et al. (2009), the application of the FEM is quite limited. Given the variability in instructional levels, subject areas, implementation durations, and sample sizes in this study, the random-effects model (REM) was deemed the most appropriate approach.

2.2.4. Heterogeneity Test

According to Higgins and Thompson (2002), heterogeneity in meta-analysis refers to the extent to which the results of included studies differ. Instead of the Q statistic, which is commonly used in heterogeneity testing, the I² statistic is preferred as it provides more reliable results.
  • The I² value ranges from 0% to 100%.
  • 0% indicates no heterogeneity.
  • 75% or higher indicates high heterogeneity (Higgins et al., 2003).
In this study, the calculated I² value was 82.82%, indicating a high level of heterogeneity. Consequently, moderator analyses were conducted to test the differences between groups based on various variables (Rosenthal, 1979).
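The I² statistic is derived from Cochran's Q and the degrees of freedom (df = k − 1, where k is the number of studies). A minimal sketch, using an illustrative Q value rather than one reported in this study:

```python
def i_squared(q, k):
    """I^2 = 100 * (Q - df) / Q, floored at 0, with df = k - 1 (Higgins et al., 2003)."""
    df = k - 1
    return max(0.0, 100.0 * (q - df) / q)

# With k = 6 studies, a Q of about 29.1 would fall in the high-heterogeneity
# band (>= 75%) discussed above (illustrative value only)
print(round(i_squared(29.1, 6), 2))  # 82.82
```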

2.2.5. Coding

Coding is defined as a method that enhances the reliability of a study by enabling independent replication by different researchers (Wilson, 2009). In this study, to ensure validity and reliability, a coding form was developed (Bangert-Drowns & Rudner, 1991).
This form included:
  • Study code, title, author(s), year of publication, academic term, course, educational level, and sample details
  • Statistical data related to these variables
Additionally, this form allowed multiple coders to examine the dataset and ensured that only studies with inter-coder agreement were assigned codes. To assess inter-coder reliability, the formula proposed by Miles and Huberman (1994, p. 64) was used:
Reliability = Agreement / (Agreement + Disagreement) × 100
An agreement rate of 71-90% was achieved among the coders.
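The Miles and Huberman agreement formula is straightforward to compute; a minimal sketch with illustrative counts (not the coding counts from this study):

```python
def percent_agreement(agreements, disagreements):
    """Miles & Huberman (1994): Reliability = Agreement / (Agreement + Disagreement) * 100."""
    return 100.0 * agreements / (agreements + disagreements)

# e.g., two coders agreeing on 40 of 50 coding decisions
print(percent_agreement(40, 10))  # 80.0
```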
In the meta-thematic analysis of qualitative studies, participant opinions were re-evaluated, and studies with similar characteristics were grouped into common categories, from which themes were developed. In cases of disagreement, coders engaged in literature-based discussions to reach a consensus on common codes (Silverman, 2005).
During the interpretation of themes and codes, participant statements were included as direct quotations to enhance clarity, transparency, and reliability. Each referenced study was coded using letters, numbers, and symbols. For example, in M11-s.5:
  • "M" represents the article
  • "11" represents the study number
  • "s.5" refers to the page number of the quotation.

2.2.6. Publication Bias and Reliability

In meta-analytic studies, various methods have been developed to ensure that analyses are conducted reliably and to avoid potential biases. One of these is the calculation of the fail-safe N, proposed by Rosenthal (1979): the number of unpublished null studies required to invalidate the observed effect. A high fail-safe N value supports the validity of the results. In this study, the calculated fail-safe N value was 248 (p = 0.1); compared to the number of studies included in the analysis, this value is quite high, suggesting that the risk of publication bias is low (Borenstein et al., 2009).
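One common formulation of Rosenthal's fail-safe N works from the standard normal deviates (Z) of the included studies. The sketch below uses hypothetical Z values for illustration only; it is not the computation behind the value reported above:

```python
def fail_safe_n(z_values, z_alpha=1.645):
    """Rosenthal's fail-safe N: number of unpublished null studies needed to
    bring the combined one-tailed result below significance.
    z_alpha = 1.645 corresponds to one-tailed p = .05."""
    k = len(z_values)
    return (sum(z_values) ** 2) / (z_alpha ** 2) - k

# Hypothetical Z values for k = 6 studies
print(round(fail_safe_n([2.0] * 6), 1))  # 47.2
```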
Figure 4. Funnel Plot.
In meta-analyses, a funnel plot is also used to assess publication bias (Duval & Tweedie, 2000). In this plot, the horizontal axis represents the effect size, while the vertical axis shows the sample size, variance, or standard error of each study. The funnel plot thus depicts the relationship between studies' effect sizes and their precision: as a study's sample size increases, the precision of its effect size estimate also increases (Sterne & Harbord, 2004). In the absence of publication bias, the plot takes the form of a symmetric inverted funnel, as shown in Figure 4 (Sterne et al., 2011). Upon examining the funnel plot, the studies (points) are distributed on both sides of the vertical line, suggesting that publication bias is unlikely in this study.

2.3. Post-Complementary Knowledge Phase

The second phase, the post-complementary knowledge phase, draws on original research conducted by the researcher to address the deficiencies identified in the first phase. To this end, the study first determined how the subject had been addressed in the relevant literature, using meta-analysis and meta-thematic analysis. Since McA is based on inductive reasoning, the data obtained in the first step serve as key indicators, evaluated within an integrated framework, of what is missing in early childhood education research. The post-complementary knowledge phase was therefore conducted as a new initiative to address these missing data, aiming to fill gaps, draw attention, and raise awareness within the study area. Accordingly, the first phase revealed only a limited number of studies on artificial intelligence applications in early childhood education, and this was identified as a significant void in the literature. To help fill this gap and draw other researchers' attention to the area, artificial intelligence applications were implemented with an experimental group in early childhood education, and their effect on students' performance was evaluated through the experimental process.

2.4. Design of the Experimental Process

McA is a mixed design that combines multiple research methods. In this design, various combinations of quantitative, qualitative, or both methods are used to structure each stage based on the findings from the previous stage (Batdı, 2018). In the experimental section of the research, an explanatory sequential design model was adopted. According to Ivankova et al. (2006), in this multiphase mixed method, quantitative data are collected first, followed by qualitative data that help provide a deeper explanation of the quantitative findings. The main purpose of this design is to provide a more comprehensive perspective on the research problem through quantitative data and findings. The qualitative data and their analysis enrich and clarify the statistical results by deeply exploring participants' views. In the pre-complementary knowledge phase, data from studies on artificial intelligence applications were evaluated through meta-analysis and meta-thematic analysis, and these analyses contributed to guiding the experimental processes. After the pre-complementary phase, it was found that research was primarily focused on social sciences, particularly in the health field, and there was a significant gap in studies targeting early childhood education. Therefore, an experimental study was planned to examine the impact of artificial intelligence applications on student performance in early childhood education.

2.5. Formation of the Experimental Group

In the study, a single-group pre-test/post-test experimental design was applied. In this design, the effect of the experimental process is tested through a single group. In this context, the experimental group consists of 14 early childhood education students (n = 14) enrolled in the 2024-2025 academic year. A pre-test (T1) was applied to the experimental group before the experimental process. The same test was re-administered as the post-test (T2) at the end of the experimental process.
Table 1. Symbolic Representation of the Experimental Study.
Experimental Group R T1 X T2
R: Neutrality in group formation, X: Independent variable level (Artificial Intelligence applications in early childhood education), T1: Pre-test application, T2: Post-test application.

2.6. Data Collection Tool: Student Evaluation Form

A student evaluation form was prepared by the researcher to identify the impact of artificial intelligence applications on student performance using a multi-complementary approach with 60-72 month-old children attending preschool education. The form was used as a measurement tool. First, the preschool education program where the application would take place was reviewed. The achievements and indicators in the preschool education program were examined in detail. In the student evaluation form developed to assess the effect of artificial intelligence applications on preschool students' performance, the answer options for the items are scored as follows: 1 = Cannot do, 2 = Can do partially, 3 = Can do completely. The items in the measurement tool were prepared with the developmental characteristics of 60-72 month-old students in mind. Expert opinions were obtained regarding the items in the tool. Based on the feedback and suggestions received, the items of the tool were revised for semantic coherence, scope, sentence structure, and spelling rules, and the final version of the measurement tool was applied.

2.7. Process Duration

Weekly lesson plans suitable for artificial intelligence applications were prepared based on the achievements in the preschool education program. Themes were selected primarily from the health, art, Turkish, and social studies achievements, as these are most closely related to the social sciences. When creating the lesson plans, activities for the experimental group were designed around the use of artificial intelligence tools, and the students were informed about the tool before the process began. In the first week, "Healthy Eating," in the second week "Learning Colors-Pink," in the third week "Natural Disasters," and in the fourth week "Dental Health" were taught using the artificial intelligence tool Animated Drawings.
Week 1: The teacher welcomed the children and guided them to the play centers. The "Little Bean Song" movements were performed as a sports and dance activity. The "Healthy Eating," "How Can We Be Healthy?" and "Healthy Tomorrow" educational videos were watched. The importance of breakfast and the components of a healthy breakfast were discussed, healthy foods were introduced, and the concept of a balanced and adequate diet was emphasized. After these explanations and a brainstorming session, the teacher showed flashcards and a slide presentation about "Healthy and Unhealthy Foods." Accompanied by the "Healthy Eating Song," the teacher had the students prepare a healthy breakfast plate using the healthy foods they had learned. The students then chose one healthy or unhealthy food and drew it, and the drawings were animated using the artificial intelligence tool. The teacher taught the "Vegetable Fruit Song," in which the students identified and discussed new vocabulary. "The Visit of Vitamins Story" was shown; the lesson aimed to teach the word "vitamin," and the teacher asked the students to use the word appropriately.
Week 2: The children were asked what colors they had already learned. Then, each child was given a color and asked to show or say an object of that color. They were told that by mixing red and white, they would get a new color and were asked to guess what this color might be. A class experiment was conducted to discover this color. Three cups were placed on the table: one empty and two filled with red and white paint, respectively. The cups were mixed, and the color pink was created. The children were asked what natural things were pink. "Pink Color Flashcards and Slide Show" and the "Learning Colors-Pink" educational video were shown. After all these activities, the "Pink Color Finger Game" was sung. Finally, "The Brave Pink Cloud Story" was shown, and then the students performed the "Flamingo Coloring" activity. The students drew and colored a pink flamingo. For those struggling with the drawing, a pre-made flamingo image was given. After completing the drawing and coloring, the drawings were uploaded to the artificial intelligence tool for animation.
Week 3: The teacher welcomed the children and directed them to the play centers. The teacher asked the children what natural events were. They discussed disasters like earthquakes, floods, fires, droughts, and volcanic eruptions, and when and why they occur. The teacher explained what should be done in case of an earthquake. First, the "Preschool Disaster Education" educational video and the "Earthquake Education for Children 'Earth' Animation Cartoon" were shown. Then, "Drop, Cover, and Hold On Story" was shown. Afterward, a "Natural Disasters Finger Game" was taught, and everyone sang it together. Flashcards about other natural disasters were shown. The next natural event to be taught was the volcano (volcanic eruption). The teacher took a boiled egg and a globe model. The teacher explained that the Earth has layers like the egg and peeled the outer shell, showing the inner white layer. Then the yellow part was shown as the magma, the central part of the Earth. It was explained that this magma sometimes needs to escape to the surface, but this requires a mountain. The children were asked to perform the "Volcano Experiment" to demonstrate what they had learned. The students were asked to draw any natural event. After completing the drawing, it was uploaded to the artificial intelligence tool. The students were asked to review each other's work.
Week 4: First, as sports and dance activities, the "Brushing Teeth Song" movements were performed. After the movements, the "Why Do We Brush Our Teeth?" and "How to Brush Our Teeth Properly?" educational videos were shown. The children sat in a way that allowed them to see the teacher. The teacher suddenly held her cheek and asked, "What happened to my tooth?" The children were prompted to think about why the tooth might hurt. The importance of brushing teeth for dental health was discussed, including which foods should be eaten and avoided for healthy teeth. An "Oral and Dental Health Drama" was performed. Two or three students acted as germs, while the others represented healthy teeth. The germs chased the healthy teeth, and when they caught them, the teacher used a toothbrush to clean the teeth and save them from germs. The "My Teeth Song" was then taught and sung together. The students were given time to draw healthy and unhealthy (decayed) teeth. After completing their drawings, they were uploaded to the artificial intelligence tool.
The lessons for the experimental group were applied over four weeks in accordance with the preschool education program. The teaching methods for all topics involved a mix of question-and-answer, auditory-visual, experiments, and drama-based techniques. Throughout the process, the students reinforced their learning by drawing and animating their creations using the artificial intelligence tool.

2.8. Thematic Analysis Process

Following the final comprehensive data phase, thematic analysis was conducted to support the quantitative data obtained and to gain more detailed information about artificial intelligence applications. In this process, information regarding students' engagement with artificial intelligence applications was gathered through observation. If a researcher wants to understand a behavior occurring in a specific environment in a detailed, comprehensive, and time-evolving manner, they may use the observation method (Bailey, 1982). According to Yıldırım and Şimşek (2006), observation is a method used to gather direct and detailed information about events, situations, and behaviors; when applied systematically, it provides reliable results (Merriam, 2013). In the study, an observation form was prepared in consultation with experts, and the personal information of the observers was kept confidential. Two individuals participated as observers, and quotations from them are labeled G1 and G2. During the thematic analysis process, the consistency of the themes and codes derived from the observations of students was evaluated using the Cohen's kappa agreement values between the coders. The kappa value measures the level of agreement between observers or data coders during the research process: values of .20 or lower indicate weak agreement, .21-.40 low-to-moderate agreement, .41-.60 moderate agreement, .61-.80 good agreement, and .81-1.00 very good agreement (Vierra & Garrett, 2005). The agreement value obtained for the study, .917, indicates very good agreement.
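For transparency, Cohen's kappa for a 2x2 coder-agreement table can be computed directly from the cell counts. The minimal sketch below uses the counts reported in Appendix 1 for the "Contribution to Social-Affective Dimension" theme and reproduces the .917 value:

```python
def cohen_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table:
    a = both coders '+', b = K1 '+' / K2 '-',
    c = K1 '-' / K2 '+', d = both coders '-'."""
    n = a + b + c + d
    po = (a + d) / n                                      # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    return (po - pe) / (1 - pe)

# Counts from Appendix 1, "Contribution to Social-Affective Dimension"
kappa = cohen_kappa(16, 0, 1, 9)
print(round(kappa, 3))  # 0.917, matching the value reported in the study
```

By the Vierra and Garrett (2005) bands above, this value falls in the .81-1.00 range and therefore indicates very good agreement.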

2.9. Complementary Knowledge

The final phase of the Multi-Complementary Approach (McA), the complementary knowledge phase, involves the integration of the findings and results obtained from the first two phases. In this phase, the use of different data collection and analysis methods can enrich the study. According to Johnson and Onwuegbuzie (2004), mixed-methods research can methodologically provide higher-quality results than single-method studies. As Tashakkori and Creswell (2007) state, in mixed-methods studies where qualitative and quantitative data are integrated, the research findings complement and support each other in terms of interpretation, explanation, description, and verification. In this regard, the integration of data obtained through two qualitative and two quantitative research methods to examine the effectiveness of artificial intelligence applications can be seen as a fundamental feature of the complementary knowledge phase, providing rich and in-depth insights.

3. FINDINGS

In this section of the study, the findings obtained from the meta-analysis and meta-thematic analysis conducted on the use of artificial intelligence applications in social sciences are interpreted.
Table 2. Meta-Analysis Data.

Test Type    Model  n   g     95% CI Lower  95% CI Upper  Q       p     I²
Achievement  FEM    13  0.55   0.41          0.69         217.75  0.00  94.48
             REM    13  0.74   0.13          1.35
According to the meta-analysis findings presented in Table 2, the effect size of AI applications (Achievement) in social sciences, calculated based on REM, was found to be g = .74 [.13; 1.35]. Since this effect size is considered large, it indicates that AI-based applications have a positive and significant impact in social sciences (Thalheimer & Cook, 2002).
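Effect sizes such as the g values reported here are standardized mean differences with a small-sample correction. As a minimal illustrative sketch (the descriptive statistics below are hypothetical, not drawn from the included studies), Hedges' g can be computed as:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: standardized mean difference with small-sample correction."""
    df = n1 + n2 - 2
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / pooled_sd      # Cohen's d
    j = 1 - 3 / (4 * df - 1)       # small-sample correction factor
    return d * j

# Hypothetical experimental vs. control descriptives, for illustration only
print(round(hedges_g(27.9, 4.0, 15, 22.5, 4.2, 15), 2))
```

Meta-analysis software then pools such per-study g values under fixed-effect (FEM) or random-effects (REM) weighting, as in Table 2.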
Table 3. Overall Effect Sizes of the Studies Included in the Analysis According to the Moderator Analysis.

Moderator             Group       n   g     95% CI Lower  95% CI Upper  Z     p     Q_B    df  p(Q_B)
Education Level       University  6   0.65  -0.66          1.96         0.97  0.33
                      Others      7   0.80   0.17          1.42         2.49  0.01
                      Total       13  0.77   0.20          1.33         2.67  0.00  0.04   1   0.84
Application Duration  Sessions    7   0.30  -0.29          0.90         1.00  0.32
                      9+          4   0.33  -0.12          0.79         1.43  0.15
                      Total       11  0.32  -0.04          0.68         1.74  0.08  0.00   1   0.95
Sample Size           Small       6   0.01  -0.24          0.26         0.09  0.93
                      Medium      3   1.82   1.21          2.43         5.88  0.00
                      Large       4   1.02  -0.36          2.40         1.44  0.15
                      Total       13  0.29   0.06          0.52         2.51  0.01  30.29  2   0.00
When examining the heterogeneity test value obtained in Table 2, it is observed that the effect sizes of artificial intelligence (AI) applications in social sciences are distributed heterogeneously (Q=217.75; p<.05). The I² value (94.48%) indicates that about 94% of the observed variance originates from true variance among the studies. According to Cooper et al. (2009), an I² value of 25% represents low heterogeneity, 50% indicates moderate heterogeneity, and 75% or above signifies high heterogeneity. The calculated I² value of 94.48 therefore confirms high heterogeneity (Higgins et al., 2003). Since the obtained I² value indicates heterogeneity, conducting a moderator analysis is necessary (Borenstein et al., 2009). The selected moderator variables were education level, implementation duration, and sample size (Table 3). According to the findings of the moderator analysis, the highest effect sizes were observed in the following categories:
  • Education level: "Others" category (g=0.80)
  • Implementation duration: 9+ sessions (g=0.33)
  • Sample size: Medium sample group (g=1.82)
These results suggest that AI applications are more effective in the specified groups within the moderator analysis. The between-group significance tests revealed no significant differences for education level (Q_B=0.04; p>.05) or implementation duration (Q_B=0.00; p>.05), while the difference across sample-size groups was significant (Q_B=30.29; p<.05). When the analysis results are evaluated as a whole, AI applications showed a broadly similar positive effect across education levels and implementation durations, with the magnitude of the effect varying by sample size.
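The I² statistic discussed above follows directly from Q and the degrees of freedom (df = n − 1 = 12 for the 13 included studies). A minimal sketch of the Higgins et al. (2003) formula, reproducing the value reported in Table 2:

```python
def i_squared(q, df):
    """Higgins' I²: percentage of total variance attributable to
    between-study heterogeneity; clamped at 0 when Q <= df."""
    return max(0.0, (q - df) / q) * 100

# Q and df from Table 2 (13 studies, so df = 12)
print(round(i_squared(217.75, 12), 2))  # ~94.49, matching the ~94.48 reported
```

Values of 25%, 50%, and 75% mark the low, moderate, and high heterogeneity thresholds cited above, so 94.48% clearly falls in the high-heterogeneity range.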

3.1. Meta-Thematic Findings on Artificial Intelligence Applications

This section presents the results of the meta-thematic analysis of qualitative studies on artificial intelligence (AI) applications within specific themes and codes. The obtained data are categorized under the following themes:
  • Contribution to Educational Environments
  • Contribution to Innovation and Technological Development
  • Challenges and Solutions
The codes related to these themes are visually modeled below. Additionally, direct quotations are included in the relevant discussions to support these codes.

3.1.1. Contributions to Education Environments

Figure 4. Contributions to Education Environments.
When examining Figure 4, the codes fall under the theme of "Contribution to Educational Environments" in relation to artificial intelligence (AI) applications. Some of these codes can be explained as follows:
  • Resembling a real teacher
  • Improving readiness
  • Being reassuring
  • Using motivating expressions
  • Reinforcing learning by reteaching topics
  • Providing 24/7 learning opportunities
  • Demonstrating tolerance
Relevant reference statements supporting these codes include:
  • "(M5-p.17053) In the study, it is stated: 'When I make a mistake, it tells me not to worry, gives me hints, and helps me find the correct answer. It also summarizes topics... It provides visuals, which is something a real teacher would do.'"
  • "(M5-p.17051) It was useful for students who were unprepared for the lesson. It served as a preparatory tool, and thanks to the chatbot, even if we didn’t fully grasp the topic, we had already learned half of it by the time the teacher started explaining."
  • "(M5-p.17069) As a student, I felt good. I believe my classmates felt the same way... It says things like 'You are amazing!' 'Great!' or 'I’m making this question easier for you!'"
  • "(M5-p.17051) It was positive... I mean, a teacher comes and teaches a topic, then another one comes and summarizes it."
These findings suggest that AI contributes to educational environments in various ways, enhancing the learning experience from multiple dimensions.

3.1.2. Contributions to Innovation and Technological Advancement

Figure 5. Contributions to Innovation and Technological Advancement.
When examining Figure 5, the codes fall under the theme of "Contribution to Innovation and Technological Development" in relation to artificial intelligence (AI) applications. Some of these codes can be explained as follows:
  • Usage across different disciplines
  • Benefits for educational services
  • Contribution to diagnosis and treatment
  • Saving time and increasing efficiency
  • Helping users stay up to date
  • Performing automated tasks
Relevant reference statements supporting these codes include:
  • "(M1-p.180) AI and robotics technology can be used in many different disciplines. In the future, it will be an essential field that professionals in all industries need to learn… I cannot provide a very technical definition due to my lack of knowledge."
  • "(M2-p.33) AI is particularly active in the diagnosis and treatment process, especially in the diagnosis phase."
  • "(M4-p.10) AI can retrieve more relevant information faster, helping you stay up to date and potentially learn new skills more quickly."
  • "(M4-p.12) In the future, repetitive, time-consuming, and automatable tasks will be handled by AI."
  • "(M3-p.77) There will be a significant gain in speed and time. It will accelerate processes greatly. Right now, we think our current pace doesn’t harm us. We still wait two weeks for molecular tests. But in 20 years, those two weeks could mean a lot. We need to be even faster."
These findings suggest that AI applications contribute to innovation and technological development by facilitating work across various disciplines, helping professionals stay up to date, handling time-consuming tasks, improving job performance, and increasing overall efficiency.

3.1.3. Challenges in AI Applications and Suggested Solutions

Figure 6. Challenges in AI Applications and Suggested Solutions.
When examining Figure 6, the codes fall under the theme of "Challenges and Solutions in AI Applications." Some of these codes can be explained as follows:
  • Expressing fear
  • Being in the hands of certain individuals
  • Causing job loss
  • Creating ethical problems
  • Carrying the risk of cost and waste
  • AI should be in an assistant position
  • AI should be economically viable
The reference statements for these codes include:
  • In the study (M1-p.183), it is stated:
“Right now, it expresses fear and anxiety in me. Since I do not fully grasp the situation, and I cannot predict what it might do to people in the future, it seems frightening to me due to the uncertainty.”
  • In the study (M1-p.192), another statement highlights job loss concerns:
“As a banker, I believe it will negatively impact my profession. Since we mainly deal with statistical calculations and more technical matters, I think the banking profession will cease to exist in the near future, after 2030.”
  • Another concern (M1-p.199) is about control and accessibility:
“What worries me is that it will be in the hands of certain individuals, unable to reach the public, and unable to serve the general population. A majority of people may remain in hunger and poverty, and they may survive only if ‘certain individuals’ provide help. There is such a danger.”
  • Ethical concerns are also raised in (M3-p.59):
“I honestly believe it will create ethical problems. After all, what will happen in terms of ethics? Robots have no legal responsibility…”
  • Another concern regarding cost and inefficiency is found in (M3-p.75):
“Let's say an aspect of AI is developed, and it looks great. You invest in it, make serious financial commitments, bring in people to set it up, pay those people, buy the machines. But in the end, you get far less performance than what was promised. That is waste.”
  • Proposed Solutions
Reference statements related to possible solutions include:
  • In (M1-p.188), a participant suggests AI should be used as an assistant rather than a replacement:
“I would prefer it to be an assistant. I think it would be more useful that way. I would prefer it as an assistant to make my daily tasks easier.”
  • In (M3-p.72), financial feasibility is emphasized:
“Financial viability is a very important factor. The initial costs of setting up and integrating new systems can be significant…”
In conclusion, AI applications present various challenges such as causing social anxiety, creating ethical concerns, diminishing cognitive skills, surpassing human capabilities, leading to job loss, and negatively affecting relationships. However, they also offer solutions such as reducing workload, functioning as an assistant, making life easier, and handling bureaucratic tasks.

3.2. Comparison of Pre-Test and Post-Test Results After the Experimental Process

Table 4 presents the results of the assessment tool applied to the experimental group students at the end of the AI-assisted instruction. In this section of the research, the aim was to collect complementary data through an experimental study integrated with the first phase of the McA. For this purpose, the data from the assessment tool evaluating the impact of AI applications on students' learning performance at the preschool education level are presented in Table 4.
In Table 4, a significant difference is observed between the pre-test and post-test scores of the experimental group. A difference of 9.29 points separates the pre-test mean (18.64) from the post-test mean (27.93), in favor of the post-test (p < .05). As a result, it can be stated that the applied interventions positively contributed to students' learning performance.
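A paired pre-test/post-test comparison of this kind is typically evaluated with a paired-samples t statistic. The sketch below uses hypothetical scores for illustration only; they are not the study's raw data:

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired-samples t statistic: mean difference divided by its standard error."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Hypothetical pre/post scores for seven students (illustrative only)
pre  = [17, 19, 18, 20, 16, 21, 18]
post = [26, 28, 27, 30, 25, 29, 28]
print(round(paired_t(pre, post), 2))
```

The resulting t value is then compared against the critical value for n − 1 degrees of freedom to decide significance at the chosen alpha level (p < .05 in this study).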

3.3. Thematic Findings from Observers' Opinions After the Experimental Process

In this section, the findings derived from observers' opinions regarding the use of the AI tool are interpreted. In the final holistic information phase of the research, the observers' comments are grouped under two thematic headings. These themes can be expressed as "contribution to the social-emotional dimension" and "problems encountered in AI applications and their solutions."
Figure 7. Contributions to Social-Affective Dimension.
When examining Figure 7, the codes related to the "contribution to the social-emotional dimension" of AI applications are represented in the model. The observing teachers indicated that the application sparked interest and curiosity among the students. Regarding this code, G1 stated, "Introducing the AI tool sparked curiosity in the students," while G2 expressed, "Using the AI tool during the lesson increased students' interest." The observers also noted that the application enhanced students' enthusiasm for learning, made learning enjoyable, and fostered a positive mood. G1 commented, "I observed that the application significantly increased the students' enthusiasm for the learning process," while G2 mentioned, "Thanks to the activities, the students' overall mood became more positive." Additionally, the AI application is said to have fostered social and emotional development by promoting responsibility, increasing motivation, enhancing collaboration, boosting self-confidence, improving communication skills, and increasing students' willingness to participate in class. The issues encountered by the observers during the AI application process and their proposed solutions are shown in Figure 8.

3.4. Findings Related to the Holistic Information Stage

At this stage, the results of the pre-complementary and post-complementary knowledge phases are combined. In the pre-complementary knowledge phase, AI applications were found to positively influence the field of social sciences, with an effect size of g=0.74. The moderator analysis indicates that the largest effect size among education levels was observed in the "Others" category, which encouraged the researchers to implement AI applications in preschool education. None of the studies included in the analysis addressed preschool education; this absence points to a gap in the literature that the present implementation of AI applications in preschool education helps address.
In the qualitative part of the pre-complementary knowledge phase, which consists of meta-thematic analysis, it can be stated that despite some issues with AI applications, they provide significant contributions to educational environments, innovation, and the technological development process. In the second part of the study, an experimental process was applied to AI applications in preschool education. The results of the post-complementary phase show that there is a significant difference between the pre-test and post-test scores of the experimental group.
Finally, when evaluating the thematic process of the study, it is concluded that, according to the observer teachers' feedback, AI applications contributed significantly to the socio-emotional dimension by increasing students' self-confidence and motivation, making learning enjoyable, improving responsibility and communication skills, and fostering a positive mood. The pre-complementary and post-complementary findings support each other and merge coherently into a holistic conclusion, indicating that the research findings are consistent.

4. Discussion and Conclusion

The findings of this research have been discussed within the framework of the McA. Accordingly, the results of the pre-complementary phase, which includes meta-analysis and meta-thematic analysis, are presented first. Following this, artificial intelligence (AI) applications were implemented, and a measurement tool was applied. The findings obtained from teachers' observations, together with other data, were combined, and the results of the holistic phase, which includes recommendations, were presented. To explain the effects of AI applications in a more systematic and comprehensive manner, the process was approached in a phased manner.

4.1. Results of the Pre-Complementary Knowledge Phase

Based on the meta-analysis of the documents included in the research, it was determined that AI applications have a positive effect on the social sciences field (g=0.74). This result highlights that AI applications are effective in the social sciences. AI methods have advanced diagnosis, the understanding of human development, and data management in the behavioral and social sciences (Robila & Robila, 2019). In recent years, research on big data and AI in the social sciences has grown exponentially, with management and psychology leading the way, along with emerging interdisciplinary areas such as social sciences and geography (Liao et al., 2020). AI improves social science research by providing accurate and efficient data analysis, enhancing decision-making, and promoting responsible and ethical development (De La & Hernández-Lugo, 2024). Models such as LLMs are transforming social science research by simulating human-like responses and allowing for large-scale testing of theories and hypotheses about human behavior (Grossmann et al., 2023; Xu et al., 2024). Additionally, AI increases the effectiveness of data management in social and human services by offering advanced tools for literature review, data collection, and visualization (Robila & Robila, 2019). Generative AI provides new approaches to studying human behavior through surveys, online experiments, and automated content analysis (Bail, 2024), while its capabilities in text, image, and sound processing support decision-making processes in social research with greater accuracy and efficiency (De La & Hernández-Lugo, 2024).
To examine the effects of AI applications in social sciences in more detail, moderator analyses were conducted. The overall effect sizes for teaching level (g=0.77) and application duration (g=0.32) indicate that AI applications have a large impact when grouped by teaching level and a more moderate impact when grouped by application duration. The impact of AI applications thus varies with the duration of the application, but this variation is less pronounced than the differences seen across teaching levels.
Moreover, the positive results obtained from the meta-analysis were found to align with the findings from the meta-thematic analysis. The themes identified in the literature review, particularly AI's "contribution to education," are supported by previous studies. Hwang et al. (2020) noted that AI technologies, which are increasingly widespread in education, show promise for improving student learning performance and experiences. AI tools contribute to the learning process by providing feedback and offering flexible, personalized learning experiences (Yetişensoy & Karaduman, 2024). Furthermore, studies have shown that AI applications contribute to innovation and technological advancements. AI transforms work operations by enhancing human tasks, improving productivity, and fostering innovation across various sectors (Sharma, 2024). In the financial sector, it improves decision-making and customer service through automation, analytics, and algorithmic trading (Sharma, 2024; Soni, 2023), while deep learning and neural network technologies assist in predicting diseases, patient care, and disease outbreaks (Bhattamisra et al., 2023; Secinaro et al., 2021).

4.2. Results of the Post-Complementary Knowledge Phase

In the post-complementary knowledge phase of the study, a significant difference was found between the pre-test and post-test scores of the experimental group in the use of AI applications in early childhood education (pre-test: x̄=18.64; post-test: x̄=27.93). This result demonstrates that AI applications supported the achievement of learning objectives in early childhood education, highlighting the effectiveness of AI in the learning process. The positive effects of AI applications on learning have been observed in many studies. In early childhood education, AI contributes to teaching and learning processes through collaboration between humans and machines (Crescenzi-Lanna, 2022) and provides adaptive learning environments that enhance personalized learning experiences (Doğan et al., 2023).
Thematic analysis based on observations from the teachers involved in the experimental process resulted in two key themes. First, AI applications support students' social and emotional development, which is also corroborated by previous research. AI enhances creativity, problem-solving skills, and social skills such as collaboration and communication by digitizing game activities and providing real-time monitoring of children's developmental progress (Fikri & Rhalma, 2024; Masturoh et al., 2024). AI systems create personalized learning paths through machine learning algorithms, taking individuals' strengths and weaknesses into account and fostering the development of self-esteem, confidence, and critical thinking skills through interactive activities and educational games (Kuchkarova et al., 2024). The second theme that emerged from the observations relates to the challenges and solutions encountered in AI applications. Issues such as technology addiction (Bozkurt, 2023; Puteri et al., 2024), lack of social integration (Demircioğlu et al., 2024), and effects on teacher-student interaction (Schiff, 2020) were identified as challenges in AI implementation.

4.3. Results in the Complementary Knowledge Phase

In the final stage of the McA process, the results from the preliminary and final complementary knowledge phases were combined to assess the consistency of the findings. The results of the measurement tool conducted after the experimental process indicate a positive effect on students' learning performance, and these findings align with the data from the pre-complementary knowledge phase. This consistency in findings suggests that the experimental process had a positive effect on learning outcomes. The thematic analyses conducted in this phase also revealed that AI applications enhanced students' interest, curiosity, communication skills, friendship relationships, and willingness to cooperate. These outcomes facilitated the students' ability to adapt their social and emotional competencies to real-life contexts, thus contributing to the improvement of their learning performance. The results obtained from the measurement tool are consistent with the findings from the meta-analysis and meta-thematic analysis. Overall, all findings support each other, indicating that AI applications provide a meaningful contribution to both the learning process and various factors influencing learning.

4.4. Limitations

While the McA offers a comprehensive approach, there are several limitations to consider. The meta-analysis and meta-thematic analysis processes conducted in the study were limited to specific databases. The experimental process focused on the use of AI applications in the learning processes of preschool-level students. The research was also constrained to the application process for preschool students and the expected learning outcomes for the topics covered. Additionally, evaluating the effectiveness of AI applications in different subject areas and educational levels within the McA framework could provide further insights.

4.5. Suggestions

To enhance the effectiveness of AI applications in preschool education, the development of teachers' digital pedagogical competence should be prioritized. In this regard, in-service training programs for teachers on the integration of AI applications into educational processes are recommended. These training programs would help teachers better understand the pedagogical potential of AI tools and use them in alignment with the lesson content. Furthermore, AI-supported learning environments should be designed to accommodate students' individual differences and developmental characteristics, promoting more active participation in the learning process.
Considering the general findings of the study, it is recommended that environments conducive to the use of AI applications be created in educational settings, as the results showed that AI positively contributed to educational environments, innovation, technological development, and students' socio-emotional aspects. Moreover, it is suggested that AI be integrated into courses at different educational levels in line with the results regarding its positive contributions to educational environments. Lastly, measures should be taken to prevent AI from fostering technological dependency, being perceived as mere games, or hindering face-to-face communication.

Appendix 1: The Kappa Agreement Values

Meta-thematic Analysis Part

Contributions to Education Environments
           K2
K1         +     -     Σ
+         18     2    20
-          3    13    16
Σ         21    15    36
Kappa = .717, p = .000

Contributions to Innovation and Technological Advancement
           K2
K1         +     -     Σ
+         22     1    23
-          1    17    18
Σ         23    18    41
Kappa = .901, p = .000

Problems Encountered and Suggestions for Solution
           K2
K1         +     -     Σ
+         31     2    33
-          2    18    20
Σ         33    20    53
Kappa = .839, p = .000

Experimental-Qualitative Part

Contribution to Social-Affective Dimension
           K2
K1         +     -     Σ
+         16     0    16
-          1     9    10
Σ         17     9    26
Kappa = .917, p = .000

Problems Encountered and Suggestions for Solution
           K2
K1         +     -     Σ
+         16     1    17
-          1     7     8
Σ         17     8    25
Kappa = .816, p = .000
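As a check on the values above, Cohen's kappa can be recomputed directly from any of the 2×2 coder-agreement tables. The following is an illustrative Python sketch (not part of the original analysis); it reproduces the .717 coefficient for the "Contributions to Education Environments" table, and the same function reproduces the other four coefficients reported in the appendix.

```python
# Cohen's kappa from a 2x2 coder-agreement table, as reported in Appendix 1.

def cohens_kappa(table):
    """table[i][j] = number of codes rated i by coder K1 and j by coder K2."""
    n = sum(sum(row) for row in table)
    # Observed agreement: proportion of codes on which K1 and K2 agree.
    p_observed = sum(table[i][i] for i in range(2)) / n
    # Expected chance agreement from the row (K1) and column (K2) marginals.
    p_expected = sum(
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(2)
    )
    return (p_observed - p_expected) / (1 - p_expected)

# K1 rows (+, -), K2 columns (+, -): coders agree on 18 "+" and 13 "-" codes.
education_environments = [[18, 2], [3, 13]]
print(round(cohens_kappa(education_environments), 3))  # -> 0.717
```

Applying the same function to the remaining tables yields .901, .839, .917, and .816, matching the reported values.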

References

  1. Abram, M. D., Mancini, K. T., & Parker, R. D. (2020). Methods to Integrate Natural Language Processing Into Qualitative Research. International Journal of Qualitative Methods, 19. [CrossRef]
  2. Agarwal, R. & Dhar, V. (2014). Big data, data science and analytics: the opportunity and challenge for is research. Information Systems Research, 25(3), 443-448.
  3. Aggarwal, M., & Murty, M. N. (2021). Deep learning. In SpringerBriefs in Applied Sciences and Technology. Springer. [CrossRef]
  4. Amershi, S., Cakmak, M., Knox, W. B., & Kulesza, T. (2014). Power to the people: The role of humans in interactive machine learning. AI Magazine, 35(4), 105–120. https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/2513.
  5. Apostolopoulos, I.D.; Groumpos, P.P. (2023). Fuzzy Cognitive Maps: Their Role in Explainable Artificial Intelligence. Appl. Sci, 13, 3412. [CrossRef]
  6. Aşkun, V. (2023). Sosyal Bilimler Araştırmaları İçin Chatgpt Potansiyelinin Açığa Çıkarılması: Uygulamalar, Zorluklar Ve Gelecek Yönelimler. Erciyes Akademi, 37(2), 622 - 656.
  7. Bail, C. (2024). Can Generative AI improve social science?. Proceedings of the National Academy of Sciences of the United States of America, 121. [CrossRef]
  8. Bailey, K.D. (1982) Methods of social research (2nd ed.). New York: The Free Press.
  9. Baker, T., & Smith, L. (2019). Educ-AI-tion rebooted? Exploring the future of artificial intelligence in schools and colleges. Retrieved from https://media.nesta.org.uk/documents/Future_of_AI_and_education_v5_WEB.pdf.
  10. Bangert-Drowns, R. L., & Rudner, L. M. (1991). Meta-analysis in educational research. Paper presented to ERIC Clearinghouse on Tests, Measurement, Evaluation. https://files.eric.ed.gov/fulltext/ED339748.pdf.
  11. Batdı, V. (2016). Metodolojik çoğulculukta yeni bir yönelim: çoklu bütüncül yaklaşım. Sosyal Bilimler Dergisi, 50, 133–147.
  12. Batdı, V. (2017). Smart board and academic achievement in terms of the process of integrating technology into instruction: A study on the McA. Croatian Journal of Education, 19(3), 763–801. [CrossRef]
  13. Batdı, V. (2018). Eğitimde yeni bir yönelim: Mega-çoklu bütüncül yaklaşım ve beyin temelli öğrenme [A new trend in education: Mega-multi complementary approach and brain based learning]. IKSAD Publishing.
  14. Batdı, V. (2019). Meta-tematik analiz. V. Batdı (Ed.), Meta-tematik analiz: örnek uygulamalar. (ss. 10- 76). Ankara: Anı Yayıncılık.
  15. Batdı, V. (2020). Introduction to meta-thematic analysis. V. Batdı (Ed.), Meta-thematic analysis in research process. (pp. 1-38). Ankara: Anı Yayıncılık.
  16. Bhattamisra, S., Banerjee, P., Gupta, P., Mayuren, J., Patra, S., & Candasamy, M. (2023). Artificial Intelligence in Pharmaceutical and Healthcare Research. Big Data Cogn. Comput., 7, 10. [CrossRef]
  17. Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Introduction to meta-analysis (1st ed.). John Wiley & Sons.
  18. Boumans, J. W., & Trilling, D. (2018). Taking stock of the toolkit: An overview of relevant automated content analysis approaches and techniques for digital journalism scholars. In Rethinking research methods in an age of digital journalism (pp. 8-23). Routledge. [CrossRef]
  19. Bozkurt, A. (2023). ChatGPT, üretken yapay zeka ve algoritmik paradigma değişikliği. Alanyazın, 4(1), 63-72. [CrossRef]
  20. Castells, M. (2010). The rise of the network society: The information age: Economy, society, and culture. John Wiley & Sons.
  21. Cavalcante, R. C., Brasileiro, R. C., Souza, V. L., Nobrega, J. P., & Oliveira, A. L. (2016). Computational intelligence and financial markets: A survey and future directions. Expert Systems with Applications, 55, 194-211.
  22. Chan, K. S., & Zary, N. (2019). Applications and challenges of implementing artificial intelligence in medical education: An integrative review. JMIR Medical Education, 5(1), e13930. [CrossRef]
  23. Chassignol, M., Khoroshavin, A., & Klimova, A. (2018). Artificial intelligence trends in education: A narrative overview. Procedia Computer Science. Elsevier.
  24. Chatterjee, S., & Bhattacharjee, K. K. (2020). Adoption of Artificial Intelligence in Higher Education: a Quantitative Analysis Using Structural Equation Modelling.Forthcoming in Education and Information Technologies.
  25. Chen, M., Liu, Q., Huang, S., & Dang, C. (2020). Environmental cost control system of manufacturing enterprises using artificial intelligence based on value chain of circular Economy. Enterprise Information Systems, 16(8-9).
  26. Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: a review. IEEE Access, 8, 75264-75278.
  27. Chen, Y., Wu, X., Hu, A., He, G., & Ju, G. (2021). Social prediction: A new research paradigm based on machine learning. The Journal of Chinese Sociology, 8(1), 1-21. [CrossRef]
  28. Crescenzi-Lanna, L. (2022). Literature review of the reciprocal value of artificial and human intelligence in early childhood education. Journal of Research on Technology in Education, 55, 21 - 33. [CrossRef]
  29. Çelik, S. (2019). Understanding Data Science. Journal of Current Research on Social Sciences, 9(3), 235-256.
  30. Cohen, L., Manion, L., & Morrison, K. (2007). Research methods in education (6th ed.). Routledge.
  31. Cooper, H., Hedges, L. V., & Valentine, J. C. (2009). The handbook of research synthesis and meta-analysis. Russell Sage Foundation.
  32. De La C. & Hernández-Lugo, M. (2024). Artificial Intelligence as a tool for analysis in Social Sciences: methods and applications. LatIA. [CrossRef]
  33. Demircioğlu, E., Yazıcı, C., & Demir, B. (2024). Yapay Zekâ Destekli Matematik Eğitimi: Bir İçerik Analizi. International Journal of Social and Humanities Sciences Research (JSHSR), 11(106), 771-785.
  34. Ding, Y. (2021). Performance analysis of public management teaching practice training based on artificial intelligence technology. Journal of Intelligent & Fuzzy Systems, 40(2), 3787–3800. [CrossRef]
  35. Doğan, M., Dogan, T., & Bozkurt, A. (2023). The Use of Artificial Intelligence (AI) in Online Learning and Distance Education Processes: A Systematic Review of Empirical Studies. Applied Sciences. [CrossRef]
  36. Dongming, L., Wanjing, L., Shuang, C., & Shuying, Z. (2020). Intelligent Robot for Early Childhood Education. Proceedings of the 2020 8th International Conference on Information and Education Technology. [CrossRef]
  37. Duval, S., & Tweedie, R. (2000). Trim and fill: A simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics, 56(2), 455–463. [CrossRef]
  38. Fikri, Y., & Rhalma, M. (2024). Artificial Intelligence (AI) in Early Childhood Education (ECE): Do Effects and Interactions Matter?. International Journal of Religion. [CrossRef]
  39. Glass, G. V. (1976). Primary, secondary, and meta-analysis of research. Educational researcher, 5(10), 3-8. [CrossRef]
  40. Grossmann, I., Feinberg, M., Parker, D., Christakis, N., Tetlock, P., & Cunningham, W. (2023). AI and the transformation of social science research. Science, 380, 1108 - 1109. [CrossRef]
  41. Hedges, L. (1981). Distribution theory for Glass's estimator of effect size and related estimates. Journal of Educational Statistics, 6, 107–112. [CrossRef]
  42. Higgins, J. P., & Thompson, S. G. (2002). Quantifying heterogeneity in a meta-analysis. Statistics in Medicine, 21(11), 1539–1558. [CrossRef]
  43. Higgins, J. P. T., Thompson, S. G., Deeks, J. J., & Altman, D. G. (2003). Measuring inconsistency in meta-analyses. BMJ, 327, 557–560. [CrossRef]
  44. Howard, J. (2019). Artificial intelligence: Implications for the future of work. American Journal of Industrial Medicine, 62(11): 917-926.
  45. Hwang, G. J., Xie, H., Wah, B. W., & Gašević, D. (2020). Vision, challenges, roles and research issues of artificial intelligence in education. Computers & Education: Artificial Intelligence, 1, 100001. [CrossRef]
  46. Isichei, B.C.; Leung, C.K.; Nguyen, L.T.; Morrow, L.B.; Ngo, A.T.; Pham, T.D.; Cuzzocrea, A. Sports Data Management, Mining, and Visualization; Lecture Notes in Networks and Systems; Springer: Cham, Switzerland, 2022; Volume 450 LNNS, pp. 141–153. [CrossRef]
  47. Ivankova, N. V., Creswell, J. W., & Stick, S. L. (2006). Using mixed-methods sequential explanatory design: From theory to practice. Field Methods, 18(1), 3–20. [CrossRef]
  48. Jarek, K., & Mazurek, G. (2019). Marketing and artificial intelligence. Central European Business Review, 8(2), 46-55.
  49. Jiang, X. (2022). Design of Artificial Intelligence-based Multimedia Resource Search Service System for Preschool Education. 2022 International Conference on Information System, Computing and Educational Technology (ICISCET), 76-78. [CrossRef]
  50. Jin, L. (2019). Investigation on Potential Application of Artificial Intelligence in Preschool Children’s Education. Journal of Physics: Conference Series, 1288. [CrossRef]
  51. Johnson, R.B. and Onwuegbuzie, A.J. (2004) Mixed Methods Research: A Research Paradigm Whose Time Has Come. Educational Researcher, 33, 14-26. [CrossRef]
  52. Kumar, R., & Mishra, B. K. (Eds.). (2020). Natural language processing in artificial intelligence (1st ed.). Apple Academic Press. [CrossRef]
  53. Kuchkarova, G., Kholmatov, S., Tishabaeva, I., Khamdamova, O., Husaynova, M., & Ibragimov, N. (2024). Ai-Integrated System Design for Early Stage Learning and Erudition to Develop Analytical Deftones. 2024 4th International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE), 795-799. [CrossRef]
  54. Lazer, D., Pentland, A., Adamic, L., Aral, S., Barabasi, A. L., Brewer, D., ... & Van Alstyne, M. (2009). Computational social science. Science, 323(5915), 721-723. [CrossRef]
  55. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. [CrossRef]
  56. Lee, J. (2020). Coding in early childhood. Contemporary Issues in Early Childhood, 21, 266–269. [CrossRef]
  57. Liao, H., Wang, Z., & Liu, Y. (2020). Exploring the cross-disciplinary collaboration: a scientometric analysis of social science research related to Artificial Intelligence and Big Data application. IOP Conference Series: Materials Science and Engineering, 806. [CrossRef]
  58. Liao, L., & Gu, F. (2022). 5G and artificial intelligence ınteractive technology Applied in preschool education courses. Wireless Communications and Mobile Computing. [CrossRef]
  59. Lin, P., Van Brummelen, J., Lukin, G., Williams, R., & Breazeal, C. (2020). Zhorai: Designing a conversational agent for children to explore machine learning concepts. Proceedings of the AAAI Conference on Artificial Intelligence, 34(9), 13381–13388. [CrossRef]
  60. Liu, Z. (2021). Sociological perspectives on artificial intelligence: A typological reading. Sociology Compass, 15(1).
  61. Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Sage Publications, Inc.
  62. Longo, L. (2020). Enpowering qualitative research methods in education with artificial intelligence, World Conference on Qualitative Research. [CrossRef]
  63. Ma, L. (2021). An immersive context teaching method for college English based on artificial intelligence and machine learning in virtual reality technology. Mobile Information Systems, 1–7. [CrossRef]
  64. Mahmood, A., Wang, J., Yao, B., Wang, D., & Huang, C. (2023). LLM-Powered Conversational Voice Assistants: Interaction Patterns, Opportunities, Challenges, and Design Guidelines. ArXiv, abs/2309.13879.
  65. McCarthy, J. (2004). What is artificial intelligence? Retrieved January 11, 2019, from http://www-formal.stanford.edu/jmc/whatisa.
  66. Masturoh, U., Irayana, I., & Adriliyana, F. (2024). Digitalization of Play Activities and Games: Artificial Intelligence in Early Childhood Education. TEMATIK: Jurnal Pemikiran dan Penelitian Pendidikan Anak Usia Dini. [CrossRef]
  67. Mercaldo, F., Nardone, V., & Santone, A. (2017). Diabetes Mellitus Affected Patients Classification and Diagnosis through Machine Learning Techniques. Procedia Computer Science, 112, 2519-2528. [CrossRef]
  68. Merriam, S.B. (2013) Qualitative Research: A Guide to Design and Implementation. John Wiley & Sons Inc., New York.
  69. Miao, F., Holmes, W., Huang, R., & Zhang, H. (2021). AI and education: Guidance for policymakers. UNESCO Publishing.
  70. Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook. Sage Publications.
  71. Mintz, J., Holmes, W., Liu, L., & Perez-Ortiz, M. (2023). Artificial intelligence and K-12 education: Possibilities, pedagogies, and risks. Technology, Pedagogy and Education, 40(4), 325–333.
  72. Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., & the PRISMA Group. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine, 6(7), e1000097. [CrossRef]
  73. Mondal, K. (2019). A synergy of artificial intelligence and education in the 21st-century classrooms. In Proceedings of the International Conference on Digitization (ICD) (pp. 68–70).
  74. Mullainathan, S., & Spiess, J. (2017). Machine learning: an applied econometric approach. J. Econ. Perspect. 31(2),87–106.
  75. Nan, J. (2020). Research of applications of artificial intelligence in preschool education. Journal of Physics: Conference Series, 1607. [CrossRef]
  76. Nilsson, N. J. (2010). The quest for artificial intelligence: A history of ideas and achievements. Cambridge University Press.
  77. Owan, V., Abang, K. B., Idika, D. O., Etta, E. O., & Bassey, B. A. (2023). Exploring the potential of artificial intelligence tools in educational measurement and assessment. EURASIA Journal of Mathematics, Science and Technology Education, 19(8).
  78. Patton, M. Q. (2014). Nitel araştırma ve değerlendirme yöntemleri. (M. Bütün, & S. B. Demir, Çev. Ed.). Ankara: PegemA Yayıncılık.
  79. Paulus, T. M., & Marone, V. (2024). “In Minutes Instead of Weeks”: Discursive Constructions of Generative AI and Qualitative Data Analysis. Qualitative Inquiry, 0(0). [CrossRef]
  80. Petitti, D. B. (1994). Meta-analysis, decision analysis, and cost-effectiveness analysis: Methods for quantitative synthesis in medicine. Oxford University Press.
  81. Popenici, S. A. D., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Res. Pract. Technol. Enhanc.Learn. (RPTEL), 12(22), 1e13.
  82. Probst, P., Wright, M. N., & Boulesteix, A. (2019). Hyperparameters and tuning strategies for random forest. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 9(3), e1301.
  83. Puteri, S., Saputri, Y., & Kurniati, Y. (2024). The Impact of Artificial Intelligence (AI) Technology on Students' Social Relations. BICC Proceedings. [CrossRef]
  84. Rebala, G., Ravi, A., & Churiwala, S. (2019). An introduction to machine learning. Springer. [CrossRef]
  85. Rezaev, A. V., & Tregubova, N. D. (2018). Are sociologists ready for 'artificial sociality'? Current issues and future prospects for studying artificial intelligence in the social sciences. Monitoring of Public Opinion: Economic and Social Changes, 5, 91-108. [CrossRef]
  86. Rezk, S. M. M. (2023). The role of artificial intelligence in graphic design. Journal of Art, Design & Music, 2(1), 1–12.
  87. Ried, K. (2006). Interpreting and understanding meta-analysis graphs: a practical guide. Australian Family Physician, 35(8), 635–638. [CrossRef]
  88. Rietz, T., & Maedche, A. (2021). Cody: An AI-based system to semi-automate coding for qualitative research. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21) (Article 394, pp. 1–14). Association for Computing Machinery. [CrossRef]
  89. Robila, M., & Robila, S. (2019). Applications of Artificial Intelligence Methodologies to Behavioral and Social Sciences. Journal of Child and Family Studies, 29, 2954 - 2966. [CrossRef]
  90. Rosenberg, M., Adams, D., & Gurevitch, J. (2000). MetaWin statistical software for meta-analysis (Version 2.0). Sinauer Associates Inc.
  91. Rosenthal, R. (1979). The "file drawer problem" and tolerance for null results. Psychological Bulletin, 86(3), 638–641. [CrossRef]
  92. Russell, S. J., & Norvig, P. (2010). Artificial intelligence: A modern approach. New Jersey: Pearson Education.
  93. Russell, S. J., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson.
  94. Schiff, D. (2020). Out of the laboratory and into the classroom: The future of artificial intelligence in education. AI and Society. Springer. [CrossRef]
  95. Schmidt, F. L., Oh, I.-S., & Hayes, T. L. (2009). Fixed- versus random-effects models in meta-analysis: Model properties and an empirical comparison of differences in results. British Journal of Mathematical and Statistical Psychology, 62, 97–128. [CrossRef]
  96. Secinaro, S., Calandra, D., Secinaro, A., Muthurangu, V., & Biancone, P. (2021). The role of artificial intelligence in healthcare: a structured literature review. BMC Medical Informatics and Decision Making, 21. [CrossRef]
  97. Sharma, B. (2024). Research paper on artificial intelligence. International Journal of Scientific Research in Engineering and Management. [CrossRef]
  98. Silverman, D. (2005). Doing qualitative research: A practical handbook. Sage Publications.
  99. Sterne, J. A., & Harbord, R. M. (2004). Funnel plots in meta-analysis. The Stata Journal: Promoting Communications on Statistics and Stata, 4(2), 127–141. [CrossRef]
  100. Sterne, J. A. C., Sutton, A. J., Ioannidis, J. P. A., Terrin, N., Jones, D. R., Lau, J., Carpenter, J., Rücker, G., Harbord, R. M., Schmid, C. H., Tetzlaff, J., Deeks, J. J., Peters, J., Macaskill, P., Schwarzer, G., Duval, S., Altman, D. G., Moher, D., & Higgins, J. P. T. (2011). Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ, 343, d4002. [CrossRef]
  101. Smutny, P., & Schreiberova, P. (2020). Chatbots for learning: A review of educational chatbots for the Facebook Messenger. Computers & Education, 151, 1-11. [CrossRef]
  102. Soni, P. (2023). A study on artificial intelligence in finance sector. International Research Journal of Modernization in Engineering Technology and Science. [CrossRef]
  103. Song, P., & Wang, X. (2020). A bibliometric analysis of worldwide educational artificial intelligence research development in recent twenty years. Asia Pacific Education Review, 21(3), 473–486.
  104. Su, J., Ng, D. T. K., & Chu, S. K. W. (2023). Artificial intelligence (AI) literacy in early childhood education: The challenges and opportunities. Computer and education. Artificial Intelligence, 4, 1–14. [CrossRef]
  105. Sun, Z., Anbarasan, M., & Kumar, D. P. (2020). Design of Online Intelligent English Teaching Platform Based on Artificial Intelligence Techniques. Computational Intelligence, 37, 1166-1180. [CrossRef]
  106. Sun, W. (2022). Design of auxiliary teaching system for preschool education specialty courses based on artificial intelligence. Mathematical Problems in Engineering. [CrossRef]
  107. Tambuskar, S. (2022). Challenges and benefits of 7 ways artificial intelligence in education sector. Review of Artificial Intelligence in Education. [CrossRef]
  108. Tashakkori, A., & Creswell, J. W. (2007). Editorial: Exploring the nature of research questions in mixed methods research. Journal of Mixed Methods Research, 1(3), 207-211. [CrossRef]
  109. Thalheimer, W., & Cook, S. (2002). How to calculate effect sizes from published research articles: A simplified methodology. http://education.gsu.edu/coshima/EPRS8530/Effect_Sizes_pdf4.p.
  110. Tian, X., & Cui, S. (2022). The Application of Scientific Games by Artificial Intelligence in Preschool Education under the Smart City. Security and Communication Networks. [CrossRef]
  111. Tseng, T., Murai, Y., Freed, N., Gelosi, D., Ta, T. D., & Kawahara, Y. (2021, June). PlushPal: Storytelling with interactive plush toys and machine learning. In Interaction design and children (pp. 236–245).
  112. Turgut, K. (2024). Yapay zekânın yüksek öğretimde sosyal bilim öğretimine entegrasyonu. Ankara Uluslararası Sosyal Bilimler Dergisi (Yapay Zeka ve Sosyal Bilimler Öğretimi), 1-7.
  113. Vartiainen, H., Tedre, M., & Valtonen, T. (2020). Learning machine learning with very young children: Who is teaching whom? International Journal of Child-Computer Interaction, 25, 100182.
  114. Veltri, G. A. (2017). Big data is not only about data: The two cultures of modelling. Big Data & Society, 4, 2053951717703997.
  115. Viera, A. J., & Garrett, J. M. (2005). Understanding interobserver agreement: The kappa statistic. Family Medicine, 37, 360-363.
  116. Waljee, A. K., & Higgins, P. D. R. (2010). Machine learning in medicine: A primer for physicians. The American Journal of Gastroenterology, 105(6), 1224-1226. [CrossRef]
  117. Weiwei, S. (2022). Design of auxiliary teaching system for preschool education specialty courses based on artificial intelligence. Mathematical Problems in Engineering. [CrossRef]
  118. Williams, R., Park, H., & Breazeal, C. (2019). A is for Artificial Intelligence: The Impact of Artificial Intelligence Activities on Young Children's Perceptions of Robots. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. [CrossRef]
  119. Wilson, D. B. (2009). Systematic coding. In H. Cooper, L. V. Hedges ve J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (pp. 159–176). Russell Sage Founda.
  120. Woo, H., LeTendre, G. K., Pham-Shouse, T., & Xiong, Y. (2021). The use of social robots in classrooms: A review of field-based studies. Educational Research Review, 33, 100388. [CrossRef]
  121. Xu, R., Sun, Y., Ren, M., Guo, S., Pan, R., Lin, H., Sun, L., & Han, X. (2024). AI for social science and social science of AI: A Survey. Inf. Process. Manag., 61, 103665. [CrossRef]
  122. Yetişensoy, O., & Karaduman, H. (2024). The effect of AI-powered chatbots in social studies education. Education and Information Technologies, 1-35.
  123. Yildirim, A., & Simsek, H. (2006). Sosyal bilimlerde nitel araştırma yöntemleri. Seçkin Yayıncılık.
  124. Young, M. F., Slota, S., Cutter, A. B., Jalette, G., Mullin, G., Lai, B., Simeoni, Z., Tran, M., & Yukhymenko, M. (2012). Our princess is in another castle: A review of trends in serious gaming for education. Review of Educational Research, 82(1), 61–89.
  125. Zhang, J., & Feng, S. (2021). Machine learning modeling: A new way to do quantitative research in social sciences in the era of AI. Journal of Web Engineering, 20(2), 281-302. [CrossRef]
  126. Zhou, L., Pan, S., Wang, J., & Athanasios V. Vasilakos. (2017). Machine learning on big data: opportunities and challenges, Neurocomputing, Volume 237, 350-361.
Figure 1. Multi-Complementary Approach (Batdı, 2018).
Figure 2. Flow Diagram of the Studies Included in the Analyses
Figure 8 presents the observer comments on the "problems encountered and solutions in AI applications". The observer teachers reported that students faced difficulties during the drawing process. G1 noted, "Some students, especially those working on drawing tasks that required fine motor skills, struggled significantly," while G2 observed, "Some students had difficulty with drawing tasks that required visual-motor coordination." The observer teachers also mentioned that students' attention was easily distracted and they quickly became bored. Regarding this, G1 commented, "Repetitive activities caused students to lose focus and become bored," and G2 stated, "Activities that required prolonged attention led to students quickly losing interest." As for the solutions to the problems faced during the application process, the observer teachers suggested addressing material shortages, having a trial or draft drawing beforehand, limiting the number of students, not extending the time excessively, and conducting the activities with background music.
Table 4. Comparison of Pre-Test and Post-Test Scores of Experimental Groups.
Test Type   Group        n    Mean    sd     df   Levene F   Levene p   t        p
Pre-test    Experiment   14   18.64   1.94   26   2.28       .14        -14.17   .00
Post-test   Experiment   14   27.93   1.49
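The reported t value in Table 4 can be approximately recovered from the summary statistics alone. The following is an illustrative Python sketch (not the original analysis, which used the raw scores): it computes the independent-samples t statistic from the two means, standard deviations, and group sizes using the equal-variance pooled formula that the non-significant Levene result (p = .14) justifies; the small discrepancy from the reported -14.17 reflects rounding in the published means and SDs.

```python
import math

# Independent-samples t from summary statistics (pooled-variance formula),
# using the pre-test and post-test values for the experimental group (n = 14
# each); df = n1 + n2 - 2 = 26, matching Table 4.

def t_from_summary(m1, s1, n1, m2, s2, n2):
    # Pooled variance across the two samples (equal-variance assumption).
    pooled_var = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (m1 - m2) / se

t = t_from_summary(18.64, 1.94, 14, 27.93, 1.49, 14)
print(round(t, 2))  # -> -14.21, close to the reported t = -14.17
```

The negative sign follows the pre-test minus post-test order of subtraction, indicating the substantial gain from pre-test to post-test.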
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.