Preprint
Article

This version is not peer-reviewed.

Learning AI Using STEAM-CT Approach for Deaf Students

Submitted: 12 February 2026

Posted: 15 February 2026


Abstract
The STEAM-CT approach integrates Science, Technology, Engineering, Arts, and Mathematics with Computational Thinking (CT) to help students learn how to think, design, and solve problems. It gives students hands-on, interdisciplinary experiences where they apply logic and creativity through real-world applications. The purpose of this study is to foster the development of computational thinking among Deaf students by embedding Artificial Intelligence (AI) learning within a STEAM-CT approach. This learning program consisted of three main phases: (1) exploring AI processes and tools, (2) constructing an AI system, and (3) designing AI-driven innovations. Thirty-six Deaf students from seven Deaf schools participated in this program, which aims to enhance their CT abilities and cultivate their capacity to create AI-based solutions. Students’ progress was measured using a CT framework encompassing knowledge of concepts, applied practices and perspectives. Assessments included multiple-choice tests for CT concepts, task-based rubrics for CT practices, and interviews for CT perspectives. The results showed that Deaf students gained a better understanding of CT concepts, demonstrated advanced CT practices, and exhibited strong CT perspectives. These findings suggest that AI learning through a STEAM-CT approach can effectively promote Deaf students’ computational thinking abilities.
Keywords: 
Subject: Social Sciences – Education

1. Introduction

Artificial Intelligence (AI) is an emerging technology that is changing the demand for labor and the skills workers need. This has prompted efforts to ensure that all individuals receive equitable education that prepares them for the future. UNESCO defines an AI competency framework for students (CFS) comprising 12 competencies that span four dimensions and three progression levels. The four dimensions are Human-centered mindset, Ethics of AI, AI techniques and applications, and AI system design; the three progression levels are Understand, Apply, and Create [1]. The AI CFS anchors its definition of AI competencies on students’ core competencies: knowledge, skills, attitudes, and values [2]. Knowledge is the fundamental building block of understanding; skills are the application of that knowledge; attitudes and values are the principles and beliefs that guide choosing, judging, and behaving [3,4,5]. AI learning programs should therefore be designed to build sound knowledge and skills while cultivating positive attitudes and values in learners.
The study [6] indicates that the integration of AI into STEAM (Science, Technology, Engineering, Art, and Mathematics) education grew significantly during 2021 and 2022. The findings suggest that AI technologies foster cognitive skills such as computational and analytical thinking, while boosting students’ self-confidence, satisfaction, enjoyment, and overall understanding of STEAM subjects. AI learning spans various technologies, such as natural language processing, large language models (ChatGPT, Gemini, Copilot, and Claude), and robotics [7,8,9]. Reviews of the use of robotics and mechatronics in STEM education reveal that robots and physical devices engage students and support the acquisition of crucial 21st-century skills [10,11].
Computational thinking (CT) is defined as logical problem-solving applied to challenges in everyday life [12,13]. It draws on computer science principles involving four key techniques: (1) decomposition, (2) pattern recognition, (3) abstraction, and (4) algorithm design [14]. Solving complex real-world problems with CT requires a set of skills related to algorithmic thinking, critical thinking, creativity, problem-solving, and cooperation [15,16]. There have been numerous efforts to incorporate CT into the K-12 curriculum to cultivate students’ thinking processes rather than merely teaching coding or electronic devices [17,18].
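As a loose illustration (not part of the study’s materials), the four techniques can be traced through a small Python sketch applied to an everyday task; the task and function name are our own:

```python
# Illustrative only: the four CT techniques walked through on the
# everyday task of totalling a shopping bill.

prices = [12.5, 3.0, 12.5, 7.25, 3.0]

# 1. Decomposition: split the task into smaller steps
#    (read prices -> add them up -> report the total).
# 2. Pattern recognition: every item is handled the same way,
#    so one repeated rule ("add price to total") covers all items.
# 3. Abstraction: ignore irrelevant detail (item names, currency)
#    and model each item as just a number.
# 4. Algorithm design: express the steps as an ordered procedure.

def total_bill(prices):
    total = 0.0
    for price in prices:   # the recognized pattern, repeated
        total += price
    return total

print(total_bill(prices))  # -> 38.25
```

The same decompose-recognize-abstract-design sequence is what the learning program asks students to apply to AI systems instead of arithmetic.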
STEAM-CT is the integration of STEAM and the computational thinking approach; it combines creative, interdisciplinary learning with logical and systematic problem-solving processes inspired by computer science [19]. Integrating CT in STEM/STEAM can enhance learners’ self-perceived CT skills [20], coding abilities [21,22], and creative thinking skills [23]. The STEAM-CT approach is commonly applied in K–12 education, but its implementation becomes more challenging when students have limited language skills. Deaf students, who primarily rely on sign language and facial expressions for communication, face additional barriers in reading and writing. This is particularly true in fields such as science, technology, and coding [24]. Observations in special education settings show that Deaf students generally exhibit limited vocabulary, difficulty with abstract concepts, slow reading speed, difficulty connecting learning to daily life, low retention, and a preference for visual or tactile learning materials [25]. To address these challenges, instructional strategies for Deaf learners should aim to minimize reading demands, strengthen conceptual understanding through hands-on activities, and link learning content to real-world experiences.
In this paper, the STEAM-CT approach is used to foster computational thinking abilities in Deaf students. STEAM is defined as the integration of science, technology, engineering, art, and mathematics. The “science” component represents knowledge and understanding of the natural and social world; “technology” and “engineering” refer to expertise in AI technologies and systems; and “mathematics” emphasizes the application of logic and reasoning. The “art” component highlights artistic expression, critical and creative thinking, and system design. In terms of CT concepts, decomposition relates to the ability to break down AI systems into components for analyzing or designing solutions. Pattern recognition relates to understanding pattern-based decision-making when designing and interpreting AI. Abstraction relates to understanding how AI represents and generalizes data, while algorithmic thinking relates to understanding how AI systems make decisions. The CT approach is designed to help Deaf students understand problems systematically, identify and generalize patterns, simplify and model real-world phenomena, create step-by-step solutions, and continuously refine those solutions through feedback. Learning AI through STEAM-CT therefore empowers Deaf students to approach any problem logically and creatively.
This study investigates the effectiveness of using the STEAM-CT approach in an AI learning program to enhance the CT skills of Deaf students. The program’s strategies involve intensive hands-on activities and the linkage of content to real-world applications through a tangible educational tool. Assessment of computational thinking (CT) abilities encompassed multiple dimensions (knowledge, skills, attitudes, and values), aligned with the core competencies and the AI Competencies Framework for Students (CFS).

2. Methods

2.1. Participants

A total of thirty-six Deaf students (students with hearing loss who communicate via sign language), both male and female, participated in the program. The number of male students was slightly greater than the number of female students, as shown in Table 1. The secondary school students, drawn from seven Deaf schools in Thailand, were selected by their computer teachers based on sign language proficiency and prior experience with either text-based or block-based programming. At least two sign-fluent computer teachers from each school assisted in verifying and reinforcing the students’ comprehension. In addition, two sign language interpreters facilitated communication between instructors and students throughout the learning program.

2.2. Educational Tool

Effective educational tools for Deaf learners must minimize reading demands, for example through block commands, and strengthen conceptual understanding through physical hands-on activities. This study employs KidBright µAI (pronounced "KidBright MicroAI"), an open-source hardware and software platform, for teaching AI processes, guiding students through each stage of constructing an AI system, from problem definition to model deployment. The KidBright µAI microprocessor functions as an edge-AI computing device that encourages the application of AI technology to real-life problem solving while fostering systematic, critical, and creative thinking. It features a single-core ARM Cortex™-A7 processor with an integrated camera module, microphone, Wi-Fi connectivity, and various built-in sensors, as illustrated in Figure 1. The board also includes input/output ports that allow the connection of external sensors for expanded functionality.
The KidBright µAI enables users to create and train AI models through the KidBright µAI IDE (Integrated Development Environment), a web-based application accessible at https://mai.kidbright.app.meca.in.th/, as shown in Figure 2.

2.3. Measurement Framework

Assessment plays a vital role in the learning process, serving to gauge developmental progress. Reviews of computational thinking (CT) assessments [26,27] indicate that most frameworks are based on Brennan and Resnick’s CT model [28], which comprises computational concepts, computational practices, and computational perspectives. Building on this foundation, Kong [29] introduced components and methods for evaluating CT development in senior primary school learners. In this study, Kong’s CT methods were adopted as the measurement model for assessing the learning outcomes of Deaf students. The framework employs quantitative and qualitative methods to evaluate three dimensions: CT concepts, CT practices, and CT perspectives. These refer, respectively, to the computational concepts that learners develop during the first two days, the problem-solving practices that learners demonstrate during innovation development, and the learners’ understanding of themselves and their relationships with others and the technological world, expressed through presentation and questioning.
In this study, CT concepts were measured by a pre-test and post-test containing multiple-choice questions to identify Deaf students’ learning outcomes; CT practices were measured by task-based assignments; and CT perspectives were measured by interviews. All pre-test questions designed to assess CT concepts align with the functionalities of the KidBright µAI, such as AI model training and the use of command blocks. The questions are presented in Table 2. The pre-test and post-test contain the same set of questions, arranged in a different order.
The CT practice criteria, presented in Table 3, define five dimensions for evaluating an invention project: creativity in design, functionality of an innovation, complexity of an innovation, relevance to real-life problems, and underlying concepts of an innovation. Scores range from 5, representing the highest level of performance, to 1, representing the lowest. The criteria in Table 3 were developed based on the Organization for Economic Co-operation and Development (2021) guidelines, which include relevance, coherence, effectiveness, efficiency, impact, and sustainability. However, impact and sustainability are not applied in the evaluation, as they fall outside the defined scope of the learning program.
The interview questions in Table 4 were used to assess CT perspectives, focusing on several key areas: the motivation behind the innovation, the project’s development process, challenges encountered, problem-solving strategies, reflections on the project, and possible improvements. Each interview response was evaluated on a scale from 1 to 5. A score of 5 indicates deep understanding, with detailed and relevant examples reflecting analytical and reflective thinking. A score of 4 represents a good grasp of the concepts, though with minor gaps or limited examples. A score of 3 reflects partial understanding, with incomplete responses. A score of 2 suggests limited or minimal understanding, with brief or off-topic answers. Finally, a score of 1 is given when no response is provided.

2.4. AI Learning Implementation

The AI learning program is designed to fulfil the core competencies (knowledge, skills, attitudes, and values) alongside the AI Competencies Framework for Students (CFS). The program follows three learning stages: (1) exploring AI processes and tools, (2) constructing an AI system, and (3) designing AI-driven innovations. Its primary goal is to foster AI-driven problem-solving skills through the STEAM-CT approach. Online resources, such as Introduction to AI, AI Ethics, the KidBright µAI Handbook, and the KidBright AI simulator for practicing AI model creation, are available at https://lms.mooc.meca.in.th/, allowing students to familiarize themselves with foundational AI knowledge before participating in the four-day, on-site learning program.
On the first day, the ‘exploring AI processes and tools’ stage, students learn to use the KidBright µAI IDE to train and implement AI models using command blocks. They first complete a set of multiple-choice pre-test questions using Google Forms. Each question listed in Table 2 was explained in sign language by interpreters, with approximately fifteen minutes allotted to answer all questions. At the end of the four-day learning program, students took a post-test comprising the same questions as the pre-test, in a different order, to assess CT concepts. The same test-taking procedure was followed for both the pre-test and the post-test.
The second day, the ‘constructing an AI system’ stage, focuses on collecting data from onboard and external sensors and on controlling the output peripherals of the KidBright µAI board to create an AI system. Hands-on activities were assigned to strengthen conceptual understanding.
During the final two days, the ‘designing AI-driven innovations’ stage, students worked in groups of five or six to apply their knowledge by designing a prototype of an AI-based automated system that addresses a real-world problem. At the end, each group presented its innovation to the class. Experts in AI technology, automation systems, and STEAM education evaluated the innovations against the criteria listed in Table 3 to measure CT practices and used the questions in Table 4 to assess CT perspectives. The post-test for assessing CT concepts was administered at the end of the learning program.

3. Results

This section presents the learning outcomes of all Deaf students after participating in the learning program, assessed across three dimensions: CT concepts, CT practices, and CT perspectives.

3.1. Measurement of CT concepts

The assessments of CT concepts were designed to measure students’ understanding of AI principles and their ability to implement command blocks. Students took the pre-test at the beginning and the post-test at the end of the learning program. The statistical results of the pre-test and post-test scores are summarized in Table 5.
The average pre-test score of male students was 40.47%. After completing the program, their understanding of CT concepts improved, as shown by the increase in their mean post-test score to 61.90%. Female students achieved a higher average pre-test score of 55%, which increased to 89.58% on the post-test. The higher pre-test scores among female students (Figure 3(b)) may indicate that they were better prepared, possibly through prior self-study with the available online materials. A comparison of male and female students’ scores (Figure 4) shows that female students consistently outperformed male students on both assessments, although a few male students demonstrated no progress on the post-test. Since both groups improved substantially, gender does not appear to hinder the learning of AI concepts. Self-study with the online resources can provide the necessary prior knowledge and likely accounts for the high pre-test scores of some students.
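The paper reports only mean percentages, but one common way to compare improvement across groups with different starting points is Hake's normalized gain, g = (post - pre) / (100 - pre). This calculation is our illustration applied to the reported means, not part of the study's analysis:

```python
# Illustration only: Hake's normalized gain applied to the reported
# mean percentages. g is the fraction of the possible improvement
# (the gap between the pre-test score and 100%) actually achieved.

def normalized_gain(pre, post):
    return (post - pre) / (100 - pre)

print(round(normalized_gain(40.47, 61.90), 2))  # male students   -> 0.36
print(round(normalized_gain(55.00, 89.58), 2))  # female students -> 0.77
```

On this reading, female students not only started higher but also closed a larger share of the remaining gap.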

3.2. Measurement of CT practices

After two days of concept training and constructing an AI system, all thirty-six students were grouped into seven teams of five to six members for the ‘designing AI-driven innovations’ stage. Students applied the knowledge gained from the first two days to develop innovations that solve real-world problems. Each team brainstormed ideas for its innovation and then assigned tasks to its members. Since all members of each team were involved in every task (brainstorming, developing, and presenting), the evaluation of CT practices can represent the individuals. All groups demonstrated their innovations to the class and to three experts, with sign language interpreters facilitating the exchange between students and experts. The experts evaluated the innovations against the criteria outlined in Table 3. The scores, ranging from 5 to 1 for each criterion, were converted to percentages as shown in Table 6. The average scores were 85% for “Creativity in design”, 69.29% for “Innovation functionality”, 68.57% for “Innovation complexity”, 77.14% for “Relevance to real-life problems”, and 71.43% for “Concepts of the innovations”. The overall average of 74.29% indicates that students can apply their knowledge to solve real problems. “Creativity in design” achieved the highest score among all criteria; creativity reflects innovative thinking in solving problems. All teams designed system functionalities and built their innovations from a variety of materials, effectively integrating art into technology and engineering. “Innovation complexity” scored lowest among the criteria. Nevertheless, all innovations functioned correctly: every team’s “Innovation functionality” score exceeded 55%, indicating that the teams thoroughly tested and debugged their innovations.
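The conversion from the 1-5 rubric scale to percentages can be sketched as follows; the raw team scores below are hypothetical, since the paper reports only the resulting percentages:

```python
# Sketch of the score-to-percentage conversion used for CT practices.
# Only the 1-5 scale and the conversion follow the paper; the raw
# expert scores for the seven teams are invented for illustration.

def criterion_percent(scores, max_score=5):
    """Average a list of 1-5 rubric scores, expressed as a percentage."""
    return sum(scores) / (len(scores) * max_score) * 100

# Hypothetical "Creativity in design" scores across seven teams:
creativity = [5, 4, 5, 4, 4, 5, 3]
print(round(criterion_percent(creativity), 2))  # -> 85.71
```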
All teams successfully applied their knowledge to develop innovations. Deaf students excelled in creativity, showing strong artistic expression and a fair ability to manage innovation complexity.

3.3. Measurement of CT perspectives

CT perspectives were evaluated alongside CT practices during the innovation demonstration. At the end of the demonstration, students answered the interview questions listed in Table 4. As all members of each group were encouraged to answer, the evaluation of CT perspectives can represent the individuals. The scores, ranging from 5 to 1, were converted to percentages as shown in Table 7 and categorized into five levels of CT perspective strength: very strong (80-100%), strong (60-79%), medium (40-59%), weak (20-39%), and very weak (0-19%). The average for each question does not deviate far from the overall average of 73.58%. This implies that students expressed strong CT perspectives in all key areas: the motivation behind the innovation, the project’s development process, challenges encountered, problem-solving strategies, reflections on the project, and possible improvements.
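The banding described above can be expressed as a small lookup; the labels and ranges follow the text, while the function name is ours:

```python
# Banding for CT perspective strength, with labels and ranges taken
# from the study's five-level scheme (function name is ours).

def perspective_level(percent):
    if percent >= 80:
        return "very strong"
    if percent >= 60:
        return "strong"
    if percent >= 40:
        return "medium"
    if percent >= 20:
        return "weak"
    return "very weak"

print(perspective_level(73.58))  # overall average -> "strong"
```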

3.4. Students’ AI-based automated system

The innovations developed by the groups are shown in Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11. Each innovation reflects the principles of the STEAM-CT approach, integrating science, technology, engineering, and mathematics with art: artistic expression, design thinking, and system development. Students designed their innovations to address real-world problems they encounter in their daily lives.
As shown in Figure 5, the AI-based system, Save You Save Me, was created to address the frequent occurrence of motorcycle accidents near schools. Often, injured riders are unable to report accidents promptly, resulting in delayed assistance. This AI system detects motorcycle falls and automatically alerts emergency rescue teams. Additionally, it records cumulative data on accidents and visually presents the statistics through a dashboard interface.
Schools commonly maintain infirmary rooms to ensure student health and safety. The Smart Dispenser Box (Figure 6) assists nurses in efficiently dispensing medication and managing inventory. It categorizes pill types, tracks usage, and helps plan for restocking. When a nurse identifies a student’s symptoms and scans the corresponding symptom card, the device dispenses the suitable medication. Moreover, the box collects data on medicine usage, allowing for analysis of frequently occurring symptoms and overall student health monitoring.
As mobile phone use during class often distracts students and hinders academic performance, many schools enforce restrictions. To help reinforce this policy, students developed the Smart Mobile Locker (Figure 7). Under this system, students store their phones in the locker during school hours and retrieve them after school. The locker system records each student’s usage statistics, enabling the school to monitor compliance with mobile phone regulations.
Waste management is a pressing global concern due to its environmental impact. The Automatic Waste Sorter (Figure 8) was designed to encourage proper waste separation and recycling habits among students. The system identifies different types of waste, opens the corresponding bin lid automatically, and records waste disposal data for further analysis.
For individuals who are deaf, voice-based hospital queuing systems pose challenges. To address this, students developed a Queuing System for the Deaf (Figure 9). Instead of announcing patient names verbally, nurses can display the names on wireless handheld devices, making the process more accessible.
Safety within school dormitories is also a significant concern. The Safety Dormitory system (Figure 10) uses facial recognition technology to track student entries and exits. This enables the school to maintain accurate records of dormitory occupancy and respond more effectively in emergency situations.
Finally, the AI-Based Monkey Species Classification system (Figure 11) is a handheld camera designed to identify monkey species. When the user captures an image, the system classifies the monkey into one of four species categories. The identification result is indicated through one of four LEDs corresponding to the recognized species.

4. Discussion

The AI learning program that adopted the STEAM-CT approach emphasized hands-on, experiential learning for students. Activities were designed and implemented across three successive stages: (1) exploring AI processes and tools, (2) constructing an AI system, and (3) designing AI-driven innovations.
The first stage ‘exploring AI processes and tools’ aimed to familiarize students with the command blocks and core functions of the KidBright µAI IDE. The second stage ‘constructing an AI system’ provided students with opportunities to develop practical AI solutions using physical devices that supported tangible, hands-on learning. The final stage ‘designing AI-driven innovations’ encouraged students to apply their knowledge to real-world problems while fostering responsible and ethical use of technology.
During the first stage, students engaged in activities such as displaying text, collecting data from onboard sensors (camera and microphone), and creating AI models. Continuous hands-on practice was incorporated in the second stage to strengthen conceptual understanding. These practical experiences were particularly effective for Deaf students, helping them grasp abstract ideas, an improvement reflected in their performance on the CT concept assessments. Another key factor contributing to their success was the use of block-based programming, which reduced the cognitive load associated with traditional programming languages and made learning more accessible.
In terms of CT practices, students excelled particularly in the “Creativity in design” criterion. Their creativity was demonstrated through innovative problem-solving and the use of diverse materials in developing their innovations, showcasing effective integration of art into technology and engineering. Artistic elements not only enhanced engagement but also fostered interdisciplinary connections that extended beyond STEM fields.
Students’ attitudes and values, corresponding to the CT perspectives component, were assessed through interviews. The findings indicated that students developed positive attitudes toward overcoming challenges during the development process, applying their knowledge to address real-life issues, and approaching new problems with greater confidence.
Overall, the study identified four key factors that enhanced learning outcomes for Deaf students: (1) consistent incorporation of hands-on activities throughout the learning program, (2) the use of less language-intensive block-based programming, (3) the adoption of tangible educational tools, and (4) the integration of art into the learning experience. These factors are therefore recommended for future AI learning programs to achieve optimal educational outcomes.
Moreover, there is a strong alignment between the CT framework and the AI competency framework. For instance, CT concepts correspond to understanding AI, CT practices align with applying AI responsibly and effectively to solve problems, and CT perspectives relate to critically examining the impacts of AI on people and society. Consequently, students’ CT performance can also serve as an indicator of their AI competency.

5. Conclusions

In the digital era, computational thinking has become an essential skill for individuals. It serves as a foundation for developing creativity and effective problem-solving abilities. This study examines the effectiveness of applying the STEAM-CT approach in AI learning to enhance the computational thinking (CT) skills of Deaf students. Following the learning program, the students demonstrated significant improvement in CT, as reflected by their high scores across the three dimensions of the CT framework: CT concepts, CT practices, and CT perspectives.

Supplementary Materials

The following supporting information can be downloaded at the website of this paper posted on Preprints.org. Online resources such as Introduction to AI, AI Ethics, the KidBright µAI Handbook, and the KidBright AI simulator for practicing AI model creation are accessible at https://lms.mooc.meca.in.th/.

Author Contributions

Conceptualization, S.K.; Methodology, S.K.; Validation, S.K., T.S. and W.P.; Formal analysis, S.K.; Investigation, A.S., T.S. and W.P.; Resources, A.S.; Writing – original draft, S.K.; Writing – review & editing, A.S., T.S. and W.P.; Project administration, A.S.

Funding

This research was funded by the National Science and Technology Development Agency.

Institutional Review Board Statement

Ethical review and approval were waived for this study because the research was conducted in collaboration with the Schools for the Deaf (accredited educational institutions) and related to regular teaching and learning processes aligned with the schools' policies. Consent forms obtained from the Schools for the Deaf were used to uphold ethical standards, ensuring participant confidentiality and the voluntary nature of participation. Steps were taken to anonymize and secure all data to prevent the identification of individuals. Additionally, risks to participants were minimized, and the research adhered to applicable ethical guidelines and regulations.

Data Availability Statement

The original contributions presented in this study are included in the article; any identifiable information has been anonymized to protect privacy.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI Artificial Intelligence
CFS Competencies Framework for Students
CT Computational Thinking
STEAM Science, Technology, Engineering, Art, and Mathematics

References

  1. UNESCO. AI competency framework for students; the United Nations Educational, Scientific and Cultural Organization; 7, place de Fontenoy, 75352 Paris 07 SP, France, 2024; pp. 1-80. [CrossRef]
  2. OECD. Core foundations for 2030; Organisation for Economic Co-operation and Development; OECD Publishing: Paris, 2019; pp. 1-14. https://www.oecd.org/content/dam/oecd/en/about/projects/edu/education-2040/concept-notes/Core_Foundations_for_2030_concept_note.pdf.
  3. OECD. Knowledge for 2030; Organisation for Economic Co-operation and Development; OECD Publishing: Paris, 2019; pp. 1-13. https://www.oecd.org/content/dam/oecd/en/about/projects/edu/education-2040/concept-notes/Knowledge_for_2030_concept_note.pdf.
  4. OECD. Skills for 2030; Organisation for Economic Co-operation and Development; OECD Publishing: Paris, 2019; pp. 1-16. https://www.oecd.org/content/dam/oecd/en/about/projects/edu/education-2040/concept-notes/Skills_for_2030_concept_note.pdf.
  5. OECD. Attitudes and Values for 2030; Organisation for Economic Co-operation and Development; OECD Publishing: Paris, 2019; pp. 1-19. https://www.oecd.org/content/dam/oecd/en/about/projects/edu/education-2040/concept-notes/Attitudes_and_Values_for_2030_concept_note.pdf.
  6. Al-Zahrani, A.; Khalil, I.; Awaji, B.; Mohsen, M. AI Technologies in STEAM Education for Students: Systematic Literature Review. Journal of Ecohumanism 2024, 3, 3380–3397. [Google Scholar] [CrossRef]
  7. Luckin, R.; Holmes, W.; Griffiths, M.; Forcier, L.B. Intelligence unleashed: An argument for AI in education, Pearson: London, 2016; pp. 1-60. https://static.googleusercontent.com/media/edu.google.com/en//pdfs/Intelligence-Unleashed-Publication.pdf.
  8. Holmes, W.; Bialik, M.; Fadel, C. Artificial intelligence in education: Promises and implications for teaching and learning; Center for Curriculum Redesign, 2019; pp. 1-242.
  9. Bender, E. M.; Gebru, T.; McMillan-Major, A.; Shmitchell, S. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency, Toronto, Canada, 3-10 March 2021. [Google Scholar] [CrossRef]
  10. Conde, M. Á.; Rodríguez-Sedano, F. J.; Fernández-Llamas, C.; Gonçalves, J.; Lima, J.; García-Peñalvo, F. J. Fostering STEAM through challenge-based learning, robotics, and physical devices: A systematic mapping literature review. Computer Applications in Engineering Education 2020, 29, 46–65. [Google Scholar] [CrossRef]
  11. Mejias, S.; Thompson, N.; Sedas, R. M.; Rosin, M.; Soep, E.; Peppler, K.; Roche, J.; Wong, J.; Hurley, M.; Bell, P.; Bevan, B. The trouble with STEAM and why we use it anyway. Science Education Journal 2021, 105, 209–231. [Google Scholar] [CrossRef]
  12. Grover, S.; Pea, R. Computational thinking in K–12: A review of the state of the field. Educational Researcher 2013, 42(1), 38–43. [Google Scholar] [CrossRef]
  13. Bocconi, S.; Chioccariello, A.; Dettori, G.; Ferrari, A.; Engelhardt, K.; Kampylis, P.; Punie, Y. Exploring the field of computational thinking as a 21st century skill. The 8th International Conference on Education and New Learning Technologies, Barcelona, Spain, 4–6 July 2016. [Google Scholar] [CrossRef]
  14. Tsarava, K.; Moeller, K.; Román-González, M.; Golle, J.; Leifheit, L.; Butz, M.V.; Ninaus, M. A cognitive definition of computational thinking in primary education. Computers and Education 2022, 179, 104425. [Google Scholar] [CrossRef]
  15. Shute, V. J.; Sun, C.; Asbell-Clarke, J. Demystifying computational thinking. Educational Research Review 2017, 22, 142–158. [Google Scholar] [CrossRef]
  16. Li, Y.; Schoenfeld, A. H.; diSessa, A. A.; Graesser, A. C.; Benson, L. C.; English, L. D.; Duschl, R. A. Computational thinking is more about thinking than computing. Journal for STEM Education Research 2020, 3(1), 1–18. [Google Scholar] [CrossRef] [PubMed]
  17. Hsu, T. C.; Chang, S. C.; Hung, Y. T. How to learn and how to teach computational thinking: Suggestions based on a review of the literature. Computers & Education 2018, 126, 296–310. [Google Scholar] [CrossRef]
  18. Mills, K. A.; Cope, J.; Scholes, L.; Rowe, L. Coding and computational thinking across the curriculum: A review of educational outcomes. Review of Educational Research 2024, 00346543241241327. [Google Scholar] [CrossRef]
  19. Mariana, E. P.; Kristanto, Y. D. Integrating STEAM education and computational thinking: analysis of students’ critical and creative thinking skills in an innovative teaching and learning. Southeast Asia Mathematics Education Journal 2023, 13, 1–18. [Google Scholar] [CrossRef]
  20. Li, X.; Xie, K.; Vongkulluksn, V.; Stein, D.; Zhang, Y. Developing and testing a design-based learning approach to enhance elementary students’ self-perceived computational thinking. Journal of Research on Technology in Education 2021, 55, 1–24. [Google Scholar] [CrossRef]
  21. Hutchins, N. M.; Biswas, G.; Maróti, M.; Lédeczi, Á.; Grover, S.; Wolf, R.; Blair, K. P.; Chin, D.; Conlin, L.; Basu, S.; McElhaney, K. C2STEM: A system for synergistic learning of physics and computational thinking. Journal of Science Education and Technology 2020, 29, 83–100. [Google Scholar] [CrossRef]
  22. Hsiao, H. S.; Lin, Y. W.; Lin, K. Y.; Lin, C. Y.; Chen, J. H.; Chen, J. C. Using robot-based practices to develop an activity that incorporated the 6E model to improve elementary school students’ learning performances. Interactive Learning Environments 2022, 30, 85–99. [Google Scholar] [CrossRef]
  23. Siew, N. M.; Ambo, N. The scientific creativity of fifth graders in a STEM project-based cooperative learning approach. Problems of Education in the 21st Century 2020, 78, 627–643. [CrossRef]
  24. Moeller, M. P. An introduction to the outcomes of children with hearing loss study. Ear and Hearing 2015, 36 (Suppl. 1), 4S–12S. [Google Scholar] [CrossRef] [PubMed]
  25. Rusilowati, A.; Ulya, E.; Sumpono, L. STEAM-deaf learning model assisted by Rube Goldberg machine for deaf students in junior special needs school. Journal of Physics: Conference Series 2020, 1567(4), 042087. [Google Scholar] [CrossRef]
  26. Cutumisu, M.; Adams, C.; Lu, C. A scoping review of empirical research on recent computational thinking assessments. Journal of Science Education and Technology 2019, 28, 651–676. [Google Scholar] [CrossRef]
  27. Han, J. A systematic review of computational thinking assessment in the context of 21st century skills. In Proceedings of the 2nd International Conference on Humanities, Wisdom Education and Service Management (HWESM 2023), Shanghai, China, 10-12 March 2023. [Google Scholar]
  28. Brennan, K.; Resnick, M. New frameworks for studying and assessing the development of computational thinking. In Proceedings of the American Educational Research Association, Vancouver, Canada, 1-25 April 2012. [Google Scholar]
  29. Kong, S. C. Components and methods of evaluating computational thinking for fostering creative problem-solvers in senior primary school education. In Computational thinking education; Kong, S., Abelson, H., Eds.; Springer: Singapore, 2019; pp. 119–141. [Google Scholar] [CrossRef]
Figure 1. The KidBright µAI microprocessor board: (a) front view of the board; (b) back view of the board.
Figure 2. The KidBright µAI IDE: (a) IDE for training an AI model; (b) IDE for deploying a trained model using block-based programming.
Figure 3. The histogram of CT concepts’ scores (a) Pre-test (blue bars) and post-test (orange bars) scores of male students; (b) Pre-test (blue bars) and post-test (orange bars) scores of female students.
Figure 4. Comparison of CT concept score distributions (a) Pre-test scores of male students (blue bars) and female students (orange bars); (b) Post-test scores of male students (blue bars) and female students (orange bars).
Figure 5. Group 1: Save You Save Me.
Figure 6. Group 2: Smart Dispenser Box.
Figure 7. Group 3: Smart Mobile Locker.
Figure 8. Group 4: Automatic Waste Sorting.
Figure 9. Group 5: Queuing System for Deaf.
Figure 10. Group 6: Safety Dormitory.
Figure 11. Group 7: AI-based Monkey Species Classification.
Table 1. The participants’ demographic data.
Participants Males Females Total
Number of participants 21 15 36
Percentage of participants 58.33% 41.67% 100%
Table 2. The pre-test and post-test questions for measuring CT concepts.
1. Why should we learn about Artificial Intelligence (AI)?
a. To develop essential learning and life skills such as analytical and creative thinking.
b. To prepare ourselves for living in an AI-driven world.
c. To understand and use technologies effectively.
d. All of the above.
2. AI is becoming a bigger part of our daily routines. Which of the following is not a typical example of AI in everyday life?
a. Automatic sliding doors.
b. Chatbots.
c. Facial recognition systems.
d. Route recommendation systems on Google Maps.
3. What is KidBright µAI?
a. A controllable robot that responds to block commands.
b. A learning tool designed to support AI education in a STEM/STEAM approach.
c. A robot used in industrial factories.
d. Coding software.
4. What is the working process of KidBright µAI?
a. Annotating data -> Collecting data -> Training the AI model -> Implementing the trained model.
b. Collecting data -> Training the AI model -> Annotating data -> Implementing the trained model.
c. Implementing the trained model -> Training the AI model -> Annotating data -> Collecting data.
d. Collecting data -> Annotating data -> Training the AI model -> Implementing the trained model.
5. In the context of machine learning, a camera functions as eyes, a microphone as ears, and wheels as legs. By this analogy, which human organ is most like the KidBright µAI board?
a. Mouth.
b. Heart.
c. Nose.
d. Brain.
6. To which category of AI system does KidBright µAI belong?
a. Semi-supervised learning – Classification.
b. Semi-supervised learning – Generative model.
c. Supervised learning – Classification.
d. Supervised learning – Generative model.
7. Which practice is correct when gathering images for classification or object detection training?
a. Capture all images of objects from afar for higher accuracy.
b. Capture all images of objects from one fixed angle for higher accuracy.
c. Capture many clear images from various angles and distances to increase data diversity.
d. Position objects close to the lens to emphasize surface detail.
8. If identical labels are assigned to different objects, how would this affect training in the KidBright µAI system?
a. No effect, since KidBright µAI trains its model using images only.
b. A major effect, as KidBright µAI trains its model from both object images and their labels.
c. A minor effect, as KidBright µAI may be confused but could learn to distinguish the differences.
d. None of the above.
9. Which of the following scenarios demonstrates the use of AI?
a. Choojai uploads travel photos to social media platforms (Facebook, Instagram).
b. Piti uses Siri on an iPhone to search for information.
c. Mana attends online classes during the COVID-19 outbreak.
d. Mani orders food via an online delivery service to her home.
10. Which button in the KidBright µAI IDE is used to connect the KidBright µAI board? (image: Preprints 198713 i001)
a. Button 1.
b. Button 2.
c. Button 3.
d. Button 4.
11. What is the purpose of the “Upload” button in the KidBright µAI IDE? (image: Preprints 198713 i002)
a. To convert block-based commands into machine code and send the converted code to the KidBright µAI board for execution.
b. To send the AI model to the KidBright µAI board for execution.
c. To convert block-based commands into machine code and send both the converted code and the AI model to the KidBright µAI board for execution.
d. To send the image dataset used for training to the KidBright µAI board.
12. Which types of AI models can be created using the KidBright µAI IDE?
a. Image classification.
b. Object detection.
c. Voice classification.
d. All of the above.
13. Which plugin block is used to send data from the KidBright µAI board to the cloud?
a. iKB1 plugin.
b. MQTT plugin.
c. I2C plugin.
d. DHT plugin.
14. Which of the following statements are true?
a. Image classification and object detection are the same.
b. Image classification analyzes an entire image to identify what it is.
c. Object detection analyzes an image to locate and identify specific objects in the image.
d. Both b and c are correct.
15. What is the purpose of the command blocks below? (image: Preprints 198713 i003)
a. Communicating between the KidBright µAI board and external sensors.
b. Performing conditional logic operations.
c. Displaying images and text on the screen.
d. Performing mathematical operations.
16. How do the command blocks below function? (image: Preprints 198713 i004)
a. Send data from the KidBright µAI board to a digital sensor at the PH6 port.
b. Send data from the KidBright µAI board to an analog sensor at the PH6 port.
c. Read data from a digital sensor at the PH6 port.
d. Read data from an analog sensor at the PH6 port.
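The workflow order tested in question 4 (collect data, annotate it, train the model, then implement the trained model) can be sketched as a minimal supervised image-classification pipeline. All function names and data structures below are illustrative stand-ins, not the actual KidBright µAI API:

```python
# Minimal sketch of the supervised workflow from question 4:
# collect -> annotate -> train -> implement. Every name here is
# hypothetical; the real board trains and runs models on-device.

def collect_data(num_samples):
    """Stand-in for capturing images with the board's camera."""
    return [f"image_{i}" for i in range(num_samples)]

def annotate(images, label):
    """Attach a class label to each captured image."""
    return [(img, label) for img in images]

def train(dataset):
    """Stand-in for training: record the set of labeled classes."""
    return {"classes": sorted({label for _, label in dataset})}

def predict(model, image):
    """Stand-in for on-device inference with the trained model."""
    return model["classes"][0]  # a real model would classify the image

# The four steps, in the order of correct choice (d):
cats = annotate(collect_data(10), "cat")
dogs = annotate(collect_data(10), "dog")
model = train(cats + dogs)
print(predict(model, "image_0"))
```

The point of the sketch is the ordering: labels must be attached before training (question 8 probes the same idea), and inference only happens once a trained model is deployed to the board.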
Table 3. Measurement criteria of CT practice.
Criteria Rating
1. Creativity in design. Rated from 5 (new ideas) to 1 (not new ideas).
2. Innovation functionality. Rated from 5 (complete) to 1 (incomplete).
3. Innovation complexity. Rated from 5 (complex) to 1 (simple).
4. Relevance to real-life problems. Rated from 5 (relevant) to 1 (irrelevant).
5. Concepts of the innovations. Rated from 5 (correct) to 1 (incorrect).
Table 4. The interview questions for CT perspectives.
Number Question
1. What motivated you to develop this innovation?
2. Which challenges did you face during the development process?
3. How did you address or overcome these challenges?
4. What are potential improvements that can be made to your innovation?
5. Can the concept behind your innovation be applied to solve other problems? If so, how?
Table 5. The statistics of pre-test and post-test scores of participants.
Test Males Females Mean of Total Students
Pre-test Mean 40.47% 55.00% 47.74%
Post-test Mean 61.90% 89.58% 75.74%
Table 6. The measurement results of CT practice.
Criteria Group 1 Group 2 Group 3 Group 4 Group 5 Group 6 Group 7 Average
1. Creativity in design. 75% 85% 75% 85% 95% 80% 100% 85%
2. Innovation functionality. 60% 60% 60% 75% 85% 55% 90% 69.29%
3. Innovation complexity. 60% 65% 55% 80% 75% 60% 85% 68.57%
4. Relevance to real-life problems. 70% 75% 70% 85% 80% 70% 90% 77.14%
5. Concepts of the innovations. 60% 65% 65% 75% 85% 65% 85% 71.43%
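The "Average" column of Table 6 is consistent with a simple mean of the seven group scores for each criterion (the same convention reproduces Table 7's averages). A quick verification:

```python
# Recomputing the "Average" column of Table 6 as the simple mean of
# the seven group scores per criterion (all values are percentages).

scores = {
    "Creativity in design": [75, 85, 75, 85, 95, 80, 100],
    "Innovation functionality": [60, 60, 60, 75, 85, 55, 90],
    "Innovation complexity": [60, 65, 55, 80, 75, 60, 85],
    "Relevance to real-life problems": [70, 75, 70, 85, 80, 70, 90],
    "Concepts of the innovations": [60, 65, 65, 75, 85, 65, 85],
}

for criterion, values in scores.items():
    avg = sum(values) / len(values)
    print(f"{criterion}: {avg:.2f}%")  # matches the table's Average column
```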
Table 7. The measurement results of CT perspectives.
Question Group 1 Group 2 Group 3 Group 4 Group 5 Group 6 Group 7 Average
1. What motivated you to develop this innovation? 75% 75% 75% 75% 85% 70% 80% 76.43%
2. Which challenges did you face during the development process? 75% 65% 55% 80% 80% 70% 80% 72.14%
3. How did you address or overcome these challenges? 80% 70% 60% 75% 80% 65% 80% 72.86%
4. What are potential improvements that can be made to your innovation? 70% 70% 65% 75% 85% 65% 75% 72.14%
5. Can the concept behind your innovation be applied to solve other problems? If so, how? 60% 60% 60% 75% 85% 55% 90% 69.29%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.