Framing and Evaluating Task-Centered Generative Artificial Intelligence Literacy for Higher Education Students

Preprint (not peer-reviewed). Submitted: 28 May 2025; posted: 29 May 2025.
Abstract
The rise of generative artificial intelligence (GenAI) demands new forms of literacy among higher education students. This paper introduces a novel task-centered generative artificial intelligence literacy framework, which was developed collaboratively with academic and administrative staff at a large research university in Israel. The framework identifies eight skills, informed by the six categories of the cognitive domain of Bloom's Taxonomy. Based on this framework, we developed a tool for measuring students' GenAI literacy and surveyed 1,667 students. Findings from the empirical phase show moderate GenAI use and medium-high literacy levels, with significant variations by gender, discipline, and age. Notably, 82% of students support formal GenAI instruction, favoring integration within curricula to prepare for broader digital society participation. The study offers actionable insights for educators and policymakers aiming to integrate GenAI into higher education responsibly and effectively.
Subject: Social Sciences – Education

Introduction

The emergence of generative artificial intelligence (GenAI) has reignited, yet again, the discourse about the transformative changes that various fields, including education, will undergo. With a host of GenAI-based applications that can ease students' burden of doing almost anything related to their academic studies, e.g., reading learning materials, writing essays, coding, or solving problems, higher education institutions have found themselves challenged with issues of integrity and assessment (Jochim & Lenz-Kesekamp, 2025; Smolansky et al., 2023; Wood & Moss, 2024). Overall, higher education stakeholders have faced important questions about how to integrate these technologies into teaching and learning in meaningful, ethical, and pedagogically sound ways (Belkina et al., 2025; Kurtz et al., 2024). At the core of this discussion lies the issue of better preparing students for our digitally saturated world, with GenAI being its newest addition.
Literacy in the context of digital technologies has evolved in recent decades to include a range of skills, competencies, and dispositions. Whilst several frameworks have sought to articulate the skills needed for the digital or information age (Eshet, 2012; Eshet-Alkalai, 2004; Mioduser et al., 2008; Partnership for 21st Century Skills, 2009), the emergence of GenAI, with the unique affordances and obstacles it offers, calls for an updated conceptualization (Chiu, 2024). Specifically, there is a growing recognition that students must be aware of GenAI tools, understand their capabilities and limitations, use them ethically, and integrate them effectively and efficiently into their academic work. However, existing definitions of AI literacy are often either overly broad or insufficiently grounded in the realities of student learning, leaving a gap in actionable guidance for both learners and educators.
In this paper, we address this gap by proposing a task-centered framework for GenAI literacy in the context of learning in higher education. Our framework is grounded in Bloom’s revised taxonomy and revolves around tasks students encounter in academic settings. Therefore, the purpose of this paper is twofold. First, to present the development of a Task-Centered Generative Artificial Intelligence Literacy framework for higher education students; second, to present the use of this framework to study students’ GenAI literacy and its associations with demographic-, academic-, and experience-related personal variables.

Literature Review

Preparing Students for the Digital Age

Over the last three decades or so, education researchers and practitioners have been pondering how to prepare students for the post-industrial, digitally saturated age, which is characterized by a heavy, constant use of information and communication technologies (ICT). It has been widely agreed that the “3Rs”—reading, writing, arithmetic—are no longer sufficient as the basic skills required of the new generations of learners, employees, and citizens, and that a new set of skills needed to be defined. However, the question of which skills should form this new set has been answered in many different ways. Furthermore, the very term “skill” has been debated, and others, e.g., literacy or competency, have been suggested as more suitable alternatives.
These definitions, frameworks, or taxonomies of skills or their derivatives assume that living in an ICT-saturated world necessitates new ways of thinking about how this world behaves and how we should behave in it. Importantly, they are not technology-dependent, and overall shed new light on humans’ ways of thinking when surrounded by technology. For example, one of the first new skills suggested for the digital age was visual thinking (Dake, 1993). Moving from analog to digital imagery, with the latter characterized by sheer volume and speed, enabled, maybe even necessitated, new ways of thinking, as individuals were conveniently equipped with the power of visual language. Visual thinking, as defined by Dake, included the understanding and implementation of: greater flexibility and manipulability, interactive dialogues with images, expanded ways of storing and retrieving, multilevel expression, and use and reuse of existing images.
These early components of a “new” thinking skill for the digital age from over 30 years ago echo later definitions of other new skills. For example, storing and retrieving images is considered integral to personal information management, an important digital-age skill which is strongly associated with learning (Jaffe & Nachmias, 2011); and the notion of use and reuse is integral to computational thinking (Brennan & Resnick, 2012), a thinking skill that stemmed from humans’ interactions with computers. More than that, visual literacy would later be considered one of a few digital literacies for the digital era in at least two important frameworks, each of which expands on other components of Dake’s original viewpoints (Eshet-Alkalai, 2004; Mioduser et al., 2008).
Notably, a few terms have been used in the context of defining what is required for effectively and efficiently managing today’s and tomorrow’s life. (Not to mention that a few terms have been used to describe what it is that we are to be prepared for, among which are “digital age”, “digital era”, “information age”, “the 21st century”, and “the fourth industrial revolution”.) The most common of these are skills, literacies, and competencies. Going beyond mastering knowledge, a skill is the ability to perform a certain physical or mental task that is functionally related to attaining a performance goal (Marin-Zapata et al., 2022). Continuing even further, competency is a set of skills, behaviors, and attitudes needed to perform an activity or process in a competent manner (Lambert et al., 2014). As for literacy – this term was originally used in the context of reading and writing, and mostly revolved around the ability to understand and use information contained in various kinds of textual materials (Kirsch et al., 1993); later, it was used in other contexts, e.g., when mathematical literacy or digital literacy were defined, the understanding and use of these domains were still crucial, hence literacy inherently involves critical examination of related materials. Obviously, these three concepts are intertwined. As we discuss the role GenAI plays for higher education students in the broader context of their academic studies, we find the term “GenAI literacy” suitable; and as our goal is to identify what is required to use GenAI in performing academic-related tasks, we use the term “skills” to denote the components required for such effective use.
A few years ago, a comprehensive scoping review of skills, competencies, and literacies attributed to the new industrial landscape highlighted their large variety, including a mix of soft skills, e.g., creativity, collaboration, or interpersonal skills, and hard skills, e.g., data literacy, programming, or mathematical knowledge; interestingly, some skills identified in the literature were coded in this review as “unclear/vague”, e.g., multi-purpose skills, flexibility to perform adaptive abilities, or global awareness, and some were coded as “outliers”, e.g., user experience design skills, or investigative and experimental skills (Chaka, 2020). Hence, there is no single agreed-upon framework, and existing frameworks are very different from each other. Even if we focus on the term “digital literacy” alone, it may consist of several distinct aspects, including critical, cognitive, social, operative, emotional, and projective dimensions (Martínez-Bravo et al., 2022). This conundrum of concepts and definitions, which may represent either the richness of the topic or its ubiquitousness, is a challenge when suggesting a new framework. It also raises the question of why a new classification should be added at all to this already dense field, a question to which we return in the next section.
Of the many existing frameworks for the skills, literacies, or competencies required for the digitally saturated world, we present three that seem to us relevant to today’s higher education world and to the new wave of technological advancements on which we focus here, as we detail below. The first was developed in the early 2000s by the non-profit Partnership for 21st Century Skills, which included members of the national business community, education leaders, and policymakers. This framework extends the traditional Three R’s (reading, writing, arithmetic) with three new components: learning and innovation skills, life and career skills, and information, media, and technology skills; the learning and innovation skills are known as The Four C’s, namely, critical thinking, communication, collaboration, and creativity – and this is the component to which we will refer later (Partnership for 21st Century Skills, 2009). Due to the importance of these skills across educational, occupational, and civic contexts, they have been widely accepted and are still seen as relevant in the long journey towards modernizing education (Thornhill-Miller et al., 2023). Notably, each of these skills has a meaningful interface with GenAI: critical thinking may inform humans’ effective use of GenAI, and reciprocally, GenAI use may support the promotion of humans’ critical thinking (Cain, 2024; Premkumar et al., 2024; Sardi et al., 2025); human-machine communication lies at the core of using GenAI-based applications, and interpersonal communication may be meaningfully impacted by the use of GenAI (Akdilek et al., 2024; Gans, 2024); collaboration-wise, GenAI is often looked at as an aid to humans (Safari et al., 2024); and creativity may play an important role in promoting the synergy between humans and GenAI (Habib et al., 2024; Heigl, 2025; Rafner et al., 2023).
The second framework we find relevant is Eshet-Alkalai’s (2004) conceptual framework of survival skills for the digital era, which was later revised and extended (Eshet, 2012). This holistic framework, which includes six types of literacies relevant across domains and contexts, addresses how people can better get around in a digitally saturated world. These are: photo-visual literacy – understanding messages from graphical displays; reproduction literacy – utilizing digital tools to create new, meaningful materials from existing ones; branching literacy – constructing knowledge from non-linear online spaces; information literacy – critically assessing online information; socio-emotional literacy – understanding and applying norms of communication in digital environments; and real-time thinking – processing large volumes of stimuli simultaneously.
The third framework is Mioduser, Nachmias, and Forkosh-Baruch’s (2008) new set of literacies for the knowledge society. This set includes seven literacies: multimodal information processing – understanding, producing, and negotiating meanings in a culture made up of words, images, and sounds; navigating the infospace – understanding when to use information, where to find it, how to retrieve it, and how to decode and communicate it; interpersonal communication – effectively, efficiently, and ethically using various communication channels; visual literacy – using images to advance thinking, reasoning, decision making, and learning; hyper-literacy – dealing with non-linear knowledge representations; personal information management (PIM) – storing information items to later retrieve them; and coping with complexity – perceiving phenomena as complex, understanding them, and coping with this complexity. Despite some overlap with Eshet’s (2012) framework, we distinguish between them due to Mioduser et al.’s focus on knowledge construction across contexts and their broader view of the skills needed by individuals to learn, work, socially interact, and cope with the needs of everyday life. Both sets of literacies echo the new affordances and challenges that characterize the digital era, and due to their holistic nature and their relevance to learning settings, we find them both related to our goal of defining GenAI literacy. On the one hand, they both seem particularly relevant to the current GenAI era, which has already extended people’s abilities to engage with visuals (Essa & Lataifeh, 2024; Han & Cai, 2023), has implications for information retrieval and assessment (Choi et al., 2024; Hersh, 2024), and may impact socio-emotional learning (Henriksen et al., 2025; Ortega-Ochoa et al., 2024). On the other hand, issues of coping with complexity, real-time thinking, and hyper-literacy should be revisited in the current wave of tools that are able to analyze huge volumes of data, decrease complexity in a matter of seconds, and whose interfaces are largely linear.

Skills, Literacies, and Competencies for the Age of Generative Artificial Intelligence

Artificial Intelligence (AI) is not a new field, and its history as an academic discipline goes back to the 1950s (Haenlein & Kaplan, 2019); long before that, it was imagined in mythological traditions and fictional texts, especially within dystopian contexts, which helps explain the fear and even resentment we encounter among faculty, students, and the broader public. Definitions of AI vary and take different perspectives. Some take the perspective of a human interacting with an artificial machine, like the Turing Test, which considers a machine intelligent if a human cannot distinguish it from another human; others take a comparative approach and state that a machine is intelligent if a human doing the same things it does would be considered intelligent; and some are more operational and process-oriented, noting that the intelligence of a system lies in its ability to correctly interpret external data, to learn from it, and to use this learning to achieve specific goals and tasks through adaptation (P. Wang, 2019).
The history of AI in education is as long as the history of AI itself, as soon after this technological advancement was achieved, it found its way into the education field. The first common implementation of AI in education was probably that of intelligent tutoring systems, which tailored personalized learning paths to students. Today, AI seamlessly serves as the basis for numerous advancements in education at all levels, from student and teacher levels to course and school levels to district and national levels, and for various tasks, including assessment and grading, personalization and adaptation, mass teaching, management, and much more (Chen et al., 2020; Doroudi, 2023).
The more recent thread of Generative AI (GenAI), with its core use of Large Language Models, has yet again been associated with the potential to dramatically impact education at large. GenAI is largely defined by its ability to produce text, images, video, audio, and other forms of data. The current GenAI boom is mostly associated with the launch of OpenAI’s ChatGPT, a user-facing chatbot, in November 2022. Since then, we have witnessed a surge in applications for almost any aspect of life, with the education field being no exception. Evidence of the wide impact of GenAI applications on our daily life is the fact that various dictionaries chose GenAI-related terms as Word of the Year for 2023, e.g., Macquarie Dictionary’s “Generative AI”, Cambridge Dictionary’s “hallucinate”, and Merriam-Webster’s take on the artificial era, “authentic” (Creamer, 2023; Italie, 2023; Macquarie Dictionary, 2022). Due to its high popularity and seemingly new nature, it is no surprise that a new mix of skills, literacies, and competencies has already been suggested for handling the GenAI-saturated world.
A common, broad definition of AI literacy is “the ability to be aware of and comprehend AI technology in practical applications; to be able to apply and exploit AI technology for accomplishing tasks proficiently; and to be able to analyze, select, and critically evaluate the data and information provided by AI, while fostering awareness of one’s own personal responsibilities and respect for reciprocal rights and obligations” (B. Wang et al., 2023, p. 1326). Building on this, Bozkurt (2024) offers the following definition:
AI literacy is the comprehensive set of competencies, skills, and fluency required to understand, apply, and critically evaluate AI technologies, involving a flexible approach that includes foundational knowledge (Know What), practical skills for effective real-world applications (Know How), and a deep understanding of the ethical and societal implications (Know Why), enabling individuals to engage with AI technologies in a responsible, informed, ethical, and impactful manner.
As comprehensive as this definition is, we find it too far removed from the practicalities of learning and teaching. Like other suggested frameworks (e.g., Cha et al., 2024), it is too abstract for teachers and instructors who might want to teach or assess such literacy, and it is too general for students to understand how to act according to it. Furthermore, when designing an assessment tool based on such a definition, the items may be too broad or context-independent (e.g., Jin et al., 2024). Sattelmaier et al.’s (2023) and Annapureddy et al.’s (2025) competence-based GenAI frameworks demonstrate a direction of defining GenAI literacy in a way that is more actionable; however, the former is too broad to concretize, and the latter is not learning-focused.
Hence, we identified a gap in the current literature about GenAI literacy. We believe that a framework for GenAI literacy that would speak to students, teachers and instructors, and education scholars is still missing. Therefore, we take a task-centered approach in which GenAI literacy is concrete and operationalizable, and is about actual and potential uses of GenAI in familiar contexts of tasks that are integral to learning. The characteristics of our framework are detailed below, under the section Characteristics of The Framework.

Developing a Task-Centered Generative Artificial Intelligence Literacy Framework

In this section, we present our Task-Centered Generative Artificial Intelligence Literacy framework. We start by describing its characteristics regarding learning theories, relevance to students’ academic experience and to instructors’ views; continue by detailing methodological issues of its development; and finally introduce the resulting framework in the wider context of other relevant literacies and skills.

Characteristics of the Framework

Early in the process of developing a new GenAI literacy, we decided that such a framework should be based on the following characteristics: relevance to learning theories, relevance to students’ actual learning experience, relevance to instructors across disciplines, and being actionable for instructors.

Relevance to Learning Theories: Bloom’s Revised Taxonomy

After reviewing various learning theories, we decided to base the framework on the cognitive domain of Bloom’s revised taxonomy of educational objectives. This taxonomy, both in its original and revised forms, has been widely used in numerous educational contexts and for multiple purposes, including instructional design and assessment. The original taxonomy included six categories denoted by nouns: knowledge, comprehension, application, analysis, synthesis, and evaluation. The revised taxonomy defined six categories of observable knowledge, skills, attitudes, behaviors, and abilities; these categories were defined by verbs: remember, understand, apply, analyze, evaluate, and create (Bloom et al., 1956; Krathwohl, 2002). Bloom’s taxonomy has also shown great relevance to the digital era, with many applications for choosing appropriate technologies for achieving educational goals and for assessing technology integration in education (e.g., Alaghbary, 2021; Churches, 2008; Coşgun Ögeyik, 2022; Faraon et al., 2023). Therefore, we found it suitable for our framework as well.
It is common to depict Bloom’s Taxonomy as a pyramid, at the bottom of which is “remember” and at the top of which is “create”, which suggests a hierarchical order between the categories; however, this image was not originally presented by Bloom, and the relationship between the levels of thinking is not necessarily hierarchical. In their original publication, although overall arguing, albeit quite cautiously, for a hierarchical, cumulative order of the categories, Bloom and his colleagues explicitly asserted that evaluation, then the last category, is not necessarily the last step in thinking or problem solving; rather, it can quite possibly be “the prelude to the acquisition of new knowledge, a new attempt at comprehension or application, or a new analysis and synthesis” (Bloom et al., 1956, p. 185). Later, when Churches (2008) presented Bloom’s Digital Taxonomy, he again asserted that learning processes need not begin at the lower taxonomic levels, but rather can be initiated at any point. When the original taxonomy was revised for use in Biology education, through an empirical examination of hundreds of assessment items, its non-hierarchical structure was explicitly emphasized (Lo et al., 2016).
Empirical evidence of student performance has raised the question of hierarchy time and again. When Bloom and his colleagues compared students’ performance across tasks categorized under different classes of their taxonomy, they found that it was more common for individuals to have low scores on complex problems and high scores on less complex problems than the reverse; however, they candidly stated that this evidence was “not entirely satisfactory” (Bloom et al., 1956, p. 19). Later, it was shown that individuals can indeed demonstrate higher performance on higher learning levels without necessarily demonstrating competency on lower levels (Bagchi & Rajeev Sharma, 2014).
Early attempts to test the validity of the hierarchical notion of the Taxonomy showed either moderate support or inconsistency with the suggested order (Kunen et al., 1981; Seddon, 1978). Indeed, in the revised taxonomy, “the requirement of a strict hierarchy has been relaxed” (Krathwohl, 2002) – an assertion that was later demonstrated empirically (Lalwani & Agrawal, 2018). A relatively recent review showed that the assumed hierarchical order has little empirical evidence (Bruehler, 2018), and a large-scale analysis of an international comparative test in mathematics showed no difference in student performance between lower-order and higher-order tasks (Mullis et al., 2016). Therefore, we do not assume a hierarchical order among the components of Bloom’s revised taxonomy.

Relevance to Students’ Actual Learning Experience: A Task-Centered View

We assume that students experience learning in many ways, regardless of their course of study. Learning strategies, motivations, personal characteristics, and many other factors make it almost impossible to imagine a single learning experience to which we could refer while imagining the use of GenAI-based tools. Therefore, we decided to center our framework on the notion of tasks students have to complete as part of their studies.
Throughout their course of study, students face many different tasks, depending, among other factors, on the discipline, the level of the course, and the stage of the learning process. Examples of such tasks include reading a paper, solving a problem, designing an experiment, formulating an argument, writing a piece of text, analyzing data, or designing a visual artifact. Note that we mostly refer here to tasks that are directly related to the academic aspect of student life, but our framework may also be relevant to meta-cognitive aspects, e.g., regulating one’s learning, managing big projects, or adopting problem-solving strategies, and even to socio-emotional and motivational aspects, like persisting in a task until its completion, settling disputes, or struggling with failure. However, here we will mostly refer to academic aspects.

Relevance to Instructors Across Disciplines

Having the framework revolve around students’ tasks also makes it relevant to instructors across disciplines, as they can think of tasks with which they are familiar while using it. Therefore, we made sure the framework is phrased in a content-independent manner that emphasizes the learning that should occur.

Being Actionable for Instructors

To be useful as an aid for instructors, we wished the framework to be actionable. That is, we wanted it to be relatively easily translated into actual pedagogical experiences. Of course, we do not assume that such a translation is a trivial task, and as with any new technology, it may take dedicated training and strong motivation for this process to succeed.

Methodology

The GenAI literacy framework was developed as part of a wider project held within a large research university in Israel. This endeavor was initiated and led by the Center for Excellence in Teaching [EXACT NAME REDACTED FOR ANONYMITY] in this institution. The university is truly multidisciplinary, with about 30,000 undergraduate and graduate students and about 4,500 faculty members across all academic disciplines.
A few faculty members were invited by the Center to lead each of the working groups defined to operate under this project; the first author led the GenAI Literacy group. Three other working groups were part of the larger learning community, discussing GenAI tools as an aid to instructors; writing and assessment in text-heavy domains; and writing and solving problems in numerical-heavy domains.
A Call for Participation in the learning community was sent to all faculty members and relevant administrative staff, and each respondent chose the working group in which they wished to take part. Each participant received from the Center a license for a GenAI-based system of their choice for the duration of the process. The whole process was led by the Center staff, and working group leaders took part in extra meetings for coordination and process management.

Participants

Eleven faculty members and five administrative staff took an active role, albeit to various extents, in the working group. The faculty, with different levels of experience in teaching and technology integration in teaching, were from the Humanities and Arts (4), Social Sciences (2), Life Sciences (1), Exact Sciences (2), and Medicine (2); administrative staff were from the campus’ libraries (2), techno-pedagogues (2), and heads of administration within faculties (1).

Process

The group met about ten times between March and August 2024 for one-hour brainstorming and working sessions, most of which were held remotely. Together, the group members discussed what GenAI literacy is, reviewed relevant literature and documents from other institutions worldwide, and eventually defined and refined their own framework.
Part of the defining and refining process of the group included authentic experience in the participants’ classes, and reflection and discussion about this process during group meetings. That is, participants tried to translate the literacy components into pedagogical interventions, which was what we strived for by setting up a definition in the first place. This, along with the other characteristics of the framework, is discussed in the following section.

GenAI Literacy Framework

Overall, the framework refers to GenAI literacy as the ability to effectively and efficiently use GenAI-based tools to simplify, enrich, or improve learning-related tasks. It is based on the six categories of the cognitive domain of Bloom’s revised taxonomy; for the sake of simplicity, each category was mapped to one or two skills, for an overall count of eight skills.
The framework revolves around tasks students face as part of their course of study; that is, the term “task” serves as a placeholder for any task one can think of regarding students’ learning. Looking at it from this perspective, we emphasize that the task is being carried out by students, and GenAI-based tools are to be considered as aids to this process. This perspective is in line with the notion of GenAI as an aid to complete tasks at various degrees of complexity (Safari et al., 2024) while keeping the human in command (De Stefano, 2019).
We now present the skills included in our framework by their relatedness to Bloom’s Taxonomy categories. While presenting them, we make links between these newly defined skills and existing skills and literacies for the digital era (Eshet, 2012; Eshet-Alkalai, 2004; Mioduser et al., 2008; Partnership for 21st Century Skills, 2009).

Know

Two skills were identified under this category. The first is to get to know GenAI-based tools that can assist in performing a given task. Learners will not be literate in using GenAI-based tools if they do not know relevant tools that they could harness to perform a given task. Indeed, familiarity with technology, or lack thereof, is an important factor in students’ adoption of new educational technologies (e.g., Bringman-Rodenbarger & Hortsch, 2020; Byungura et al., 2018). Using such tools in educational contexts may be encouraged by a habitual use of them in other daily contexts (Strzelecki, 2024) or by instructors integrating them into the curriculum (Szymkowiak et al., 2021). This skill may require learners to be proficient, to some degree at least, in two of Mioduser et al.’s (2008) literacies—navigating the infospace, and personal information management—for finding relevant tools and keeping track of them for personal use over time.
The second skill is to stay updated on innovations in the world of GenAI. Like any other technology, the one discussed here too is constantly changing and developing; unlike many other technologies, this new technology demonstrates changes and developments quicker than ever. Therefore, being literate in the field of GenAI requires constant updates, which can be done in many different ways, and should involve both educators and students (Prensky, 2007). This skill resonates with Mioduser et al.’s (2008) coping with complexity, as the continuously developing GenAI phenomenon is surely a complex one; also, this skill requires a high level of information literacy (Eshet-Alkalai, 2004), considering the flood—even the tsunami—of information available regarding this phenomenon.

Understand

Under this category, we highlight the need to understand how to make the most of GenAI-based tools. This includes understanding their various features and different use cases, and an overall understanding of the role of learners in the human-GenAI partnership. This way, one may be able to effectively consider what they can achieve by using these tools. As has been demonstrated with previous technologies, students and educators may use technology in a rather shallow manner, hence may not fulfill its potential for meaningfully supporting learning, and may even be discouraged from using it (Cassidy et al., 2012; Dahlstrom et al., 2014). Learning about various uses of an existing tool may be a result of its constant use, hence the importance of continued experimentation with such tools (Dai et al., 2025). The ever-evolving world of GenAI-based tools, specifically the constant appearance of new tools and new features to assist in various tasks, requires coping with complexity (Eshet-Alkalai, 2004) and being creative (Partnership for 21st Century Skills, 2009) in finding efficient solutions to task-related problems.

Apply

Here, we identified two skills. The first is to formulate prompts that lead to the desired results. Prompts serve as the main means of communication between users and GenAI-based tools; hence, they enable the production of the tools’ output. But more than merely being a means of getting the job done, mastering prompting has the power to enhance learning by fostering personalized, engaging, and equitable educational experiences (Tu et al., 2024). Being familiar with how prompts work and how to engineer them will help learners effectively and efficiently use GenAI tools as part of working on a task; indeed, it is a skill that needs to be taught and practiced (Federiakin et al., 2024; Oppenlaender et al., 2024). As prompting is, in its very basic sense, communicating with others, and as it requires collaboration (with others, in this case machines), creativity (to get the desirable outcome effectively and efficiently), and critical thinking (to examine outcomes), it may require all Four C’s (Partnership for 21st Century Skills, 2009). Furthermore, prompting is not a one-shot, one-outcome process, but rather an iterative one, during which previous prompts are fine-tuned and prompts suggested by others are modified; hence, this process requires reproduction literacy (Eshet-Alkalai, 2004).
The second skill is to use GenAI-based tools ethically in the context of a given task. The underlying assumption of our framework is that learners would use GenAI-based tools as part of their own process of completing a task; that is, the process would be led by the learners, and the final product would be considered the learners’ (see the Create-related skill below). This is one aspect of working ethically with these tools. Another major ethical aspect of using GenAI-based tools is to comply with copyright laws and privacy regulations when uploading materials to these tools, i.e., not inputting copyrighted materials or sensitive information into them. Also, it is important to consider potential biases in the products of GenAI-based tools and to check their accuracy and credibility (Barzilay, 2018; Lobel, 2022). Additionally, it is important to be transparent when reporting on a task, mentioning how and to what extent GenAI-based tools have been used. Finally, it is imperative that students be aware that using GenAI-based tools may compromise their own privacy, as institutions can gain control over the data gathered while these tools are being used (De Stefano, 2019; Doellgast et al., 2023). These are just some of the ethics-related issues that may arise in the context of working with GenAI-based tools, which makes this an important skill (Al-kfairy et al., 2024; Hagendorff, 2024; Nguyen, 2025), and implementing it will overall make learners accountable for their use of these tools; more profoundly, ethics-related factors may influence students’ very use of these tools (Zhu et al., 2025). This skill requires a high sense of critical thinking (Partnership for 21st Century Skills, 2009), coping with complexity (Mioduser et al., 2008), and information literacy (Eshet-Alkalai, 2004).

Analyze

Here, we highlight the need to compare the outputs of different GenAI-based tools for a given task. The notion of comparison is of great importance in learning, stemming from literacy; Olson and Astington (1990) concluded that talking about text may be as important as the skills of reading and writing, and comparing texts was found to be powerful in vocabulary acquisition (Hasannejad et al., 2014). In our context, this skill is crucially important, as GenAI-based tools do not work in a deterministic manner, hence may produce unexpected results. Furthermore, different tools may work differently, even when given the same prompt, depending on the model they use and other features (e.g., ChatGPT’s Temperature parameter, which controls the “creativity”, or randomness, of the generated text). Therefore, analyzing different outputs, even from different runs of the same tool, will promote a better understanding of the use of GenAI-based tools and will enhance their use. This skill requires critical thinking to compare different outputs (Partnership for 21st Century Skills, 2009).
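As an illustration of this non-determinism, the following minimal sketch, assuming access to OpenAI’s Python client, re-runs the same prompt under different temperature settings; the model name, prompt, and temperature values are illustrative only, not part of the study:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# A hypothetical study-related prompt; any academic task would do
prompt = "Summarize the argument for task-centered AI literacy in two sentences."

# Re-running the same prompt at two temperature settings: higher values
# typically yield more varied ("creative") outputs across runs.
for temperature in (0.2, 1.0):
    for run in (1, 2):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
        )
        text = response.choices[0].message.content
        print(f"temperature={temperature}, run {run}: {text[:100]}...")
```

Placing the resulting outputs side by side, as this Analyze-related skill suggests, makes the stochastic nature of these tools immediately visible to students.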

Evaluate

The relevant skill here is to verify the accuracy of the output given by GenAI-based tools by cross-checking with other sources and prior knowledge. Generally, assessing information credibility is a demanding task, which is influenced by a wide range of factors (e.g., Henkel et al., 2023; Metzger, 2007; Pornpitakpan, 2004). Due to known hallucinations and biases of GenAI-based tools, it is crucial to fact-check them while comparing their output to non-GenAI-based sources. This may also help learners to identify areas in which these tools are beneficial. Indeed, credibility assessment of AI-generated content may be derived from different motivations and can be handled in various ways (Chakraborty & Biswal, 2024; Ou et al., 2024). For this skill to be effectively carried out, one should be information literate (Eshet-Alkalai, 2004), and should use critical thinking (Partnership for 21st Century Skills, 2009).

Create

The relevant skill here is to produce quality output for a given task using GenAI-based tools; importantly, completing a task successfully and achieving a high-quality, human-led product often requires the integration of various GenAI-based tools (Wei et al., 2025). This skill also has to do with self-regulated learning, which refers to the extent to which students intentionally and strategically adapt learning activities to achieve their learning goals (Panadero, 2017; Puustinen & Pulkkinen, 2001; Zimmerman, 2002), and which becomes ever more important in the context of technology-enhanced learning (Faza & Lestari, 2025; Prasse et al., 2024). Managing such a high-level process necessitates planning (Ruan et al., 2023), coping with its complexity in the face of the various options available and potential paths to be taken (Mioduser et al., 2008), using critical thinking along the way to examine the output, and being creative in the way GenAI-based tools are incorporated into completing the task (Partnership for 21st Century Skills, 2009).

Exploring GenAI Literacy Among University Students

In this section, we report on an empirical study whose goal was to explore the extent to which university students are GenAI literate according to the framework we developed. To meet this goal, we set up the following research questions:
1. To what extent do higher-education students use GenAI-based tools for academic purposes, and how is this use associated with demographic- and academic-related personal variables?
2. How is GenAI Literacy characterized among higher-education students, and how is it associated with demographic-, academic-, and experience-related personal variables?
3. To what extent and in which forms do higher-education students wish GenAI Literacy to be taught during their academic studies, and how are these wishes associated with demographic-, academic-, and experience-related personal variables?

Methodology

Research Field and Research Population

This study was conducted in a large, multidisciplinary research university in Israel, with about 30,000 undergraduate and graduate students and about 4,500 faculty members; it was approved by the institution’s Ethics Committee (0010309-1). Our sample included N=1667 students who filled out an online, anonymous survey. Of the participants, 695 were males (42%), 877 were females (53%), and 95 had missing values for gender (5%), with an average age of 31.6 (SD=11.7, N=1597). Participants came from faculties across the campus, with N=804 (48%) from the STEM disciplines and N=863 (52%) from the Humanities and Social Sciences. Over half of the participants were studying towards a Bachelor’s degree (N=889, 53%), thirty percent towards a Master’s degree (N=499), and fourteen percent towards a Ph.D. (N=230); the rest (N=49, 3%) were studying in other programs.

Research Variables

Independent variables included:
Gender
Age
Education Level [Bachelor’s, Master’s, Ph.D., Other]
Faculty – participants could choose from 10 values, based on the faculties that exist in the studied university. Later, we coded these into two broad categories: 1) STEM, including faculties of Engineering, Medical & Health Sciences, Exact Sciences, Life Sciences, and Neurosciences; 2) Humanities and Social Sciences, including also faculties of Arts, Law, and Management.
Dependent variables included:
GenAI Literacy [1-5] – based on the framework we developed (see next section)
GenAI Use – answer to the question “To what extent do you actually use GenAI tools for carrying out tasks related to your studies?” [5-point Likert scale, from 1 – “To a Very Low Extent” to 5 – “To a Very High Extent”]
Teaching GenAI – answer to the question “Do you think the use of GenAI tools should be officially taught at [UNIVERSITY NAME REDACTED]?” [Yes/No]
Teaching GenAI Approach – only for those who responded “Yes” under Teaching GenAI, answer to the question “In which courses should the use of GenAI tools be taught?” [In a Dedicated Course for That Topic, As Part of a Few Existing Courses, In Every Course]
Importance of Teaching GenAI – only for those who responded “Yes” under Teaching GenAI, answer to the question “Why is it important to teach the use of GenAI tools at the university?” (multiple choice) [So I Could Study at The University More Efficiently, To Prepare Me for the Job Market, So I Will Be Ready for a Technology-Saturated World, Without Regard to My Job]

Research Tool, Procedure, and Analysis

Data was collected using an online, anonymous self-report questionnaire (in Google Forms). The questionnaire included items for all variables, as described above, with a dedicated questionnaire to measure GenAI Literacy, as described below. Data collection took place during July-August 2024, towards the end of the Spring Semester and the beginning of the Summer Semester. An email with an invitation to participate was sent out via a mailing list of all students at the university. All statistical analyses were performed using JASP software, Version 0.19.3.

GenAI Literacy Assessment

The tool for measuring GenAI Literacy consisted of eight items, measuring the extent of agreement with the presented statements, each ranked on a 5-point Likert scale from 1 to 5 [“Disagree Strongly”, “Disagree”, “Neither Agree nor Disagree”, “Agree”, “Agree Strongly”]. The items were developed following our definition of GenAI Literacy, while going through rounds of discussions and refinements within the working group. Eventually, two categories were assigned two items each, and four categories – a single item each. This questionnaire is presented in Table 2.
As our GenAI Literacy framework is task-focused, we asked the participants to first choose a task to which they would refer. This was done with the following stimulus: “Choose one task with which you are required to deal during your studies. You can choose from the list or write down another task below. Refer to this task when ranking your agreement with the items below”; the given list was as follows: Studying New Material; Working on Home Assignments; Searching, Reading, or Summarizing Articles; Writing or Editing Texts; Summarizing Lectures; Studying Towards an Exam; Time Management.
To check the structure of the questionnaire, we ran both Principal Component Analysis and Exploratory Factor Analysis, both using Oblimin oblique rotation. Principal Component Analysis resulted in a single component (χ2=595, at p<0.001, df=20), with component loadings of 0.80 or above for all items. Exploratory Factor Analysis resulted in a single factor (χ2=484, at p<0.001, df=20), with factor loadings of 0.76 or above for all items. Testing for reliability, we used McDonald’s ω (Hayes & Coutts, 2020; Kalkbrenner, 2023), which yielded a very high value of 0.94 (N=1528). Therefore, we used the average of all items to compute a single GenAI Literacy variable.
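For readers who wish to replicate this step outside JASP, the following minimal sketch fits a single-factor solution and computes McDonald’s ω from its loadings; it assumes, hypothetically, that the eight item responses are stored in a CSV file with one column per item (the file name and column layout are ours, not the study’s):

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical file: one row per respondent, eight Likert-type item columns
items = pd.read_csv("genai_literacy_items.csv").dropna()

# Single-factor EFA; with one factor, rotation is moot
fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(items)
loadings = fa.loadings_.ravel()
uniquenesses = fa.get_uniquenesses()

# McDonald's omega (total) for a unidimensional model:
#   omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses)
omega = loadings.sum() ** 2 / (loadings.sum() ** 2 + uniquenesses.sum())
print(f"McDonald's omega = {omega:.2f}")

# Composite score, as in the study: the mean of the eight items
items["genai_literacy"] = items.mean(axis=1)
```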

Findings

Using GenAI-Based Tools and Demographics, Academic Profile (RQ1)

Overall, we found a medium average level of use of GenAI-based tools for academic purposes, M=2.9 (SD=1.4, N=1667). We tested its associations with demographic- and academic-related personal variables.

Demographics (Gender, Age)

Testing for gender differences, we found that males (M=3.0, SD=1.3, N=695) scored higher than females (M=2.8, SD=1.4, N=877), with t(1570)=3.1, at p<0.01, which denotes a small effect size of Cohen’s d=0.16.
Age was negligibly negatively associated with GenAI use, with r=-0.08, at p<0.01.
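A minimal sketch of how such group comparisons and correlations can be computed with standard tools (here SciPy rather than JASP, with randomly generated stand-ins for the actual survey responses):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Toy stand-ins for the survey responses, sized as in the sample
use_male = rng.integers(1, 6, size=695).astype(float)
use_female = rng.integers(1, 6, size=877).astype(float)
age = rng.normal(31.6, 11.7, size=1572)
use_all = np.concatenate([use_male, use_female])

def cohens_d(a, b):
    """Cohen's d for two independent samples, using the pooled SD."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Independent-samples t-test for gender differences in GenAI use
t, p = stats.ttest_ind(use_male, use_female)
df = len(use_male) + len(use_female) - 2
print(f"t({df})={t:.1f}, p={p:.3f}, d={cohens_d(use_male, use_female):.2f}")

# Pearson correlation between age and GenAI use
r, p = stats.pearsonr(age, use_all)
print(f"r={r:.2f}, p={p:.3f}")
```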

Academic Characteristics (Faculty, Education Level)

Testing for differences by academic discipline, we found that STEM students (M=3.0, SD=1.3, N=804) scored higher on GenAI use than Humanities and Social Sciences students (M=2.7, SD=1.4, N=863), with t(1665)=5.0, at p<0.001, which denotes a small effect size of Cohen’s d=0.24.
Using an ANOVA test, we found no association between GenAI use and Education Level, with F(3)=1.3, at p=0.29 (N=1667).

GenAI Literacy and Demographics, Academic Profile, GenAI Use (RQ2)

Overall, we found a medium-high average value of GenAI Literacy, M=3.3 (SD=1.1, N=1665). We tested its associations with demographic- and academic-related personal variables.

Demographics (Gender, Age)

Testing for gender differences, we found that males (M=3.5, SD=1.1, N=694) scored higher than females (M=3.1, SD=1.1, N=876), with t(1568)=6.8, at p<0.001, which denotes a small-medium effect size of Cohen’s d=0.35.
Age was weakly negatively associated with GenAI Literacy, with r=-0.15, at p<0.001.

Academic Characteristics (Faculty, Education Level)

Testing for differences by academic discipline, we found that STEM students (M=3.4, SD=1.1, N=804) scored higher on GenAI Literacy than Humanities and Social Science students (M=3.2, SD=1.2, N=861), with t(1663)=4.4, at p<0.001, which denotes a small effect size of Cohen’s d=0.22.
Using an ANOVA test, we found no association between GenAI Literacy and Education Level, with F(3)=2.5, at p=0.06 (N=1665). A post-hoc Tukey test revealed no significant differences between any pair of Education Levels. Omitting the “Other” Education Level, i.e., considering only Bachelor’s, Master’s, and Ph.D. academic programs, we found a significant association, with F(2)=3.4, at p<0.05 (N=1617); however, using a post-hoc Tukey test, we found no significant difference between pairs of Education Levels.
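The ANOVA-with-Tukey procedure can be sketched as follows, again with randomly generated stand-ins for the actual scores (group sizes follow the sample; the means and SDs are illustrative):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Toy stand-ins for GenAI Literacy scores by Education Level
groups = {
    "Bachelor": rng.normal(3.3, 1.1, size=889),
    "Master": rng.normal(3.3, 1.1, size=499),
    "PhD": rng.normal(3.4, 1.1, size=230),
}

# One-way ANOVA across the three degree programs
f, p = stats.f_oneway(*groups.values())
print(f"F={f:.1f}, p={p:.2f}")

# Tukey's HSD post-hoc test for pairwise differences between levels
scores = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(scores, labels, alpha=0.05))
```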

Experience in Using GenAI

Not surprisingly, GenAI Literacy was found to be strongly and positively correlated with GenAI Use, with r=0.65, at p<0.001 (N=1665).

Teaching GenAI Literacy (RQ3)

A vast majority of participants thought that the use of GenAI-based tools should be taught as part of their university studies (1370 of 1667, 82%).

Demographics (Gender, Age)

There was a significant difference based on gender, with higher proportions among females (749 of 877, 85%) than among males (544 of 695, 78%), with χ2=14.5, at p<0.001.
The average age of those who responded “Yes” was higher than that of those who responded “No” (M=32.4, SD=12.1, N=1312, compared with M=27.9, SD=8.9, N=285, respectively). This difference is significant, with t(1595)=6.0, at p<0.001, which denotes a small-medium effect size of Cohen’s d=0.39.

Academic Characteristics (Faculty, Education Level)

There was a significant difference based on academic discipline, with higher proportions among Humanities and Social Sciences (739 of 863, 86%) than among STEM disciplines (631 of 804, 78%), with χ2=14.5, at p<0.001.
There was a decreasing rate of “Yes” respondents when considering Education Level, from Ph.D. students (206 of 230, 90%) to Master’s students (431 of 499, 86%) to Bachelor’s students (697 of 889, 78%); finally, the response of “Yes” among participants who study in other programs was the lowest (36 of 49, 73%). This difference is significant, with χ2=25.8, at p<0.001.
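For illustration, the discipline comparison above can be reproduced directly from the reported counts with a chi-squared test of independence (a minimal sketch; small deviations from JASP’s output are possible due to rounding or missing-value handling):

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2x2 contingency table from the reported counts:
# rows = discipline (HSS, STEM), columns = "Yes" / "No" on teaching GenAI
table = np.array([
    [739, 863 - 739],  # Humanities and Social Sciences: 739 "Yes" of 863
    [631, 804 - 631],  # STEM: 631 "Yes" of 804
])
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2({dof})={chi2:.1f}, p={p:.4f}")  # approximately 14.5, p<0.001
```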

How and Why Should the Use of GenAI Be Taught?

Of those participants who stated that the use of GenAI should be taught as part of their course of study at the university, over forty percent thought it should be taught in a dedicated course (600 of 1367, 44%), and almost forty percent thought it should be taught across relevant courses (528 of 1367, 39%); the other 17% (239 of 1367) thought it should be taught in every course of their studies.
As for the reasons to teach the use of GenAI—again, only from those who stated that it should be taught in some form—81% (1112 of 1370) stated that it was important for getting around in today’s digitally saturated world, without relation to their job; 76% (1043 of 1370) stated that the reason was to help them better study at the university; and 58% (793 of 1370) stated that it was important for being prepared for the job market. Testing for associations with academic discipline, we found that better studying was chosen more commonly among STEM participants (498 of 631, 79%) than among Humanities and Social Sciences students (545 of 739, 74%), with χ2=5.0, at p<0.05. Preparedness for the job market was also chosen more commonly among STEM participants (413 of 631, 65%) than among Humanities and Social Sciences students (380 of 739, 51%), with χ2=27.5, at p<0.001. Preparedness for today’s digitally saturated world showed no difference, with χ2=1.5, at p=0.22.
Regarding age, we found that those who thought it was important for today’s digitally saturated world were younger than those who did not choose this response (M=32.0, SD=11.5, N=1062, compared with M=34.0, SD=14.2, N=250, respectively), with t=2.3, at p<0.05, which denotes a small effect size with Cohen’s d=0.16. Those who thought it was important for their studies were younger than those who did not choose this response (M=31.9, SD=11.8, N=998, compared with M=34.0, SD=12.8, N=314, respectively), with t=2.7, at p<0.01, which denotes a small effect size with Cohen’s d=0.18. Finally, those who thought it was important for the job market were younger than those who did not choose this response (M=29.9, SD=9.2, N=757, compared with M=35.8, SD=14.5, N=555, respectively), with t=9.0, at p<0.001, which denotes a medium effect size with Cohen’s d=0.50.
No differences were found based on Education Level, with χ2=2.3, at p=0.51, or gender, with χ2=0.06, at p=0.81.

Discussion

In this paper, we introduced the development of a task-centered generative artificial intelligence (GenAI) literacy framework for higher-education students. The development of this framework should be seen as part of the ongoing discussion within higher-education institutions around the world about the ways in which GenAI may change the landscape of academic studies. This important discourse usually involves a mix of concerns—mostly about students’ dishonesty and about the need to modify assessment—and of hopes that the new technology would finally make academia reconsider its societal and professional role and would help it shift towards preparing students for the current digitally saturated era (Luo (Jess), 2024; H. Wang et al., 2024; Yusuf et al., 2024).
Our attempt at framing GenAI literacy in the context of higher-education studies is also part of an already long line of research, despite the relatively short time of the current GenAI boom (e.g., Annapureddy et al., 2025; Bozkurt, 2024; Jin et al., 2024; O’Dea, Tsz Kit Ng, et al., 2024). Our unique approach helped us define a framework that is relevant and actionable across disciplines and for all students and instructors. By choosing a task-centric perspective that builds on Bloom’s Revised Taxonomy (Krathwohl, 2002), we made sure that our framework is relevant for instructors, both for designing pedagogical interventions that would support this literacy and for assessing it. Considering this, our exploration of GenAI literacy, using an assessment tool directly derived from our framework, helped us highlight some important issues regarding its association with demographic-, academic-, and experience-related variables, as we discuss below.

Equipping Students with Skills, Competencies, and Literacies for the Digital Age

Our framework sheds some important light on what is currently required from higher-education students, as seen by a cohort of campus-wide faculty and staff. It is task-focused, and refers to GenAI literacy as the ability to effectively and efficiently use GenAI-based tools to simplify, enrich, or improve learning-related tasks. Overall, we identify eight skills that are mapped to the six cognitive levels: get to know GenAI-based tools that can assist in performing a given task; stay updated on innovations in the world of GenAI (both related to Know); understand how to make the most of GenAI-based tools (Understand); formulate prompts that lead to the desired results; use GenAI-based tools ethically in the context of a given task (both related to Apply); compare the outputs of different GenAI-based tools for a given task (Analyze); verify the accuracy of the output given by GenAI-based tools by cross-checking with other sources and prior knowledge (Evaluate); and produce quality output for a given task using GenAI-based tools (Create).
Linking this framework with existing frameworks of skills, competencies, and literacies for the digital age (Eshet, 2012; Eshet-Alkalai, 2004; Mioduser et al., 2008; Partnership for 21st Century Skills, 2009) helps us emphasize its uniqueness. On the one hand, we observe that task-centered GenAI literacy is closely related to various facets of existing frameworks, hence the importance of referring to these foundational skills, including “soft skills”, in higher education institutions (Alvarado-Bravo et al., 2024; Mohammed & Ozdamli, 2024). On the other hand, the distinctive mix of such skills when particularly discussing students’ use of GenAI-based tools highlights that this technology—like many others—should be examined through the lens of its own characteristics; as such, the integration of a specific technology into education should carefully consider the affordances and challenges related to that technology and to its integration. Even in the seemingly narrow perspective of GenAI, the “How?” of the implementation has an impact on students’ performance, higher-order thinking, and perceptions (J. Wang & Fan, 2025). In a broader sense, we point to the constant need to examine what new technologies bring to education arenas, and what they require from students and educators.

GenAI Literacy Among Higher-Education Students

The use of GenAI for academic purposes was found to be overall medium, which echoes findings regarding higher-education students in the UK (Arowosegbe et al., 2024), and another recent UK study, according to which about half of the participating students had used or considered using GenAI-based tools for academic purposes (Johnston et al., 2024). Our data demonstrated weak negative correlations of age with both GenAI use and GenAI Literacy, and Arowosegbe et al.’s (2024) analysis of age-dependent perceptions of GenAI may shed light on this; their analysis revealed that young students (18-24 y/o) tend to be less positive towards GenAI than students in the 25-34 y/o age group, whereas older students (35-44 y/o) are more neutral than positive. Their study also revealed that students’ perception of GenAI was overall increasing from Bachelor’s to Master’s to Ph.D. students – an effect that we did not observe in our analyses regarding either GenAI use or GenAI Literacy. It is worth mentioning that in the Israeli context, where this study was conducted, university students tend to be older than in many other countries, mostly due to the post-high school mandatory military service of large parts of the population (OECD, 2014).
Based on our findings, female students had less experience with GenAI and reported lower GenAI Literacy than males, which is in line with a previous exploration (O’Dea et al., 2024). O’Dea et al.’s findings differ from ours regarding the association with academic discipline; while we found that STEM students reported higher GenAI Literacy levels, they found that Business and Education students expressed a higher level of comfort with AI compared to students from other disciplines, including computing and technology. Besides age-related issues, as mentioned above, this discrepancy may be attributed to cultural factors, as their analysis also showed differences between students from the UK and Hong Kong, with students from Hong Kong reporting higher levels of GenAI literacy than UK students. Overall, Hong Kong students reported higher GenAI literacy than we found in our study, and for them GenAI use was only weakly correlated with GenAI literacy (Chan & Hu, 2023).
A vast majority of students in our sample thought that they should be taught how to use GenAI-based tools as part of their academic studies. Together with the moderate experience and medium-high GenAI literacy reported, this finding has important implications for higher education: students expect institutions to adopt this new technology. This is in line with recent studies according to which higher-education students expect institutions to embrace GenAI as a means of making their learning more effective and of preparing them for the future, though not without addressing concerns about accuracy, privacy, ethics, and the impact on personal and societal aspects (Arowosegbe et al., 2024; Chan & Hu, 2023; Jochim & Lenz-Kesekamp, 2025; Johnston et al., 2024).

Teaching GenAI Literacy in Higher-Education

So, how should GenAI use be taught? In our sample, there was a tendency towards either a dedicated course or integrating GenAI into a few relevant courses, with a very high proportion of participants suggesting these approaches combined; the rate we found is similar to that found in a survey of students in Romania, where only about 20% stated that they were not interested in learning or using any AI models (Țala et al., 2024). It has been shown that a single course, or even a single workshop, may improve students’ GenAI literacy (Sullivan et al., 2024; Wood & Moss, 2024); however, integrating GenAI into existing curricula requires much consideration as to where, when, to what extent, and how to implement this integration (Cordero et al., 2024; Kelly et al., 2025; Kurtz et al., 2024; Riaz & Mushtaq, 2024). Indeed, recent literature reviews of the integration of GenAI in higher education show that modes of incorporation differ widely across implementations, benefiting students, instructors, or both, and facilitating different aspects of the educational experience, hence fostering a host of skills and competencies in learners (Belkina et al., 2025; Prather et al., 2025).
Interestingly, when thinking about the reasons to teach GenAI as part of their higher education, students in our sample valued most highly the purpose of being ready to live in a digitally saturated world, irrespective of their future job. Being assisted during academic studies was ranked second, and getting ready for the job market was ranked third, by a meaningful gap. This broad view may represent students’ understanding of the role new technologies play in shaping our future society, across contexts: they are aware of the broader societal implications of GenAI, of which learning and employability are only part of a bigger picture. Therefore, higher education stakeholders should keep in mind that it is not enough to teach students how to use GenAI-based tools for their current academic purposes or for their future jobs; rather, students should be taught how to use this technology responsibly and transparently, and shown how to discern when and where to utilize AI tools (Hashmi & Bal, 2024). This is in line with current views of the important societal roles of higher education, which go beyond learning and working (Alvarado-Bravo et al., 2024; Mohammed & Ozdamli, 2024).

Conclusions and Implications

In this paper, we presented the development and study of a task-centered generative artificial intelligence (GenAI) literacy framework for higher education students. We believe that the way we framed this literacy has a major strength: it is anchored in the literature, relevant to actual educational settings, and actionable for instructors. Nevertheless, we must note some limitations. First, our study took place in one institution within one country; hence, certain educational, technological, and cultural factors may have affected the resulting framework and the way it is manifested in our population. Even within this local context, our research sample may not be representative of the studied institution. Moreover, the structure of, and the dynamics within, the group of campus-wide academic and administrative staff that was gathered as a working group and that together defined the framework may have had an impact on the resulting framework. Therefore, we recommend replicating this working process in other settings, in order to enrich our understanding of how higher education staff refer to students’ literacy in the GenAI era.
Still, we believe that our work is significant and that it has a few important implications. At the theoretical level, our task-centered approach paves the way for studies that will explore associations between GenAI literacy and different tasks, across demographics, disciplines, and educational contexts. In a broader sense, this point of view may help us understand the integration of technology in education in a more nuanced manner. At the practical level, our framework can help higher education educators and policymakers define educational goals and design pedagogical interventions and assessments in the context of students’ use of GenAI-based tools. This may also inform relevant professional development that supports educators in these processes.

References

  1. Akdilek, S., Akdilek, I., & Punyanunt-Carter, N. M. (2024). The influence of generative AI on interpersonal communication dynamics (pp. 167–190). [CrossRef]
  2. Alaghbary, G. S. (2021). Integrating technology with Bloom’s revised taxonomy: Web 2.0-enabled learning designs for online learning. Asian EFL Journal, 28(1), 10–37.
  3. Al-kfairy, M., Mustafa, D., Kshetri, N., Insiew, M., & Alfandi, O. (2024). Ethical challenges and solutions of generative AI: An interdisciplinary perspective. Informatics, 11(3), 58. [CrossRef]
  4. Alvarado-Bravo, N., Aldana-Trejo, F., Duran-Herrera, V., Rasilla-Rovegno, J., Suarez-Bazalar, R., Torres-Quiroz, A., Paredes-Soria, A., Gonzales-Saldaña, S. H., Tomás-Quispe, G., & Olivares-Zegarra, S. (2024). Artificial intelligence as a tool for the development of soft skills: A bibliometric review in the context of higher education. International Journal of Learning, Teaching and Educational Research, 23(10), 379–394. [CrossRef]
  5. Annapureddy, R., Fornaroli, A., & Gatica-Perez, D. (2025). Generative AI literacy: Twelve defining competencies. Digital Government: Research and Practice, 6(1), 1–21. [CrossRef]
  6. Arowosegbe, A., Alqahtani, J. S., & Oyelade, T. (2024). Students’ perception of generative AI use for academic purpose in UK higher education. [CrossRef]
  7. Bagchi, S. N., & Sharma, R. (2014). Hierarchy in Bloom’s Taxonomy: An empirical case-based exploration using MBA students. Journal of Case Research, 5(2), 57–79.
  8. Barzilay, A. R. (2018). Discrimination without discriminating: Learned gender inequality in the labor market and gig economy. The Cornell Law School, 28(1), 545–568.
  9. Belkina, M., Daniel, S., Nikolic, S., Haque, R., Lyden, S., Neal, P., Grundy, S., & Hassan, G. M. (2025). Implementing generative AI (GenAI) in higher education: A systematic review of case studies. Computers and Education: Artificial Intelligence, 100407. [CrossRef]
  10. Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. David McKay Company.
  11. Bozkurt, A. (2024). Why generative AI literacy, why now and why it matters in the educational landscape? Kings, queens and GenAI dragons. Open Praxis, 16(3), 283–290. [CrossRef]
  12. Brennan, K., & Resnick, M. (2012). New frameworks for studying and assessing the development of computational thinking. 2012 Annual Meeting of the American Educational Research Association, 1–25.
  13. Bringman-Rodenbarger, L., & Hortsch, M. (2020). How students choose e-learning resources: The importance of ease, familiarity, and convenience. FASEB BioAdvances, 2(5), 286–295. [CrossRef]
  14. Bruehler, B. B. (2018). Traversing Bloom’s taxonomy in an introductory scripture course. Teaching Theology & Religion, 21(2), 92–109. [CrossRef]
  15. Byungura, J. C., Hansson, H., Muparasi, M., & Ruhinda, B. (2018). Familiarity with technology among first-year students in Rwandan tertiary education. Electronic Journal of e-Learning, 16(1), 30–45. Available online: www.ejel.org.
  16. Cain, W. (2024). Prompting change: Exploring prompt engineering in large language model AI and its potential to transform education. TechTrends, 68(1), 47–57. [CrossRef]
  17. Cassidy, E. D., Martinez, M., & Shen, L. (2012). Not in love, or not in the know? Graduate student and faculty use (and non-use) of e-books. The Journal of Academic Librarianship, 38(6), 326–332. [CrossRef]
  18. Cha, Y., Dai, Y., Lin, Z., Liu, A., & Lim, C. P. (2024). Empowering university educators to support generative AI-enabled learning: Proposing a competency framework. Procedia CIRP, 128, 256–261. [CrossRef]
  19. Chaka, C. (2020). Skills, competencies and literacies attributed to 4IR/Industry 4.0: Scoping review. IFLA Journal, 46(4), 369–399.
  20. Chakraborty, U., & Biswal, S. K. (2024). Is ChatGPT a responsible communication: A study on the credibility and adoption of conversational artificial intelligence. Journal of Promotion Management, 30(6), 929–958. [CrossRef]
  21. Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1), 43. [CrossRef]
  22. Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review. IEEE Access, 8, 75264–75278. [CrossRef]
  23. Chiu, T. K. F. (2024). Future research recommendations for transforming higher education with generative AI. Computers and Education: Artificial Intelligence, 6, 100197. [CrossRef]
  24. Choi, W., Bak, H., An, J., Zhang, Y., & Stvilia, B. (2024). College students’ credibility assessments of GenAI-generated information for academic tasks: An interview study. Journal of the Association for Information Science and Technology. [CrossRef]
  25. Churches, A. (2008). Bloom’s Digital Taxonomy. Available online: http://burtonslifelearning.pbworks.com/f/BloomDigitalTaxonomy2001.pdf.
  26. Cordero, J., Torres-Zambrano, J., & Cordero-Castillo, A. (2024). Integration of generative artificial intelligence in higher education: Best practices. Education Sciences, 15(1), 32. [CrossRef]
  27. Coşgun Ögeyik, M. (2022). Using Bloom’s digital taxonomy as a framework to evaluate webcast learning experience in the context of Covid-19 pandemic. Education and Information Technologies, 27(8), 11219–11235. [CrossRef]
  28. Creamer, E. (2023, November 15). ‘Hallucinate’ chosen as Cambridge dictionary’s word of the year. The Guardian. Available online: https://www.theguardian.com/books/2023/nov/15/hallucinate-cambridge-dictionary-word-of-the-year.
  29. Dahlstrom, E., Brooks, D. C., & Bichsel, J. (2014). The current ecosystem of learning management systems in higher education: Student, faculty, and IT perspectives.
  30. Dai, Y., Xiao, J.-Y., Huang, Y., Zhai, X., Wai, F.-C., & Zhang, M. (2025). How generative AI enables an online project-based learning platform: An applied study of learning behavior analysis in undergraduate students. Applied Sciences, 15(5), 2369. [CrossRef]
  31. Dake, D. M. (1993). Visual thinking skills for the digital age. Visual Literacy in the Digital Age: Selected Readings from the Annual Conference of the International Visual Literacy Association.
  32. De Stefano, V. (2019). “Negotiating the algorithm”: Automation, artificial intelligence, and labor protection. Comparative Labor Law & Policy Journal, 41(1), 15–46.
  33. Doellgast, V., Wagner, I., & O’Brady, S. (2023). Negotiating limits on algorithmic management in digitalised services: Cases from Germany and Norway. Transfer: European Review of Labour and Research, 29(1), 105–120. [CrossRef]
  34. Doroudi, S. (2023). The intertwined histories of artificial intelligence and education. International Journal of Artificial Intelligence in Education, 33(4), 885–928. [CrossRef]
  35. Eshet, Y. (2012). Thinking in the digital era: A revised model for digital literacy. Issues in Informing Science and Information Technology, 9.
  36. Eshet-Alkalai, Y. (2004). Digital literacy: A conceptual framework for survival skills in the digital era. Journal of Educational Multimedia and Hypermedia, 13(1), 93–106.
  37. Essa, A., & Lataifeh, M. (2024). Evaluating generative AI tools for visual asset creation - An educational approach (pp. 269–282). [CrossRef]
  38. Faraon, M., Granlund, V., & Rönkkö, K. (2023). Artificial intelligence practices in higher education using Bloom’s digital taxonomy. 2023 5th International Workshop on Artificial Intelligence and Education (WAIE), 53–59. [CrossRef]
  39. Faza, A., & Lestari, I. A. (2025). Self-regulated learning in the digital age: A systematic review of strategies, technologies, benefits, and challenges. The International Review of Research in Open and Distributed Learning, 26(2), 23–58. [CrossRef]
  40. Federiakin, D., Molerov, D., Zlatkin-Troitschanskaia, O., & Maur, A. (2024). Prompt engineering as a new 21st century skill. Frontiers in Education, 9. [CrossRef]
  41. Gans, J. S. (2024). How will generative AI impact communication? Economics Letters, 242, 111872. [CrossRef]
  42. Habib, S., Vogel, T., Anli, X., & Thorne, E. (2024). How does generative artificial intelligence impact student creativity? Journal of Creativity, 34(1), 100072. [CrossRef]
  43. Haenlein, M., & Kaplan, A. (2019). A brief history of artificial intelligence: On the past, present, and future of artificial intelligence. California Management Review, 61(4), 5–14.
  44. Hagendorff, T. (2024). Mapping the ethics of generative AI: A comprehensive scoping review. Minds and Machines, 34(4), 39. [CrossRef]
  45. Han, A., & Cai, Z. (2023). Design implications of generative AI systems for visual storytelling for young learners. Proceedings of the 22nd Annual ACM Interaction Design and Children Conference, 470–474. [CrossRef]
  46. Hasannejad, M. R., Bahador, H., & Kazemi, S. A. (2014). Powerful vocabulary acquisition through texts comparison. International Journal of Applied Linguistics & English Literature, 4(2), 213–220. [CrossRef]
  47. Hashmi, N., & Bal, A. S. (2024). Generative AI in higher education and beyond. Business Horizons, 67(5), 607–614. [CrossRef]
  48. Hayes, A. F., & Coutts, J. J. (2020). Use Omega rather than Cronbach’s Alpha for estimating reliability. But… Communication Methods and Measures, 14(1), 1–24. [CrossRef]
  49. Heigl, R. (2025). Generative artificial intelligence in creative contexts: A systematic review and future research agenda. Management Review Quarterly. [CrossRef]
  50. Henkel, M., Jacob, A., & Perrey, L. (2023). What shapes our trust in scientific information? A review of factors influencing perceived scientificness and credibility. 8th European Conference on Information Literacy, 107–118. [CrossRef]
  51. Henriksen, D., Creely, E., Gruber, N., & Leahy, S. (2025). Social-emotional learning and generative AI: A critical literature review and framework for teacher education. Journal of Teacher Education. [CrossRef]
  52. Hersh, W. (2024). Search still matters: Information retrieval in the era of generative AI. Journal of the American Medical Informatics Association, 31(9), 2159–2161. [CrossRef]
  53. Italie, L. (2023, November 27). What’s Merriam-Webster’s word of the year for 2023? Hint: Be true to yourself. AP News. Available online: https://apnews.com/article/merriam-webster-word-of-year-2023-a9fea610cb32ed913bc15533acab71cc.
  54. Jaffe, S. H., & Nachmias, R. (2011). Personal information management and learning. International Journal of Technology Enhanced Learning, 3(6), 570. [CrossRef]
  55. Jin, Y., Martinez-Maldonado, R., Gašević, D., & Yan, L. (2024). GLAT: The generative AI literacy assessment test. arXiv.
  56. Jochim, J., & Lenz-Kesekamp, V. K. (2025). Teaching and testing in the era of text-generative AI: Exploring the needs of students and teachers. Information and Learning Sciences, 126(1/2), 149–169. [CrossRef]
  57. Johnston, H., Wells, R. F., Shanks, E. M., Boey, T., & Parsons, B. N. (2024). Student perspectives on the use of generative artificial intelligence technologies in higher education. International Journal for Educational Integrity, 20(1), 2. [CrossRef]
  58. Kalkbrenner, M. T. (2023). Alpha, Omega, and H internal consistency reliability estimates: Reviewing these options and when to use them. Counseling Outcome Research and Evaluation, 14(1), 77–88. [CrossRef]
  59. Kelly, A., Sullivan, M., & Strampel, K. (2025). Teaching generative AI in higher education: Strategies, implications, and reflective practices. In J. R. Corbeil & M. E. Corbeil (Eds.), Teaching and learning in the age of generative AI: Evidence-based approaches to pedagogy, ethics, and beyond (pp. 213–234). Routledge. [CrossRef]
  60. Kirsch, I. S., Jungeblut, A., Jenkins, L., & Kolstad, A. (1993). Adult literacy in America.
  61. Krathwohl, D. R. (2002). A revision of Bloom’s taxonomy: An overview. Theory Into Practice, 41(4), 212–218. [CrossRef]
  62. Kunen, S., Cohen, R., & Solman, R. (1981). A levels-of-processing analysis of Bloom’s Taxonomy. Journal of Educational Psychology, 73(2), 202–211.
  63. Kurtz, G., Amzalag, M., Shaked, N., Zaguri, Y., Kohen-Vacs, D., Gal, E., Zailer, G., & Barak-Medina, E. (2024). Strategies for integrating generative AI into higher education: Navigating challenges and leveraging opportunities. Education Sciences, 14(5), 503. [CrossRef]
  64. Lalwani, A., & Agrawal, S. (2018). Validating revised Bloom’s taxonomy using deep knowledge tracing. Artificial Intelligence in Education, 225–238. [CrossRef]
  65. Lambert, B., Plank, R. E., Reid, D. A., & Fleming, D. (2014). A Competency Model for Entry Level Business-to-Business Services Salespeople. Services Marketing Quarterly, 35(1), 84–103. [CrossRef]
  66. Lo, S. M., Larsen, V. M., & Yee, A. T. (2016). A two-dimensional and non-hierarchical framework of Bloom’s taxonomy for biology. The FASEB Journal, 30(S1). [CrossRef]
  67. Lobel, O. (2022). The equality machine: Harnessing digital technology for a brighter, more inclusive future. PublicAffairs.
  68. Luo (Jess), J. (2024). A critical review of GenAI policies in higher education assessment: A call to reconsider the “originality” of students’ work. Assessment & Evaluation in Higher Education, 49(5), 651–664. [CrossRef]
  69. Macquarie Dictionary. (2022, October 31). Announcing the Macquarie Dictionary Word of the Year 2022. Macquarie Dictionary. Available online: https://www.macquariedictionary.com.au/the-word-of-the-year-2022-is/.
  70. Marin-Zapata, S. I., Román-Calderón, J. P., Robledo-Ardila, C., & Jaramillo-Serna, M. A. (2022). Soft skills, do we know what we are talking about? Review of Managerial Science, 16(4), 969–1000. [CrossRef]
  71. Martínez-Bravo, M. C., Sádaba Chalezquer, C., & Serrano-Puche, J. (2022). Dimensions of digital literacy in the 21st century competency frameworks. Sustainability, 14(3), 1867. [CrossRef]
  72. Metzger, M. J. (2007). Making sense of credibility on the web: Models for evaluating online information and recommendations for future research. Journal of the American Society for Information Science and Technology, 58(13), 2078–2091. [CrossRef]
  73. Mioduser, D., Nachmias, R., & Forkosh-Baruch, A. (2008). New literacies for the knowledge society. In J. Voogt & G. Knezek (Eds.), International Handbook of Information Technology in Primary and Secondary Education (pp. 23–42). Springer. [CrossRef]
  74. Mohammed, F. S., & Ozdamli, F. (2024). A systematic literature review of soft skills in information technology education. Behavioral Sciences, 14(10), 894. [CrossRef]
  75. Mullis, I. V. S., Martin, M. O., Foy, P., & Hooper, M. (2016). TIMSS 2015 International Results in Mathematics.
  76. Nguyen, K. V. (2025). The use of generative AI tools in higher education: Ethical and pedagogical principles. Journal of Academic Ethics. [CrossRef]
  77. O’Dea, X., Ng, D. T. K., O’Dea, M., & Shkuratskyy, V. (2024). Factors affecting university students’ generative AI literacy: Evidence and evaluation in the UK and Hong Kong contexts. Policy Futures in Education. [CrossRef]
  78. OECD. (2014). At what age do university students earn their first degree?
  79. Olson, D. R., & Astington, J. W. (1990). Talking about text: How literacy contributes to thought. Journal of Pragmatics, 14(5), 705–721. [CrossRef]
  80. Oppenlaender, J., Linder, R., & Silvennoinen, J. (2024). Prompting AI art: An investigation into the creative skill of prompt engineering. International Journal of Human–Computer Interaction, 1–23. [CrossRef]
  81. Ortega-Ochoa, E., Sabaté, J.-M., Arguedas, M., Conesa, J., Daradoumis, T., & Caballé, S. (2024). Exploring the utilization and deficiencies of generative artificial intelligence in students’ cognitive and emotional needs: A systematic mini-review. Frontiers in Artificial Intelligence, 7. [CrossRef]
  82. Ou, M., Zheng, H., Zeng, Y., & Hansen, P. (2024). Trust it or not: Understanding users’ motivations and strategies for assessing the credibility of AI-generated information. New Media & Society. [CrossRef]
  83. Panadero, E. (2017). A review of self-regulated learning: Six models and four directions for research. Frontiers in Psychology, 8. [CrossRef]
  84. Partnership for 21st Century Skills. (2009). P21 framework definitions. Available online: http://www.p21.org/our-work/p21-framework.
  85. Pornpitakpan, C. (2004). The persuasiveness of source credibility: A critical review of five decades’ evidence. Journal of Applied Social Psychology, 34(2), 243–281. [CrossRef]
  86. Prasse, D., Webb, M., Deschênes, M., Parent, S., Aeschlimann, F., Goda, Y., Yamada, M., & Raynault, A. (2024). Challenges in promoting self-regulated learning in technology supported learning environments: An umbrella review of systematic reviews and meta-analyses. Technology, Knowledge and Learning, 29(4), 1809–1830. [CrossRef]
  87. Prather, J., Leinonen, J., Kiesler, N., Gorson Benario, J., Lau, S., MacNeil, S., Norouzi, N., Opel, S., Pettit, V., Porter, L., Reeves, B. N., Savelka, J., Smith, D. H., Strickroth, S., & Zingaro, D. (2025). Beyond the hype: A comprehensive review of current trends in generative AI research, teaching practices, and tools. 2024 Working Group Reports on Innovation and Technology in Computer Science Education, 300–338. [CrossRef]
  88. Premkumar, P. P., Yatigammana, M. R. K. N., & Kannangara, S. (2024). Impact of generative AI on critical thinking skills in undergraduates: A systematic review. Journal of Desk Research Review and Analysis, 2(1), 199–215. [CrossRef]
  89. Prensky, M. (2007). How to teach with technology: Keeping both teachers and students comfortable in an era of exponential change. Emerging Technologies for Learning, 2(4), 40–46.
  90. Puustinen, M., & Pulkkinen, L. (2001). Models of self-regulated learning: A review. Scandinavian Journal of Educational Research, 45(3), 269–286. [CrossRef]
  91. Rafner, J., Beaty, R. E., Kaufman, J. C., Lubart, T., & Sherson, J. (2023). Creativity in the age of generative AI. Nature Human Behaviour, 7(11), 1836–1838. [CrossRef]
  92. Riaz, S., & Mushtaq, A. (2024). Optimizing generative AI integration in higher education: A framework for enhanced student engagement and learning outcomes. 2024 Advances in Science and Engineering Technology International Conferences (ASET), 1–6. [CrossRef]
  93. Ruan, J., Chen, Y., Zhang, B., Xu, Z., Bao, T., Du, G., Shi, S., Mao, H., Li, Z., Zeng, X., & Zhao, R. (2023). TPTU: Large language model-based AI agents for task planning and tool usage. NeurIPS 2023 Foundation Models for Decision Making Workshop.
  94. Safari, N., Techatassanasoontorn, A. A., & Diaz Andrade, A. (2024). Auto-pilot, co-pilot and pilot: Human and generative AI configurations in software development. International Conference on Information Systems.
  95. Sardi, J., Darmansyah, Candra, O., Yuliana, D. F., Habibullah, Yanto, D. T. P., & Eliza, F. (2025). How generative AI influences students’ self-regulated learning and critical thinking skills? A systematic review. International Journal of Engineering Pedagogy (IJEP), 15(1), 94–108. [CrossRef]
  96. Sattelmaier, L., & Pawlowski, J. M. (2023). Towards a generative artificial intelligence competence framework for schools. In M. D. Sulistiyo & R. A. Nugraha (Eds.), Proceedings of the International Conference on Enterprise and Industrial Systems (ICOEINS 2023), Advances in Economics, Business and Management Research 270 (pp. 291–307). [CrossRef]
  97. Seddon, G. M. (1978). The properties of Bloom’s Taxonomy of educational objectives for the cognitive domain. Review of Educational Research, 48(2), 303–323.
  98. Smolansky, A., Cram, A., Raduescu, C., Zeivots, S., Huber, E., & Kizilcec, R. F. (2023). Educator and student perspectives on the impact of generative AI on assessments in higher education. Proceedings of the Tenth ACM Conference on Learning @ Scale, 378–382. [CrossRef]
  99. Strzelecki, A. (2024). To use or not to use ChatGPT in higher education? A study of students’ acceptance and use of technology. Interactive Learning Environments, 32(9), 5142–5155. [CrossRef]
  100. Sullivan, M., McAuley, M., Degiorgio, D., & McLaughlan, P. (2024). Improving students’ generative AI literacy: A single workshop can improve confidence and understanding. Journal of Applied Learning & Teaching, 7(2). [CrossRef]
  101. Szymkowiak, A., Melović, B., Dabić, M., Jeganathan, K., & Kundi, G. S. (2021). Information technology and Gen Z: The role of teachers, the internet, and technology in the education of young people. Technology in Society, 65, 101565. [CrossRef]
  102. Țala, M. L., Muller, C. N., Nastase, I. A., State, O., & Gheorghe, G. (2024). Exploring university students’ perceptions of generative artificial intelligence in education. Amfiteatru Economic, 26(65), 71. [CrossRef]
  103. Thornhill-Miller, B., Camarda, A., Mercier, M., Burkhardt, J.-M., Morisseau, T., Bourgeois-Bougrine, S., Vinchon, F., El Hayek, S., Augereau-Landais, M., Mourey, F., Feybesse, C., Sundquist, D., & Lubart, T. (2023). Creativity, critical thinking, communication, and collaboration: Assessment, certification, and promotion of 21st century skills for the future of work and education. Journal of Intelligence, 11(3), 54. [CrossRef]
  104. Tu, J., Hadan, H., Wang, D. M., Sgandurra, S. A., Mogavi, R. H., & Nacke, L. E. (2024). Augmenting the author: Exploring the potential of AI collaboration in academic writing. GenAICHI: CHI 2024 Workshop on Generative AI and HCI.
  105. Wang, B., Rau, P.-L. P., & Yuan, T. (2023). Measuring user competence in using artificial intelligence: Validity and reliability of artificial intelligence literacy scale. Behaviour & Information Technology, 42(9), 1324–1337. [CrossRef]
  106. Wang, H., Dang, A., Wu, Z., & Mac, S. (2024). Generative AI in higher education: Seeing ChatGPT through universities’ policies, resources, and guidelines. Computers and Education: Artificial Intelligence, 7, 100326. [CrossRef]
  107. Wang, J., & Fan, W. (2025). The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: Insights from a meta-analysis. Humanities and Social Sciences Communications, 12(1), 621. [CrossRef]
  108. Wang, P. (2019). On defining artificial intelligence. Journal of Artificial General Intelligence, 10(2), 1–37. [CrossRef]
  109. Wei, X., Wang, L., Lee, L.-K., & Liu, R. (2025). The effects of generative AI on collaborative problem-solving and team creativity performance in digital story creation: An experimental study. International Journal of Educational Technology in Higher Education, 22(1), 23. [CrossRef]
  110. Wood, D., & Moss, S. H. (2024). Evaluating the impact of students’ generative AI use in educational contexts. Journal of Research in Innovative Teaching & Learning, 17(2), 152–167. [CrossRef]
  111. Yusuf, A., Pervin, N., & Román-González, M. (2024). Generative AI and the future of higher education: A threat to academic integrity or reformation? Evidence from multicultural perspectives. International Journal of Educational Technology in Higher Education, 21(1), 21. [CrossRef]
  112. Zhu, W., Huang, L., Zhou, X., Li, X., Shi, G., Ying, J., & Wang, C. (2025). Could AI ethical anxiety, perceived ethical risks and ethical awareness about AI influence university students’ use of generative AI products? An ethical perspective. International Journal of Human–Computer Interaction, 41(1), 742–764. [CrossRef]
  113. Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory Into Practice, 41(2), 64–70. [CrossRef]
Table 2. GenAI Literacy Questionnaire – the anchor reads “To what extent do you agree with each of the following statements? Every time a ‘task’ is mentioned – refer to the task you chose above.”
Item # | Category | Item
1 | Know | I am familiar with GenAI tools that can help me accomplish this task
2 | Know | I am up-to-date on new GenAI tools that can help me with this task
3 | Understand | I understand how to get the most out of GenAI tools to complete this task
4 | Apply | I know how to write prompts that will give me the desired results for carrying out this task
5 | Apply | I know how to make ethical use of GenAI tools for the purpose of carrying out this task
6 | Analyze | I know how to compare the outputs of different GenAI tools when performing this task
7 | Evaluate | I know how to assess the correctness of the output of GenAI tools when performing this task, referring to the limitations of AI, to other sources, and to my previous knowledge
8 | Create | I know how to produce an optimal outcome for this task using a variety of GenAI tools
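For readers who wish to reuse the questionnaire, the following minimal Python sketch illustrates one plausible way to aggregate responses. It assumes, for illustration only, a 1–5 Likert-type response scale and an unweighted mean across items; the function name and the scale are our assumptions, not a prescription of the study’s measurement model:

```python
from statistics import mean

# Categories of the eight questionnaire items, in the order of Table 2.
ITEM_CATEGORIES = [
    "Know", "Know", "Understand", "Apply",
    "Apply", "Analyze", "Evaluate", "Create",
]

def score_responses(responses: list[int]) -> dict:
    """Return per-category means and an overall mean for one respondent.

    Assumes (for illustration) eight responses on a 1-5 Likert scale.
    """
    if len(responses) != 8 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("Expected eight responses on a 1-5 scale")
    per_category: dict[str, list[int]] = {}
    for category, value in zip(ITEM_CATEGORIES, responses):
        per_category.setdefault(category, []).append(value)
    return {
        "overall": mean(responses),
        "by_category": {c: mean(v) for c, v in per_category.items()},
    }

# Example: a respondent confident on lower-order items but less so
# on the Evaluate and Create items.
print(score_responses([5, 4, 4, 4, 5, 3, 2, 2]))
```

In practice, researchers would also estimate the internal consistency of such a scale before aggregating items, for example using McDonald’s Omega rather than Cronbach’s Alpha (Hayes & Coutts, 2020).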