Preprint Article (this version is not peer-reviewed)

AI Literacy Framework (ALiF): A Comprehensive Approach to Developing AI Competencies in Educational and Healthcare Settings

Submitted: 17 March 2025 | Posted: 18 March 2025


Abstract
Introduction: Artificial intelligence is rapidly transforming professional environments, yet structured approaches to developing AI literacy across diverse stakeholder groups remain limited. This paper introduces the AI Literacy Framework (ALiF), addressing the critical gap between AI adoption and competency development in educational and healthcare settings. Methods: The development of ALiF involved a comprehensive literature review, analysis of empirical studies, and synthesis of existing frameworks. The methodology identified recurring competency patterns, progression pathways, and role-specific needs across educational and healthcare contexts. Results: The framework comprises five core components: Technical Understanding, Critical Evaluation, Practical Application, Ethical Considerations, and Data Literacy. These components are structured across three progression levels (Foundation, Intermediate, Advanced) and adapted into five role-specific frameworks for learners, educators, researchers, clinicians, and administrators. A multi-modal assessment approach was developed, incorporating self-assessment, practical tasks, portfolio evaluation, and peer review. Discussion: ALiF addresses the limitations of existing frameworks by integrating technical skills with ethical awareness, establishing clear progression pathways, and providing role-specific adaptations. The framework offers a systematic approach to AI literacy development tailored to institutional resources and priorities, supporting effective, ethical, and innovative use of AI across professional contexts.
Subject: Social Sciences - Education

Introduction

Artificial intelligence is rapidly transforming professional environments across various sectors, significantly impacting education and healthcare. Generative AI tools have created unprecedented opportunities and complex challenges, prompting institutions to adapt swiftly. Recent studies indicate that while AI adoption is accelerating, structured programs to enhance AI literacy remain scarce. Although 76% of higher education institutions have adopted AI technologies, only 24% offer formal initiatives to equip stakeholders with essential skills [1]. This gap is further highlighted in the EDUCAUSE Horizon Report [7], which identifies AI literacy as a critical area requiring institutional attention and strategic development.
This gap between adoption and literacy presents significant challenges related to governance, teaching methods, institutional readiness, and ethical implementation. Current approaches to AI literacy often have several limitations, including stakeholder exclusivity, lack of progression pathways, limited validation, and insufficient integration of technical skills with critical thinking and ethical awareness. Early systematic reviews of AI in higher education found that educators were often missing from AI implementation processes [8], contributing to the disconnect between technical capabilities and pedagogical integration.
Many existing frameworks focus on specific groups without considering the interconnected needs of all institutional roles [3]. Moreover, only 12% of current frameworks provide clear pathways for developing skills from foundational literacy to advanced competencies [5]. There is also a lack of empirically validated strategies, with only 8% of frameworks based on robust methodologies [4].
As conceptualized by Long and Magerko [2], AI literacy encompasses both the technical competencies and the critical thinking skills needed to interact with AI systems effectively and meaningfully. The AI Literacy Framework (ALiF) addresses these challenges by offering a comprehensive, structured approach to developing AI literacy. The framework establishes clear progression paths from basic awareness to leadership capability, supports various professional contexts through role-specific frameworks, integrates technical skills with critical thinking and ethical awareness, emphasizes practical application in specific professional domains, and creates a universal language for discussing and developing AI competencies.

Methods

Framework Development Process

The development of ALiF involved a thorough literature review, analysis of peer-reviewed publications, and synthesis of expert perspectives. This development followed a structured protocol established specifically for building AI literacy frameworks in educational contexts [6]. The literature review included publications on AI literacy, competency frameworks, educational approaches, and implementation strategies across various disciplines. To ensure complete coverage, systematic searches were performed across major academic databases, such as Web of Science, Scopus, ERIC, and IEEE Xplore.
The literature review identified key competency areas, recurring themes, progression patterns, and role-specific needs in educational and healthcare settings. The methodology included content analysis of existing frameworks, thematic analysis of implementation case studies, and comparative analysis of assessment approaches. Specific attention was given to frameworks and models that demonstrated practical application in these settings.
The synthesis process involved mapping recurring competency patterns across the literature, identifying gaps in existing frameworks, and developing an integrated structure that addressed these limitations. This approach aligns with Kumar and Sharma’s [9] comprehensive framework for AI literacy in diverse higher education contexts, which emphasizes the importance of contextual adaptation. From this analysis, five core components—Data Literacy, Technical Understanding, Critical Evaluation, Ethical Considerations, and Practical Application—emerged as consistent themes across diverse sources. The progression levels—Foundation, Intermediate, and Advanced—were established based on patterns of competency development observed in existing frameworks and educational models.
Role-specific frameworks were created through a targeted analysis of literature focusing on distinct stakeholder groups. This process entailed identifying unique competency requirements, contextual challenges, and application scenarios for each role and tailoring the core framework components to these contexts while preserving structural consistency.

Conceptual Framework for Assessment

The assessment approach for ALiF was conceived based on the educational assessment principles and practices outlined in the literature. Drawing on established pedagogical models for technology competency development [11], the assessment design incorporates progressive skill demonstration aligned with educational outcomes. The development process involved reviewing competency-based assessment models, analyzing existing AI literacy measurement instruments, and synthesizing best practices in professional development evaluation.
Multiple assessment methods addressed various facets of AI literacy and accommodated different institutional contexts. The concept of a self-assessment questionnaire was crafted as a practical tool for baseline assessment and progress monitoring, with items aligned to specific competencies within the framework to ensure thorough coverage. The questionnaire design incorporated principles of competency-based assessment, including clear performance indicators and increasing complexity across levels.
The practical assessment task approach was included to address the application aspect of AI literacy, recognizing the limitations of knowledge-based assessments alone. The portfolio assessment component was created to capture growth over time and provide evidence of real-world application. Peer and expert review elements were incorporated based on successful practices identified in professional development literature.

Results

Framework Structure

The ALiF framework comprises five essential components that work together to develop comprehensive AI literacy: Data Literacy (DAT), Technical Understanding (TEC), Critical Evaluation (EVA), Ethical Considerations (ETH), and Practical Application (PRA).
Figure 1 illustrates the structure of the ALiF framework. The pentagon shape represents the five core components: Data Literacy (DAT), Technical Understanding (TEC), Critical Evaluation (EVA), Ethical Considerations (ETH), and Practical Application (PRA). The concentric layers indicate the three progression levels: Foundation (L1) at the center, Intermediate (L2) in the middle, and Advanced (L3) as the outer layer. This visualization highlights how all components are interconnected and build upon each other through the progression levels.
Data literacy involves understanding data as the foundation for AI systems. This includes evaluating data quality, biases, and limitations; managing data collection, processing, and storage; analyzing and interpreting data for decision-making; and communicating data insights through effective visualization.
Technical understanding includes the knowledge and skills necessary to comprehend AI systems and their capabilities. This includes recognizing how AI tools function, identifying their appropriate applications and limitations, discovering effective ways to engage with AI systems, and learning how to incorporate AI tools into professional workflows.
Critical evaluation focuses on systematically assessing AI outputs, processes, and impacts. This includes verifying the accuracy of AI-generated content, evaluating the effectiveness of AI tools for specific purposes, understanding potential biases and limitations, and measuring the impact of AI use in professional settings.
Ethical considerations focus on AI’s responsible and principled use in professional contexts. This includes understanding ethical guidelines and institutional policies, managing potential risks and challenges, developing ethical approaches to AI implementation, and leading efforts to establish ethical AI practices.
Practical Application focuses on the effective implementation of AI tools and processes in real-world contexts, including the application of AI tools to domain-specific tasks, the development of efficient AI-enhanced workflows, problem-solving with AI capabilities, and the creation of innovative approaches to professional practice challenges.
Table 1 offers a thorough description of the five core components of the ALiF framework. The table outlines the scope of each component, identifies key elements, and includes examples of specific competencies. This comprehensive overview clarifies how each component contributes to overall AI literacy and the specific skills developed within each area.
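For institutions that want to operationalize the framework in software, for example in a competency-tracking or learning-management system, the components and levels map naturally onto simple data structures. The following Python sketch is an illustrative encoding only; the class and field names are our own assumptions, not part of the published framework.

```python
from dataclasses import dataclass
from enum import Enum, IntEnum

class Component(Enum):
    """The five core ALiF components, keyed by their abbreviations."""
    DATA_LITERACY = "DAT"
    TECHNICAL_UNDERSTANDING = "TEC"
    CRITICAL_EVALUATION = "EVA"
    ETHICAL_CONSIDERATIONS = "ETH"
    PRACTICAL_APPLICATION = "PRA"

class Level(IntEnum):
    """The three progression levels, ordered so placements can be compared."""
    FOUNDATION = 1    # L1
    INTERMEDIATE = 2  # L2
    ADVANCED = 3      # L3

@dataclass(frozen=True)
class Competency:
    """One cell of the progression matrix (Table 2): a component at a level."""
    component: Component
    level: Level
    descriptor: str

# Example: a Foundation-level Critical Evaluation competency.
verify_content = Competency(
    Component.CRITICAL_EVALUATION,
    Level.FOUNDATION,
    "Fundamental verification of AI-generated content",
)
```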

Progression Levels

The framework offers three distinct levels of development, each building on prior learning: Foundation Level (L1), Intermediate Level (L2), and Advanced Level (L3).
  • The Foundation Level acts as the entry point, emphasizing essential competencies such as a basic understanding of AI concepts and tools, fundamental verification and evaluation skills, essential practical applications within a professional context, and core ethical awareness and compliance.
  • The Intermediate Level builds on the foundation with improved capabilities, including a more advanced understanding and integration of tools, systematic evaluation methods, complex implementation strategies, and developed ethical frameworks and compliance systems.
  • The Advanced Level fosters leadership and innovation capabilities, including the strategic implementation of AI systems, the development of evaluation and assessment frameworks, innovation leadership in the professional realm, and the creation of ethical guidance and policy.
Figure 2 depicts the progression pathways across all five components of the ALiF framework. For each component (Data Literacy, Technical Understanding, Critical Evaluation, Ethical Considerations, and Practical Application), the diagram illustrates how competencies evolve from the Foundation Level (L1) through the Intermediate Level (L2) to the Advanced Level (L3). This visualization highlights the sequential development of skills and knowledge, demonstrating how each level builds upon and enhances the capabilities of the previous one.
Table 2 displays a matrix illustrating how competencies evolve across the three progression levels for each of the five framework components. The matrix shows that knowledge, skills, and capabilities build on one another, progressing from basic understanding and application at the Foundation level to leadership and innovation at the Advanced level. This organized progression offers clear pathways for development in all areas of AI literacy.

Role-Specific Frameworks

Based on the literature review and analysis of case studies, five tailored frameworks were developed for learners, educators, researchers, clinicians, and administrators. These frameworks adapt the core components to role-specific contexts while maintaining a consistent structure.
The Learner framework aims to develop AI literacy skills in educational contexts, spanning from K-12 through higher education and lifelong learning. Key competencies include explaining fundamental AI principles, identifying common educational AI tools, using appropriate tools for academic tasks, implementing AI-assisted workflows, verifying AI-generated content against reliable sources, applying quality control measures, analyzing AI’s impact on learning outcomes, utilizing AI tools for academic assignments, enhancing study workflows, addressing complex learning challenges, demonstrating proper attribution of AI contributions, adhering to institutional policies, recognizing potential risks associated with academic AI use, understanding data types used in AI, organizing educational data, conducting basic analyses, and creating effective data visualizations.
The Educator framework addresses the essential AI literacy skills required for teaching, curriculum development, and guiding students. Key competencies include explaining AI principles to students, selecting appropriate tools to achieve learning objectives, creating effective prompts, designing AI-enhanced teaching methodologies, verifying AI-generated teaching materials, implementing quality control procedures, analyzing the impact on teaching effectiveness, integrating AI across curriculum components, optimizing teaching workflows, addressing educational challenges with AI tools, modeling appropriate AI usage for students, implementing transparency protocols, developing AI policies for educational settings, evaluating educational datasets, designing data activities for learning, and creating assessment frameworks for data literacy. Appendix A provides a detailed breakdown of sub-competencies for the Educator role across all components and levels, demonstrating the granular application of the framework in educational contexts.
The Researcher framework develops the AI literacy skills necessary for academic research, scientific inquiry, and knowledge advancement. Key competencies include explaining AI principles relevant to research domains, selecting suitable AI tools for research tasks, implementing AI-enhanced methodologies, rigorously verifying AI-generated research content, ensuring quality control for research outputs, measuring research impact, integrating AI across research phases, optimizing research workflows, addressing complex research challenges, maintaining transparent documentation of AI use in research, developing compliance guidelines for research programs, creating risk assessment protocols, evaluating research datasets with methodological rigor, implementing data collection protocols, and applying advanced statistical methods.
The Clinician framework emphasizes the AI literacy skills essential for healthcare delivery, patient care, and medical decision-making. Key competencies include explaining AI principles relevant to clinical practice, implementing AI assistance in clinical documentation, designing AI-enhanced clinical methodologies, verifying AI-generated clinical information, applying quality standards to clinical AI outputs, measuring the impact on patient outcomes, integrating AI across all aspects of care delivery, optimizing clinical workflows, developing solutions for complex clinical challenges, implementing transparent AI documentation in patient care, providing appropriate disclosure to patients, creating risk assessment protocols for clinical settings, evaluating clinical datasets with a focus on privacy, implementing secure data management systems, and designing effective visualizations for health data.
The Administrator framework cultivates the AI literacy skills necessary for operational efficiency, administrative processes, and workplace productivity. Key competencies include identifying AI tools for operational tasks, crafting effective prompts for administrative purposes, designing efficient AI-enhanced business processes, verifying AI-generated business content, implementing systematic review procedures, analyzing efficiency gains, integrating AI across operational areas, optimizing business workflows, developing solutions for complex administrative challenges, ensuring transparent AI documentation in business processes, formulating compliance guidelines for operational AI use, establishing risk assessment protocols, evaluating business datasets for operational relevance, structuring data collection for organizational needs, and producing effective visualizations for business information.
Figure 3 presents a radar chart that compares how the five ALiF components are manifested across various stakeholder roles. This visualization highlights the unique emphasis that each role places on different aspects of AI literacy. Researchers prioritize Technical Understanding, while Educators focus more on Critical Evaluation. Clinicians emphasize Practical Application, Administrators stress Ethical Considerations, and Learners develop foundational Data Literacy skills. This comparison underscores how the framework adapts to diverse professional needs while maintaining a consistent structure.
Table 3 presents the implementation priorities for each stakeholder role within the ALiF framework. For each role, the table highlights primary focus areas, key challenges, and specific implementation priorities. This role-based perspective aids in tailoring AI literacy development to the distinct needs and contexts of various stakeholders while keeping alignment with the overall framework structure.
Appendix B provides a cross-role comparison for the Ethical Considerations component, demonstrating how ethical competencies are adapted across various professional contexts while preserving alignment with the conceptual framework.

Assessment Approach

The proposed assessment approach for ALiF incorporates multiple methods to comprehensively evaluate AI literacy across various dimensions and contexts. Figure 4 illustrates the ALiF assessment methodology as a comprehensive, multimodal approach. The flowchart shows how the assessment starts with an initial baseline measurement and then incorporates four complementary methods: Self-Assessment Questionnaires, Practical Tasks, Portfolio Evidence Collection, and Peer & Expert Review. These assessments are integrated to create a complete competency profile, which determines level placement (Foundation, Intermediate, or Advanced). The results guide appropriate learning pathways, from full program participation for those at the Foundation level to leadership modules for those at the Advanced level.
Self-assessment questionnaires feature core questions relevant to all roles, role-specific modules tailored for specialized contexts, and support for multi-role assessment. The design of these questionnaires incorporates principles from established AI literacy scales [12], which have demonstrated reliability in measuring competency across multiple dimensions. This approach acknowledges that many professionals hold multiple roles and need different competencies across their various responsibilities and contexts.
Practical assessment tasks feature hands-on activities that demonstrate real-world application. They incorporate role-specific scenarios with increasing difficulty aligned to framework levels, offering evidence of practical capability beyond theoretical knowledge.
Portfolio assessment entails documenting the application of AI literacy, showcasing the progression across different competency areas, and including artifacts that demonstrate practical implementation. This longitudinal approach captures development over time and provides authentic evidence of capability.
Peer and expert review consist of organized feedback from colleagues, validation by subject matter experts, and a collaborative evaluation of advanced implementations. This social aspect recognizes the cooperative nature of AI implementation in professional settings.
The assessment design facilitates level placement and tracks progression, establishing precise requirements for each level. This approach is supported by validated assessment methodologies such as the Artificial Intelligence Literacy Scale [14], demonstrating the effectiveness of structured competency evaluation. The Foundation Level requires a demonstrated understanding of basic concepts, implementation of fundamental skills, documentation of AI usage, and adherence to ethical guidelines. The Intermediate Level demands advanced application of AI in complex scenarios, the development of improved processes, systematic evaluation, and leadership within team environments. The Advanced Level necessitates strategic implementation at the organizational level, the development of innovative approaches, the creation of frameworks adopted by the broader community, and leadership in policy and governance.
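To make the placement logic above concrete, the sketch below shows one possible way to derive level placement from aggregated assessment scores. The numeric thresholds and function names are our own illustrative assumptions; ALiF describes placement in terms of demonstrated requirements, not fixed numeric cut-offs.

```python
# Illustrative sketch only: ALiF does not prescribe numeric cut-offs.
# Assume each component's score is normalized to 0.0-1.0 by aggregating
# the four assessment methods (questionnaire, tasks, portfolio, review).

def place_level(score: float) -> str:
    """Map a normalized component score to a hypothetical level placement."""
    if score >= 0.8:
        return "Advanced (L3)"
    if score >= 0.5:
        return "Intermediate (L2)"
    return "Foundation (L1)"

def competency_profile(scores: dict[str, float]) -> dict[str, str]:
    """Build a per-component profile (DAT, TEC, EVA, ETH, PRA)."""
    return {component: place_level(s) for component, s in scores.items()}

profile = competency_profile(
    {"DAT": 0.42, "TEC": 0.65, "EVA": 0.81, "ETH": 0.55, "PRA": 0.38}
)
# -> {'DAT': 'Foundation (L1)', 'TEC': 'Intermediate (L2)',
#     'EVA': 'Advanced (L3)', 'ETH': 'Intermediate (L2)',
#     'PRA': 'Foundation (L1)'}
```

In such a scheme, the lowest-placed components would indicate where targeted development, such as full program participation at the Foundation level, is most needed.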
Table 4 compares the four assessment methods used within the ALiF framework. It outlines each method’s primary purpose, advantages, limitations, and appropriate applications. This comparison emphasizes the complementary nature of these assessment approaches and their collective contribution to providing a comprehensive evaluation of AI literacy.
Examples of specific assessment criteria used to determine Foundation Level placement across all five components are included in Appendix C, providing clear guidelines for the practical implementation of the assessment framework.

Discussion

Implementation Considerations

The ALiF framework offers a comprehensive foundation for fostering AI literacy across various institutional contexts. Its multi-level, role-specific approach allows for tailored implementation while maintaining a consistent competency structure. This flexibility is especially valuable considering stakeholder groups’ diverse AI literacy needs and starting points.
The implementation of ALiF can be adapted to institutional resources, priorities, and existing educational structures. In resource-constrained settings, the framework can be rolled out incrementally, beginning with foundational competencies across key roles. Institutions with more resources may enact comprehensive programs that address all components and levels simultaneously.
The assessment approach provides a structured method for measuring baseline AI literacy, tracking progress, and pinpointing areas for targeted development. The ability to create both individual and institutional profiles supports focused interventions and resource allocation.
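As an illustration of the institutional profile idea mentioned above, the hypothetical snippet below aggregates individual component scores into an institution-level view that could guide resource allocation. The per-component mean is only one of several plausible aggregation choices.

```python
from statistics import mean

def institutional_profile(individuals: list[dict[str, float]]) -> dict[str, float]:
    """Aggregate individual component scores (hypothetical: per-component mean)."""
    components = individuals[0].keys()
    return {c: round(mean(p[c] for p in individuals), 2) for c in components}

staff_scores = [
    {"DAT": 0.4, "TEC": 0.7, "EVA": 0.6, "ETH": 0.5, "PRA": 0.3},
    {"DAT": 0.6, "TEC": 0.5, "EVA": 0.8, "ETH": 0.7, "PRA": 0.5},
]
print(institutional_profile(staff_scores))
# {'DAT': 0.5, 'TEC': 0.6, 'EVA': 0.7, 'ETH': 0.6, 'PRA': 0.4}
```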
This staged approach to AI literacy development aligns with recommendations from recent literature. De Silva et al. [17] emphasize the importance of modular strategies for AI literacy that allow for flexible implementation based on institutional context. Similarly, Wiljer et al. [15] highlight the value of progressive competency development for healthcare professionals engaging with AI technologies. Figure 5 illustrates the ALiF Implementation Pathway Model, which provides a structured approach for organizations beginning to adopt the framework. The model outlines five sequential phases: Institutional Readiness, Baseline Assessment, Program Development, Implementation, and Continuous Improvement. Each phase includes essential activities, estimated timeframes, and key stakeholders involved. This pathway offers a comprehensive roadmap for systematic implementation, from initial organizational preparation to ongoing enhancement and expansion.

Comparison with Existing Frameworks

ALiF addresses several limitations of existing frameworks. Unlike models focusing exclusively on technical skills or specific stakeholder groups, ALiF integrates technical, ethical, and practical dimensions while providing role-specific adaptations. This holistic approach aligns with recent calls for comprehensive AI literacy development [20].
The clear progression pathways from foundational to advanced levels signify an improvement over existing frameworks, which often lack structured development paths. This progressive approach promotes the continuous development of AI literacy rather than viewing it as a static skill set.
Integrating ethical considerations and data literacy as core components rather than supplementary elements acknowledges the vital role these aspects play in responsible AI use. This positions ALiF as a framework that prioritizes ethical and responsible AI implementation.
Recent research on AI literacy and readiness in higher education has identified similar gaps in current approaches. Surveys among stakeholder groups consistently reveal low to moderate levels of AI literacy, particularly in technical skills and ethical understanding [16]. ALiF addresses these gaps with its comprehensive structure and tailored, role-specific frameworks.
In healthcare contexts, there is substantial evidence for the necessity of structured AI literacy frameworks. Studies indicate significant gaps in AI knowledge and training among healthcare professionals despite the widespread implementation of AI technologies in clinical settings [18]. The Clinician framework within ALiF offers a focused approach to tackling these specific needs.

Anticipated Challenges

Several challenges can be anticipated in implementing the framework based on patterns observed in the literature. Resource constraints, particularly faculty expertise and time, may hinder comprehensive implementation. Many institutions lack enough AI-literate faculty to support widespread training initiatives, creating a “chicken and egg” problem in scaling implementation.
Cultural resistance to AI adoption may pose challenges in various institutional contexts. Concerns regarding job displacement, academic integrity, and the role of AI in education and healthcare may necessitate thoughtful engagement beyond mere technical training. The ethical components of the framework are intended to address these concerns but will require considerable time and effort in dialogue.
The rapid evolution of AI technologies presents a continuous challenge for maintaining the relevance of frameworks. As new AI technologies emerge, these frameworks will require regular updates to stay current. A periodic review and framework revision process should be established to address this challenge.
The significant resource investments needed for complete implementation may restrict adoption, especially in resource-limited environments. Tackling this issue will necessitate innovative strategies for resource sharing, inter-institutional collaboration, and prioritizing high-impact components [19].

Future Directions

Future development of ALiF should focus on several key areas. First, empirical validation across diverse institutional contexts will strengthen the framework’s evidence base. Studies examining the relationship between framework competencies and practical outcomes would be particularly valuable.
Creating supportive resources such as curriculum materials, case studies, and implementation guides would encourage wider adoption. These resources would be especially beneficial for institutions with limited expertise in AI education.
Another area for expansion is adapting the framework to additional contexts, including industry, government, and nonprofit sectors. While the current framework focuses on educational and healthcare settings, the core structure can be modified for other professional contexts.
Investigating the relationship between AI literacy development and outcomes like educational innovation, research productivity, and healthcare quality would offer valuable insights into the impact of implementing such frameworks. These outcome studies would bolster the argument for institutional investment in AI literacy.

Conclusions

The AI Literacy Framework provides a comprehensive, adaptable structure for developing essential AI competencies across various professional contexts. By addressing the five core components of AI literacy—Technical Understanding, Critical Evaluation, Practical Application, Ethical Considerations, and Data Literacy—the framework offers a holistic approach to building the capabilities needed in an AI-enhanced world.
Through its progressive level structure and role-specific adaptations, ALiF supports individuals and organizations in systematically developing the knowledge, skills, and ethical awareness needed to leverage AI technologies effectively. As artificial intelligence continues transforming education, research, healthcare, and organizational operations, frameworks like ALiF become essential for ensuring that professionals can confidently navigate, implement, and lead in this evolving landscape.
The framework provides a foundation for future empirical studies examining the effectiveness of different approaches to AI literacy development. As institutions increasingly recognize the importance of AI literacy, ALiF offers a structured, literature-informed approach to developing these essential competencies.

References

  1. García-Peñalvo, F.J.; Corell, A.; Abella-García, V.; Grande-de-Prado, M. Artificial Intelligence in Higher Education: A Systematic Mapping. Sustainability 2022, 14, 1493. [CrossRef]
  2. Long, D.; Magerko, B. What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems 2020, 1–16. [CrossRef]
  3. Eaton, S.E.; Turner, K.L. Academic integrity and AI: Bridging the digital divide with education. Innovations in Education and Teaching International 2023, 60, 456–471. [CrossRef]
  4. Hwang, G.J.; Chen, P.Y. Artificial intelligence in education: A review of methodological approaches. Educational Research Review 2023, 38, 100474. [CrossRef]
  5. Prinsloo, P.; Slade, S. Promoting digital literacy in the age of AI: Frameworks for ethical decision-making. Journal of Learning Analytics 2023, 10, 7–22. [CrossRef]
  6. Zary, N. AI Literacy Framework (ALiF): A Progressive Competency Development Protocol for Higher Education. protocols.io 2024. [CrossRef]
  7. EDUCAUSE. Horizon Report: Teaching and Learning Edition. EDUCAUSE 2023. [CrossRef]
  8. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education: Where are the educators? International Journal of Educational Technology in Higher Education 2019, 16, 1–27. [CrossRef]
  9. Kumar, V.; Sharma, R. AI literacy in Indian higher education: A comprehensive framework. Education and Information Technologies 2023, 28, 12789–12805. [CrossRef]
  10. Chan, T.W.; Looi, C.K.; Chen, W.; Wong, L.H.; Chang, B.; Liao, C.C.; Ogata, H. AI in education in Asia: A comparative framework analysis. Research and Practice in Technology Enhanced Learning 2023, 18, 1–24. [CrossRef]
  11. Kong, S.C.; Lai, M.; Sun, D. Teacher development in computational thinking: Design and learning outcomes of programming concepts, practices and pedagogy. Computers & Education 2020, 151, 103872. [CrossRef]
  12. Yuan, C.; Fan, S.; Chin, J.; Cha, W. Developing the artificial intelligence literacy scale (AILS). Interactive Learning Environments 2021, 1–16. [CrossRef]
  13. Aşiksoy, G. Artificial intelligence literacy scale development and investigation of university students according to gender, department, and academic achievement. International Journal of Educational Technology in Higher Education 2023, 18, 1–24. [CrossRef]
  14. Ma, S.; Li, Z.; Chen, Y. The Artificial Intelligence Literacy Scale for Chinese College Students (AILS-CCS): Scale Development and Validation. Education and Information Technologies 2023, 28, 1–22. [CrossRef]
  15. Wiljer, D.; Tavares, W.; Mylopoulos, M.; Kapralos, B.; Charow, R.; Faieta, J.; Disperati, F. Developing AI Literacy: A Critical Competency for Healthcare Professionals. Academic Medicine 2023, 98, 672–680. [CrossRef]
  16. Chen, X.; Zou, D.; Xie, H.; Wang, F.L. Artificial intelligence in higher education: A systematic review of research trends, applications, and challenges. Education and Information Technologies 2023, 28, 1–35. [CrossRef]
  17. De Silva, D.; Sriratanaviriyakul, N.; Warusawitharana, A.; Sarkar, S. AI Literacy: A Modular Approach. In Proceedings of the IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE) 2022, 7–10. [CrossRef]
  18. Holmes, W.; Porayska-Pomsta, K.; Holstein, K.; Sutherland, E.; Baker, T.; Shum, S.B.; Koedinger, K.R. Ethics of AI in Education: Towards a Community-Wide Framework. International Journal of Artificial Intelligence in Education 2022, 32, 357–383. [CrossRef]
  19. Mansoor, H.M.; Al-Said, T.; Al-Anqoudi, Z.; Rahim, N.F.A.; Al-Badi, A. Cross-regional exploration of artificial intelligence literacy and readiness in higher education. Education and Information Technologies 2023, 28, 11729–11751. [CrossRef]
  20. Zawacki-Richter, O.; Marín, V.I.; Bond, M. Systematic review of research on artificial intelligence applications in higher education: Update and extension. International Journal of Educational Technology in Higher Education 2023, 20, 1–42. [CrossRef]
Figure 1. The ALiF framework structure.
Figure 2. Progression pathways across all five components of the ALiF framework.
Figure 3. Role-Specific Framework Comparison.
Figure 4. ALiF assessment methodology.
Figure 5. ALiF Implementation Pathway Model.
Table 1. Core Components Detailed Description.

Data Literacy (DAT)
  Definition: Understanding data as the foundation for AI systems.
  Key Elements:
  • Data quality assessment
  • Data management practices
  • Analysis methodologies
  • Communication approaches
  Example Competencies:
  • Evaluates quality and limitations of datasets
  • Manages data collection and processing appropriately
  • Analyzes and interprets data for decision-making
  • Communicates data insights through effective visualization

Technical Understanding (TEC)
  Definition: Knowledge and skills required to understand AI systems and their capabilities.
  Key Elements:
  • AI principles and concepts
  • Tool operations and applications
  • Integration methods
  • Limitations and constraints
  Example Competencies:
  • Explains AI concepts in context-appropriate terms
  • Identifies suitable AI tools for specific tasks
  • Integrates AI tools into professional workflows
  • Recognizes and articulates limitations of AI systems

Critical Evaluation (EVA)
  Definition: Systematic assessment of AI outputs, processes, and impacts.
  Key Elements:
  • Output verification methods
  • Quality assessment approaches
  • Bias identification
  • Impact measurement
  Example Competencies:
  • Verifies accuracy of AI-generated content
  • Evaluates effectiveness of AI tools for specific purposes
  • Identifies potential biases in AI outputs
  • Measures impact of AI use in professional contexts

Ethical Considerations (ETH)
  Definition: Responsible and principled use of AI in professional contexts.
  Key Elements:
  • Ethical principles and frameworks
  • Policy awareness and compliance
  • Risk management
  • Governance approaches
  Example Competencies:
  • Understands and applies ethical guidelines in AI use
  • Adheres to institutional policies on AI
  • Identifies and mitigates potential risks
  • Contributes to ethical AI governance

Practical Application (PRA)
  Definition: Effective implementation of AI tools and processes in real-world contexts.
  Key Elements:
  • Domain-specific implementations
  • Workflow optimization
  • Problem-solving approaches
  • Innovation development
  Example Competencies:
  • Applies AI tools to domain-specific tasks
  • Develops efficient AI-enhanced workflows
  • Solves complex problems using AI capabilities
  • Creates innovative approaches to professional challenges
Table 2. Progression Level Competency Matrix.

Foundation (L1)
  Data Literacy: Basic evaluation of data types and structures; organization of data using standard methods; performance of simple analyses; creation of basic visualizations.
  Technical Understanding: Basic understanding of AI concepts and tools; identification of common applications; simple prompt creation; awareness of limitations.
  Critical Evaluation: Fundamental verification of AI-generated content; use of basic quality checks; identification of common errors; tracking of AI impact on tasks.
  Ethical Considerations: Core ethical awareness and guidelines; proper attribution of AI contributions; adherence to policies; identification of basic risks.
  Practical Application: Essential applications of AI in professional context; implementation of simple workflows; documentation of AI use; resolution of basic challenges.

Intermediate (L2)
  Data Literacy: Advanced data management techniques; implementation of data quality improvement measures; application of appropriate analytical methods; design of effective visualizations for complex data.
  Technical Understanding: Advanced understanding of AI capabilities; use of multiple tools for complex tasks; customization of settings; development of prompt templates.
  Critical Evaluation: Systematic evaluation methodologies; implementation of quality control procedures; development of error detection systems; analysis of efficiency gains.
  Ethical Considerations: Developed ethical frameworks; creation of documentation systems; implementation of risk assessment protocols; assistance to peers on compliance.
  Practical Application: Complex implementation strategies; optimization of workflows; process mapping; solution development for multifaceted challenges.

Advanced (L3)
  Data Literacy: Comprehensive data governance strategies; development of methodologies for data stewardship; creation of advanced analytical frameworks; leadership in data communication initiatives.
  Technical Understanding: Strategic implementation leadership; development of innovative applications; creation of training materials; design of scalable frameworks.
  Critical Evaluation: Framework development for evaluation; establishment of quality standards; design of comprehensive impact studies; leadership in assessment methodology.
  Ethical Considerations: Ethical guidance and policy development; leadership in compliance initiatives; design of risk management frameworks; development of system-wide safeguards.
  Practical Application: Innovation leadership in professional domain; development of transformative workflows; design of novel solutions; leadership in implementation projects.
Table 3. Role-Specific Implementation Priorities.

Learners
  Primary Focus Areas: Academic tool use; critical evaluation of AI-generated content; ethical compliance with institutional policies; basic data literacy for academic contexts.
  Key Challenges: Academic integrity concerns; distinguishing between AI assistance and plagiarism; developing confidence in verifying AI outputs; balancing tool use with skill development.
  Implementation Priorities: Foundation-level technical and ethical competencies; hands-on practice with academic AI tools; verification skills development; integration of AI into study methods.

Educators
  Primary Focus Areas: Curriculum integration; assessment adaptation; student guidance on appropriate AI use; instructional design with AI tools.
  Key Challenges: Maintaining academic standards while embracing technology; designing assessments in AI-rich environments; differentiating instruction using AI; modeling appropriate AI use for students.
  Implementation Priorities: Technical-ethical balance with pedagogical applications; AI-enhanced teaching methodologies; development of AI-aware assessment strategies; creation of AI policies for educational contexts.

Researchers
  Primary Focus Areas: Research methodology enhancement; literature analysis; data processing; validation of AI-generated content in scholarly contexts.
  Key Challenges: Maintaining research integrity; ensuring transparency in AI use; addressing methodological questions; managing complex data with AI assistance.
  Implementation Priorities: Advanced technical understanding with strong ethical foundation; rigorous verification protocols; methodological innovation with AI tools; transparent documentation practices.

Clinicians
  Primary Focus Areas: Patient care enhancement; clinical decision support; documentation efficiency; healthcare data analysis.
  Key Challenges: Balancing technology with human care; ensuring patient privacy and consent; maintaining clinical judgment; integrating AI into existing workflows.
  Implementation Priorities: Practical applications with strong ethical focus; patient-centered AI use; clinical workflow integration; data privacy and security emphasis.

Administrators
  Primary Focus Areas: Operational efficiency; process automation; decision support; organizational governance of AI.
  Key Challenges: Developing appropriate policies; managing change resistance; ensuring fair and consistent AI implementation; addressing job displacement concerns.
  Implementation Priorities: Balanced competencies across all framework components; focus on governance structures; risk assessment protocols; staff development support.
Table 4. Assessment Methods Comparison.

Self-Assessment Questionnaire
  Primary Purpose: Baseline measurement of AI literacy across framework components; identification of strengths and gaps; tracking of progress over time.
  Advantages: Scalable and efficient; provides immediate feedback; covers all framework components; adaptable to different roles.
  Limitations: Self-reporting bias; limited verification of actual capabilities; potential for misinterpretation of competency levels.
  Appropriate Uses: Initial assessment; progress tracking; large-scale implementation; self-directed development planning.

Practical Assessment Tasks
  Primary Purpose: Demonstration of applied skills; verification of capability to implement AI tools in authentic contexts; observation of problem-solving approaches.
  Advantages: Authentic evidence of capability; demonstrates application in context; reveals practical limitations and strengths; assesses integration of multiple competencies.
  Limitations: Resource-intensive; difficult to standardize; may not assess all framework components equally; requires expert assessment.
  Appropriate Uses: Competency verification; certification processes; summative assessment; performance evaluation.

Portfolio Assessment
  Primary Purpose: Documentation of AI literacy development over time; collection of evidence demonstrating capability across contexts; reflection on learning process.
  Advantages: Captures development trajectory; provides authentic artifacts; encourages reflection; supports personalized evidence collection.
  Limitations: Time-intensive to develop and assess; variable quality and comprehensiveness; requires clear assessment criteria.
  Appropriate Uses: Longitudinal development tracking; comprehensive capability assessment; professional development planning; evidence-based certification.

Peer & Expert Review
  Primary Purpose: Collaborative assessment of AI literacy; verification of capabilities by colleagues or subject matter experts; feedback on performance in authentic contexts.
  Advantages: Incorporates multiple perspectives; provides targeted feedback; leverages distributed expertise; addresses blind spots in self-assessment.
  Limitations: Potential for inconsistency across reviewers; logistical challenges in coordination; may be influenced by interpersonal factors.
  Appropriate Uses: Advanced-level assessment; community-building around AI literacy; formative feedback; validation of complex capabilities.