Preprint
Article

This version is not peer-reviewed.

Generative Artificial Intelligence and Its Role in the Development of Clinical Cases in Medical Education: A Scoping Review Protocol

Submitted: 13 January 2025
Posted: 14 January 2025


Abstract

Introduction: Integrating generative AI into medical education addresses challenges in developing clinical cases for case-based learning (CBL), a method that enhances critical thinking and learner engagement through realistic scenarios. Traditional CBL is resource-intensive and less scalable. Generative AI can produce realistic text and adapt to learning needs, offering promising solutions. This scoping review maps existing literature on generative AI's use in creating clinical cases for CBL and identifies research gaps. Methods: This review follows Arksey and O'Malley’s (2005) framework, enhanced by Levac et al. (2010), and aligns with PRISMA-ScR guidelines. A systematic search will be conducted across major databases, including PubMed, Scopus, Web of Science, EMBASE, ERIC, and CINAHL, along with gray literature. Inclusion criteria focus on studies published in English between 2014 and 2024 that examine generative AI in CBL in medical education. Two independent reviewers will screen studies and chart data using a standardized extraction tool, refined iteratively. Data will be summarized narratively and thematically to identify trends, challenges, and gaps. Results: The review will present a comprehensive synthesis of current applications of generative AI in CBL, focusing on the types of models utilized, educational outcomes, and learner perceptions. Key challenges, including ethical and technical barriers, will be emphasized. The findings will also outline future directions and recommendations for integrating generative AI into medical education. Discussion: This review will enhance understanding of generative AI's role in improving CBL by addressing resource constraints and scalability challenges while maintaining pedagogical integrity. The findings will guide educators, policymakers, and researchers on best practices, emerging opportunities, and areas needing further exploration. 
Conclusion: Generative AI has significant potential to revolutionize case-based learning (CBL) in medical education. By mapping current evidence, this review will offer valuable insights into its potential applications, effectiveness, and challenges, paving the way for innovative and adaptive educational strategies.

Subject: 
Social Sciences  -   Education

1. Introduction

1.1. Background and Rationale

Medical education has undergone significant transformations over recent decades due to technological advancements, evolving healthcare needs, and innovations in adaptive learning methods (de Oliveira et al., 2024). Among various pedagogical approaches, case-based learning (CBL) has emerged as a dynamic educational strategy, linking theoretical knowledge with real-world clinical applications through inquiry-based learning. CBL promotes critical thinking, deeper learner engagement, and improved clinical reasoning by immersing students in contextual, collaborative clinical scenarios that enhance their problem-solving and decision-making skills (Thistlethwaite et al., 2012). This approach is crucial in preparing healthcare professionals for the complexities of clinical practice.
Despite its advantages, conventional CBL frameworks face notable challenges in creating, developing, and disseminating clinical cases. Producing high-quality, authentic cases often requires extensive resources, including:
  • Substantial clinical content.
  • Contributions from multiple subject matter experts.
  • Significant time to develop and provide personalized feedback (Sultana et al., 2024).
Moreover, traditional CBL faces limitations in scalability. Generating diverse case sets for larger cohorts of learners requires continuous input from educators, who are constrained in their ability to adapt to unique learner needs (Kaur et al., 2020). These challenges underscore an urgent need for resource-efficient, scalable, and adaptive methods that can enhance the pedagogical impact of CBL without compromising the quality of educational outcomes.
Generative artificial intelligence (AI) has recently emerged as a promising solution to these challenges. Generative AI refers to advanced machine learning models capable of creating realistic, context-specific content, including text, images, audio, and video (Jowsey et al., 2023; Preiksaitis & Rose, 2023). These models, such as ChatGPT, have demonstrated potential in synthesizing complex clinical information to develop diverse, authentic clinical cases that align with specific learning objectives while providing real-time, personalized feedback (Hale et al., 2024). By integrating generative AI into CBL, educators can ease the burden of case creation, enhance scalability, and deliver adaptive, learner-centric education experiences.
However, despite the potential benefits, limited research explicitly explores the use of generative AI in CBL within medical education. Furthermore, questions remain regarding the best practices for integrating AI into this framework, its impact on educational outcomes, and the ethical and technical challenges that may arise. This scoping review aims to address these gaps by mapping the existing evidence on generative AI’s role in CBL and identifying opportunities for future research and practical implementation.

1.2. Research Objectives

This scoping review aims to map the existing literature and provide a comprehensive overview of the use of generative artificial intelligence (AI) in developing clinical cases for case-based learning (CBL) within medical education. The review seeks to identify overarching themes, assess challenges, and highlight opportunities for future research and practical implementation by synthesizing evidence from various educational contexts and settings. Specifically, this review has the following objectives:
Map Current Applications of Generative AI in CBL
  • Identify the types of generative AI models used (e.g., ChatGPT and Gemini) and their specific roles in developing clinical cases.
  • Explore the educational contexts in which these models are applied, including undergraduate education, postgraduate training, and continuing professional development.
Assess Educational Outcomes
  • Investigate the reported impact of generative AI-assisted CBL on learner outcomes, such as:
    Clinical reasoning.
    Knowledge acquisition.
    Problem-solving and decision-making skills.
    Learner engagement and satisfaction.
Examine Learner and Educator Perspectives
  • Analyze perceptions, experiences, and feedback from learners and educators regarding the use of generative AI in CBL.
  • Identify factors influencing the acceptance, usability, and trustworthiness of AI-generated clinical cases.
Identify Challenges and Ethical Considerations
  • Highlight technical, pedagogical, and logistical barriers to implementing generative AI in CBL.
  • Explore ethical concerns, including data bias, accuracy, academic integrity, and professional development implications.
Highlight Research Gaps and Future Directions
  • Identify gaps in the existing body of literature related to the role of generative AI in CBL.
  • Suggest areas for further investigation, including:
    Enhancing the scalability and adaptability of AI-generated clinical cases.
    Developing guidelines for ethical and effective integration of generative AI into medical curricula.
Propose Recommendations for Practice and Policy
  • Provide evidence-based recommendations for stakeholders, including educators, administrators, and policymakers, to guide the integration of generative AI technologies into health professions education.

2. Methodology

2.1. Scoping Review Framework

Our scoping review methodology will be guided by the framework proposed by Arksey and O’Malley (2005), further enhanced by Levac et al. (2010) to include six distinct, scaffolded stages that outline the key steps undertaken by the researchers. For data reporting, we will conform to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) reporting standards (Tricco et al., 2018). Finally, to share our work with the broader scientific community and to ensure transparency in the review process, we plan to register and publish the current protocol with the Open Science Framework (OSF). We will follow these six stages in our scoping review:
  • Stage 1: Identifying the research questions (outlined below)
  • Stage 2: Identifying relevant studies (search strategy and databases)
  • Stage 3: Study selection (eligibility criteria)
  • Stage 4: Charting the data (organization of findings)
  • Stage 5: Collating, summarizing and reporting the results
  • Stage 6 (optional): Consultation with relevant stakeholders

2.2. Identifying the Research Questions

The primary question guiding our scoping review is: What is the potential of generative AI in the development of clinical cases for CBL in medical education? Furthermore, we seek to answer the following secondary questions:
  • What generative AI applications (such as ChatGPT-4 and ChatGPT-3) currently exist for the development of clinical cases?
  • Does this technology have a clear and established role in enhancing the educational outcomes of learners?
  • Are there any reported learner perceptions associated with using generative AI to solve cases in medical education?
  • What ethical concerns and technical barriers might arise when implementing AI for CBL?
  • What gaps are identified in the current literature on the role of generative AI in CBL, and what future policies, directions, and research are needed?

2.3. Search Strategy

To ensure a comprehensive and systematic exploration of relevant academic and gray literature, the search strategy will follow the recommendations of the JBI scoping review methodology (Peters et al., 2015). The process will include collaboration with an expert health sciences librarian to refine and optimize the search terms and strategies. The following steps outline the revised search strategy:
Databases: The review will include searches across the following electronic databases:
  • PubMed/MEDLINE
  • Scopus
  • Web of Science
  • EMBASE
  • ERIC
  • CINAHL
  • Cochrane Library
  • Google Scholar (to capture gray literature)
Search Terms: The search strategy will use a combination of MeSH terms and keywords relevant to generative AI, medical education, and case-based learning. Boolean operators (AND, OR) will be used to combine terms, ensuring a broad and comprehensive search. These terms will include:
  • "Generative AI"
  • "Large Language Model"
  • "ChatGPT"
  • "Case-Based Learning"
  • "Medical Education"
  • "Clinical Cases"
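As an illustration of how these concept blocks can be combined with Boolean operators, the sketch below groups synonyms with OR and joins concepts with AND. This is a hypothetical helper for demonstration only, not part of the protocol; the actual groupings will be refined with the health sciences librarian.

```python
# Illustrative sketch: assembling a Boolean search string from concept blocks.
# The groupings below are assumptions for demonstration, not the final strategy.
GENERATIVE_AI_TERMS = ["Generative AI", "Large Language Model", "ChatGPT"]
CBL_TERMS = ["Case-Based Learning", "Clinical Cases"]
CONTEXT_TERMS = ["Medical Education"]

def or_block(terms):
    """Join synonyms for one concept with OR, quoting each phrase."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

def build_query(*concept_blocks):
    """Combine concept blocks with AND for a broad yet focused search."""
    return " AND ".join(or_block(terms) for terms in concept_blocks)

query = build_query(GENERATIVE_AI_TERMS, CBL_TERMS, CONTEXT_TERMS)
```

Running this yields one string per database family, which can then be adapted to each platform's field tags (e.g., MeSH in PubMed, TITLE-ABS-KEY in Scopus).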
Inclusion and Exclusion Criteria:
  • Inclusion Criteria:
    Articles published in English between 2014 and 2024.
    Peer-reviewed primary studies (e.g., observational, qualitative, quantitative, mixed-methods) and secondary studies (e.g., systematic reviews, meta-analyses).
    Studies focused on generative AI applications in developing clinical cases for CBL in medical education.
    Studies involving health professions students or practicing healthcare professionals.
  • Exclusion Criteria:
    Articles not related to generative AI or medical education.
    Studies focusing exclusively on non-generative AI.
    Publications in languages other than English.
Gray Literature:
In addition to academic databases, gray literature sources such as reports, policy papers, and conference proceedings will be explored through Google Scholar and institutional repositories. Including gray literature will help capture emerging evidence and practical insights that may not be available in peer-reviewed journals.
Search Strategy Refinement:
The search strategy will be peer-reviewed using the PRESS (Peer Review of Electronic Search Strategies) guidelines (McGowan et al., 2016) to ensure its validity and reliability. Iterative refinements will be documented and shared with the research team for transparency.
Search Management:
Search results will be imported into a reference management software (EndNote v.21) for organization and removal of duplicates. A systematic hand-search will complement database searches to ensure no relevant studies are missed.
Documentation:
A detailed log of search strings, iterations, and database outputs will be maintained to enhance transparency and reproducibility.
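To illustrate the duplicate-removal step described above, the following sketch shows one common approach: matching records first on DOI and then on a normalized title. This is a hypothetical illustration, separate from the EndNote workflow the team will actually use.

```python
# Illustrative sketch (hypothetical, not the EndNote deduplication itself):
# remove duplicate records retrieved from multiple databases.
import re

def normalize_title(title):
    """Lowercase and collapse punctuation so near-identical titles match."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records):
    """Keep the first occurrence of each record, keyed by DOI or title."""
    seen, unique = set(), []
    for rec in records:
        keys = {k for k in (rec.get("doi"), normalize_title(rec.get("title", ""))) if k}
        if keys & seen:
            continue  # already captured via DOI or title
        seen |= keys
        unique.append(rec)
    return unique

records = [
    {"doi": "10.1000/ex1", "title": "Generative AI in Case-Based Learning"},
    {"doi": None, "title": "Generative AI in Case-Based Learning."},  # duplicate by title
    {"doi": "10.1000/ex2", "title": "A Review of CBL"},
]
unique = deduplicate(records)  # the title-only duplicate is dropped
```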

2.4. Study Selection

Two independent reviewers will conduct the screening process to minimize potential selection bias and ensure consistency. They will screen the titles, abstracts, and subsequently the full-text articles if they meet the eligibility criteria outlined below. The search results will be managed and stored using the Covidence systematic review software. Regular team meetings will be held to discuss progress, pilot testing, and potential challenges, as we expect multiple iterations during this process. Any discrepancies will be resolved by consensus or through consultation with a third reviewer until an agreement is reached. Finally, the search results and the study selection process will be detailed in the final scoping review and presented in a PRISMA-ScR flow diagram.
Criterion: Study design
  Inclusion: Primary research studies (observational cohort, quantitative and qualitative studies, mixed-methods, randomized clinical trials) and secondary study types (systematic reviews, meta-analyses, conference proceedings/abstracts, dissertations and theses)
  Exclusion: Editorials, opinions, and commentaries (articles lacking original data)
Criterion: Population
  Inclusion: Healthcare professionals (e.g., medical doctors, nurses, and allied health professionals) and health professions students in training (undergraduate, postgraduate education, and continuing professional development)
  Exclusion: Professionals not engaged in the healthcare field (e.g., finance, business, and art) or students pursuing undergraduate and postgraduate degrees not related to health disciplines
Criterion: Focus of study (intervention)
  Inclusion: Studies specifically discussing the use, application, and impact of generative AI in developing cases within a CBL framework in medical education or training
  Exclusion: Studies focusing on non-generative AI in relation to CBL application, implementation, and impact in medical education
Criterion: Outcomes
  Inclusion: Changes in learners’ perceptions or documented alterations in their clinical reasoning abilities, engagement, knowledge acquisition, problem-solving and decision-making skills, or any progression outcomes
  Exclusion: No changes in learners’ perceptions or documented alterations in their clinical reasoning, engagement, knowledge acquisition, problem-solving and decision-making skills, or no indicator of progression outcomes
Criterion: Language
  Inclusion: Articles published in the English language
  Exclusion: Articles published in languages other than English (unless translation is permissible)
Criterion: Publication date
  Inclusion: Articles published in the last 10 years (2014–present)
  Exclusion: Articles published before 2014
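The eligibility criteria above can be thought of as a simple screening filter over candidate records. The sketch below is purely illustrative, with hypothetical field names; actual screening will be done by human reviewers in Covidence.

```python
# Illustrative sketch (hypothetical field names): eligibility criteria as a filter.
def eligible(rec):
    """Return True if a candidate record meets the inclusion criteria above."""
    if not rec.get("english"):
        return False                      # language criterion
    if not (2014 <= rec.get("year", 0) <= 2024):
        return False                      # publication-date criterion
    if rec.get("design") in {"editorial", "opinion", "commentary"}:
        return False                      # study-design exclusion
    # focus criterion: generative AI used within a CBL framework
    return bool(rec.get("generative_ai") and rec.get("cbl_focus"))

candidates = [
    {"english": True, "year": 2023, "design": "mixed-methods",
     "generative_ai": True, "cbl_focus": True},
    {"english": True, "year": 2012, "design": "cohort",
     "generative_ai": True, "cbl_focus": True},   # fails the date criterion
]
screened = [r for r in candidates if eligible(r)]
```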

2.5. Charting of Data

The data charting process will be conducted in accordance with the JBI methodology (Peters et al., 2015) to ensure consistency and comprehensiveness in capturing relevant information from the included studies. Two independent reviewers will use a standardized data extraction tool, collaboratively developed by the research team. This tool will be piloted by reviewing a sample of articles to ensure it captures all pertinent details and aligns with the research objectives. The charting process will be iterative, allowing for modifications based on emerging insights during data extraction.
The following categories will be included in the data extraction tool:
  • Study Characteristics: Study ID, year of publication, country of origin, journal name, author(s), study title, study design, sample size, educational context (e.g., academic, clinical), and study duration.
  • Population Details: Demographic information (e.g., age, gender), type of participants (e.g., students, healthcare professionals), and level of training (e.g., undergraduate, postgraduate, or continuing professional development).
  • Generative AI Model Characteristics: Specific models used (e.g., ChatGPT or Gemini), language and platform utilized, topic or focus of case-based learning (CBL), learning objectives of the clinical cases, and the duration and frequency of the clinical cases.
  • Educational Outcomes: Reported outcomes for learners, including clinical reasoning, knowledge retention, problem-solving skills, and learner engagement. Evaluation metrics (e.g., surveys, tests) and time points of measurements will also be recorded.
  • Key Findings: Main results related to generative AI applications, any statistical analysis conducted, and thematic findings (if applicable).
  • Challenges and Limitations: Ethical concerns, technical barriers, and pedagogical limitations identified by the authors.
  • Future Directions: Recommendations for future research, implications for clinical practice, and gaps or suggestions proposed by the authors.
Regular team meetings will be held to discuss progress, resolve discrepancies, and refine the data charting process. Consensus or consulting a third reviewer will resolve disagreements between reviewers. All extracted data will be recorded in a master Excel spreadsheet for ease of analysis and synthesis. To enhance transparency, any adjustments to the data extraction tool will be documented in the final review.
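The charting categories above can be expressed as a simple record structure, which may help when transferring extracted data into the master spreadsheet. The field names below are hypothetical; the actual tool will be piloted and refined by the review team.

```python
# Illustrative sketch: the data extraction categories as a record structure.
# Field names are assumptions for demonstration, not the final charting tool.
from dataclasses import dataclass, field

@dataclass
class ChartingRecord:
    # Study characteristics
    study_id: str
    year: int
    country: str
    study_design: str
    sample_size: int
    # Population details
    participant_type: str      # e.g., "students", "healthcare professionals"
    training_level: str        # undergraduate / postgraduate / CPD
    # Generative AI model characteristics
    ai_model: str              # e.g., "ChatGPT", "Gemini"
    cbl_topic: str
    # Educational outcomes and findings
    outcomes: list = field(default_factory=list)
    key_findings: str = ""
    challenges: list = field(default_factory=list)

rec = ChartingRecord(
    study_id="S001", year=2023, country="US", study_design="mixed-methods",
    sample_size=42, participant_type="students", training_level="undergraduate",
    ai_model="ChatGPT", cbl_topic="pharmacology",
    outcomes=["clinical reasoning", "engagement"],
)
```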

2.6. Collating, Summarizing, and Reporting the Results

The findings from the data extraction forms will be collated, summarized, and organized into several tables, figures, and illustrations for easy data visualization. This will provide a comprehensive overview of the current base of published evidence. Following the data charting, the team of authors will adopt a systematic and narrative approach to present the trends (after compiling them into a master Excel spreadsheet) related to generative AI applications in creating clinical cases using key themes and specific categories. The focus of the studies, educational context, included populations, and educational outcomes will be explored and narrated thematically.
Additionally, we will identify areas where there is a lack of research for future recommendations and suggestions. The research question and objectives will guide this entire process, and we will employ a coding framework to categorize the analysis of quantitative, qualitative, and mixed methods studies. This approach will help in identifying commonalities and differences in the articles' main findings and will assist in thematic analysis. To summarize the results, we will use both quantitative and qualitative analyses to capture relevant variables and information, all while adhering to the PRISMA-ScR guidelines.

2.7. Consultation with Stakeholders (Optional stage)

In this optional stage of the scoping review, we aim to consult various stakeholders who are actively engaged in the field of medical education. This includes discussions with AI education experts, faculty, technicians, clinical educators, and students. Their perspectives will provide us with a comprehensive view of our synthesized findings and assist us in suggesting key recommendations for future implications and research.

3. Dissemination Plans

3.1. Ethics

Scoping reviews do not involve active or direct engagement with human participants (lack of primary data collection), and therefore, they do not require formal ethical approval from an institutional board. Nevertheless, our scoping review process, including screening, extraction and synthesis of findings, will maintain the highest level of academic ethical integrity. Furthermore, we will be transparent in reporting the findings, describing their limitations or challenges and future implications. Proper handling of articles will include avoiding plagiarism and removing potential duplicates. Finally, the team of authors will disclose and manage any potential conflict of interest that might arise at any stage of the review process.

3.2. Dissemination Plans

The dissemination of findings from this scoping review will be guided by a comprehensive strategy aimed at maximizing visibility and impact within the medical education community. The plan includes the following components:
  • Academic Publications: The scoping review will be submitted to a high-impact, peer-reviewed journal focused on medical education or educational technology to ensure wide dissemination among researchers and educators. Supplemental materials, including thematic maps and data visualizations, will enhance the accessibility and interpretability of the findings.
  • Conference Presentations: The results will be presented at major international and regional conferences, such as AMEE (Association for Medical Education in Europe), Ottawa Conference, or local symposia on health professions education. These presentations will include both oral sessions and interactive workshops to engage diverse stakeholders.
  • Stakeholder Engagement: Briefings and summaries of the findings will be prepared for key stakeholders, including educators, administrators, policymakers, and AI experts. Tailored recommendations will be provided to guide decision-making regarding the integration of generative AI into medical curricula.
  • Social Media Outreach: Key findings will be shared through professional social media platforms, such as LinkedIn, to reach a global audience of medical educators and technology enthusiasts. Short, visually engaging posts with infographics and highlights will be designed to encourage broader discussion and knowledge sharing.
  • Collaborative Knowledge Translation: Partnerships will be sought with organizations and institutions in medical education to develop practical guidelines or training modules based on the review findings. Workshops and webinars will be organized to facilitate the translation of evidence into actionable practices.
  • Local and Regional Dissemination: Efforts will be made to disseminate the findings within the MENA region, leveraging local networks and forums to promote the relevance of generative AI applications in regional educational contexts. Multilingual summaries may be created to address language diversity in the region.
  • Public Engagement: Accessible summaries of the findings will be shared with the broader public through blogs, podcasts, or media interviews to foster awareness of the potential of generative AI in improving medical education outcomes.

3.3. Timeline

A clear timeline will be discussed and agreed upon by the team of authors; we will ensure that periods of flexibility are included, especially in the initial data charting and pilot-testing phase, as well as in later stages during the article submission and revision process. We will conduct regular bi-weekly meetings amongst the team to offer opportunities for discussions and updates. A detailed breakdown of the final scoping review is outlined below.
  • Protocol Development and Registration: 5 weeks (Weeks 1-5)
  • Literature Search: 4 weeks (Weeks 6-9)
  • Study Selection: 6 weeks (Weeks 10-15)
  • Data Charting: 4 weeks (Weeks 16-19)
  • Data Analysis and Synthesis: 8 weeks (Weeks 20-27)
  • Manuscript Writing: 6 weeks (Weeks 28-33)
  • Dissemination: 8 weeks (Weeks 34-41)
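As a sanity check, the phase durations above can be turned into the stated week ranges programmatically. This is an illustrative sketch, not part of the protocol itself.

```python
# Illustrative sketch: derive week ranges from the phase durations above.
phases = [
    ("Protocol Development and Registration", 5),
    ("Literature Search", 4),
    ("Study Selection", 6),
    ("Data Charting", 4),
    ("Data Analysis and Synthesis", 8),
    ("Manuscript Writing", 6),
    ("Dissemination", 8),
]

start = 1
schedule = {}
for name, weeks in phases:
    schedule[name] = (start, start + weeks - 1)  # inclusive week range
    start += weeks

total_weeks = start - 1  # 41 weeks overall, matching the timeline above
```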

3.4. Limitations

This scoping review acknowledges several limitations inherent in its methodology and scope. Firstly, the restriction to articles published in English may exclude valuable insights from non-English studies, potentially introducing a language bias. Similarly, the time frame, which limits the review to studies published between 2014 and 2024, could omit earlier foundational works or recent studies that are not yet indexed in databases.
The inclusion of gray literature seeks to fill gaps in peer-reviewed publications but may introduce variability in the quality and rigor of the included sources. Additionally, the diversity of studies on generative AI in case-based learning (CBL) across various contexts and educational levels may create challenges in synthesizing findings and identifying generalizable themes.
As a scoping review, the emphasis is on mapping the breadth of evidence rather than performing a detailed appraisal or synthesis of study quality. This approach limits the ability to draw definitive conclusions about the effectiveness or impact of generative AI on educational outcomes. Furthermore, reliance on available published data implies that unpublished or proprietary applications of generative AI in CBL may not be included, potentially underrepresenting the full scope of current practice.
Finally, the novelty of generative AI technologies in medical education means that the body of evidence may still be emerging and incomplete. As such, findings should be interpreted cautiously, and their applicability across various educational settings might differ. Despite these limitations, the review offers a valuable foundation for understanding the potential of generative AI in CBL and emphasizes critical areas for future research and practice.

3.5. Implications for Future Research

By disseminating our protocol, we seek to instigate scholarly conversations regarding the application of generative AI in CBL, and subsequently work on developing a comprehensive scoping review that synthesizes the existing literature, identifies critical research gaps, and provides evidence-based key recommendations for the integration of this technology into various levels of medical education. Rigorous, long-term studies and original research articles will be crucial to assess the effectiveness of this technology, particularly its influence on learners’ engagement, knowledge acquisition, and enhancement of clinical reasoning. Future research should focus on pedagogical design and implementation strategies while concurrently evaluating the acceptability and usability of generative AI among both educators and learners, to ultimately provide a nuanced understanding of its potential. Lastly, comparative studies of traditional CBL and AI-assisted cases could provide evidence of their relative efficacy and contribution to educational outcomes.

Appendix A. Draft of Initial Search Strategy in Some Databases

This appendix presents the initial draft of the search strategy for identifying relevant literature on generative AI applications in case-based learning (CBL) within medical education. The search will be conducted across major electronic databases, including PubMed, Scopus, Web of Science, EMBASE, ERIC, CINAHL, and Google Scholar (for gray literature). This strategy will be refined in collaboration with an expert health sciences librarian to ensure comprehensive and systematic topic coverage.
All queries will be limited to articles and reviews published in English between 2014 and 2024, with concept blocks combined using AND.
  • PubMed: ("Generative AI"[MeSH] OR "Generative Artificial Intelligence"[MeSH] OR "Large Language Model"[MeSH] OR "ChatGPT"[MeSH]) AND ("Clinical Case"[MeSH] OR "Case-Based Learning"[MeSH] OR "Medical Education"[MeSH] OR "Medical Training"[MeSH]) AND ("2014/01/01"[Date - Publication] : "2024/12/31"[Date - Publication]) AND English[Language] AND (Journal Article[Publication Type] OR Review[Publication Type])
  • Scopus: ( TITLE-ABS-KEY ( "generative ai" ) OR TITLE-ABS-KEY ( chatgpt ) OR TITLE-ABS-KEY ( "large language model" ) ) AND ( TITLE-ABS-KEY ( "clinical case" ) OR TITLE-ABS-KEY ( "medical education" ) OR TITLE-ABS-KEY ( "case-based learning" ) ) AND PUBYEAR > 2013 AND ( LIMIT-TO ( LANGUAGE , "English" ) ) AND ( LIMIT-TO ( DOCTYPE , "ar" ) OR LIMIT-TO ( DOCTYPE , "re" ) )
  • EMBASE: ('generative ai'/exp OR 'generative ai':ab,ti OR chatgpt:ab,ti OR 'large language model':ab,ti) AND ('clinical case':ab,ti OR 'medical education':ab,ti OR 'case based learning':ab,ti OR 'healthcare training':ab,ti) AND [2014-2024]/py AND [english]/lim AND ([article]/lim OR [review]/lim)
  • CINAHL: ("Generative AI" OR "ChatGPT" OR "AI" OR "Deep Learning") AND ("Clinical Case" OR "Case-Based Learning" OR "CBL" OR "Medical Education" OR "Clinical Training" OR "Health Education") AND PY 2014-2024 AND LA "English" AND (PT "Journal Article" OR PT "Review")

Appendix B. Preliminary Data Charting Form

Reviewer name:
Date of review:
Additional notes or comments:
Study characteristics:
  • Study ID
  • Year of publication
  • Country of origin
  • Journal name
  • Author(s)
  • Study title
  • Study design
  • Sample size
  • Educational context (e.g., academic, clinical)
  • Study duration
Population details:
  • Demographic data (e.g., age, gender)
  • Type of participants (students, healthcare professionals)
  • Level of training (undergraduate, postgraduate, or continuing professional development)
Generative AI model (intervention) characteristics:
  • Specific models used (e.g., ChatGPT-4 or ChatGPT-3)
  • Language used
  • Platform utilized (if any)
  • Topic/focus of CBL
  • Learning objectives of the clinical cases
  • Duration and frequency of the clinical cases
Educational outcomes:
  • Outcomes for students (e.g., learner engagement, knowledge retention, clinical reasoning, and problem-solving)
  • Evaluation metrics (self-reported surveys or any other measurement tool)
  • Time points of measurements
Relevant key findings:
  • Main reported results related to the intervention
  • Any statistical analysis reported
  • Thematic analysis (if any)
Challenges and limitations (reported by authors):
  • Ethical concerns
  • Technical barriers
  • Pedagogical limitations
Future implications (proposed by authors):
  • Recommendations for future research
  • Implications for clinical practice
  • Any identified gaps and suggestions

References

  1. Arksey, H., and L. O'Malley. 2005. Scoping studies: towards a methodological framework. International Journal of Social Research Methodology 8, 1: 19–32.
  2. Berbenyuk, A., L. Powell, and N. Zary. 2024. Feasibility and Educational Value of Clinical Cases Generated Using Large Language Models. Stud Health Technol Inform 316: 1524–1528.
  3. de Oliveira, M. A. C., A. Miles, and J. E. Asbridge. 2024. Modern medical schools curricula: Necessary innovations and priorities for change. J Eval Clin Pract 30, 2: 162–173.
  4. Hale, J., S. Alexander, S. T. Wright, and K. Gilliland. 2024. Generative AI in Undergraduate Medical Education: A Rapid Review. Journal of Medical Education and Curricular Development 11: 23821205241266697.
  5. Harrison, H., S. J. Griffin, I. Kuhn, and J. A. Usher-Smith. 2020. Software tools to support title and abstract screening for systematic reviews in healthcare: an evaluation. BMC Med Res Methodol 20, 1: 7.
  6. Jowsey, T., J. Stokes-Parish, R. Singleton, and M. Todorovic. 2023. Medical education empowered by generative artificial intelligence large language models. Trends Mol Med 29, 12: 971–973.
  7. Kaur, G., J. Rehncy, K. S. Kahal, J. Singh, V. Sharma, P. S. Matreja, and H. Grewal. 2020. Case-Based Learning as an Effective Tool in Teaching Pharmacology to Undergraduate Medical Students in a Large Group Setting. J Med Educ Curric Dev 7: 2382120520920640.
  8. Levac, D., H. Colquhoun, and K. K. O'Brien. 2010. Scoping studies: advancing the methodology. Implement Sci 5: 69.
  9. McGowan, J., M. Sampson, D. M. Salzwedel, E. Cogo, V. Foerster, and C. Lefebvre. 2016. PRESS Peer Review of Electronic Search Strategies: 2015 Guideline Statement. J Clin Epidemiol 75: 40–46.
  10. McLean, S. F. 2016. Case-Based Learning and its Application in Medical and Health-Care Fields: A Review of Worldwide Literature. J Med Educ Curric Dev 3.
  11. Peters, M. D., C. M. Godfrey, H. Khalil, P. McInerney, D. Parker, and C. B. Soares. 2015. Guidance for conducting systematic scoping reviews. Int J Evid Based Healthc 13, 3: 141–146.
  12. Preiksaitis, C., and C. Rose. 2023. Opportunities, Challenges, and Future Directions of Generative Artificial Intelligence in Medical Education: Scoping Review. JMIR Med Educ 9: e48785.
  13. Sultana, T. S., R. M. Gite, D. A. Tawde, C. Jena, K. Khatoon, and M. Kapoor. 2024. Advancing Healthcare Education: A Comprehensive Review of Case-based Learning. Indian Journal of Continuing Nursing Education 25, 1: 36–41.
  14. Thistlethwaite, J. E., D. Davies, S. Ekeocha, J. M. Kidd, C. MacDougall, P. Matthews, J. Purkis, and D. Clay. 2012. The effectiveness of case-based learning in health professional education. A BEME systematic review: BEME Guide No. 23. Med Teach 34, 6: e421–e444.
  15. Tricco, A. C., E. Lillie, W. Zarin, K. K. O'Brien, H. Colquhoun, D. Levac, D. Moher, M. D. J. Peters, T. Horsley, L. Weeks, S. Hempel, E. A. Akl, C. Chang, J. McGowan, L. Stewart, L. Hartling, A. Aldcroft, M. G. Wilson, C. Garritty, and S. E. Straus. 2018. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann Intern Med 169, 7: 467–473.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.
