Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

The Thrills and Chills of ChatGPT: Implications for Assessments in Undergraduate Dental Education

Version 1 : Received: 23 February 2023 / Approved: 28 February 2023 / Online: 28 February 2023 (08:18:33 CET)

How to cite: Ali, K.; Barhom, N.; Marino, F.T.; Duggal, M. The Thrills and Chills of ChatGPT: Implications for Assessments in Undergraduate Dental Education. Preprints 2023, 2023020513. https://doi.org/10.20944/preprints202302.0513.v1

Abstract

Background and Purpose: Open-source artificial intelligence (AI) applications are rapidly transforming access to information, enabling students to prepare assignments and obtain reasonably accurate responses to a wide range of examination questions routinely used in student assessments across disciplines, including undergraduate dental education. This study aims to evaluate the performance of ChatGPT, an AI-based application, on a wide range of dental assessments and to discuss the implications for undergraduate dental education.

Methods: This was an exploratory study investigating the accuracy of ChatGPT in attempting a range of recognized assessments in undergraduate dental curricula. ChatGPT was used to attempt ten items in each of five commonly used question formats: single-best-answer (SBA) multiple-choice questions (MCQs), short answer questions (SAQs), short essay questions (SEQs), true/false questions, and fill-in-the-blank items. In addition, ChatGPT was used to generate reflective reports based on multisource feedback (MSF), research methodology, and critical appraisal of the literature.

Results: ChatGPT provided accurate responses to the majority of knowledge-based assessments comprising MCQs, SAQs, SEQs, true/false, and fill-in-the-blank items. However, it was only able to answer text-based questions and could not process questions based on images. Responses generated for written assignments were also of good quality, apart from those for critical appraisal of the literature. Word count was the key limitation observed in ChatGPT's outputs, as it was only able to produce reports of approximately 650 words.

Conclusion: Notwithstanding their current limitations, AI-based applications have the potential to revolutionize virtual learning. Instead of treating them as a threat, dental educators need to adapt teaching and assessments in dental education to the benefit of learners while mitigating dishonest use of AI-based applications.

Keywords

ChatGPT; artificial intelligence; chatbot; education technology; machine learning; dental education; natural language processing

Subject

Medicine and Pharmacology, Dentistry and Oral Surgery

