Preprint Article, Version 1. Preserved in Portico. This version is not peer-reviewed.

Evaluating ChatGPT Efficacy in Navigating the Spanish Medical Residency Entrance Examination (MIR): A New Horizon for AI in Clinical Medicine

Version 1: Received: 18 September 2023 / Approved: 19 September 2023 / Online: 19 September 2023 (08:34:31 CEST)

How to cite: Guillen-Grima, F.; Guillen-Aguinaga, S.; Guillen-Aguinaga, L.; Alas-Brun, R.; Onambele, L.; Ortega, W.; Montejo, R.; Aguinaga-Ontoso, E.; Barach, P.; Aguinaga-Ontoso, I. Evaluating ChatGPT Efficacy in Navigating the Spanish Medical Residency Entrance Examination (MIR): A New Horizon for AI in Clinical Medicine. Preprints 2023, 2023091272. https://doi.org/10.20944/preprints202309.1272.v1

Abstract

The rapid progress in artificial intelligence, machine learning, and natural language processing has led to increasingly sophisticated large language models (LLMs), enabling their use in healthcare. This study assesses the performance of two LLMs, GPT-3.5 and GPT-4, on the Spanish medical residency entrance examination (MIR), which grants access to medical specialist training in Spain. Our objectives included gauging the models' overall performance, analyzing discrepancies across medical specialties, distinguishing between theoretical and practical questions, estimating error proportions, and assessing the hypothetical severity of the errors had they been committed by a physician. We studied the 2022 Spanish MIR examination after excluding questions requiring image evaluation or with acknowledged errors. The remaining 182 questions were presented to GPT-4 and GPT-3.5 in Spanish and English. Logistic regression models analyzed the relationships between question length, question sequence, and performance. GPT-4 outperformed GPT-3.5, scoring 86.81% in Spanish (p<0.001). Performance was slightly higher on the English translations. Across medical specialties, GPT-4 achieved a 100% correct response rate in several areas, whereas specialties such as Pharmacology, ICU, and Infectious Diseases showed lower performance. The error analysis revealed an overall error rate of 13.2%, but the gravest categories, "error requiring intervention to sustain life" and "error resulting in death", had a 0% rate. Conclusions: GPT-4 performs robustly on the Spanish MIR examination, although its ability to apply knowledge varies across specialties. While the model's high success rate is commendable, understanding error severity is critical, especially when considering AI's potential role in real-world medical practice and its implications for patient safety.
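To make the statistical analysis mentioned above more concrete, the sketch below shows one way a logistic regression relating question length and question sequence to answer correctness could be set up in Python with statsmodels. This is a minimal illustration, not the authors' code: the data frame, its column names (correct, question_length, sequence_position), and the example values are assumptions introduced here purely for demonstration.

```python
# Hypothetical sketch (not the authors' analysis code): logistic regression of
# answer correctness on question length and position in the exam sequence.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative data; column names and values are assumptions for this sketch.
df = pd.DataFrame({
    "correct": [1, 1, 0, 1, 0, 1, 1, 1, 0, 1],                            # 1 = model answered correctly
    "question_length": [42, 130, 210, 95, 310, 60, 150, 280, 75, 120],    # length in characters
    "sequence_position": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],                 # order within the exam
})

# Fit P(correct) as a function of question length and sequence position.
model = smf.logit("correct ~ question_length + sequence_position", data=df).fit()
print(model.summary())

# Exponentiated coefficients give odds ratios, which are easier to interpret.
print(np.exp(model.params))
```

With the real exam data, the coefficients and their p-values would indicate whether longer questions or questions appearing later in the sequence were associated with a lower probability of a correct answer.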

Keywords

Machine learning; Artificial intelligence; ChatGPT; GPT-3.5; GPT-4; Medical education; Quality of care

Subject

Medicine and Pharmacology, Clinical Medicine
