Preprint Article | Version 1 | Preserved in Portico | This version is not peer-reviewed

Beyond Human Understanding: Benchmarking Language Models for Polish Cardiology Expertise

Version 1 : Received: 15 September 2023 / Approved: 15 September 2023 / Online: 18 September 2023 (13:37:48 CEST)

How to cite: Wojcik, S.; Rulkiewicz, A.; Pruszczyk, P.; Lisik, W.; Poboży, M.; Pilchowska, I.; Domienik-Karlowicz, J. Beyond Human Understanding: Benchmarking Language Models for Polish Cardiology Expertise. Preprints 2023, 2023091100. https://doi.org/10.20944/preprints202309.1100.v1

Abstract

The growing dependence on large language models (LLMs) highlights the urgent need to deepen trust in these technologies. Regular, rigorous validation of their expertise, especially in nuanced and intricate scenarios, is essential to ensure their readiness for clinical applications. Our study pioneers the exploration of LLM utility in the field of cardiology. We stand at the cusp of a transformative era in which mature AI and LLMs, notably ChatGPT, GPT-4, and Google Bard, are poised to influence healthcare significantly. Recently, we put three available LLMs, OpenAI's ChatGPT-3.5, GPT-4.0, and Google's Bard, to the test against a major Polish medical specialization licensing exam (PES). The exam covers the scope of completed specialist training, focusing on diagnostic and therapeutic procedures and excluding invasive medical procedures and interventions. In our analysis, GPT-4 consistently outperformed the others, ranking first, with Google Bard and ChatGPT-3.5 following, respectively. These performance metrics underscore GPT-4's notable potential in medical applications. Given a score improvement of over 23.5% between two AI models released just four months apart, clinicians must stay informed about these rapidly evolving tools and their potential applications to clinical practice. Our results provide a snapshot of the current capabilities of these models, highlighting the nuanced performance differences that emerge when they are confronted with identical questions.
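
To make the benchmarking protocol concrete, below is a minimal sketch (not the authors' actual pipeline) of how such an evaluation could be scripted: the same multiple-choice questions are posed to each model, single-letter answers are extracted, and accuracy is compared. The question data, prompt wording, and scoring rule are illustrative assumptions; the sketch uses the OpenAI Python client, and a Bard/PaLM client would follow the same pattern.

    # Hypothetical sketch of a multiple-choice exam benchmark for LLMs.
    # Assumes the OpenAI Python client (>= 1.0) and OPENAI_API_KEY set in
    # the environment. Real PES questions are not reproduced here.
    from openai import OpenAI

    client = OpenAI()

    # Illustrative question format (stems, options, and keys are placeholders).
    questions = [
        {
            "stem": "Which drug is first-line for ...?",
            "options": {"A": "...", "B": "...", "C": "...", "D": "...", "E": "..."},
            "answer": "B",
        },
    ]

    def ask(model: str, q: dict) -> str:
        """Pose one multiple-choice question; return the model's letter choice."""
        options = "\n".join(f"{k}. {v}" for k, v in q["options"].items())
        prompt = f"{q['stem']}\n{options}\nAnswer with a single letter (A-E) only."
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # reduce sampling variance for scoring
        )
        return resp.choices[0].message.content.strip()[:1].upper()

    def score(model: str) -> float:
        """Fraction of questions answered correctly."""
        correct = sum(ask(model, q) == q["answer"] for q in questions)
        return correct / len(questions)

    for model in ("gpt-3.5-turbo", "gpt-4"):
        print(model, f"{score(model):.1%}")

Holding the prompt and decoding settings fixed across models, as above, is what makes the per-model accuracies directly comparable when the models receive identical questions.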

Keywords

ChatGPT; Google Bard; innovations; AI in medicine; health IT; artificial intelligence; large language model; medical education; language processing; virtual teaching assistant

Subject

Public Health and Healthcare, Public Health and Health Services
