Working Paper, Version 1. This version is not peer-reviewed.

Do GPs Trust Artificial Intelligence Insights and What Could This Mean for Patient Care? A Case Study on GPs' Skin Cancer Diagnosis in the UK

Version 1 : Received: 30 April 2021 / Approved: 3 May 2021 / Online: 3 May 2021 (10:40:01 CEST)
Version 2 : Received: 4 May 2021 / Approved: 7 May 2021 / Online: 7 May 2021 (12:58:42 CEST)

How to cite: Micocci, M.; Borsci, S.; Thakerar, V.; Walne, S.; Manshadi, Y.; Edridge, F.; Mullarkey, D.; Buckle, P.; Hanna, G. Do GPs Trust Artificial Intelligence Insights and What Could This Mean for Patient Care? A Case Study on GPs' Skin Cancer Diagnosis in the UK. Preprints 2021, 2021050005.

Abstract

Every year, General Practitioners (GPs) see over 13 million patients for dermatological concerns, making dermatology the highest-referring speciality. Artificial Intelligence (AI) systems could improve system efficiency by supporting clinicians in making appropriate referrals, but such systems are, like human clinicians, imperfect: there may be a trade-off between sensitivity and specificity that is likely to result in false negatives. In this paper, a study is presented that explores two areas: first, GPs' aptitude to appropriately trust (or distrust) the outputs of a fictitious AI-based decision support tool when assessing skin lesions; second, the individual characteristics that could make GPs less prone to adhere to erroneous diagnostic results and more able to refrain from passive adherence to AI. Findings suggest that when the AI is correct, there is a positive effect on GPs' performance and confidence, suggesting the potential to reduce referrals for benign lesions. However, when an inexperienced GP is presented with a false-negative result, they may passively deviate from their initial clinical judgement and accept the wrong diagnosis provided. Any AI system will have a false-negative rate; when adopting new technologies, this needs to be acknowledged and fed into risk-benefit discussions and considerations around additional safety measures.

Keywords

Artificial Intelligence; Trust; Passive Adherence; Human Factors

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
