Preprint Article, Version 1 (not peer-reviewed; preserved in Portico)

COVID-DenseNet: A Deep Learning Architecture to Detect COVID-19 from Chest Radiology Images

Version 1 : Received: 7 May 2020 / Approved: 9 May 2020 / Online: 9 May 2020 (04:54:34 CEST)
Version 2 : Received: 14 January 2021 / Approved: 15 January 2021 / Online: 15 January 2021 (12:59:20 CET)
Version 3 : Received: 12 February 2022 / Approved: 18 February 2022 / Online: 18 February 2022 (14:44:55 CET)

How to cite: Sarker, L.; Islam, M.M.; Hannan, T.; Ahmed, Z. COVID-DenseNet: A Deep Learning Architecture to Detect COVID-19 from Chest Radiology Images. Preprints 2020, 2020050151. https://doi.org/10.20944/preprints202005.0151.v1

Abstract

Coronavirus disease (COVID-19) is a pandemic infectious disease with a severe risk of rapid spread. Quick identification and isolation of affected persons is the first step in fighting the virus, and chest radiology images have proven to be an effective screening approach for COVID-19 patients. A number of AI-based solutions have been developed to make the screening of radiological images faster and more accurate in detecting COVID-19. In this study, we propose a deep learning approach based on DenseNet-121 to effectively detect COVID-19 patients. We incorporate transfer learning to leverage radiology-image features learned by another model (CheXNet), which was trained on a large radiology dataset of 112,120 images. We trained and tested our model on the COVIDx dataset, containing 13,800 chest radiography images across 13,725 patients. To check the robustness of our model, we performed both two-class and three-class classification and achieved 96.49% and 93.71% accuracy, respectively. To further validate the consistency of our performance, we performed patient-wise k-fold cross-validation and achieved an average accuracy of 92.91% for the three-class task. Moreover, we performed an interpretability analysis using Grad-CAM to highlight the image regions most important to a prediction. Besides ensuring trustworthiness, this explainability can also provide new insights about critical factors regarding COVID-19. Finally, we developed a website that takes chest radiology images as input and generates probabilities of the presence of COVID-19 or pneumonia, along with a heatmap highlighting the probable infected regions. Code and model weights are available.

Keywords

deep learning; CNN; DenseNet; COVID-19; transfer learning

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning

