Preprint (Review). This version is not peer-reviewed.

From Data to Diagnosis: A Deep Dive into Deep Learning for COVID-19 Detection

Submitted: 09 December 2024
Posted: 17 December 2024


Abstract
The COVID-19 pandemic has highlighted the need for rapid and accurate diagnostic tools to manage and contain infectious diseases effectively. Deep learning (DL) has emerged as a promising solution for COVID-19 detection, particularly through the analysis of medical imaging modalities such as chest X-rays (CXR) and computed tomography (CT) scans. This paper provides a comprehensive review of the state-of-the-art in deep learning for COVID-19 detection, covering key datasets, methodologies, challenges, and future directions. The review discusses widely used datasets, such as the COVID-19 Image Data Collection, BIMCV-COVID19+, and MosMedData, highlighting their strengths and limitations. Common challenges include the scarcity of annotated data, class imbalance, and variations in imaging protocols, all of which hinder model robustness and generalizability. Additionally, issues such as model interpretability, clinical validation, and regulatory approval are examined, emphasizing the importance of explainability and real-world applicability. Promising advancements in synthetic data generation, data augmentation, explainable AI, and federated learning are explored as potential solutions to these challenges. Furthermore, multi-modal and multi-task learning approaches are discussed as avenues for creating comprehensive diagnostic systems that integrate imaging data with clinical and demographic information. The paper also underscores the importance of seamless integration of AI tools into clinical workflows to ensure their practical use in healthcare settings. This review concludes by discussing the broader implications of these technologies beyond COVID-19, emphasizing their potential to revolutionize medical diagnostics and prepare for future public health challenges. By addressing current limitations and leveraging collaborative efforts, deep learning can play a pivotal role in enhancing global healthcare systems, ensuring better outcomes for patients and clinicians alike.

1. Overview

The emergence of COVID-19, caused by the SARS-CoV-2 virus, has escalated into a worldwide public health emergency since its initial detection in late 2019. Characterized by high transmissibility and potentially severe outcomes, the pandemic has overwhelmed healthcare systems globally, underscoring the importance of swift diagnosis and isolation of cases to mitigate its spread [1]. The reverse transcription-polymerase chain reaction (RT-PCR) test, while considered the diagnostic benchmark, is constrained by factors such as cost, availability, and dependence on specialized facilities. Moreover, its accuracy is influenced by variables like sampling techniques and the timing of specimen collection [2].
Given these constraints, alternative diagnostic tools have garnered attention, particularly imaging modalities like chest X-rays (CXR) and computed tomography (CT) scans. These methods highlight characteristic pulmonary anomalies associated with COVID-19, including ground-glass opacities and consolidation patterns, aiding in the identification of infected individuals [3]. However, manually interpreting these images is labor-intensive and requires specialized expertise, which may not be readily available in resource-limited settings or during peak demand [4]. This necessitates the exploration of automated and efficient solutions for analyzing medical images.
Deep learning, a branch of artificial intelligence leveraging multi-layered neural networks to capture intricate patterns in data, has demonstrated immense potential in automating medical image analysis. Over the past decade, deep learning has transformed domains such as computer vision and healthcare, showing notable success in COVID-19 detection tasks. Convolutional neural networks (CNNs), in particular, have achieved high accuracy in analyzing CXR and CT images, often rivaling expert radiologists [5]. Unlike traditional approaches reliant on manually engineered features, deep learning models autonomously learn complex representations from raw data, making them ideal for detecting subtle COVID-19-induced abnormalities.
This work presents a detailed review of deep learning methodologies applied to COVID-19 detection using medical imaging. We examine diverse architectures, including CNNs, recurrent neural networks (RNNs), generative adversarial networks (GANs), and transformers, discussing their respective advantages and limitations [6]. The insights offered here aim to guide the development of effective and adaptable models for diagnostic applications.

1.1. Challenges in Deep Learning Applications

Despite the potential of deep learning, several hurdles remain. A primary challenge is the scarcity of large, annotated datasets due to privacy regulations and the sensitive nature of medical data [7]. To address this, researchers have employed strategies such as data augmentation, transfer learning, and synthetic data generation. Additionally, the interpretability of deep learning models is critical for clinical acceptance. Tools like saliency maps and Grad-CAM provide some level of explanation for model decisions, but more transparent methods are needed [8].
Generalization is another key issue. Models trained on specific datasets often struggle to perform consistently across diverse populations or imaging devices, necessitating techniques such as domain adaptation and federated learning to enhance robustness [9].

1.2. Objectives and Structure

This paper seeks to provide a comprehensive analysis of recent advancements in deep learning for COVID-19 detection, focusing on models, datasets, and evaluation metrics. We explore promising techniques, highlight current gaps, and propose directions for future research. Emerging trends, such as multimodal diagnostics and federated learning, are also discussed [10].
The remainder of this paper is organized as follows: Section 2 explores deep learning architectures used in COVID-19 detection. Section 3 reviews key datasets and strategies for addressing their limitations. Section 4 outlines persistent challenges and suggests future research directions, and Section 5 concludes the review.
By synthesizing current knowledge, this survey aims to support researchers and practitioners in advancing the deployment of deep learning for COVID-19 diagnostics.

2. Deep Learning Architectures in COVID-19 Detection

Deep learning has emerged as a transformative tool for analyzing medical images, playing a crucial role in the detection of COVID-19 from modalities such as chest X-rays (CXR) and computed tomography (CT) scans. This section explores the primary architectures employed in this domain, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and Transformers. Each architecture is discussed in terms of its unique capabilities and contributions to COVID-19 detection.

2.1. Convolutional Neural Networks (CNNs)

Convolutional Neural Networks (CNNs) are a cornerstone in image-based applications due to their proficiency in capturing spatial hierarchies within images. They have shown exceptional performance in identifying COVID-19-induced patterns such as ground-glass opacities and consolidation in CXR and CT images [11]. CNN architectures like ResNet, DenseNet, and Inception have been extensively adapted for this task, demonstrating high accuracy in distinguishing COVID-19 from other respiratory conditions [12,13].
Advanced CNN techniques include transfer learning, where models pretrained on large datasets like ImageNet are fine-tuned for COVID-19-specific tasks, effectively addressing the challenge of limited labeled data [14]. Additionally, attention mechanisms integrated into CNNs have further improved their ability to localize critical lung regions for accurate diagnosis [3].
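As an illustration of this transfer-learning recipe, the following minimal PyTorch sketch (not drawn from any of the cited studies; the backbone, learning rate, and two-class head are assumptions made for illustration) loads an ImageNet-pretrained ResNet-50, freezes its convolutional backbone, and fine-tunes a new classification head on CXR batches.

# Illustrative sketch: fine-tuning an ImageNet-pretrained ResNet-50 for
# binary COVID-19 vs. non-COVID CXR classification with PyTorch.
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-50 with ImageNet weights and replace the classification head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False                    # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)      # new head: COVID-19 vs. non-COVID

# Only the new head is optimized at first.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One gradient step on a batch of (N, 3, 224, 224) CXR tensors."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

In practice, progressively unfreezing deeper residual blocks once the new head has converged is a common refinement of this recipe.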

2.2. Recurrent Neural Networks (RNNs)

While RNNs are traditionally applied to sequential data, their utility in multi-modal COVID-19 detection tasks has been noteworthy. Architectures such as Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs) excel in tracking temporal changes in disease progression through sequential imaging or integrating clinical and imaging data for comprehensive diagnostics [15,16]. CNN-RNN hybrids combine the spatial analysis of CNNs with the temporal modeling capabilities of RNNs, enabling robust prediction of disease progression and treatment outcomes [17].
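A minimal sketch of such a CNN-RNN hybrid is shown below, assuming a ResNet-18 encoder applied to each scan in a follow-up sequence and an LSTM over the resulting feature vectors; the architecture and dimensions are illustrative rather than a reproduction of any cited model.

# Illustrative sketch: a CNN-LSTM hybrid that encodes each scan in a temporal
# sequence with a CNN backbone and models progression with an LSTM.
import torch
import torch.nn as nn
from torchvision import models

class CNNLSTM(nn.Module):
    def __init__(self, hidden_size=256, num_classes=2):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()            # keep the 512-d pooled features
        self.encoder = backbone
        self.lstm = nn.LSTM(512, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x):                      # x: (batch, time, 3, H, W)
        b, t = x.shape[:2]
        feats = self.encoder(x.flatten(0, 1))  # (batch*time, 512)
        feats = feats.view(b, t, -1)
        _, (h, _) = self.lstm(feats)           # h: (1, batch, hidden)
        return self.head(h[-1])                # predict from the last hidden state

logits = CNNLSTM()(torch.randn(2, 4, 3, 224, 224))   # e.g., 4 follow-up scans per patient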

2.3. Generative Adversarial Networks (GANs)

Generative Adversarial Networks (GANs) address the issue of limited data availability by generating realistic synthetic images for training purposes [18]. Conditional GANs (cGANs) have been particularly effective in balancing datasets by producing images with varying infection severity levels [19]. Moreover, GANs have been applied in domain adaptation, enhancing model performance across diverse imaging settings by simulating variations in data distribution [20].
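The sketch below illustrates the conditional-generation idea with a deliberately small fully connected generator that produces 64x64 grayscale images conditioned on a hypothetical three-level severity label; real cGANs for CXR synthesis use convolutional generators and a paired discriminator, which are omitted here for brevity.

# Illustrative sketch: a minimal conditional GAN generator producing
# 64x64 grayscale CXR-like images conditioned on a severity label.
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    def __init__(self, z_dim=100, n_classes=3, img_size=64):
        super().__init__()
        self.label_emb = nn.Embedding(n_classes, n_classes)
        self.net = nn.Sequential(
            nn.Linear(z_dim + n_classes, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, img_size * img_size), nn.Tanh(),
        )
        self.img_size = img_size

    def forward(self, z, labels):
        x = torch.cat([z, self.label_emb(labels)], dim=1)
        return self.net(x).view(-1, 1, self.img_size, self.img_size)

# Generate 8 synthetic "moderate severity" images to help rebalance a training set.
g = CondGenerator()
fake = g(torch.randn(8, 100), torch.full((8,), 1, dtype=torch.long))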

2.4. Transformers

Originally developed for natural language processing, Transformers have recently demonstrated significant potential in computer vision tasks, including COVID-19 detection. Vision Transformers (ViTs) utilize self-attention mechanisms to capture global dependencies in imaging data, complementing the local feature extraction strengths of CNNs [21]. Hybrid models integrating CNNs and Transformers combine these advantages, offering enhanced accuracy and interpretability [22]. Furthermore, Transformers facilitate multi-modal integration, combining imaging with clinical data for more comprehensive diagnostic models [23].
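For illustration, the snippet below adapts torchvision's pretrained ViT-B/16 to a two-class COVID-19 classification task; the choice of backbone and the single linear head are assumptions made for the example.

# Illustrative sketch: adapting a pretrained Vision Transformer for
# two-class COVID-19 CXR classification.
import torch
import torch.nn as nn
from torchvision import models

vit = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
vit.heads = nn.Linear(vit.hidden_dim, 2)            # replace the classification head

with torch.no_grad():
    logits = vit(torch.randn(1, 3, 224, 224))       # ViT-B/16 expects 224x224 RGB input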

2.5. Hybrid Architectures

The increasing complexity of COVID-19 detection tasks has driven the development of hybrid models that merge multiple deep learning paradigms. CNN-RNN hybrids excel in capturing spatial and temporal patterns, while CNN-Transformer models leverage both local and global features [24]. GAN-CNN combinations, on the other hand, address data scarcity by using GAN-generated images to train CNN classifiers, significantly improving detection performance in low-data scenarios [25].
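The following sketch shows one simple way such a CNN-Transformer hybrid can be assembled: a ResNet stem produces a spatial feature map whose positions are treated as tokens for a Transformer encoder. The specific depths and dimensions are illustrative assumptions, not a published architecture.

# Illustrative sketch: CNN features as tokens for a Transformer encoder.
import torch
import torch.nn as nn
from torchvision import models

class CNNTransformer(nn.Module):
    def __init__(self, num_classes=2, d_model=512, n_heads=8, n_layers=2):
        super().__init__()
        resnet = models.resnet18(weights=None)
        self.stem = nn.Sequential(*list(resnet.children())[:-2])   # (B, 512, 7, 7)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x):                        # x: (B, 3, 224, 224)
        f = self.stem(x)                         # local CNN features
        tokens = f.flatten(2).transpose(1, 2)    # (B, 49, 512) spatial tokens
        tokens = self.encoder(tokens)            # global self-attention over tokens
        return self.head(tokens.mean(dim=1))     # pool tokens, then classify

logits = CNNTransformer()(torch.randn(2, 3, 224, 224))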

2.6. Conclusion on Architectures

Among the discussed architectures, CNNs remain the most widely employed due to their efficiency and accuracy in image-based tasks. However, Transformers and hybrid models show immense promise, particularly in scenarios requiring integration of diverse data types or addressing complex imaging patterns. GANs complement these efforts by enriching training datasets and enhancing model robustness. The subsequent section explores the datasets and preprocessing techniques pivotal to the success of these architectures.

3. Datasets for COVID-19 Detection

Datasets play a pivotal role in the development and evaluation of deep learning models for COVID-19 detection. The availability and quality of datasets directly influence model performance, robustness, and generalizability. Several datasets were created during the pandemic to support research in this area, focusing on chest X-ray (CXR) and computed tomography (CT) images. However, these datasets face challenges such as limited size, lack of diversity, and variability in annotation quality. This section reviews the key datasets, their sources, associated challenges, and strategies to address these limitations.

3.1. Chest X-Ray (CXR) Datasets

CXR imaging is widely used in COVID-19 diagnosis due to its accessibility and cost-effectiveness. Various datasets have been curated to support deep learning research in this field.

COVID-19 Image Data Collection

This dataset, compiled by Cohen et al., was among the first publicly available collections for COVID-19 detection [26]. It aggregates CXR images from multiple sources, including research articles and open repositories. Despite its importance, the dataset’s small sample size and lack of class balance limit its effectiveness for training robust models.

BIMCV-COVID19+

The Valencia Region Medical Image Bank curated this dataset, which includes both CXR and CT images, along with clinical data such as severity scores [27]. Although rich in metadata, the dataset suffers from an imbalance in COVID-19 and non-COVID-19 cases, potentially skewing model performance.
To overcome these limitations, data augmentation methods (e.g., rotation, scaling, and color transformation) and synthetic data generation techniques (e.g., GANs) are commonly employed [28,29].
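A minimal example of such an augmentation pipeline, expressed with torchvision transforms for grayscale CXR images, is given below; the rotation angles, crop scales, and normalization statistics are arbitrary choices made for illustration.

# Illustrative sketch: on-the-fly CXR augmentation with torchvision transforms.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),           # match 3-channel backbones
    transforms.RandomRotation(degrees=10),                 # small random rotations
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),   # random scaling and cropping
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # intensity variation
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])
# Applied during training, e.g. via
# torchvision.datasets.ImageFolder("cxr/train", transform=train_transform).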

3.2. Computed Tomography (CT) Datasets

CT imaging offers more detailed insights into COVID-19-related abnormalities, such as ground-glass opacities and consolidations, making CT datasets crucial for advanced tasks like segmentation and severity assessment.

COVID-CT Dataset

This dataset comprises annotated CT images from confirmed COVID-19 cases and other conditions, extracted from case reports and publications [30]. It is widely used for training segmentation and classification models.

SARS-CoV-2 CT Dataset

Featuring balanced classes of COVID-19-positive and negative scans, this dataset facilitates binary classification tasks [31].

MosMedData

Curated by Russian research institutions, MosMedData contains CT images with severity annotations, supporting models aimed at assessing disease progression [32].

MIDRC Dataset

The Medical Imaging and Data Resource Center (MIDRC) initiative provides diverse CXR and CT datasets, aiming to standardize imaging protocols and support large-scale research [33].
CT datasets are limited by high acquisition costs, radiation exposure concerns, and variability in imaging protocols across institutions. Techniques like domain adaptation and transfer learning help address these challenges [34].

3.3. Challenges in Data Collection and Annotation

Data Scarcity and Privacy Concerns

The initial scarcity of COVID-19 cases and stringent privacy regulations have restricted dataset availability. Medical imaging data is often subject to confidentiality laws, complicating data sharing [35].

Annotation Limitations

High-quality annotations, such as segmentation masks, require expertise from radiologists, making the process costly and time-intensive [36]. Active learning approaches can mitigate this by focusing annotation efforts on the most informative samples [37].
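The sketch below illustrates the core of uncertainty-based active learning: unlabeled images are ranked by the predictive entropy of the current model, and only the most uncertain ones are forwarded to a radiologist for annotation. The loader, model, and annotation budget are hypothetical placeholders.

# Illustrative sketch: select the most uncertain unlabeled images for annotation.
# `unlabeled_loader` is assumed to yield batches of image tensors with shuffle=False,
# so running indices correspond to positions in the underlying dataset.
import torch

@torch.no_grad()
def select_for_annotation(model, unlabeled_loader, budget=50):
    model.eval()
    scores, indices, offset = [], [], 0
    for images in unlabeled_loader:
        probs = torch.softmax(model(images), dim=1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)   # predictive entropy
        scores.append(entropy)
        indices.append(torch.arange(offset, offset + len(images)))
        offset += len(images)
    scores, indices = torch.cat(scores), torch.cat(indices)
    top = scores.argsort(descending=True)[:budget]    # most uncertain samples first
    return indices[top].tolist()                      # dataset indices to send for labeling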

Standardization Issues

Variability in imaging equipment and protocols introduces inconsistencies, affecting model performance. Aggregated datasets, such as MIDRC and MosMedData, address these challenges through data standardization and domain adaptation techniques [38].

3.4. Addressing Data Limitations

Data Augmentation

Augmentation techniques, including random rotations and intensity scaling, increase dataset diversity and help mitigate overfitting on small datasets [28].

Synthetic Data Generation

GANs have been employed to create realistic synthetic COVID-19 images, expanding datasets and balancing class distributions [18].

Transfer Learning

Models pretrained on large datasets like ImageNet are fine-tuned on COVID-19-specific datasets, improving accuracy and reducing training time [39].

Collaborative Efforts

Initiatives like MIDRC promote data sharing across institutions, enhancing dataset diversity and supporting large-scale research [33].

3.5. Overview of Available Datasets

This review highlights the diversity of datasets available for COVID-19 detection and strategies for addressing their limitations; Table 1 summarizes the most commonly used collections. The next section discusses the challenges that remain and the directions future research may take.
Table 1. Summary of commonly used datasets for COVID-19 detection.
Dataset | Image Type | Size | Annotations | Source
COVID-19 Image Data Collection | CXR | ∼1000 | COVID-19, pneumonia, normal | Public repository
BIMCV-COVID19+ | CXR, CT | ∼5000 | Severity scores, clinical data | Valencia Region Medical Image Bank
COVID-CT | CT | 349 | COVID-19, non-COVID-19 | Research papers
SARS-CoV-2 CT | CT | 2482 | COVID-19, normal | Open source
MosMedData | CT | 1110 | Severity scores | Russian institutions

4. Challenges and Future Directions

The integration of deep learning (DL) techniques for COVID-19 detection has shown promise, delivering positive results in certain scenarios. However, numerous challenges must be addressed to enable reliable and effective application in clinical environments. This section outlines the significant challenges in the field and explores potential future directions to enhance the performance and applicability of deep learning models for COVID-19 detection.

4.1. Challenges in COVID-19 Detection with Deep Learning

4.1.1. Data Quality and Availability

A critical challenge in applying deep learning to COVID-19 detection is the limited availability and quality of annotated medical imaging data. Many datasets suffer from small sample sizes, low diversity, and incomplete annotations, hindering the training of robust models. Geographic and institutional biases in patient demographics, imaging techniques, and medical equipment further impact model generalizability [26].
Additionally, inconsistencies in imaging protocols exacerbate the problem. Variations in equipment (e.g., CT vs. X-ray) or even different versions of the same device result in images with differing resolutions, contrasts, and formats (e.g., grayscale vs. color). Such variability leads to models that overfit to specific dataset characteristics and perform poorly in real-world scenarios [27]. The scarcity of high-resolution, multi-modal datasets that combine CT scans, X-rays, MRIs, and clinical metadata further compounds these challenges.

4.1.2. Data Imbalance and Labeling Issues

Data imbalance is a significant challenge in medical image analysis, particularly in disease detection and classification tasks [40]. It arises when datasets contain disproportionately fewer samples of certain classes, typically the positive or diseased cases, than of the negative or healthy ones. COVID-19 datasets, for example, frequently include an abundance of non-COVID-19 cases but relatively few confirmed COVID-19-positive samples. This disparity biases deep learning models toward the majority class, yielding high overall accuracy but poor sensitivity on the minority (COVID-19) class, which is precisely where correct identification matters most [32,40]. Imbalanced datasets also tend to under-represent the variability of disease presentation across demographic and clinical settings, so the resulting models generalize poorly to unseen data. Addressing this issue requires strategies such as oversampling the minority class, generating synthetic samples with GANs, applying class-balanced loss functions, or employing ensemble methods, as sketched below. Balancing class representation is critical for developing reliable AI systems that identify and diagnose medical conditions equitably.
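The snippet below sketches two of these remedies in PyTorch, a class-weighted cross-entropy loss and minority-class oversampling with a weighted sampler; the class counts are made-up placeholders.

# Illustrative sketch: two common remedies for class imbalance.
import torch
import torch.nn as nn
from torch.utils.data import WeightedRandomSampler

class_counts = torch.tensor([900.0, 100.0])            # e.g., non-COVID vs. COVID samples
class_weights = class_counts.sum() / (2 * class_counts)

# Option 1: weight the loss so errors on the rare (COVID-19) class cost more.
criterion = nn.CrossEntropyLoss(weight=class_weights)

# Option 2: oversample minority-class images when building training batches.
labels = torch.cat([torch.zeros(900, dtype=torch.long),
                    torch.ones(100, dtype=torch.long)])
sample_weights = class_weights[labels]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels),
                                replacement=True)
# loader = DataLoader(train_dataset, batch_size=32, sampler=sampler)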
Labeling inconsistencies and errors also pose problems. Even with expert annotations, complex cases such as COVID-19 may be misclassified due to variability in disease presentation across patients. Mislabels, incomplete annotations, and disagreements on criteria (e.g., “severe” vs. “mild” cases) introduce noise into the training process, reducing model reliability [41].

4.1.3. Model Interpretability and Trust

Deep learning models often function as "black boxes," making their decision-making processes difficult to interpret. This lack of transparency limits their acceptance in clinical practice, where clinicians require clear, justifiable reasons for predictions [42]. Explainable AI (XAI) methods, such as saliency maps and attention mechanisms, enable models to highlight relevant regions of images, improving trust and usability in medical settings [43].
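As a concrete illustration of the Grad-CAM idea [43], the sketch below uses forward and backward hooks to capture the activations and gradients of a CNN's last convolutional block and turns them into a heatmap over the input image; the ResNet-18 backbone and target layer are assumptions made for the example.

# Illustrative Grad-CAM sketch: gradient-weighted activation maps from the
# last convolutional block highlight image regions driving a prediction.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)
model.eval()
store = {}
layer = model.layer4                      # last convolutional block

layer.register_forward_hook(lambda m, i, o: store.update(act=o))
layer.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))

def grad_cam(image, target_class):
    """image: (1, 3, 224, 224); returns a (224, 224) heatmap in [0, 1]."""
    logits = model(image)
    model.zero_grad()
    logits[0, target_class].backward()
    weights = store["grad"].mean(dim=(2, 3), keepdim=True)     # global-average gradients
    cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0]                                            # overlay this on the CXR

heatmap = grad_cam(torch.randn(1, 3, 224, 224), target_class=1)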

4.1.4. Generalization and Domain Adaptation

Deep learning models trained on specific datasets frequently fail to generalize to new data from different regions or institutions. This lack of generalizability arises from variations in demographics, healthcare infrastructure, and imaging equipment [34]. Domain adaptation techniques, including transfer learning and adversarial domain adaptation, aim to address this challenge by enhancing model performance across diverse datasets. However, these methods are computationally expensive and remain underexplored in clinical applications.
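A minimal sketch of adversarial domain adaptation in the DANN style is given below: a gradient-reversal layer trains the shared feature extractor to confuse a domain classifier that tries to distinguish source-site from target-site images. The layer sizes and the fixed reversal coefficient are illustrative assumptions.

# Illustrative sketch: adversarial domain adaptation with a gradient-reversal layer.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None   # reverse gradients flowing into the features

class DANN(nn.Module):
    def __init__(self, feat_dim=512, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.label_head = nn.Linear(feat_dim, num_classes)      # COVID vs. non-COVID
        self.domain_head = nn.Linear(feat_dim, 2)               # source vs. target site

    def forward(self, x, lam=1.0):
        f = self.features(x)
        return self.label_head(f), self.domain_head(GradReverse.apply(f, lam))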

4.1.5. Clinical Validation and Regulatory Approval

Deploying deep learning models in clinical practice requires rigorous validation and regulatory approval. Models must be extensively tested in real-world settings to ensure safety, efficacy, and robustness. Regulatory processes, such as those governed by the FDA or EMA, are complex and time-intensive, delaying the adoption of AI-based diagnostic tools [44]. Differences in regulatory requirements across countries further complicate international deployment.

4.2. Future Directions in COVID-19 Detection with Deep Learning

4.2.1. Synthetic Data and Augmentation

Synthetic data generation using techniques like Generative Adversarial Networks (GANs) can address data limitations by creating realistic COVID-19 images to augment existing datasets. This approach balances class distributions and enhances dataset diversity, exposing models to various infection stages and imaging modalities. Data augmentation techniques, such as rotation, scaling, and cropping, can further improve model robustness without requiring additional real-world data [18].

4.2.2. Explainable AI (XAI) and Model Interpretability

Future research will prioritize enhancing the transparency of deep learning models. Improved XAI methods, such as advanced attention maps and saliency visualizations, will enable clinicians to better understand model decisions. Additionally, uncertainty quantification, which highlights predictions with low confidence, will aid in clinical decision-making, ensuring safer and more reliable use of AI tools [43].
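One widely used, lightweight form of uncertainty quantification is Monte Carlo dropout, sketched below: dropout is kept active at inference, several stochastic forward passes are averaged, and cases with high predictive variance are flagged for clinician review. The number of passes and any flagging threshold are assumptions.

# Illustrative sketch: Monte Carlo dropout for prediction uncertainty.
import torch
import torch.nn as nn

def mc_dropout_predict(model, image, passes=20):
    """Run `passes` stochastic forward passes with only the Dropout layers active."""
    model.eval()
    for m in model.modules():                  # re-enable Dropout, keep BatchNorm in eval mode
        if isinstance(m, nn.Dropout):
            m.train()
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(image), dim=1)
                             for _ in range(passes)])
    return probs.mean(dim=0), probs.std(dim=0)   # prediction and its per-class uncertainty

# Cases whose standard deviation on the COVID-19 class exceeds a chosen
# threshold (e.g., 0.15) can be routed to a clinician for review.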

4.2.3. Multi-Modal and Multi-Task Learning

Integrating data from multiple sources, such as CT scans, X-rays, and patient demographics, through multi-modal learning can improve model robustness. Multi-task learning, where a model performs tasks such as classification, segmentation, and severity prediction simultaneously, can enhance utility in clinical settings. These approaches enable comprehensive analyses and nuanced insights into patient conditions [45].
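The sketch below shows one possible multi-task design with a shared encoder and three heads for classification, coarse lesion segmentation, and severity regression; the encoder, head structures, and loss weighting are illustrative assumptions rather than a model from the cited literature.

# Illustrative sketch: shared encoder with classification, segmentation,
# and severity-regression heads.
import torch
import torch.nn as nn
from torchvision import models

class MultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        resnet = models.resnet18(weights=None)
        self.encoder = nn.Sequential(*list(resnet.children())[:-2])   # (B, 512, 7, 7)
        self.cls_head = nn.Linear(512, 2)                  # COVID vs. non-COVID
        self.sev_head = nn.Linear(512, 1)                  # severity score regression
        self.seg_head = nn.Sequential(                     # coarse lesion mask
            nn.Conv2d(512, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        f = self.encoder(x)
        pooled = f.mean(dim=(2, 3))
        return self.cls_head(pooled), self.sev_head(pooled), self.seg_head(f)

cls, sev, mask = MultiTaskNet()(torch.randn(2, 3, 224, 224))
# The total loss is a weighted sum, e.g. L = L_cls + 0.5 * L_seg + 0.2 * L_sev.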

4.2.4. Federated Learning for Privacy-Preserving Models

Federated learning, which trains models on decentralized data without compromising patient privacy, is a promising avenue for global collaboration. By sharing model updates instead of raw data, federated learning ensures security and facilitates the development of models trained on diverse datasets, improving generalization and reducing bias [46].
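The core of federated averaging (FedAvg [46]) can be sketched in a few lines: each participating hospital trains a local copy of the model and only the resulting parameters, weighted by local dataset size, are averaged into the new global model. The local training routine is a hypothetical placeholder.

# Illustrative sketch: one round of federated averaging across hospitals.
# Raw images never leave the institutions; only model parameters are shared.
import copy
import torch

def fedavg_round(global_model, hospital_loaders, train_locally):
    local_states, sizes = [], []
    for loader in hospital_loaders:                    # one data loader per hospital
        local_model = copy.deepcopy(global_model)
        train_locally(local_model, loader)             # a few local epochs (placeholder)
        local_states.append(local_model.state_dict())
        sizes.append(len(loader.dataset))

    total = float(sum(sizes))
    avg_state = {}
    for key in local_states[0]:                        # size-weighted parameter average
        avg_state[key] = sum(state[key].float() * (n / total)
                             for state, n in zip(local_states, sizes))
    global_model.load_state_dict(avg_state)            # new global model for the next round
    return global_model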

4.2.5. Integration into Clinical Workflows

Seamless integration of deep learning models into clinical workflows is essential for their adoption. Models must be compatible with existing radiology systems and provide real-time, actionable insights. Attention to user interface design and interoperability with hospital information systems will ensure practical and efficient use in clinical settings [47].

5. Conclusion

The integration of deep learning in COVID-19 detection represents a significant milestone in the intersection of artificial intelligence and healthcare. By analyzing medical imaging modalities such as chest X-rays and CT scans, deep learning models have exhibited the potential to complement traditional diagnostic tools, offering faster and more accurate assessments. However, despite the promising outcomes in experimental and research settings, numerous challenges hinder the full realization of these technologies in clinical practice.
One of the most critical barriers is the availability and quality of datasets. The early phase of the pandemic saw a scarcity of labeled COVID-19 imaging data, and while several datasets have since been curated, issues such as class imbalance, geographic biases, and inconsistencies in annotation persist. These limitations affect the robustness and generalizability of models, often causing them to perform poorly when applied to unseen or heterogeneous data. Moreover, differences in imaging protocols and equipment across institutions exacerbate this problem, emphasizing the need for standardized data collection and annotation practices.
Another significant hurdle lies in the interpretability of deep learning models. Often described as “black boxes,” these models provide limited insight into their decision-making processes, making it difficult for clinicians to trust or act upon their predictions. While explainable AI (XAI) methods, such as saliency maps and attention mechanisms, have made strides in improving interpretability, further advancements are required to make these explanations clinically meaningful and actionable. Coupled with this, the high computational requirements of certain deep learning approaches present challenges in resource-constrained environments, limiting their scalability.
The lack of clinical validation and regulatory approval also remains a substantial roadblock. For deep learning models to transition from research to real-world application, they must undergo rigorous testing to ensure safety, efficacy, and robustness. Regulatory frameworks, such as those by the FDA and EMA, are necessary to standardize this process, but differences in international regulations and the complexity of these procedures slow down adoption. Additionally, current models often lack compatibility with clinical workflows, requiring significant effort to integrate with hospital information systems and electronic health records.
Despite these challenges, the field continues to evolve, with several promising directions for future research and development. The generation of synthetic data through techniques like generative adversarial networks (GANs) offers a viable solution to the scarcity of labeled datasets. Synthetic data can balance class distributions and improve model performance by exposing algorithms to a broader range of imaging variations and disease manifestations. Data augmentation, including transformations like rotation, scaling, and noise addition, further enhances model robustness and reduces overfitting.
Explainable AI remains a focal point for research, as trust and usability are paramount in healthcare. Improved visualization techniques, such as advanced saliency maps and uncertainty quantification, can provide clearer insights into model predictions, empowering clinicians to make informed decisions. Moreover, multi-modal learning, which combines imaging data with patient demographics, clinical history, and laboratory results, has the potential to create comprehensive diagnostic systems that consider a broader range of patient information.
Federated learning represents another transformative approach, addressing both data scarcity and privacy concerns. By enabling collaborative model training across institutions without sharing sensitive data, federated learning ensures patient confidentiality while leveraging diverse datasets for improved generalization. This approach could be particularly impactful in global efforts to standardize COVID-19 diagnostics, fostering cross-border collaboration and resource sharing.
Efforts to streamline integration into clinical workflows are equally critical. User-friendly interfaces, interoperability with existing radiology systems, and real-time diagnostic capabilities are essential for adoption. Partnerships between AI researchers, clinicians, and healthcare institutions will play a vital role in designing tools that align with practical needs and operational constraints in medical settings.
In the broader context, the advancements in deep learning for COVID-19 detection hold implications far beyond the current pandemic. The tools, techniques, and insights developed during this period can be adapted to other infectious diseases, respiratory conditions, and even non-communicable diseases, such as cancer and cardiovascular disorders. By addressing the challenges highlighted in this review, deep learning has the potential to revolutionize medical imaging and diagnostic workflows, improving patient outcomes worldwide.
In conclusion, while the application of deep learning in COVID-19 detection is still in its nascent stages, it has laid a strong foundation for the future of AI in healthcare. By addressing the limitations of dataset quality, model interpretability, generalization, and clinical validation, researchers can unlock the full potential of these technologies. Collaborative efforts across disciplines, supported by advancements in data augmentation, explainable AI, and federated learning, will drive the development of robust, trustworthy, and deployable AI systems. As the field progresses, these innovations promise not only to enhance pandemic response capabilities but also to transform healthcare delivery on a global scale, ensuring better preparedness for future public health challenges.

References

  1. World Health Organization. WHO Coronavirus Disease (COVID-19) Dashboard, 2020. Accessed: 2023-11-08.
  2. Waheed, A.; Goyal, M.; Gupta, D.; Khanna, A.; Al-Turjman, F.; Pinheiro, P.R. Covidgan: Data augmentation using auxiliary classifier gan for improved covid-19 detection. IEEE Access 2020, 8, 91916–91923. [CrossRef]
  3. Kadry, S.; Rajinikanth, V.; Rho, S.; Raja, N.S.M.; Rao, V.S.; Thanaraj, K.P. Development of a Machine-Learning System to Classify Lung CT Scan Images into Normal/COVID-19 Class. arXiv preprint arXiv:2004.13122 2020.
  4. Rahimzadeh, M.; Attar, A. A modified deep convolutional neural network for detecting COVID-19 and pneumonia from chest X-ray images based on the concatenation of Xception and ResNet50V2. Informatics in Medicine Unlocked 2020, p. 100360. [CrossRef]
  5. Das, D.; Santosh, K.; Pal, U. Truncated inception net: COVID-19 outbreak screening using chest X-rays. Physical and engineering sciences in medicine 2020, pp. 1–11. [CrossRef]
  6. Peng, X.; Xu, X.; Li, Y.; Cheng, L.; Zhou, X.; Ren, B. Transmission routes of 2019-nCoV and controls in dental practice. International Journal of Oral Science 2020, 12, 1–6.
  7. Tian, S.; Hu, W.; Niu, L.; Liu, H.; Xu, H.; Xiao, S.Y. Pulmonary pathology of early phase 2019 novel coronavirus (COVID-19) pneumonia in two patients with lung cancer. Journal of Thoracic Oncology 2020. [CrossRef]
  8. Razai, M.S.; Doerholt, K.; Ladhani, S.; Oakeshott, P. Coronavirus disease 2019 (covid-19): a guide for UK GPs. BMJ 2020, 368.
  9. European Centre for Disease Prevention and Control. Geographical distribution of 2019-nCoV cases. https://www.ecdc.europa.eu/en/geographical-distribution-2019-ncov-cases, 2020.
  10. Sohrabi, C.; Alsafi, Z.; O’Neill, N.; Khan, M.; Kerwan, A.; Al-Jabir, A.; Iosifidis, C.; Agha, R. World Health Organization declares global emergency: A review of the 2019 novel coronavirus (COVID-19). International Journal of Surgery 2020. [CrossRef]
  11. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE 1998, 86, 2278–2324. [CrossRef]
  12. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition 2016. pp. 770–778.
  13. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks 2017. pp. 4700–4708.
  14. Pham, V.T.; Nguyen, T.P. Identification and localization COVID-19 abnormalities on chest radiographs. The International Conference on Artificial Intelligence and Computer Vision. Springer, 2023, pp. 251–261.
  15. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Computation, 1997, Vol. 9, pp. 1735–1780.
  16. Cho, K.; van Merrienboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. On the properties of neural machine translation: Encoder-decoder approaches. Proceedings of SSST, 2014, pp. 103–111.
  17. Alizadehsani, R.; Behjati, M.; Roshanzamir, Z.; Hussain, S.; Abedini, N.; Hasanzadeh, F.; Khosravi, A.; Shoeibi, A.; Roshanzamir, M.; Moradnejad, P.; others. Risk Factors Prediction, Clinical Outcomes, and Mortality of COVID-19 Patients. medRxiv 2020. [CrossRef]
  18. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Advances in Neural Information Processing Systems, 2014, pp. 2672–2680.
  19. Mirza, M.; Osindero, S. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
  20. Talo, M.; Yildirim, O.; Baloglu, U.B.; Aydin, G.; Acharya, U.R. Convolutional neural networks for multi-class brain disease detection using MRI images. Computerized Medical Imaging and Graphics 2019, 78, 101673. [CrossRef]
  21. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; others. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 2020.
  22. Acharya, U.R.; Oh, S.L.; Hagiwara, Y.; Tan, J.H.; Adeli, H. Deep convolutional neural network for the automated detection and diagnosis of seizure using EEG signals. Computers in biology and medicine 2018, 100, 270–278. [CrossRef]
  23. Mohammadpoor, M.; Shoeibi, A.; Zare, H.; Shojaee, H. A hierarchical classification method for breast tumor detection. Iranian Journal of Medical Physics 2016, 13, 261–268.
  24. Apostolopoulos, I.D.; Aznaouridis, S.I.; Tzani, M.A. Extracting possibly representative COVID-19 Biomarkers from X-Ray images with Deep Learning approach and image data related to Pulmonary Diseases. Journal of Medical and Biological Engineering 2020, p. 1. [CrossRef]
  25. Chollet, F. Xception: Deep learning with depthwise separable convolutions. Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 1251–1258.
  26. Cohen, J.P.; Morrison, P.; Dao, L. COVID-19 image data collection: Prospective predictions are the future. arXiv preprint arXiv:2006.11988 2020.
  27. Vaya, M.T.; Gimenez, J.; Gonzalez, J.; et al. BIMCV COVID-19+: A large annotated dataset of RX and CT images from COVID-19 patients. Radiology: Artificial Intelligence 2020, 2, e200032.
  28. Shorten, C.; Khoshgoftaar, T.M. A survey on image data augmentation for deep learning. Journal of Big Data 2019, 6, 1–48. [CrossRef]
  29. Diaz-Escobar, J.; Ordóñez-Guillén, N.E.; Villarreal-Reyes, S.; Galaviz-Mosqueda, A.; Kober, V.; Rivera-Rodriguez, R.; Lozano Rizk, J.E. Deep-learning based detection of COVID-19 using lung ultrasound imagery. Plos one 2021, 16, e0255886. [CrossRef]
  30. Zhao, J.; Zhang, Y.; He, X.; Xie, P. COVID-CT-dataset: A CT scan dataset about COVID-19. arXiv preprint arXiv:2003.13865 2020.
  31. Soares, E.; Angelov, P.; Biaso, S.; Froes, M.; Abe, D. SARS-CoV-2 CT-scan dataset: A large dataset of real patients CT scans for SARS-CoV-2 identification. medRxiv 2020. [CrossRef]
  32. Morozov, S.P.; Andreychenko, A.E.; Pavlov, N.A.; Vladzymyrskyy, A.V.; Ledikhova, N.V.; Gombolevskiy, V.A.; Chernina, V.Y. MosMedData: Chest CT scans with COVID-19 related findings. arXiv preprint arXiv:2005.06465 2020. [CrossRef]
  33. MIDRC Consortium. Medical Imaging and Data Resource Center (MIDRC) for COVID-19: Advancing data collection and sharing. Radiology: Artificial Intelligence 2021, 3, e200215. Available at: https://www.midrc.org/.
  34. Tzeng, E.; Hoffman, J.; Saenko, K.; Darrell, T. Adversarial discriminative domain adaptation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 7167–7176.
  35. Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 2014. [CrossRef]
  36. Rahimzadeh, M.; Attar, A. A New Modified Deep Convolutional Neural Network for Detecting COVID-19 from X-ray Images. arXiv preprint arXiv:2004.08052 2020.
  37. Mobiny, A.; Cicalese, P.A.; Zare, S.; Yuan, P.; Abavisani, M.; Wu, C.C.; Ahuja, J.; de Groot, P.M.; Van Nguyen, H. Radiologist-Level COVID-19 Detection Using CT Scans with Detail-Oriented Capsule Networks. arXiv preprint arXiv:2004.07407 2020.
  38. Zhou, T.; Canu, S.; Ruan, S. An automatic COVID-19 CT segmentation based on U-Net with attention mechanism. arXiv preprint arXiv:2004.06673 2020.
  39. Gozes, O.; Frid-Adar, M.; Greenspan, H.; Browning, P.D.; Zhang, H.; Ji, W.; Bernheim, A.; Siegel, E. Rapid ai development cycle for the coronavirus (covid-19) pandemic: Initial results for automated detection & patient monitoring using deep learning ct image analysis. arXiv preprint arXiv:2003.05037 2020. [CrossRef]
  40. Zheng, S.; Vu, T.M.; Nath, S.; others. Chest X-ray abnormalities localization via ensemble of deep convolutional neural networks. 2021 International Conference on Advanced Technologies for Communications. IEEE, 2021, pp. 125–130. [CrossRef]
  41. Arora, P.; Kumar, H.; Panigrahi, B.K. Prediction and analysis of COVID-19 positive cases using deep learning models: A descriptive case study of India. Chaos, Solitons & Fractals 2020, p. 110017. [CrossRef]
  42. Kim, B.; Wattenberg, M.; Gilmer, J.; Cai, C.; Wexler, J.; Viegas, F. Interpretable machine learning via feature clustering. Advances in Neural Information Processing Systems, 2017, pp. 6633–6642.
  43. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 618–626.
  44. Bansal, N.; Sridhar, S. Classification Of X-ray Images For Detecting Covid-19 Using Deep Transfer Learning 2020. [CrossRef]
  45. Zhang, J.; Xie, P.; He, X.; Zhang, Y.; Zhao, J. A survey on deep learning applications in COVID-19 medical imaging. Computational and Structural Biotechnology Journal 2020, 18, 2226–2237.
  46. McMahan, H.B.; Moore, E.; Ramage, D.; Hampson, S.; others. Communication-efficient learning of deep networks from decentralized data. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics. PMLR, 2017, pp. 1273–1282.
  47. Liao, W.h.; Li, Z.; He, J.; et al. COVID-19 chest CT study in a hospital outbreak setting: Comprehensive imaging and clinical data from 1000 cases. European Journal of Radiology 2020, 130, 109011.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.