This version is not peer-reviewed.
† These authors contributed equally to this work.
‡ Co-first Authors.
Submitted: 15 March 2023
Posted: 16 March 2023
Sl. | Dataset | Year | N | Tr/Va/Te | Classes | Note
---|---|---|---|---|---|---
1 | London Health Sciences Centre's 2 tertiary hospitals (Canada) [38] | 2020 | (243 patients) 600 videos; 121,381 frames | ∼80/20 | COVID, non-COVID, hydrostatic pulmonary edema | -
2 | ULTRACOV (Ultrasound in Coronavirus disease) [39] | 2022 | (28 COVID-19 patients) 3 s video each | - | A-lines, B-lines, consolidations, and pleural effusions | Available upon request
3 | Huoshenshan Hospital (Wuhan, China) [40] | 2021 | (31 patients) 1,527 images | - | Normal, septal syndrome, interstitial-alveolar syndrome, white lung | Source Link2
4 | Royal Melbourne Hospital (Australia) [35] | 2022 | (9 patients) 27 videos; 3,827 frames | - | Normal, consolidation/collapse | Available upon request
5 | Ultrasound lung data [34] | 2021 | (300 patients) 1,530 videos; 287,549 frames | 80/20 | A-line artifacts, B-line artifacts, presence of consolidation/pleural effusion | -
6 | Huoshenshan Hospital (Wuhan, China) [41] | 2022 | (31 patients) 2,062 images | - | Normal, septal syndrome, interstitial-alveolar syndrome, white lung | Source Link3
7 | Fondazione IRCCS Policlinico San Matteo's Emergency Department (Pavia, Italy) [42] | 2021 | (450 patients) 2,908 frames | 75/15/10 | A-lines with two B-lines, slightly irregular pleural line, artefacts in 50% of the pleura, damaged pleural line, visible consolidated areas, damaged pleura/irregular tissue | -
8 | Third People's Hospital of Shenzhen (China) [48] | 2020 | (71 COVID-19 patients) 678 videos; 6,836 images | - | A-line, B-line, pleural lesion, pleural effusion | -
9 | Fondazione Policlinico Universitario Agostino Gemelli (Rome, Italy), Fondazione Policlinico San Matteo (Pavia, Italy) [44] | 2021 | (82 patients) 1,488 videos; 314,879 frames | - | 4 severity levels [24] | -
10 | CHUV (Lausanne, Switzerland) [37] | 2020 | (193 patients) 1,265 videos; 3,455 images | 80/20 | True (experts' approval), False (experts' disapproval) | -
11 | Various online sources [49] | 2022 | 792 images | - | COVID-19, healthy | -
12 | Spain, India [36] | 2021 | (10 subjects) 400 videos; 5,000 images | - | A-lines, lack of A-lines, appearance of B-lines, confluent appearance of B-lines, appearance of C-lines | Available upon request
13 | Private clinics (Lima, Peru) [50] | 2021 | 1,500 images | - | Healthy, COVID-19 | Available upon request
14 | BresciaMed (Brescia, Italy), Valle del Serchio General Hospital (Lucca, Italy), Fondazione Policlinico Universitario A. Gemelli IRCCS (Rome, Italy), Fondazione Policlinico Universitario San Matteo IRCCS (Pavia, Italy), and Tione General Hospital (Tione, Italy) [45] | 2021 | (32 patients) 203 videos; 1,863 frames | 90/10 | Healthy, indentation of the pleural line, discontinuity of the pleural line, white lung | -
15 | Beijing Ditan Hospital (Beijing, China) [43] | 2021 | (27 COVID-19 patients): 13 moderate, 7 severe, 7 critical | - | Severe, non-severe | -
16 | Cancer Center of Union Hospital, West of Union Hospital, Jianghan Cabin Hospital, Jingkai Cabin Hospital, Leishenshan Hospital [21] | 2021 | (313 COVID-19 patients) 10-second video from each | - | Normal, presence of 3-5 B-lines, ≥6 B-lines or irregular pleural line, fused B-lines or thickened pleural line, consolidation | -
Studies | AI models | Loss | Results | Cross-validation | Augmentation/pre-processing | Prediction Classes | Code
---|---|---|---|---|---|---|---
Arntfield et al. [38] | Xception | Binary cross-entropy | ROC-AUC: 0.978 | ✗ | Random zooming in/out by ≤10%, horizontal flipping, horizontal stretching/contracting by ≤20%, vertical stretching/contracting by ≤5%, and bi-directional rotation | Hydrostatic pulmonary edema (HPE), non-COVID acute respiratory distress syndrome (ARDS), COVID-19 ARDS | Available^a
Chen et al. [40] | 2-layer NN, SVM, Decision Tree | ✗ | Accuracy: 87% | k=5 | Curve-to-linear conversion | Score 0: normal; Score 1: septal syndrome; Score 2: interstitial-alveolar syndrome; Score 3: white lung syndrome | ✗
Durrani et al. [35] | CNN, Regularized STN (Reg-STN) | SORD | Accuracy: 89%, PR-AUC: 73% | k=10 | Replacing overlays, resizing to 806×550 pixels | Consolidation present, consolidation absent | ✗
Ebadi et al. [52] | Kinetics-I3D | Focal loss | Accuracy: 90%, Precision: 95% | k=5 | ✗ | A-line (normal), B-line, consolidation and/or pleural effusion | ✗
Huang et al. [41] | Non-local channel attention ResNet | Cross-entropy | Accuracy: 92.34%, F1-score: 92.05%, Precision: 91.96%, Recall: 90.43% | ✗ | Resizing to 300×300 pixels | Score 0: normal; Score 1: septal syndrome; Score 2: interstitial-alveolar syndrome; Score 3: white lung syndrome | Available^b
La Salvia et al. [42] | ResNet-18, ResNet-50 | Cross-entropy | F1-score: 98% | ✗ | Geometric, filtering, random centre cropping, and colour transformations | Severity score: 0, 0*, 1, 1*, 2, 2*, 3 | ✗
Liu et al. [48] | Multi-symptom multi-label (MSML) network | Cross-entropy | Accuracy: 100% (with 14.7% of the data) | ✗ | Random rotation (up to 10°) and horizontal flips | A-line, B-line, pleural lesion, pleural effusion | ✗
Mento et al. [44] | STN, U-Net, DeepLabV3+ | ✗ | Agreement between AI scoring and expert scoring: 85.96% | ✗ | ✗ | Expert scores: 0, 1, 2, 3 | ✗
Quentin Muller et al. [37] | ResNet-18 | Cross-entropy | Accuracy (validation): 100% | ✗ | Resizing to 349×256 pixels | Ultrasound frames with (positive) and without (negative) clinical predictive value | ✗
Nabalamba [49] | VGG-16, VGG-19, ResNet | Binary cross-entropy | Accuracy: 98%, Recall: 1.00, Precision: 96%, F1-score: 97.82%, ROC-AUC: 99.9% | ✗ | Width and height shifting, random zoom within 20%, brightness variations within [0.4, 1.3], rotation up to 10° | COVID-19, Healthy | ✗
Panicker et al. [36] | LUSNet (U-Net-based CNN) | Categorical cross-entropy | Accuracy: 97%, Sensitivity: 93%, Specificity: 98% | k=5 | Generation of local phase and shadow back-scatter product images | Classes: 1, 2, 3, 4, 5 | Available^c
Roshankhah et al. [45] | U-Net | Categorical cross-entropy | Accuracy: 95% | ✗ | Randomly cropping and rotating the frames | Severity score: 0, 1, 2, 3 | ✗
Wang et al. [43] | SVM | ✗ | ROC-AUC: 0.93, Sensitivity: 0.93, Specificity: 0.85 | ✗ | ✗ | Non-severe, severe | ✗
Xue et al. [21] | U-Net (with modality alignment contrastive learning of representation, MA-CLR) | Dice, cross-entropy | Accuracy: 75% (4-level), 87.5% (binary) | ✗ | Affine transformations (translation, rotation, scaling, shearing), reflection, contrast change, Gaussian noise, and Gaussian filtering | Severity score: 0, 1, 2, 3 | ✗

^a https://github.com/bvanberl/covid-us-ml
^b https://biohsi.ecnu.edu.cn
^c https://github.com/maheshpanickeriitpkd/ALUS
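Most frame-level pipelines in the table above combine the same few augmentations: small random zooms, horizontal flips, and modest rotations or shifts. The following is a minimal illustrative sketch (not the pipeline of any specific study) of two of these operations — a random horizontal flip and a random zoom-in of up to 10% implemented as a centre crop followed by a nearest-neighbour resize — in plain NumPy; the function name `augment_frame` is hypothetical.

```python
import numpy as np

def augment_frame(frame, rng):
    """Illustrative frame-level augmentation: random horizontal flip plus
    a random zoom-in of up to 10% (centre crop + nearest-neighbour resize
    back to the original shape)."""
    h, w = frame.shape[:2]
    out = frame.copy()
    if rng.random() < 0.5:                 # horizontal flip with p = 0.5
        out = out[:, ::-1]
    zoom = 1.0 + 0.10 * rng.random()       # zoom factor in [1.0, 1.1]
    ch, cw = int(h / zoom), int(w / zoom)  # size of the central crop
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = out[top:top + ch, left:left + cw]
    # nearest-neighbour resize back to (h, w)
    rows = (np.arange(h) * ch // h).clip(0, ch - 1)
    cols = (np.arange(w) * cw // w).clip(0, cw - 1)
    return crop[np.ix_(rows, cols)]

rng = np.random.default_rng(0)
frame = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
aug = augment_frame(frame, rng)
```

In practice such transforms are usually applied on the fly during training (e.g. inside a data loader), with one fresh random draw per frame per epoch.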
Sl. | Dataset | Year | Number of Samples | Class Distribution | Note
---|---|---|---|---|---
1 | POCUS | 2020 | (216 patients) 202 videos; 59 images | COVID-19 (35%), bacterial pneumonia (28%), viral pneumonia (2%), healthy (35%) | Link1
2 | ICLUS-DB | 2020 | (35 patients) 277 videos; 58,924 frames | Score 0: continuous A-line (34%); Score 1: alteration in A-line (24%); Score 2: small consolidation (32%); Score 3: large consolidation (10%) | Link2
3 | COVIDx-US | 2021 | 242 videos; 29,651 images | COVID-19 (29%), CAP (20%), non-pneumonia diseases (39%), healthy (12%) | Link3
Sl. | Studies | AI Methods | CM | DL |
---|---|---|---|---|
1 | Adedigba and Adeshina [59] | SqueezeNet, MobileNetV2 | ✗ | ✓ |
2 | Al-Jumaili et al. [68] | ResNet-18, ResNet-50, NASNetMobile, GoogleNet, SVM | ✓ | ✓ |
3 | Al-Zogbi et al. [70] | DenseNet | ✗ | ✓ |
4 | Almeida et al. [71] | MobileNet | ✗ | ✓ |
5 | Arntfield et al. [38] | Xception | ✗ | ✓ |
6 | Awasthi et al. [72] | MiniCOVIDNet | ✗ | ✓ |
7 | Azimi et al. [73] | InceptionV3, RNN | ✗ | ✓ |
8 | Barros et al. [69] | Xception-LSTM | ✗ | ✓ |
9 | Born et al. [12] | VGG-16 | ✗ | ✓ |
10 | Born et al. [74] | VGG-16 | ✗ | ✓ |
11 | Born et al. [13] | VGG-16 | ✗ | ✓ |
12 | Carrer et al. [16] | Hidden Markov Model, Viterbi Algorithm, SVM | ✓ | ✗ |
13 | Che et al. [17] | Multi-scale Residual CNN | ✗ | ✓ |
14 | Chen et al. [40] | 2-layer NN, SVM, Decision tree | ✓ | ✓ |
15 | Diaz-Escobar et al. [67] | InceptionV3, VGG-19, ResNet-50, Xception | ✗ | ✓ |
16 | Dastider et al. [18] | Autoencoder-based Hybrid CNN-LSTM | ✗ | ✓ |
17 | Durrani et al. [35] | Reg-STN | ✗ | ✓ |
18 | Ebadi et al. [52] | Kinetics-I3D | ✗ | ✓ |
19 | Frank et al. [19] | ResNet-18, MobileNetV2, DeepLabV3++ | ✗ | ✓ |
20 | Gare et al. [15] | Reverse Transfer Learning | ✗ | ✓ |
21 | Hou et al. [75] | Saab transform-based successive subspace learning model | ✗ | ✓ |
22 | Huang et al. [41] | Non-local channel attention ResNet | ✗ | ✓ |
23 | Karar et al. [53] | MobileNet, ShuffleNet, MENet, MnasNet | ✗ | ✓ |
24 | Karar et al. [56] | A semi-supervised GAN, a modified AC-GAN | ✗ | ✓ |
25 | Karnes et al. [54] | Few-shot learning | ✗ | ✓ |
26 | Khan et al. [76] | CNN | ✗ | ✓ |
27 | La Salvia et al. [42] | ResNet-18, ResNet-50 | ✗ | ✓ |
28 | Liu et al. [48] | Multi-symptom multi-label (MSML) network | ✗ | ✓ |
29 | MacLean et al. [77] | COVID-Net US | ✗ | ✓ |
30 | MacLean et al. [78] | ResNet | ✗ | ✓ |
31 | Mento et al. [44] | STN, U-Net, DeepLabV3+ | ✗ | ✓ |
32 | Muhammad and Hossain [58] | CNN | ✗ | ✓ |
33 | Nabalamba [49] | VGG-16, VGG-19, ResNet | ✗ | ✓ |
34 | Panicker et al. [36] | LUSNet (a U-Net like network for ultrasound images) | ✗ | ✓ |
35 | Perera et al. [55] | Transformer Network Architecture | ✗ | ✓ |
36 | Quentin Muller et al. [37] | ResNet-18 | ✗ | ✓ |
37 | Roshankhah et al. [45] | U-Net | ✗ | ✓ |
38 | Roy et al. [20] | STN, U-Net, U-Net++, DeepLabv3, Model Genesis | ✗ | ✓ |
39 | Sadik et al. [66] | DenseNet-201, ResNet-152V2, Xception, VGG-19, NasNetMobile | ✗ | ✓ |
40 | Wang et al. [43] | SVM | ✓ | ✗ |
41 | Xue et al. [21] | U-Net | ✗ | ✓ |
42 | Zeng et al. [79] | COVID-Net US-X | ✗ | ✓ |
Studies | AI models | Loss | Results | Cross-validation | Augmentation/pre-processing | Prediction Classes | Code
---|---|---|---|---|---|---|---
Al-Jumaili et al. [68] | ResNet-18, ResNet-50, NASNetMobile, GoogleNet, SVM | Categorical cross-entropy | Accuracy: 99% | k=5 | ✗ | COVID-19, CAP, Healthy | ✗
Al-Zogbi et al. [70] | DenseNet | L1 | Mean Euclidean error: 14.8±7.0 mm | ✗ | ✗ | - | ✗
Almeida et al. [71] | MobileNet | Categorical cross-entropy | Accuracy: 95-100% | ✗ | ✗ | Abnormal, B-lines, mild B-lines, severe B-lines, consolidations, pleural thickening | ✗
Awasthi et al. [72] | Modified MobileNet, CNN, and other lightweight models | Focal loss | Accuracy: 83.2% | k=5 | ✗ | COVID-19, CAP, Healthy | ✗
Barros et al. [69] | POCOVID-Net, DenseNet, ResNet, NASNet, Xception-LSTM | Categorical cross-entropy | Accuracy: 93%, Sensitivity: 97% | k=5 | ✗ | COVID-19, Bacterial Pneumonia, Healthy | Available^a
Born et al. [12] | POCOVID-Net | Categorical cross-entropy | AUC: 0.94, Accuracy: 0.89, Sensitivity: 0.96, Specificity: 0.79, F1-score: 0.92 | k=5 | Rotations of up to 10°; horizontal and vertical flipping; shifting up to 10% of the image height or width | COVID-19, CAP, Healthy | ✗
Born et al. [74] | VGG-16 | Categorical cross-entropy | Sensitivity: 0.98±0.04, Specificity: 0.91±0.08 | k=5 | Horizontal and vertical flips, rotations up to 10°, and translations of up to 10% | COVID-19, CAP, Healthy | ✗
Born et al. [13] | Frame-based: VGG-16; video-based: Models Genesis | Categorical cross-entropy | Sensitivity: 0.90±0.08, Specificity: 0.96±0.04 | k=5 | Resizing to 224×224 pixels; horizontal and vertical flips; rotation up to 10°; translations of up to 10% | COVID-19, CAP, Healthy | Available^b
Diaz-Escobar et al. [67] | InceptionV3, ResNet-50, VGG-19, Xception | Cross-entropy | Accuracy: 89.1%, ROC-AUC: 97.1% | k=5 | Rotations (10°), horizontal and vertical flips, shifts (10%), and zoom (zoom range of 20%) | COVID-19, non-COVID | ✗
Gare et al. [15] | U-Net (reverse-transfer learning; segmentation to classification) | Cross-entropy | mIoU: 0.957±0.002, Accuracy: 0.849, Precision: 0.885, Recall: 0.925, F1-score: 0.897 | k=3 | Left-to-right flipping; scaling grey image pixels | COVID-19, CAP, Healthy | ✗
Hou et al. [75] | Saab transform-based successive subspace CNN model | Categorical cross-entropy | Accuracy: 0.96 | ✗ | Saab transformation | A-line, B-line, Consolidation | ✗
Karar et al. [53] | MobileNets, ShuffleNets, MENet, MnasNet | Categorical cross-entropy | Accuracy: 99% | ✓ | Grayscale conversion | COVID-19, Bacterial Pneumonia, Healthy | ✗
Karar et al. [56] | A semi-supervised GAN, and a modified AC-GAN with auxiliary classifier | Min-max loss (a special form of cross-entropy) | Accuracy: 91.22% | ✓ | Grayscale conversion | COVID-19, CAP, Healthy | ✗
Karnes et al. [54] | Few-shot learning (FSL) visual classification algorithm | Mahalanobis distances | ROC-AUC > 85% | k=10 | ✗ | COVID-19, CAP, Healthy | Available upon request
Muhammad and Hossain [58] | CNN | Categorical cross-entropy | Accuracy: 91.8%, Precision: 92.5%, Recall: 93.2% | k=5 | Reflection around x- and y-axes; rotation by [-20°, +20°]; scaling by a factor in [0.8, 1.2] | COVID-19, CAP, Healthy | ✗
Sadik et al. [66] | DenseNet-201, ResNet-152V2, Xception, VGG-19, NasNetMobile | Categorical cross-entropy | Accuracy: 0.906 (with SpecMEn), F1-score: 0.90 | ✓ | Contrast-limited adaptive histogram equalization | COVID-19, CAP, Healthy | ✗
Perera et al. [55] | Transformer | Categorical cross-entropy | Accuracy: 93.9% | ✓ | ✗ | COVID-19, CAP, Healthy | ✗

^a https://github.com/bmandelbrot/pulmonary-covid19
^b https://github.com/BorgwardtLab/covid19_ultrasound
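Most entries above report k-fold cross-validation (typically k=5). Because these datasets contain many frames per video and many videos per patient, folds should be split at the patient level so that frames from one patient never appear in both train and test splits. The sketch below shows one simple way to do this in plain Python; the function name `patient_level_folds` and the greedy balancing heuristic are illustrative assumptions, not any study's actual protocol.

```python
from collections import defaultdict

def patient_level_folds(frame_patient_ids, k=5):
    """Partition frame indices into k folds such that all frames from a
    given patient land in the same fold (no patient-level leakage).
    frame_patient_ids[i] is the patient id of frame i."""
    by_patient = defaultdict(list)
    for idx, pid in enumerate(frame_patient_ids):
        by_patient[pid].append(idx)
    folds = [[] for _ in range(k)]
    # greedy balancing: assign patients, largest first, to the smallest fold
    for pid in sorted(by_patient, key=lambda p: -len(by_patient[p])):
        smallest = min(range(k), key=lambda f: len(folds[f]))
        folds[smallest].extend(by_patient[pid])
    return folds

# toy example: 3 patients with unequal frame counts, split into 2 folds
ids = ["A"] * 5 + ["B"] * 3 + ["C"] * 2
folds = patient_level_folds(ids, k=2)
```

A frame-level random split would give the same model correlated frames on both sides of the split and can substantially inflate the reported accuracy.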
Studies | AI models | Loss | Results | Cross-validation | Augmentation/pre-processing | Prediction Classes | Code
---|---|---|---|---|---|---|---
Carrer et al. [16] | HMM, VA, SVM | ✗ | Accuracy: 88% (convex probe), 94% (linear probe) | k=10 | ✗ | Severity score (0, 1, 2, 3) | ✗
Che et al. [17] | Multi-scale residual CNN | Cross-entropy | Accuracy: 95.11%, F1-score: 96.70% | k=5 | Generation of local phase filtered and radial symmetry transformed images | COVID-19, non-COVID | ✗
Dastider et al. [18] | Autoencoder-based hybrid CNN-LSTM | Categorical cross-entropy | Accuracy: 67.7% (convex probe), 79.1% (linear probe) | k=5 | Rotation, horizontal and vertical shift, scaling, horizontal and vertical flips | Severity score (0, 1, 2, 3) | Available^5
Frank et al. [19] | ResNet-18, ResNet-101, VGG-16, MobileNetV2, MobileNetV3, DeepLabV3++ | SORD, cross-entropy | Accuracy: 93%, F1-score: 68.8% | ✗ | Affine transformations, rotation, scaling, horizontal flipping, random jittering | Severity score (0, 1, 2, 3) | ✗
Roy et al. [20] | Spatial Transformer Network (STN), U-Net, U-Net++, DeepLabV3, Model Genesis | SORD, cross-entropy | Accuracy: 96%, F1-score: 61±12%, Precision: 70±19%, Recall: 60±7% | k=5 | ✓ | Severity score (0, 1, 2, 3) | Available^6
Khan et al. [76] | Pre-trained CNN from [20] | SORD, cross-entropy | Agreement-based scoring (82.3%) | ✗ | ✗ | Severity score (0, 1, 2, 3) | ✗
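Several severity-scoring studies in the table above (Durrani et al. [35], Frank et al. [19], Roy et al. [20], Khan et al. [76]) train with SORD (soft ordinal regression) rather than plain one-hot cross-entropy: the one-hot target is replaced by a softmax over the negative distance between each candidate score and the true score, so predicting score 1 when the truth is 2 is penalized less than predicting score 0. A small NumPy sketch is given below; the squared-rank distance and the `alpha` scale are illustrative choices, not necessarily the exact penalty those papers use.

```python
import numpy as np

def sord_targets(true_rank, ranks, alpha=1.0):
    """Soft ordinal (SORD-style) label: softmax over the negative
    distance between each candidate rank and the true rank.
    phi = squared rank distance is an illustrative choice."""
    phi = alpha * (np.asarray(ranks, dtype=float) - true_rank) ** 2
    e = np.exp(-phi)
    return e / e.sum()

def sord_loss(logits, true_rank, ranks):
    """Cross-entropy between the softened targets and the predicted
    softmax distribution (numerically stable log-softmax)."""
    t = sord_targets(true_rank, ranks)
    z = np.asarray(logits, dtype=float)
    logp = z - z.max() - np.log(np.exp(z - z.max()).sum())
    return -(t * logp).sum()

ranks = [0, 1, 2, 3]                # lung ultrasound severity scores
t = sord_targets(2, ranks)          # soft target for true score 2
loss = sord_loss([0.1, 0.2, 2.0, 0.3], 2, ranks)
```

The soft target still peaks at the true score, but neighbouring scores receive non-zero mass, which encodes the ordering of the severity scale into the loss.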
Studies | AI models | Loss | Results | Cross-validation | Augmentation/pre-processing | Prediction Classes | Code
---|---|---|---|---|---|---|---
Adedigba and Adeshina [59] | SqueezeNet, MobileNetV2 | Categorical cross-entropy | Accuracy: 99.74%, Precision: 99.58%, Recall: 99.39% | ✗ | Rotation, Gaussian blurring, random zoom, random lighting, random warp | COVID-19, CAP, Normal, Other | ✗
Azimi et al. [73] | InceptionV3, RNN | Cross-entropy | Accuracy: 94.44% | ✗ | Padding | Positive (COVID-19), Negative (non-COVID-19) | Available^7
MacLean et al. [77] | COVID-Net US | Cross-entropy | ROC-AUC: 0.98 | ✗ | ✗ | Positive (COVID-19), Negative (non-COVID-19) | Available^8
MacLean et al. [78] | ResNet | Categorical cross-entropy | Accuracy: 0.692 | ✗ | ✗ | Lung ultrasound severity score (0, 1, 2, 3) | ✗
Zeng et al. [79] | COVID-Net US-X | Cross-entropy | Accuracy: 88.4%, AUC: 93.6% | ✗ | Random projective augmentation | Positive (COVID-19), Negative (non-COVID-19) | ✗