Submitted: 03 January 2024
Posted: 04 January 2024
Abstract
Keywords:
1. Introduction
2. Background
2.1. Fundamentals of Transformer
2.1.1. Self-attention
2.1.2. Multi-head self-attention
2.2. Transformer Architecture
2.3. Vision Transformers
3. Organs
3.1. Breast
3.2. Urinary Bladder
3.3. Pancreatic
3.4. Prostate
3.5. Thyroid
3.6. Heart
3.7. Fetal
3.8. Carotid
3.10. Lung
3.11. Liver
3.12. IVUS
3.13. Gallbladder
3.14. Other-Synthetic
4. Discussion
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Koutras, A.; Perros, P.; Prokopakis, I.; Ntounis, T.; Fasoulakis, Z.; Pittokopitou, S.; Samara, A.A.; Valsamaki, A.; Douligeris, A.; Mortaki, A. Advantages and Limitations of Ultrasound as a Screening Test for Ovarian Cancer. Diagnostics 2023, 13, 2078. [Google Scholar] [CrossRef]
- Leung, K.-Y. Applications of Advanced Ultrasound Technology in Obstetrics. Diagnostics 2021, 11, 1217. [Google Scholar] [CrossRef] [PubMed]
- Brunetti, N.; Calabrese, M.; Martinoli, C.; Tagliafico, A.S. Artificial intelligence in breast ultrasound: from diagnosis to prognosis—a rapid review. Diagnostics 2022, 13, 58. [Google Scholar] [CrossRef] [PubMed]
- Gifani, P.; Vafaeezadeh, M.; Ghorbani, M.; Mehri-Kakavand, G.; Pursamimi, M.; Shalbaf, A.; Davanloo, A.A. Automatic diagnosis of stage of COVID-19 patients using an ensemble of transfer learning with convolutional neural networks based on computed tomography images. Journal of Medical Signals & Sensors 2023, 13, 101–109. [Google Scholar]
- Ait Nasser, A.; Akhloufi, M.A. A review of recent advances in deep learning models for chest disease detection using radiography. Diagnostics 2023, 13, 159. [Google Scholar] [CrossRef] [PubMed]
- Shalbaf, A.; Gifani, P.; Mehri-Kakavand, G.; Pursamimi, M.; Ghorbani, M.; Davanloo, A.A.; Vafaeezadeh, M. Automatic diagnosis of severity of COVID-19 patients using an ensemble of transfer learning models with convolutional neural networks in CT images. Polish Journal of Medical Physics and Engineering 2022, 28, 117–126. [Google Scholar] [CrossRef]
- Qian, J.; Li, H.; Wang, J.; He, L. Recent Advances in Explainable Artificial Intelligence for Magnetic Resonance Imaging. Diagnostics 2023, 13, 1571. [Google Scholar] [CrossRef] [PubMed]
- Vafaeezadeh, M.; Behnam, H.; Hosseinsabet, A.; Gifani, P. A deep learning approach for the automatic recognition of prosthetic mitral valve in echocardiographic images. Computers in Biology and Medicine 2021, 133, 104388. [Google Scholar] [CrossRef] [PubMed]
- Gifani, P.; Shalbaf, A.; Vafaeezadeh, M. Automated detection of COVID-19 using ensemble of transfer learning with deep convolutional neural network based on CT scans. International Journal of Computer Assisted Radiology and Surgery 2021, 16, 115–123. [Google Scholar] [CrossRef]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint 2020, arXiv:2010.11929. [Google Scholar]
- Reynaud, H.; Vlontzos, A.; Hou, B.; Beqiri, A.; Leeson, P.; Kainz, B. Ultrasound video transformers for cardiac ejection fraction estimation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October 2021; Proceedings, Part VI 24, 2021. pp. 495–505. [Google Scholar]
- Gilany, M.; Wilson, P.; Perera-Ortega, A.; Jamzad, A.; To, M.N.N.; Fooladgar, F.; Wodlinger, B.; Abolmaesumi, P.; Mousavi, P. TRUSformer: improving prostate cancer detection from micro-ultrasound using attention and self-supervision. International Journal of Computer Assisted Radiology and Surgery 2023, 1–8. [Google Scholar] [CrossRef] [PubMed]
- Dadoun, H.; Rousseau, A.-L.; de Kerviler, E.; Correas, J.-M.; Tissier, A.-M.; Joujou, F.; Bodard, S.; Khezzane, K.; de Margerie-Mellon, C.; Delingette, H. Deep learning for the detection, localization, and characterization of focal liver lesions on abdominal US images. Radiology: Artificial Intelligence 2022, 4, e210110. [Google Scholar] [CrossRef] [PubMed]
- Wang, W.; Jiang, R.; Cui, N.; Li, Q.; Yuan, F.; Xiao, Z. Semi-supervised vision transformer with adaptive token sampling for breast cancer classification. Frontiers in Pharmacology 2022, 13, 929755. [Google Scholar] [CrossRef] [PubMed]
- Liu, X.; Almekkawy, M. Ultrasound Localization Microscopy Using Deep Neural Network. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2023.
- Liu, Y.; Zhao, J.; Luo, Q.; Shen, C.; Wang, R.; Ding, X. Automated classification of cervical lymph-node-level from ultrasound using Depthwise Separable Convolutional Swin Transformer. Computers in Biology and Medicine 2022, 148, 105821. [Google Scholar] [CrossRef]
- Perera, S.; Adhikari, S.; Yilmaz, A. Pocformer: A lightweight transformer architecture for detection of covid-19 using point of care ultrasound. In Proceedings of the 2021 IEEE international conference on image processing (ICIP); 2021; pp. 195–199. [Google Scholar]
- Li, J.; Zhang, P.; Wang, T.; Zhu, L.; Liu, R.; Yang, X.; Wang, K.; Shen, D.; Sheng, B. DSMT-Net: Dual Self-supervised Multi-operator Transformation for Multi-source Endoscopic Ultrasound Diagnosis. IEEE Transactions on Medical Imaging 2023. [Google Scholar] [CrossRef] [PubMed]
- Hu, X.; Cao, Y.; Hu, W.; Zhang, W.; Li, J.; Wang, C.; Mukhopadhyay, S.C.; Li, Y.; Liu, Z.; Li, S. Refined feature-based Multi-frame and Multi-scale Fusing Gate network for accurate segmentation of plaques in ultrasound videos. Computers in Biology and Medicine 2023, 107091. [Google Scholar] [CrossRef] [PubMed]
- Xia, M.; Yang, H.; Qu, Y.; Guo, Y.; Zhou, G.; Zhang, F.; Wang, Y. Multilevel structure-preserved GAN for domain adaptation in intravascular ultrasound analysis. Medical Image Analysis 2022, 82, 102614. [Google Scholar] [CrossRef]
- Yang, C.; Liao, S.; Yang, Z.; Guo, J.; Zhang, Z.; Yang, Y.; Guo, Y.; Yin, S.; Liu, C.; Kang, Y. RDHCformer: Fusing ResDCN and Transformers for Fetal Head Circumference Automatic Measurement in 2D Ultrasound Images. Frontiers in Medicine 2022, 9, 848904. [Google Scholar] [CrossRef]
- Sankari, V.R.; Raykar, D.A.; Snekhalatha, U.; Karthik, V.; Shetty, V. Automated detection of cystitis in ultrasound images using deep learning techniques. IEEE Access 2023. [Google Scholar] [CrossRef]
- Basu, S.; Gupta, M.; Rana, P.; Gupta, P.; Arora, C. RadFormer: Transformers with global–local attention for interpretable and accurate Gallbladder Cancer detection. Medical Image Analysis 2023, 83, 102676. [Google Scholar] [CrossRef]
- Shamshad, F.; Khan, S.; Zamir, S.W.; Khan, M.H.; Hayat, M.; Khan, F.S.; Fu, H. Transformers in medical imaging: A survey. Medical Image Analysis 2023, 102802. [Google Scholar] [CrossRef] [PubMed]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Advances in neural information processing systems 2017, 30. [Google Scholar]
- Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-end object detection with transformers. In Proceedings of the European conference on computer vision; 2020; pp. 213–229. [Google Scholar]
- Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; Jégou, H. Training data-efficient image transformers & distillation through attention. In Proceedings of the International conference on machine learning; 2021; pp. 10347–10357. [Google Scholar]
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021; pp. 10012–10022.
- Wang, W.; Xie, E.; Li, X.; Fan, D.-P.; Song, K.; Liang, D.; Lu, T.; Luo, P.; Shao, L. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021; pp. 568–578.
- Wu, H.; Xiao, B.; Codella, N.; Liu, M.; Dai, X.; Yuan, L.; Zhang, L. CvT: Introducing convolutions to vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021; pp. 22–31.
- Ranftl, R.; Bochkovskiy, A.; Koltun, V. Vision transformers for dense prediction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021; pp. 12179–12188.
- https://www.who.int/news-room/fact-sheets/detail/breast-cancer.
- Liu, Y.; Yang, Y.; Jiang, W.; Wang, T.; Lei, B. 3d deep attentive u-net with transformer for breast tumor segmentation from automated breast volume scanner. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC); 2021; pp. 4011–4014. [Google Scholar]
- Gheflati, B.; Rivaz, H. Vision transformers for classification of breast ultrasound images. In Proceedings of the 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC); 2022; pp. 480–483. [Google Scholar]
- Ayana, G.; Choe, S.-W. BUVITNET: Breast ultrasound detection via vision transformers. Diagnostics 2022, 12, 2654. [Google Scholar] [CrossRef] [PubMed]
- Li, G.; Jin, D.; Yu, Q.; Qi, M. IB-TransUNet: Combining Information Bottleneck and Transformer for Medical Image Segmentation. Journal of King Saud University-Computer and Information Sciences 2023, 35, 249–258. [Google Scholar] [CrossRef]
- Mo, Y.; Han, C.; Liu, Y.; Liu, M.; Shi, Z.; Lin, J.; Zhao, B.; Huang, C.; Qiu, B.; Cui, Y. Hover-trans: Anatomy-aware hover-transformer for roi-free breast cancer diagnosis in ultrasound images. IEEE Transactions on Medical Imaging 2023. [Google Scholar] [CrossRef] [PubMed]
- Wu, H.; Huang, X.; Guo, X.; Wen, Z.; Qin, J. Cross-image Dependency Modelling for Breast Ultrasound Segmentation. IEEE Transactions on Medical Imaging 2023. [Google Scholar] [CrossRef] [PubMed]
- Ji, H.; Zhu, Q.; Ma, T.; Cheng, Y.; Zhou, S.; Ren, W.; Huang, H.; He, W.; Ran, H.; Ruan, L. Development and validation of a transformer-based CAD model for improving the consistency of BI-RADS category 3–5 nodule classification among radiologists: a multiple center study. Quantitative Imaging in Medicine and Surgery 2023, 13, 3671. [Google Scholar] [CrossRef]
- Li, G.; Jin, D.; Yu, Q.; Zheng, Y.; Qi, M. MultiIB-TransUNet: Transformer with multiple information bottleneck blocks for CT and ultrasound image segmentation. Medical Physics 2023. [Google Scholar] [CrossRef] [PubMed]
- Zhou, J.; Hou, Z.; Lu, H.; Wang, W.; Zhao, W.; Wang, Z.; Zheng, D.; Wang, S.; Tang, W.; Qu, X. A deep supervised transformer U-shaped full-resolution residual network for the segmentation of breast ultrasound image. Medical Physics 2023, 50, 7513–7524. [Google Scholar] [CrossRef]
- Song, M.; Kim, Y. Optimizing proportional balance between supervised and unsupervised features for ultrasound breast lesion classification. Biomedical Signal Processing and Control 2024, 87, 105443. [Google Scholar] [CrossRef]
- https://zenodo.org/records/8041285.
- https://www.kaggle.com/datasets/aryashah2k/breast-ultrasound-images-dataset.
- Lu, X.; Liu, X.; Xiao, Z.; Zhang, S.; Huang, J.; Yang, C.; Liu, S. Self-supervised dual-head attentional bootstrap learning network for prostate cancer screening in transrectal ultrasound images. Computers in Biology and Medicine 2023, 165, 107337. [Google Scholar] [CrossRef]
- Li, C.; Du, R.; Luo, Q.; Wang, R.; Ding, X. A novel model of thyroid nodule segmentation for ultrasound images. Ultrasound in Medicine & Biology 2023, 49, 489–496. [Google Scholar]
- Zhang, N.; Liu, J.; Jin, Y.; Duan, W.; Wu, Z.; Cai, Z.; Wu, M. An adaptive multi-modal hybrid model for classifying thyroid nodules by combining ultrasound and infrared thermal images. BMC Bioinformatics 2023, 24, 315. [Google Scholar] [CrossRef] [PubMed]
- Jerbi, F.; Aboudi, N.; Khlifa, N. Automatic classification of ultrasound thyroids images using vision transformers and generative adversarial networks. Scientific African 2023, 20, e01679. [Google Scholar] [CrossRef]
- Liu, Q.; Ding, F.; Li, J.; Ji, S.; Liu, K.; Geng, C.; Lyu, L. DCA-Net: Dual-branch contextual-aware network for auxiliary localization and segmentation of parathyroid glands. Biomedical Signal Processing and Control 2023, 84, 104856. [Google Scholar] [CrossRef]
- Chen, F.; Han, H.; Wan, P.; Liao, H.; Liu, C.; Zhang, D. Joint Segmentation and Differential Diagnosis of Thyroid Nodule in Contrast-Enhanced Ultrasound Images. IEEE Transactions on Biomedical Engineering 2023. [Google Scholar] [CrossRef]
- Zhao, X.; Li, H.; Xu, J.; Wu, J. Ultrasonic Thyroid Nodule Benign-Malignant Classification with Multi-level Features Fusions. In Proceedings of the 2023 8th International Conference on Image, Vision and Computing (ICIVC), 2023; pp. 907–912.
- Zeng, Y.; Tsui, P.-H.; Wu, W.; Zhou, Z.; Wu, S. MAEF-Net: multi-attention efficient feature fusion network for deep learning segmentation. In Proceedings of the 2021 IEEE International Ultrasonics Symposium (IUS); 2021; pp. 1–4. [Google Scholar]
- Ahmadi, N.; Tsang, M.; Gu, A.; Tsang, T.; Abolmaesumi, P. Transformer-based spatio-temporal analysis for classification of aortic stenosis severity from echocardiography cine series. IEEE Transactions on Medical Imaging 2023. [Google Scholar] [CrossRef]
- Vafaeezadeh, M.; Behnam, H.; Hosseinsabet, A.; Gifani, P. CarpNet: Transformer for mitral valve disease classification in echocardiographic videos. International Journal of Imaging Systems and Technology 2023. [Google Scholar] [CrossRef]
- Al Qurri, A.; Almekkawy, M. Improved UNet with Attention for Medical Image Segmentation. Sensors 2023, 23, 8589. [Google Scholar] [CrossRef]
- Luo, J.; Wang, Q.; Zou, R.; Wang, Y.; Liu, F.; Zheng, H.; Du, S.; Yuan, C. A Heart Image Segmentation Method Based on Position Attention Mechanism and Inverted Pyramid. Sensors 2023, 23, 9366. [Google Scholar] [CrossRef]
- Ahn, S.S.; Ta, K.; Thorn, S.L.; Onofrey, J.A.; Melvinsdottir, I.H.; Lee, S.; Langdon, J.; Sinusas, A.J.; Duncan, J.S. Co-attention spatial transformer network for unsupervised motion tracking and cardiac strain analysis in 3D echocardiography. Medical Image Analysis 2023, 84, 102711. [Google Scholar] [CrossRef] [PubMed]
- Tang, Z.; Duan, J.; Sun, Y.; Zeng, Y.; Zhang, Y.; Yao, X. A combined deformable model and medical transformer algorithm for medical image segmentation. Medical & Biological Engineering & Computing 2023, 61, 129–137. [Google Scholar]
- Liao, M.; Lian, Y.; Yao, Y.; Chen, L.; Gao, F.; Xu, L.; Huang, X.; Feng, X.; Guo, S. Left Ventricle Segmentation in Echocardiography with Transformer. Diagnostics 2023, 13, 2365. [Google Scholar] [CrossRef] [PubMed]
- Zhao, C.; Chen, W.; Qin, J.; Yang, P.; Xiang, Z.; Frangi, A.F.; Chen, M.; Fan, S.; Yu, W.; Chen, X. IFT-net: Interactive fusion transformer network for quantitative analysis of pediatric echocardiography. Medical Image Analysis 2022, 82, 102648. [Google Scholar] [CrossRef] [PubMed]
- Fazry, L.; Haryono, A.; Nissa, N.K.; Hirzi, N.M.; Rachmadi, M.F.; Jatmiko, W. Hierarchical Vision Transformers for Cardiac Ejection Fraction Estimation. In Proceedings of the 2022 7th International Workshop on Big Data and Information Security (IWBIS); 2022; pp. 39–44. [Google Scholar]
- Hagberg, E.; Hagerman, D.; Johansson, R.; Hosseini, N.; Liu, J.; Björnsson, E.; Alvén, J.; Hjelmgren, O. Semi-supervised learning with natural language processing for right ventricle classification in echocardiography—a scalable approach. Computers in Biology and Medicine 2022, 143, 105282. [Google Scholar] [CrossRef] [PubMed]
- Vafaeezadeh, M.; Behnam, H.; Hosseinsabet, A.; Gifani, P. Automatic morphological classification of mitral valve diseases in echocardiographic images based on explainable deep learning methods. International Journal of Computer Assisted Radiology and Surgery 2022, 17, 413–425. [Google Scholar] [CrossRef]
- Rahman, R.; Alam, M.G.R.; Reza, M.T.; Huq, A.; Jeon, G.; Uddin, M.Z.; Hassan, M.M. Demystifying evidential Dempster Shafer-based CNN architecture for fetal plane detection from 2D ultrasound images leveraging fuzzy-contrast enhancement and explainable AI. Ultrasonics 2023, 132, 107017. [Google Scholar] [CrossRef] [PubMed]
- Sarker, M.M.K.; Singh, V.K.; Alsharid, M.; Hernandez-Cruz, N.; Papageorghiou, A.T.; Noble, J.A. COMFormer: Classification of Maternal-Fetal and Brain Anatomy using a Residual Cross-Covariance Attention Guided Transformer in Ultrasound. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2023. [Google Scholar] [CrossRef] [PubMed]
- Arora, U.; Sengupta, D.; Kumar, M.; Tirupathi, K.; Sai, M.K.; Hareesh, A.; Chaithanya, E.S.S.; Nikhila, V.; Bhavana, N.; Vigneshwar, P. Perceiving placental ultrasound image texture evolution during pregnancy with normal and adverse outcome through machine learning prism. Placenta 2023. [Google Scholar] [CrossRef]
- Chen, X.; You, G.; Chen, Q.; Zhang, X.; Wang, N.; He, X.; Zhu, L.; Li, Z.; Liu, C.; Yao, S. Development and evaluation of an artificial intelligence system for children intussusception diagnosis using ultrasound images. Iscience 2023, 26. [Google Scholar] [CrossRef]
- Qiao, S.; Pang, S.; Luo, G.; Sun, Y.; Yin, W.; Pan, S.; Lv, Z. DPC-MSGATNet: dual-path chain multi-scale gated axial-transformer network for four-chamber view segmentation in fetal echocardiography. Complex & Intelligent Systems 2023, 1–17. [Google Scholar]
- Płotka, S.; Grzeszczyk, M.K.; Brawura-Biskupski-Samaha, R.; Gutaj, P.; Lipa, M.; Trzciński, T.; Sitek, A. BabyNet: residual transformer module for birth weight prediction on fetal ultrasound video. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; 2022; pp. 350–359. [Google Scholar]
- Płotka, S.; Grzeszczyk, M.K.; Brawura-Biskupski-Samaha, R.; Gutaj, P.; Lipa, M.; Trzciński, T.; Išgum, I.; Sánchez, C.I.; Sitek, A. BabyNet++: Fetal birth weight prediction using biometry multimodal data acquired less than 24 hours before delivery. Computers in Biology and Medicine 2023, 167, 107602. [Google Scholar] [CrossRef] [PubMed]
- Płotka, S.S.; Grzeszczyk, M.K.; Szenejko, P.I.; Żebrowska, K.; Szymecka-Samaha, N.A.; Łęgowik, T.; Lipa, M.A.; Kosińska-Kaczyńska, K.; Brawura-Biskupski-Samaha, R.; Išgum, I. Deep learning for estimation of fetal weight throughout the pregnancy from fetal abdominal ultrasound. American Journal of Obstetrics & Gynecology MFM 2023, 5, 101182. [Google Scholar]
- Zhao, C.; Droste, R.; Drukker, L.; Papageorghiou, A.T.; Noble, J.A. Visual-assisted probe movement guidance for obstetric ultrasound scanning using landmark retrieval. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October; Proceedings, Part VIII 24, 2021. pp. 670–679. [Google Scholar]
- Lin, Y.; Huang, J.; Xu, W.; Cui, C.; Xu, W.; Li, Z. Method for carotid artery 3-D ultrasound image segmentation based on cswin transformer. Ultrasound in Medicine & Biology 2023, 49, 645–656. [Google Scholar]
- Li, L.; Hu, Z.; Huang, Y.; Zhu, W.; Zhao, C.; Wang, Y.; Chen, M.; Yu, J. BP-Net: Boundary and perfusion feature guided dual-modality ultrasound video analysis network for fibrous cap integrity assessment. Computerized Medical Imaging and Graphics 2023, 107, 102246. [Google Scholar] [CrossRef] [PubMed]
- Nehary, E.; Rajan, S.; Rossa, C. Lung Ultrasound Image Classification Using Deep Learning and Histogram of Oriented Gradients Features for COVID-19 Detection. In Proceedings of the 2023 IEEE Sensors Applications Symposium (SAS); 2023; pp. 1–6. [Google Scholar]
- Xing, W.; Liu, Y.; He, C.; Liu, X.; Li, Y.; Li, W.; Chen, J.; Ta, D. Frame-to-video-based Semi-supervised Lung Ultrasound Scoring Model. In Proceedings of the 2023 IEEE International Ultrasonics Symposium (IUS); 2023; pp. 1–4. [Google Scholar]
- Zhang, J.; Chen, Y.; Liu, P. Automatic Recognition of Standard Liver Sections Based on Vision-Transformer. In Proceedings of the 2022 IEEE 16th International Conference on Anti-counterfeiting, Security, and Identification (ASID), 2022; pp. 1–4.
- Zhang, J.; Chen, Y.; Zeng, P.; Liu, Y.; Diao, Y.; Liu, P. Ultra-Attention: Automatic Recognition of Liver Ultrasound Standard Sections Based on Visual Attention Perception Structures. Ultrasound in Medicine & Biology 2023, 49, 1007–1017. [Google Scholar]
- Huang, X.; Bajaj, R.; Li, Y.; Ye, X.; Lin, J.; Pugliese, F.; Ramasamy, A.; Gu, Y.; Wang, Y.; Torii, R. POST-IVUS: A perceptual organisation-aware selective transformer framework for intravascular ultrasound segmentation. Medical Image Analysis 2023, 89, 102922. [Google Scholar] [CrossRef]
- Yan, W.; Ding, Q.; Chen, J.; Yan, K.; Tang, R.S.-Y.; Cheng, S.S. Learning-based needle tip tracking in 2D ultrasound by fusing visual tracking and motion prediction. Medical Image Analysis 2023, 88, 102847. [Google Scholar] [CrossRef]
- Zhao, W.; Su, X.; Guo, Y.; Li, H.; Basnet, S.; Chen, J.; Yang, Z.; Zhong, R.; Liu, J.; Chui, E.C.-s. Deep learning based ultrasonic visualization of distal humeral cartilage for image-guided therapy: a pilot validation study. Quantitative Imaging in Medicine and Surgery 2023, 13, 5306. [Google Scholar] [CrossRef]
- Zhou, Q.; Wang, Q.; Bao, Y.; Kong, L.; Jin, X.; Ou, W. Laednet: A lightweight attention encoder–decoder network for ultrasound medical image segmentation. Computers and Electrical Engineering 2022, 99, 107777. [Google Scholar] [CrossRef]
- Katakis, S.; Barotsis, N.; Kakotaritis, A.; Tsiganos, P.; Economou, G.; Panagiotopoulos, E.; Panayiotakis, G. Muscle Cross-Sectional Area Segmentation in Transverse Ultrasound Images Using Vision Transformers. Diagnostics 2023, 13, 217. [Google Scholar] [CrossRef] [PubMed]
- Lo, C.-M.; Lai, K.-L. Deep learning-based assessment of knee septic arthritis using transformer features in sonographic modalities. Computer Methods and Programs in Biomedicine 2023, 237, 107575. [Google Scholar] [CrossRef] [PubMed]
- Zhang, G.; Zheng, C.; He, J.; Yi, S. PCT: Pyramid convolutional transformer for parotid gland tumor segmentation in ultrasound images. Biomedical Signal Processing and Control 2023, 81, 104498. [Google Scholar] [CrossRef]
- Manzari, O.N.; Ahmadabadi, H.; Kashiani, H.; Shokouhi, S.B.; Ayatollahi, A. MedViT: a robust vision transformer for generalized medical image classification. Computers in Biology and Medicine 2023, 157, 106791. [Google Scholar] [CrossRef]
- Qu, X.; Ren, C.; Wang, Z.; Fan, S.; Zheng, D.; Wang, S.; Lin, H.; Jiang, J.; Xing, W. Complex transformer network for single-angle plane-wave imaging. Ultrasound in Medicine & Biology 2023, 49, 2234–2246. [Google Scholar]

**Breast**

| Methods / References | Publication Year | Task | Architecture | Dataset | Evaluation Metrics | Highlights |
|---|---|---|---|---|---|---|
| [33] | 2021 | Segmentation | 3D deep attentive U-Net with Transformer | Self-collected dataset | Dice: 0.7636; Jc: 0.6214; HD: 15.47; Prec: 0.7895; Se: 0.7542; Sp: 0.9885 | 3D deep convolutional NN |
| [14] | 2022 | Classification | Semi-supervised vision transformer | DBUI; BreakHis | Acc: 0.981; Prec: 0.981; Rec: 0.986; F1-score: 0.984 | Semi-supervised learning ViT |
| [34] | 2022 | Classification | Vision transformer / transfer learning | BUSI; UDIAT (goo.gl/SJmoti) | Acc: 0.867; AUC: 0.95 | Uses various augmentation methods |
| [35] | 2022 | Classification | Vision transformer / transfer learning | BUSI; Mendeley breast ultrasound | Acc: 0.919; AUC: 0.937; F1-score: 0.919; MCC: 0.924; Kappa: 0.919 | Transfer learning from cancer cell classification |
| [36] | 2023 | Segmentation | Transformer with information bottlenecks | Synapse dataset; BUSI | DSC: 0.8195; HD: 20.35 | Adds multi-resolution fusion to skip connections |
| [37] | 2023 | Classification | Horizontal and vertical transformers | UDIAT; BUSI; GDPH&SYSUCC | AUC: 0.92; Acc: 0.893; Sp: 0.836; Prec: 0.906; Rec: 0.926; F1-score: 0.916 | Derives horizontal and vertical spatial information |
| [38] | 2023 | Segmentation | Cross-image dependency modelling | BUSI; UDIAT | Dice: 0.8577; Jc: 0.7899; Acc: 0.9733; Sp: 0.9894; Se: 0.8584 | Cross-image dependency module, cross-image contextual modelling, and a cross-image dependency loss |
| [39] | 2023 | Localization / BI-RADS classification | Vision transformer | Self-collected dataset | Acc: 0.9489; Sp: 0.9509; Se: 0.941 | BI-RADS category 3–5 nodule classification |
| [40] | 2023 | Segmentation | Transformer with information bottlenecks | BUSI | F1: 0.8078; IoU: 0.6775 | Uses a single transformer layer |
| [41] | 2023 | Segmentation | Full-resolution residual stream / TransU-Net / transformer | Open BUS dataset from the Sun Yat-sen University Cancer Center; UDIAT | Dice: 0.9104 | Deep supervised transformer U-shaped full-resolution residual network |
| [42] | 2024 | Segmentation / classification | Combined supervised and unsupervised learning | BUSI; UDIAT | Acc: 0.99907; Sp: 0.9766; Se: 0.9977 | Tackles the problem of mask unavailability |
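All of the ViT variants summarized above build on the self-attention and multi-head self-attention primitives of Vaswani et al. outlined in Sections 2.1.1 and 2.1.2. As a quick reference, the following is a minimal PyTorch sketch of both operations; the dimensions and module names are illustrative assumptions, not the implementation of any cited method.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5   # (batch, heads, tokens, tokens)
    weights = F.softmax(scores, dim=-1)             # each query's weights sum to 1
    return weights @ v                              # weighted sum of value vectors

class MultiHeadSelfAttention(torch.nn.Module):
    def __init__(self, embed_dim: int = 768, num_heads: int = 12):
        super().__init__()
        assert embed_dim % num_heads == 0
        self.num_heads, self.head_dim = num_heads, embed_dim // num_heads
        self.qkv = torch.nn.Linear(embed_dim, 3 * embed_dim)  # joint Q, K, V projection
        self.proj = torch.nn.Linear(embed_dim, embed_dim)     # output projection

    def forward(self, x):
        # x: (batch, tokens, embed_dim), e.g. embedded ultrasound image patches
        b, n, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape embed_dim into (num_heads, head_dim) and attend per head
        q, k, v = (t.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
                   for t in (q, k, v))
        out = scaled_dot_product_attention(q, k, v)
        out = out.transpose(1, 2).reshape(b, n, d)  # concatenate the heads
        return self.proj(out)
```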
**Prostate**

| Methods / References | Publication Year | Task | Architecture | Dataset | Evaluation Metrics | Highlights |
|---|---|---|---|---|---|---|
| [45] | 2023 | Classification | Online-Net and Target-Net | Self-collected data | Acc: 0.8046; malignant: Prec: 0.8267, Rec: 0.8662, F1: 0.7907; benign: Prec: 0.7500, Rec: 0.6364, F1: 0.6885 | Self-supervised dual-head attentional bootstrap learning network (SDABL) comprising Online-Net and Target-Net |
| [12] | 2023 | Classification | ROI-scale and core-scale feature extraction | Self-collected data | Prec: 0.787; Se: 0.880; Sp: 0.512; AUROC: 0.803 | Micro-ultrasound dataset with biopsy results |
**Thyroid**

| Methods / References | Publication Year | Task | Architecture | Dataset | Evaluation Metrics | Highlights |
|---|---|---|---|---|---|---|
| [46] | 2023 | Segmentation | CNN + vision transformer | Self-collected data; DDTI; Breast Ultrasound Images Dataset (BUID) | IoU: 0.810; Dice: 0.892 | Boundary attention transformer net |
| [47] | 2023 | Classification | CNN + vision transformer | Self-collected data | Acc: 0.9738; Prec: 0.9699; Sp: 0.9739; Se: 0.9736; F1-score: 0.9717; F2-score: 0.9738 | Uses ultrasound and infrared thermal images simultaneously; CNN and Transformer extract features, and a vision transformer fuses them |
| [48] | 2023 | Segmentation | CNN + vision transformer | Self-collected data | Dice: 84.76; Jaccard: 74.39; mIoU: 86.5; Rec: 83.9; Prec: 86.5 | Residual bottlenecks, Transformer bottlenecks, two-branch down-sampling blocks, and a long-range feature extractor composed of the vision transformer |
| [49] | 2023 | Classification | Hybrid CNN and ViT | Public CIM@LAB | F1: 96.67; Rec: 95.01; Prec: 98.51; Acc: 97.63 | Hybrid ViT model with a CNN backbone |
| [50] | 2023 | Segmentation / classification | Swin Transformer | Self-collected data | Dice: 82.41; Acc: 86.59 | Dynamic Swin Transformer encoder and multi-level feature collaborative learning combined into a U-Net |
| [51] | 2023 | Classification | Hybrid CNN and Swin Transformer | Public DDTI dataset (National University of Colombia) | Acc: 0.954; Sp: 0.958; Se: 0.975; AUC: 0.974 | Shallow and deep features fused for classification |
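Several of the thyroid entries above ([46], [47], [48], [51]) share a hybrid pattern: a CNN extracts local feature maps, which are flattened into tokens and fused globally by a transformer encoder before classification. The sketch below illustrates that generic pattern under assumed layer sizes; it is not the architecture of any specific cited paper, and positional encoding is omitted for brevity.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class HybridCNNTransformerClassifier(nn.Module):
    """Generic hybrid: CNN feature map -> token sequence -> transformer -> logits."""
    def __init__(self, num_classes: int = 2, embed_dim: int = 256, num_heads: int = 8):
        super().__init__()
        backbone = resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])  # keep the spatial map
        self.proj = nn.Conv2d(512, embed_dim, kernel_size=1)       # match transformer width
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):            # x: (B, 3, 224, 224), grayscale US repeated to 3 channels
        f = self.proj(self.cnn(x))   # (B, D, h, w) local CNN features
        tokens = f.flatten(2).transpose(1, 2)               # (B, h*w, D)
        cls = self.cls_token.expand(x.size(0), -1, -1)      # learnable classification token
        z = self.encoder(torch.cat([cls, tokens], dim=1))   # global fusion via self-attention
        return self.head(z[:, 0])    # classify from the CLS token
```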
**Heart**

| Methods / References | Publication Year | Task | Architecture | Dataset | Evaluation Metrics | Highlights |
|---|---|---|---|---|---|---|
| MAEF-Net [52] | 2023 | Segmentation and detection | Dual attention (DA) mechanism + efficient atrous spatial pyramid pooling (EASPP) | EchoNet-Dynamic (10,030 videos); private clinical dataset (2129 images) | DSC: 0.9310; MAE: 0.9281 | Captures heartbeat features, minimizes noise, integrates a deep supervision mechanism, and employs spatial pyramid feature fusion |
| [53] | 2023 | Aortic stenosis (AS) detection and severity classification | Temporal deformable attention (TDA) + MLP + Transformer | Private AS dataset (2247 patients, 9117 videos); public TMED-2 dataset (577 patients) | Acc (AS detection, private / public): 0.952 / 0.915; Acc (severity classification, private / public): 0.781 / 0.838 | Temporal loss boosts sensitivity to subtle motion of the aortic valve (AV); temporal attention merges spatial data with temporal context; key echo frames are identified automatically for the classifier |
| CarpNet [54] | 2023 | Classification | Transformer network + Inception-ResNet-V2 | Private dataset (1773 cases) | Acc: 0.71 | First published application of the Carpentier functional classification to echocardiographic videos of the mitral valve |
| Improved UNet [55] | 2023 | Segmentation | CNN (Squeeze-and-Excitation (SE)) and Transformer | CAMUS dataset | DSC (ED): 0.9252; HD (ED): 11.04 mm; DSC (ES): 0.9264; HD (ES): 12.35 mm | Introduces a Three-Level Attention (TLA) module that boosts feature embedding; a Transformer is integrated at the bottleneck |
| Position Attention [56] | 2023 | Segmentation | Position attention block + atrous spatial pyramid pooling (ASPP) | EchoNet-Dynamic dataset | DSC: 0.9145; Prec: 0.9079; Rec: 0.9278; F1-score: 0.9177; Jc: 0.8847 | Bicubic interpolation produces high-resolution images; position-aware attention captures positional knowledge |
| Co-attention spatial transformer [57] | 2023 | Tracking | Co-attention spatial transformer network (STN) | Synthetic dataset + in vivo 3D echocardiography dataset | MTE: 0.99 | Spatial-temporal co-attention module for unsupervised motion tracking in 3D echocardiography |
| [58] | 2023 | Segmentation | Gated axial attention | 480 transverse images | DSC: 0.919 | Axial attention and dual-scale training capture long-range features and focus the model on important areas, supporting applicability across a wide range of medical imaging scenarios |
| Segformer + Swin Transformer and K-Net [59] | 2023 | Segmentation | Mixed vision transformer + lightweight Segformer | EchoNet-Dynamic dataset | DSC (Swin / Segformer): 0.9292 / 0.9279 | Two transformer-based automated deep learning strategies for left ventricle (LV) segmentation in echocardiography; simple post-processing that retains the largest-area segment corrects missegmented outputs |
| IFT-Net [60] | 2022 | Segmentation | Interactive fusion transformer network (IFT-Net) | 4485 A4C and 1623 PSAX pediatric echocardiograms + CAMUS | Acc: 0.954; DSC (LVEndo / LVEpi): 0.9049 / 0.8046 | Bidirectional fusion of local features and global context between the convolution and transformer branches; a parallel dual-path transformer (DPT) and CNN network enables full-process dual-branch feature interactive learning, applied to automatic quantitative analysis of pediatric echocardiography |
| UltraSwin [61] | 2022 | Ejection fraction estimation | Hierarchical vision transformers | EchoNet-Dynamic dataset | MAE: 5.59 | Estimates ejection fraction without requiring left ventricle segmentation |
| Semi-supervised learning with NLP [62] | 2022 | Right ventricular (RV) function and size classification | Text classification with a 12-layer BERT model | 12,684 examinations with Swedish text reports | Se / Sp (text classifier, RV size): 0.98 / 0.98; Se / Sp (text classifier, RV function): 0.99 / 0.98; Acc (A4C and view classification): 0.92 and 0.73; Se / Sp (image classifier, RV size and function): 0.80 / 0.85; Se / Sp (image classifier, RV function): 0.93 / 0.72 | Pipeline for automatic image assessment using NLP models; training on model-annotated data from written echocardiography reports; markedly improved Se and Sp for impaired RV function and enlarged RV; auto-annotation enables fast, cost-effective expansion of the training dataset |
| Ultrasound Video Transformers [11] | 2021 | ES/ED detection and LVEF estimation | BERT model and residual auto-encoder network | EchoNet-Dynamic dataset | Average frame distance: 3.36 frames (ES) and 7.17 frames (ED); MAE (LVEF): 5.95; R² (LVEF): 0.52 | End-to-end learnable ejection fraction estimation without segmentation; modified transformer architecture processes image sequences of varying lengths |
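The ejection fraction estimators above (Ultrasound Video Transformers [11], UltraSwin [61]) share one shape: encode each frame into an embedding, apply self-attention across time, and regress a single LVEF value without segmenting the left ventricle. The following is a schematic sketch of that pattern; the tiny frame encoder, layer sizes, and output scaling are illustrative assumptions, not the published models.

```python
import torch
import torch.nn as nn

class EFRegressor(nn.Module):
    """Frame encoder + temporal transformer + regression head (schematic)."""
    def __init__(self, embed_dim: int = 384, num_heads: int = 6, depth: int = 4):
        super().__init__()
        # tiny stand-in for the papers' ViT/CNN per-frame backbones
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=7, stride=4, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, embed_dim),
        )
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                           batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(embed_dim, 1)

    def forward(self, clip):                 # clip: (B, T, 1, H, W) echo video
        b, t = clip.shape[:2]
        frames = clip.flatten(0, 1)          # (B*T, 1, H, W)
        tokens = self.frame_encoder(frames).view(b, t, -1)
        z = self.temporal(tokens)            # self-attention across the time axis
        return 100 * torch.sigmoid(self.head(z.mean(dim=1))).squeeze(-1)  # LVEF in %

# usage sketch: ef = EFRegressor()(torch.randn(2, 16, 1, 112, 112))  # two 16-frame clips
```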
**Fetal**

| Methods / References | Publication Year | Task | Architecture | Dataset | Evaluation Metrics | Highlights |
|---|---|---|---|---|---|---|
| Fetal plane detection [64] | 2023 | Classification | Swin Transformer + evidential Dempster-Shafer-based CNN | BCNatal: 12,400 images | Acc: 0.889 | Evidential Dempster-Shafer layer combined with a custom-designed CNN for fetal plane detection; end-to-end learnable classification; explores the Swin Transformer, rarely used in ultrasound fetal plane analysis |
| COMFormer [65] | 2023 | Classification | Residual cross-covariance attention (R-XCA) | BCNatal: 12,400 images | Acc (maternal-fetal): 0.9564; Acc (brain anatomy): 0.9633 | R-XCA block uses residual connections to ease gradient flow and boost learning |
| Placental texture evolution [66] | 2023 | Classification | Vision transformer (ViT) | 1008 cases | Acc (T1 & T2 images): 0.6949; Acc (T2 & T3 images): 0.7083; Acc (T1 & T3 images): 0.8413 | Of three deep learning models evaluated, the transfer learning model achieved the highest accuracy |
| CIDNet [67] | 2023 | Classification | Multi-instance deformable transformer classification (MI-DTC) | 9999 images | Balanced Acc (BACC): 0.8464; AUC: 0.9716 | Four CNN-based backbone networks for pre-processing; effective cropping procedure in the pre-processing module; a multi-weighted loss function improved performance; a Gaussian blurring curriculum was confirmed to fix texture bias |
| DPC-MSGATNet [68] | 2023 | Segmentation | Interactive dual-path chain gated axial transformer (IDPCGAT) | 556 four-chamber (FC) views | F1-score: 0.9687; IoU: 0.9399 | Global and local branch networks simultaneously handle the full image and its smaller segments |
| BabyNet [69] | 2022 | Regression | Residual transformer module in a 3D ResNet | 225 2D fetal ultrasound videos | MAPE: 7.5 ± 0.66 | Predicts birth weight directly from fetal ultrasound video scans via a novel residual transformer module |
| BabyNet++ [70] | 2023 | Regression | Residual transformer with dynamic affine feature transform maps (DAFT) | 582 2D fetal ultrasound videos | MAPE: 5.1 ± 0.6 | Outperforms expert clinicians and is less sensitive to clinical data |
| [71] | 2023 | Regression | BabyNet | 900 routine fetal ultrasound examinations | MAPE: 3.75 ± 2.00% | No significant difference between fetal weight predictions made by human experts and those generated by the deep network |
| RDHCformer [21] | 2022 | Segmentation | Integrated Transformer and CNN | HC18 dataset | MAE ± std (mm): 1.97 ± 1.89 | Anchor-free rotating-ellipse detection for skull edges; a soft stagewise regression (SSR) strategy addresses angle regression; Kullback-Leibler divergence (KLD) loss added to the total loss improves regression accuracy |
| Transformer-VLAD [72] | 2021 | Image retrieval | Transformer-VLAD (vector of locally aggregated descriptors) | ScanTrainer simulator (535,775 US images) | Recall@top1: 0.834 | US probe movement guidance treated as a landmark retrieval problem with a learned descriptor search; a Transformer-VLAD network enables automatic landmark retrieval |
**Carotid**

| Methods / References | Publication Year | Task | Architecture | Dataset | Evaluation Metrics | Highlights |
|---|---|---|---|---|---|---|
| U-CSWT [73] | 2023 | Segmentation | U-shaped CSWin transformer | 213 3D ultrasound images | DSC (MAB, common carotid artery): 0.946; DSC (LIB, common carotid artery): 0.908 | Descriptor learning via contrastive learning with self-constructed anchor-positive-negative pairs of ultrasound images |
| BP-Net [74] | 2023 | Classification | Boundary and perfusion network (BP-Net) + multi-modal fusion block | 245 US and CEUS videos | Acc: 0.9235; AUC: 0.935 | Multi-modal fusion block probes internal and external plaque characteristics and highlights influential features across US and contrast-enhanced ultrasound (CEUS) videos; combines the robustness of CNNs with the refined global modelling of Transformers for more precise classification |
| RMFG_Net [19] | 2023 | Segmentation | Transformer-based cross-scale spatial location (TCSL) | DT dataset: 157 | DSC: 0.8598; IoU: 0.7922; HD (mm): 11.66 | Spatial-temporal feature filter (STFF) learns more target information from low-level features; a multilayer gated fusion model propagates information efficiently and reduces noise during fusion |
**Lung**

| Methods / References | Publication Year | Task | Architecture | Dataset | Evaluation Metrics | Highlights |
|---|---|---|---|---|---|---|
| Nehary [75] | 2023 | Classification | Vision transformer (ViT) | Lung ultrasound (LUS) dataset: 202 | Acc: 0.8666 | ViT extracts abstract features, leverages transfer learning, and uses transformer encoding to capture spatial context for accurate final classification |
| POCFormer [17] | 2021 | Classification | Vision transformer and a linear transformer | 212 US videos | Acc: 0.939 | Lightweight transformer architecture for point-of-care ultrasound |
| DaViT [76] | 2023 | Segmentation | Dual attention vision transformer (DaViT) | LUS dataset: 202 | Acc (FL scoring): 0.9508; Acc (VL scoring): 0.9259 | Long short-term memory (LSTM) module for correlation analysis |
**Liver**

| Methods / References | Publication Year | Task | Architecture | Dataset | Evaluation Metrics | Highlights |
|---|---|---|---|---|---|---|
| [77] | 2022 | Classification | Vision transformer (ViT) | 13,970 images | Acc: 0.929 | Standardizes the medical examination of the liver in adults |
| DETR [13] | 2022 | Detection | Detection transformer (DETR) | 1026 patients | Sp: 0.90; Se: 0.97 | Detects, localizes, and characterizes focal liver lesions |
| Ultra-Attention [78] | 2023 | Classification | Transformer | 14,900 images | Acc: 0.932 | Accurately identifies standard sections by modelling the coupling of anatomic structures within the images |
**IVUS**

| Methods / References | Publication Year | Task | Architecture | Dataset | Evaluation Metrics | Highlights |
|---|---|---|---|---|---|---|
| POST-IVUS [79] | 2023 | Segmentation | Selective transformer | IVUS-2011 | Jc: 0.92 | Segmentation by combining fully convolutional networks (FCNs) with temporal-context-based feature encoders |
| MSP-GAN [20] | 2023 | Classification | Multilevel structure-preserved GAN | 212 US videos | Acc: 0.939 | Domain adaptation in IVUS |
**Other / Synthetic**

| Methods / References | Publication Year | Task | Architecture | Dataset | Evaluation Metrics | Highlights |
|---|---|---|---|---|---|---|
| CTN [23] | 2023 | Plane-wave imaging (PWI) | Complex transformer network (CTN) | 1700 samples | Contrast ratio: 11.59 dB; contrast-to-noise ratio: 1.16; generalized contrast-to-noise ratio: 0.68 | Complex convolution manages envelope information and extracts complex reconstruction features from complex IQ data, yielding higher spatial resolution and contrast at significantly reduced computational cost; a complex self-attention (CSA) module, built on the self-attention mechanism, suppresses irrelevant reconstruction features and enhances image quality |
| SR-MT [15] | 2023 | Localization | Swin transformer | 11,000 realistic synthetic datasets | Lateral localization precision (LP, 1.6 MBs/mm²): 15.0; DSC: 0.8; IoU: 0.66 | Confirmed precise localization of microbubbles (MBs) in synthetic data and in vivo visualization of brain structures |
| Depthwise Swin Transformer [16] | 2022 | Classification | Swin Transformer | 2268 ultrasound images (1146 cases) | Acc: 0.8065; Se: 0.8068; Sp: 0.7873; F1-score: 0.7942 | Comprehensive approach for categorizing cervical lymph node levels; combines depthwise separable convolutions with a transformer architecture and a novel loss function |
| Tip tracking [80] | 2023 | Tracking | Visual tracking network | 3000 US images | Tracking success rate: 78% | Transformer-based motion prediction system; visual tracking module with dual mask sets pinpoints the needle tip and minimizes background noise; robust data fusion combines the motion prediction and visual tracking outputs |
| [81] | 2023 | Segmentation | Medical Transformer (MedT) | 5321 ultrasound images | DSC: 0.894 | Image-guided therapy (IGT) for visualization of distal humeral cartilage |
| LAEDNet [82] | 2022 | Segmentation | Lightweight attention encoder-decoder network + lightweight residual squeeze-and-excitation (LRSE) | Brachial Plexus (BP), Breast Ultrasound Images (BUSI), and Head Circumference Ultrasound (HCUS) datasets | DSC (BP): 0.73; DSC (BUSI): 0.738; DSC (HCUS): 0.913 | Asymmetrical structure minimizes network parameters and accelerates inference; the compact LRSE decoding block uses an attention mechanism for smooth integration with the LAEDNet backbone |
| TMUNet [83] | 2023 | Segmentation | Vision transformer + contextual attention network (TMUNet) | 2005 transverse ultrasound images | DSC: 0.96 | Injects additional prior knowledge to support the segmentation task |
| [84] | 2023 | Feature extraction + classification | Vision transformer (ViT) | 278 images | Acc: 0.92; AUC: 0.92 | Vision transformer serves as the feature extractor, with a support vector machine (SVM) as the classifier |
| PCT [85] | 2023 | Segmentation | Pyramid convolutional transformer (PCT) | PGTSeg (parotid gland tumor segmentation) dataset: 365 images | IoU: 0.8434; DSC: 0.9151 | Transformer branch incorporates an enhanced multi-head attention mechanism, the multi-head fusion attention (MHFA) module |
| MedViT [86] | 2023 | Classification | Medical vision transformer (MedViT) | BreastMNIST: 780 breast ultrasound images | AUC: 0.938; Acc: 0.897 | To improve generalization and adversarial robustness, the model's reliance on global structure rather than texture is increased by computing the mean and variance of training examples along channel dimensions in feature space and mixing them, exploring new feature-space regions associated with global structure |
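Common to nearly all ViT-based entries in these tables is the patch-embedding front end of Dosovitskiy et al.: the image is cut into fixed-size patches, each patch is linearly projected to a token, and a learnable class token plus positional embeddings are added before the transformer encoder. A minimal sketch follows, with illustrative sizes and a single input channel to match grayscale ultrasound.

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split an image into patches and linearly embed them, ViT-style."""
    def __init__(self, img_size: int = 224, patch_size: int = 16,
                 in_channels: int = 1, embed_dim: int = 768):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        # a strided convolution is equivalent to patchify + linear projection
        self.proj = nn.Conv2d(in_channels, embed_dim,
                              kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))

    def forward(self, x):                                  # x: (B, C, H, W) ultrasound frame
        tokens = self.proj(x).flatten(2).transpose(1, 2)   # (B, num_patches, embed_dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)     # learnable [CLS] token
        return torch.cat([cls, tokens], dim=1) + self.pos_embed
```

The resulting token sequence is what the self-attention blocks sketched earlier operate on.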
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
