Preprint
Review

This version is not peer-reviewed.

Scoping Review of Recent Trends and Challenges in Artificial Intelligence Based Medical Ultrasound Denoising

Submitted: 31 March 2026
Posted: 01 April 2026


Abstract
(1) Background: Ultrasound (US) imaging is widely used in clinical diagnosis but is often degraded by speckle noise, which reduces image quality and can hinder interpretation. Deep learning has emerged as a promising approach for US denoising, yet its clinical applicability remains unclear. (2) Methods: A systematic review of studies published in the last three years on deep learning-based US denoising was conducted following PRISMA-DTA guidelines. Searches were performed in IEEE Xplore, PubMed, ScienceDirect, Scopus, Web of Science, and Google Scholar. Data were extracted on anatomy, noise type, learning paradigm, network architecture, datasets, evaluation metrics, and performance outcomes. (3) Results: From 951 retrieved records, 36 studies were included. Most focused on breast, fetal, cardiac, and abdominal US. Convolutional neural networks (CNNs), particularly U-Net, were the most common approach, while GANs, transformers, and variational autoencoders were less explored. Reported PSNR ranged from 30 to 45 dB and SSIM from 0.85 to 0.97. Most studies (34 of 36) relied on synthetic noise and paired datasets, with limited evaluation on real clinical images. (4) Conclusions: CNN-based methods dominate US denoising research, but translation to clinical practice is limited by reliance on synthetic data and inconsistent evaluation metrics. Future work should focus on large benchmark datasets and standardized metrics to improve generalizability across clinical settings.

1. Introduction

Over the last few decades, diagnostic ultrasound (US) has been increasingly used for the non-invasive assessment of soft tissues. It is widely used in clinical practice for applications such as abdominal and pelvic imaging, obstetrics and gynecology, cardiology (echocardiography), musculoskeletal evaluation, vascular assessment, and image-guided procedures. Unlike expensive imaging modalities such as computed tomography (CT) or magnetic resonance imaging (MRI), US is relatively low-cost and can be used for real-time assessment of both soft tissue and bone [1,2,3]. Unlike X-ray–based modalities, US employs high-frequency acoustic pulses to probe tissues and captures echo signals without exposing patients to ionizing radiation, making it particularly suitable for vulnerable populations such as pregnant women, children, and critically ill patients requiring repeated imaging [4,5]. The widespread clinical adoption of US has been further accelerated by advancements in portability and miniaturization. Point-of-Care US (POCUS) has become integral to emergency medicine, critical care, internal medicine, anesthesia, and rural healthcare, supporting rapid triage, intervention guidance, and real-time physiological monitoring [6].
Despite these obvious advantages, US image quality is limited by various noise artifacts. US imaging relies on the transmission of short acoustic pulses in the frequency range of 2-20 MHz into tissues of varying acoustic impedance [7,8]. As the US beam propagates, it undergoes reflection, refraction, absorption, and attenuation, which can lead to a variety of noise and artifacts, including speckle noise, electronic noise, clutter, reverberation, and shadowing [9,10]. The effects of these noise sources are often compounded, which drastically degrades image fidelity and makes visual interpretation challenging [9,11]. Figure 1 illustrates common types of ultrasound noise across different anatomical regions, demonstrating how they degrade visual quality, blur anatomical structures, and obscure tissue boundaries. As illustrated in Figure 2, in addition to the artifacts arising from the physical properties of US, image acquisition factors such as probe pressure, probe angle, acquisition depth, gain settings, and system-specific processing steps applied to the raw signal also produce wide variability in image quality [12,13].
Over the past decades, US denoising has progressed from hand-crafted statistical models to data-driven deep learning frameworks. Early statistical methods, such as the Lee filter [14], Frost filter [15], speckle reducing anisotropic diffusion (SRAD) [16], total variation (TV) regularization, and non-local means (NLM) filtering [17,18], have been employed to reduce US image noise while improving texture and preserving edges. These methods are often limited by sensitivity to parameter tuning, high computational cost, reliance on hand-crafted features, and statistical assumptions [19,20]. Data-driven deep learning models such as CNNs, U-Net, GANs, and Vision Transformers (ViT) have been used for US enhancement by learning complex mappings and anatomically consistent transformations from the original ultrasound data without relying on handcrafted features or statistical assumptions. Most recently, self-supervised methods like Noise2Noise (N2N) addressed the lack of clean ground-truth data by enabling models to learn the underlying signal directly from pairs of independent noisy observations. Instead of relying on noise-free reference images, which are impractical to obtain in clinical settings, these approaches exploit the statistical nature of noise so that the network can recover consistent anatomical structures while suppressing random variations [21].
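To make the classical baselines above concrete, the Lee filter can be sketched in a few lines of NumPy/SciPy. This is an illustrative sketch, not code from any reviewed study; the window size and the global noise-variance estimate are simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(image, window=7, noise_var=None):
    """Classic Lee filter: adaptive smoothing based on local statistics.

    Each pixel is pulled toward its local mean; the pull is weak where
    local variance is high (edges, structure) and strong in homogeneous
    regions, which is where speckle dominates.
    """
    img = image.astype(np.float64)
    local_mean = uniform_filter(img, size=window)
    local_sq_mean = uniform_filter(img ** 2, size=window)
    local_var = np.maximum(local_sq_mean - local_mean ** 2, 0.0)
    if noise_var is None:
        # crude global estimate of the noise variance (an assumption;
        # practical implementations estimate this more carefully)
        noise_var = float(np.mean(local_var))
    weight = local_var / (local_var + noise_var + 1e-12)
    return local_mean + weight * (img - local_mean)
```

The parameter sensitivity noted above is visible here: both `window` and `noise_var` must be tuned per image or per scanner, which is precisely the limitation that motivated data-driven approaches.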
Although US denoising and de-speckling methods have been studied previously, existing survey and review papers provide limited and fragmented coverage of deep-learning-based US denoising. Early reviews in US image processing primarily emphasized classical filtering approaches, such as SRAD, wavelet denoising, and non-local means, while offering only brief overviews of learning-based techniques [22]. More recent surveys on medical image denoising [23] tend to focus on general-purpose deep learning methods across multiple imaging modalities, including CT, MRI, and X-ray, without addressing the unique challenges specific to US. In many cross-modality reviews, US appears only as a minor subsection rather than as a dedicated topic. The review by Gupta et al. [24] discussed classical speckle reduction filters such as spatial-domain smoothing, transform-based filters, and PDE-based diffusion. A more recent survey by Sivaanpu et al. [13] examined 97 studies and offered a broad overview spanning classical, transformer, and hybrid techniques. This survey provides a taxonomy of methods, discusses the nature of US noise, and tabulates the strengths and weaknesses of different denoising approaches, consolidating two decades of research. Furthermore, survey-level analyses by Kaur et al. [25] and Sagheer et al. [26], spanning multiple modalities including US, MRI, CT, and X-ray, discussed the unique characteristics of US noise, such as multiplicative speckle, tissue-dependent scattering, and acquisition variability. While existing surveys have provided valuable overviews, they lack a fully up-to-date review of US-specific deep learning-based denoising approaches developed in the last three years, capturing more recent approaches like self-supervised networks, diffusion models, and cross-domain generative models.
Moreover, prior reviews tend to emphasize method cataloguing over in-depth interpretation and critical discussion, provide limited trend analysis across datasets, models, and evaluation settings, and fall short of outlining clear research directions for further improving US image denoising performance. Accordingly, this review presents a focused and up-to-date synthesis of deep learning-based US denoising methods, datasets, evaluation practices, and research gaps to inform future directions for clinically effective US image enhancement.

2. Materials and Methods

This review was registered in the Rayyan systematic review tool. We followed the Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies (PRISMA-DTA) checklist to guide this review.

2.1. Search Strategy and Data Sources

A structured literature search was conducted across major scientific databases: IEEE Xplore, PubMed, ScienceDirect, Scopus, Web of Science, and Google Scholar (for supplementary searches and snowballing). Search terms were designed to capture both the US-specific and deep learning-specific components of the topic. Representative keyword combinations included: “US denoising”, “US image enhancement”, “Medical image enhancement”, “Medical image denoising”, “US de-speckling” and “Speckle reduction”.

2.2. Inclusion Criteria

Studies were included if they met all the following criteria:
  • Focus on diagnostic US imaging;
  • Use deep learning or machine learning methods for US image denoising or speckle reduction;
  • Published in a peer-reviewed journal or a reputable medical imaging, computer vision, or machine learning conference;
  • Published after 2022, to capture recent methods in this rapidly evolving field of study.

2.3. Study Screening and Selection

The screening process followed PRISMA guidelines and was conducted in multiple stages. First, titles were screened to remove clearly irrelevant studies. Second, duplicate records were removed. Abstracts of the remaining articles were then screened to identify studies explicitly addressing deep-learning-based US image denoising or speckle reduction. Finally, full-text screening was performed to confirm eligibility based on the predefined inclusion criteria.

2.4. Data Extraction

A structured data charting process was employed to systematically extract and organize information from all included studies. The data extraction form was piloted on a subset of studies and calibrated by the review team prior to use to ensure consistency. The following variables were extracted from each study:
  • Anatomy (breast/fetal/cardiac/abdominal/musculoskeletal/other);
  • Imaging dimensionality (2D/3D/video);
  • Target noise type (speckle, Gaussian noise);
  • Learning paradigm (supervised/self-supervised/unsupervised);
  • Deep learning architecture (CNN/U-Net/GAN/Transformer);
  • Summary of the proposed denoising methodology;
  • Training data characteristics;
  • Evaluation metrics used for quantitative assessment;
  • Baseline methods used for comparison;
  • Summary of performance results;
  • Limitations of the study;
  • Stated future research directions;
  • Code and dataset availability.

2.5. Data Handling and Summary

Extracted data were categorized by learning paradigm, network architecture, application domain, and data source. Descriptive statistics and frequency counts were used to summarize quantitative variables, while qualitative information such as methodology details and clinical application were organized in tables. Trends and patterns across studies were identified to provide an overview of the current landscape in Deep Learning-based ultrasound denoising.

2.6. Limitations

This scoping review is limited to English-language publications, which may have excluded relevant studies in other languages. Additionally, the findings are primarily descriptive, and interpretations are based on the data reported in the included studies, which may introduce subjectivity.

3. Results and Discussion

Figure 3. PRISMA flow diagram summarizing the literature search and study selection process for ultrasound image denoising. A total of 951 records were identified, 801 were screened, 93 underwent full-text eligibility assessment, and 36 studies were included.

3.1. Study Selection

The study identification, screening, eligibility assessment, and inclusion process followed the PRISMA guidelines and is summarized in Figure 3. A total of 951 records were identified through database searching, and after removing duplicates, 801 unique records remained for screening. Title and abstract screening led to the exclusion of 629 records. Full texts of 96 reports were sought for retrieval, of which 93 were successfully assessed for eligibility. Following full-text evaluation, 57 studies were excluded for reasons including being out of scope, review articles, duplication, or non-English publication, resulting in the inclusion of 36 studies in the final review.

3.2. Characteristics of the Studies

The included studies exhibited extensive diversity in learning paradigms, network architectures, application domains, and data sources. All focused on B-mode US image denoising, with clinical applications spanning breast, fetal, cardiac, thyroid, liver, nerve, carotid artery, and general abdominal ultrasound.
In terms of learning paradigms, most studies used supervised learning (SL) approaches, while a smaller subset of recent studies implemented self-supervised (SSL) or unsupervised (USL) learning. The quantitative evaluation of US image denoising performance in the reviewed studies relies on a combination of reference-based and no-reference metrics, including mean squared error (MSE), root mean squared error (RMSE), mean structural similarity index measure (MSSIM), equivalent number of looks (ENL), contrast-to-noise ratio (CNR), signal-to-noise ratio (SNR), feature similarity index measure (FSIM), edge preservation index (EPI), natural image quality evaluator (NIQE), perception-based image quality evaluator (PIQE), figure of merit (FOM), improvement in signal-to-noise ratio (ISNR), speckle index (SI), signal-to-reconstruction error ratio (SRE), and universal image quality index (UIQ). These metrics are employed to assess different aspects of denoising outcomes, such as noise suppression, structural fidelity, perceptual quality, edge preservation, and contrast characteristics. A summary of the key characteristics of the studies, including learning paradigm, architecture type, dataset domain, and reported evaluation metrics, is provided in Table 1.
Across nearly all application domains, SL approaches were predominant, with the highest concentration observed in breast US imaging, as illustrated in Figure 4. This finding is consistent with the relative availability of curated datasets and established evaluation benchmarks in breast imaging, which facilitate the use of paired or pseudo-ground truth data for supervised training. SL-based methods were also widely applied in carotid artery, fetal head, cardiac, liver, lung, and abdominal imaging, indicating that supervised paradigms remain the default choice when annotated data are accessible. In contrast, SSL and USL approaches were significantly less represented and confined to a limited subset of domains, such as liver, nerve, thyroid, and abdominal US. The relatively sparse adoption of SSL and USL across organ-specific datasets highlights an ongoing reliance on supervised formulations despite widespread acknowledgement of ground-truth limitations in US image denoising. As shown in Table 1, reported deep learning architectures included conventional CNN-based denoisers, U-Net and multi-scale encoder-decoder variants, denoising autoencoder (DAE) and convolutional autoencoder (CAE) based models, GAN frameworks, denoising CNN (DnCNN), hybrid networks, as well as transformer models. Supervised CNNs and U-Nets were applied broadly across breast, liver, lung, cardiac, fetal, and carotid artery image denoising, while transformer-based and hybrid architectures were more commonly applied to specialized domains such as nerve and cardiac image denoising.

3.3. Training Data and Noise Modelling Strategy

Studies exhibit considerable variability in training data composition and noise modeling strategies. A substantial proportion of studies relied on synthetic or simulated speckle noise, often generated using multiplicative noise models applied to clean US images. Several studies employed publicly available US datasets, particularly for breast, nerve, thyroid, cardiac, and carotid artery imaging, while most of the others used private or institution-specific datasets. Moreover, only a limited number of studies explicitly reported training or validating models exclusively on real clinical US acquisitions. Furthermore, detailed descriptions of speckle statistics, scanner settings, or acquisition parameters were inconsistently reported across studies. In several cases, noise modeling assumptions or data generation procedures were not clearly specified, reflecting variability in reporting practices across the literature.
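The multiplicative noise models mentioned above typically follow the same basic recipe, sketched below. This is an illustrative sketch rather than the procedure of any particular reviewed study: the Gaussian multiplier and the sigma value are assumptions, and individual studies also use Rayleigh- or Gamma-distributed speckle.

```python
import numpy as np

def add_speckle(clean, sigma=0.2, rng=None):
    """Corrupt a clean image with multiplicative speckle:
    noisy = clean * (1 + n), with n ~ N(0, sigma^2),
    clipped back to the [0, 1] intensity range.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, sigma, size=clean.shape)
    return np.clip(clean * (1.0 + noise), 0.0, 1.0)
```

Because the noise scales with local intensity, bright regions are corrupted more strongly than dark ones, which is the key difference from additive Gaussian noise and one reason synthetic speckle may still diverge from real scanner noise.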

3.4. Evaluation Metrics

PSNR and SSIM were the most frequently reported quantitative evaluation metrics. These full-reference measures were commonly used to assess reconstruction fidelity when paired or simulated ground-truth data were available. In addition, several studies reported US-specific metrics such as CNR and ENL to better reflect speckle suppression and contrast enhancement. On the other hand, a smaller subset of studies employed no-reference perceptual metrics, including NIQE and PIQE, particularly in scenarios where clean reference images were unavailable. Other metrics, such as entropy-based, edge-based, or gradient-based measures, were reported sporadically and were often specific to individual studies. The list of evaluation metrics reported across studies is summarized in Table 1.
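For reference, the two dominant metrics can be computed as follows. PSNR is the standard definition; the SSIM shown here is a simplified single-window (global) variant for illustration only, whereas studies typically report the windowed form (e.g. `skimage.metrics.structural_similarity`).

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio (dB) against a clean reference."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(ref, test, data_range=1.0):
    """Simplified global SSIM over the whole image (single window).

    Uses the standard stabilizing constants C1, C2 from the SSIM paper.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), test.mean()
    var_x, var_y = ref.var(), test.var()
    cov = np.mean((ref - mu_x) * (test - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

Both metrics require a clean reference, which explains the pattern noted above: they dominate in synthetic-noise settings, while no-reference metrics such as NIQE take over when no ground truth exists.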

3.5. Meta-Analysis of Quantitative Metrics

Studies employing synthetic speckle noise often evaluate performance at multiple noise levels. To maintain clarity, reported values were aggregated across these noise or variance settings. These aggregated results provide descriptive summaries of model performance without a direct cross-study comparison.
The reviewed studies employed a combination of publicly available and privately acquired US datasets. As shown in Table 2, a substantial portion of the literature relied on open datasets, including BUSI [63], US-CASE [64], DDTI [65], BUSID [66], Breast US dataset B (Dataset B) [67], UNS [68], CAMUS [69], MedPix [70], PCOS [71], CBIS-DDSM [72], INBreast [73], HC18 [74], CCA [75], BUS-BRA [76], and US-4 [77]. These public datasets are widely used to facilitate benchmarking and reproducibility across studies. In contrast, Table 3 shows several works that evaluated their methods using private datasets obtained from different clinics and hospitals, reflecting real-world imaging conditions but limiting direct comparability due to restricted data access.
As illustrated in Figure 5(a), the PSNR distribution demonstrates that most studies achieve scores concentrated in the range of approximately 30-45 dB, with a few outliers extending to higher values, indicating variability in denoising performance across different datasets and architectures. The SSIM distribution reveals a similar concentration around 0.85-0.97, suggesting that structural similarity is generally high in most studies, although there are a few lower-value outliers. The figure shows that while there is variability, most US image denoising studies achieve moderately good quantitative performance, with PSNR and SSIM predominantly falling in the higher ranges.
In addition to reference-based evaluation metrics, Figure 5(b) shows distributions of selected no-reference US image denoising evaluation metrics, including ENL, EPI, NIQE, and CNR, as reported across the reviewed studies. The ENL scores are most frequently reported in the lower-to-moderate range, with the majority of values concentrated approximately between 5 and 25, while a smaller number of higher values extend beyond this range. The NIQE values are predominantly clustered between approximately 4 and 5, with a limited number of lower reported values. Reported EPI values lie between 0.2 and 0.7. The CNR values are most frequently observed in the range of approximately 2 dB to 5 dB. The diversity of these reported metric values on real US images indicates that existing enhancement methods struggle to generalize across clinical scenarios, suggesting that US image enhancement on real clinical data remains an open research problem requiring more robust methods and standardized evaluation protocols.
Figure 5. Performance score distribution across studies. (a) Frequently used reference-based evaluation metrics scores distribution. (b) Common no-reference evaluation metrics scores distribution.

3.6. Methodological Trends

Across the reviewed studies, U-Net-based architectures and their variants were the most commonly adopted models, reflecting their implementation simplicity and their capacity to handle denoising tasks by capturing multi-scale contextual information while preserving fine spatial details. CNN-based models, including DAE, CAE, and DnCNN, remained foundational, particularly in supervised learning paradigms. GAN-based approaches were frequently employed to enhance perceptual quality and texture realism, though these methods often required careful loss balancing and complex training procedures. In recent years (2024-2025), variational autoencoder (VAE), transformer-based, and hybrid CNN-Transformer architectures emerged, aiming to capture global contextual information, long-range dependencies, and probabilistic representations, especially for more complex domains such as cardiac, nerve, and fetal US image denoising. As shown in Figure 6, the trends indicate a gradual evolution from conventional CNN and U-Net models toward hybrid, transformer, and VAE architectures, reflecting both methodological innovation and adaptation to practical challenges in clinical US datasets, while U-Net, CNN, and hybrid CNN-ViT models remain in active use.
Figure 6. Architectural trends employed by US image denoising studies over the years.

3.7. Identified Gap

The structured mapping of the literature highlighted several persistent research gaps in deep learning-based US image denoising. A major limitation is the heavy reliance on simulated or synthetic noise models, with many studies trained on paired synthetic noise datasets. While these approaches facilitate controlled experimentation, they may not fully capture the complex noise characteristics present in real clinical US, potentially limiting clinical generalizability. Moreover, validation on real clinical datasets is limited, particularly for unpaired datasets, highlighting the scarcity of research directly addressing realistic acquisition conditions.
In addition, there is insufficient investigation of cross-device and cross-domain generalization. Emerging architectures such as hybrid CNN-transformer models and VAEs have shown promise, yet their performance across different datasets, acquisition devices, or clinical settings has not been systematically assessed. Few studies incorporated expert assessment or task-specific evaluation, such as diagnostic accuracy or image-guided interventions, meaning that the clinical impact of these denoising methods remains uncertain. Furthermore, reproducibility challenges persist, as many studies do not provide open datasets and source code. Those that do often rely on proprietary or institution-specific data, limiting the ability to benchmark and compare methods fairly. Thus, while the literature demonstrates significant methodological innovation, these gaps indicate that further research and standardization are needed for robust, clinically meaningful, and generalizable denoising solutions.

4. Conclusions

This review presented a comprehensive and systematic analysis of recent deep-learning-based approaches for US image denoising and speckle reduction. By examining 36 studies published after 2022, the review mapped the current methodological landscape in terms of learning paradigms, architectural designs, dataset usage, noise modeling strategies, and evaluation metrics. The findings indicate that supervised learning approaches, particularly CNN and U-Net-based architectures, continue to dominate the field, largely driven by the availability of curated datasets and synthetic noise generation strategies. More advanced models, including GANs, hybrid CNN-Transformer architectures, variational autoencoders, and diffusion-based methods, have demonstrated promising performance improvements.
Despite these advances, several challenges persist, including reliance on simulated speckle noise and paired training data, a lack of standardized evaluation protocols (with PSNR and SSIM dominating despite their limited clinical interpretability), and restricted access to datasets and source code. Greater emphasis on open datasets, transparent reporting, and clinically grounded, cross-dataset validation remains an open research problem for the development of reliable, generalizable, and clinically impactful US image denoising solutions.

Author Contributions

Conceptualization, M.D., A.H., M.M. and M.C.; methodology, A.H., M.M. and M.C.; validation, M.D., A.H., M.M. and M.C.; formal analysis, M.D.; investigation, M.D., A.H., M.M. and M.C.; resources, M.D., A.H., M.M. and M.C.; data curation, M.D.; writing—original draft preparation, M.D.; writing—review and editing, A.H., M.M. and M.C.; visualization, M.D., A.H., M.M. and M.C.; supervision, A.H.; project administration, A.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Alberta Innovates AICE Concepts grant awarded to Dr. Hareendranathan. The funding agency had no involvement in the study design, data collection, analysis, manuscript writing, or submission decision.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
US Ultrasound
PRISMA-DTA Preferred Reporting Items for a Systematic Review and Meta-Analysis of Diagnostic Test Accuracy studies
CNN Convolutional neural network
GAN Generative adversarial neural network
VAE Variational autoencoder
POCUS Point of care ultrasound
CT Computed tomography
MRI Magnetic resonance imaging
SL Supervised learning
SSL Self-supervised learning
USL Unsupervised learning
MSE Mean squared error
RMSE Root mean squared error
MSSIM Mean structural similarity index
ENL Equivalent number of looks
CNR Contrast-to-noise ratio
SNR Signal-to-noise ratio
FSIM Feature similarity index measure
EPI Edge preservation index
NIQE Natural image quality evaluator
PIQE Perception-based image quality evaluator
FOM Figure of merit
ISNR Improvement in signal-to-noise ratio
SI Speckle index
SRE Signal-to-reconstruction error ratio
UIQ Universal image quality index
SSIM Structural similarity index
PSNR Peak signal to noise ratio

References

  1. S. Wolstenhulme, “Peter Hoskins, Kevin Martin and Abigail Thrush (eds). Diagnostic Ultrasound: Physics and Equipment,” Ultrasound, vol. 28, no. 1, pp. 62-62, 2020. [CrossRef]
  2. T. Szabo, “Diagnostic Ultrasound Imaging: Inside Out,” 2004.
  3. M. Mahesh, “The Essential Physics of Medical Imaging, Third Edition,” Medical Physics, vol. 40, no. 7, p. 077301, 2013. [CrossRef]
  4. M. R. Torloni et al., “Safety of ultrasonography in pregnancy: WHO systematic review of the literature and meta-analysis,” Ultrasound in Obstetrics & Gynecology, vol. 33, no. 5, pp. 599-608, 2009. [CrossRef]
  5. C. M. I. Quarato et al., “A Review on Biological Effects of Ultrasounds: Key Messages for Clinicians,” (in eng), Diagnostics (Basel), vol. 13, no. 5, Feb 23 2023. [CrossRef]
  6. P. R. Atkinson et al., “Does Point-of-Care Ultrasonography Improve Clinical Outcomes in Emergency Department Patients With Undifferentiated Hypotension? An International Randomized Controlled Trial From the SHoC-ED Investigators,” (in eng), Ann Emerg Med, vol. 72, no. 4, pp. 478-489, Oct 2018. [CrossRef]
  7. A. P. Sarvazyan, M. W. Urban, and J. F. Greenleaf, “Acoustic waves in medical imaging and diagnostics,” (in eng), Ultrasound Med Biol, vol. 39, no. 7, pp. 1133-46, Jul 2013. [CrossRef]
  8. S. P. Grogan and C. A. Mount, “Ultrasound Physics and Instrumentation,” in StatPearls. Treasure Island (FL): StatPearls Publishing, Copyright © 2025, StatPearls Publishing LLC., 2025.
  9. G. F. Pinton, G. E. Trahey, and J. J. Dahl, “Sources of image degradation in fundamental and harmonic ultrasound imaging using nonlinear, full-wave simulations,” (in eng), IEEE Trans Ultrason Ferroelectr Freq Control, vol. 58, no. 4, pp. 754-65, Apr 2011. [CrossRef]
  10. M. M. Goodsitt, P. L. Carson, S. Witt, D. L. Hykes, and J. M. Kofler, Jr., “Real-time B-mode ultrasound quality control test procedures. Report of AAPM Ultrasound Task Group No. 1,” (in eng), Med Phys, vol. 25, no. 8, pp. 1385-406, Aug 1998. [CrossRef]
  11. C. L. Moore and J. A. Copel, “Point-of-care ultrasonography,” (in eng), N Engl J Med, vol. 364, no. 8, pp. 749-57, Feb 24 2011. [CrossRef]
  12. N. Yahya, N. S. Kamel, and A. S. Malik, “Subspace-based technique for speckle noise reduction in ultrasound images,” BioMedical Engineering OnLine, vol. 13, no. 1, p. 154, 2014/11/25 2014. [CrossRef]
  13. A. Sivaanpu et al., “Speckle Noise Reduction Techniques in Ultrasound Imaging: A comprehensive review of the last two decades (2005-2024),” (in eng), Comput Methods Programs Biomed, vol. 274, p. 109150, Nov 6 2025. [CrossRef]
  14. J. S. Lee, “Digital Image Enhancement and Noise Filtering by Use of Local Statistics,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-2, no. 2, pp. 165-168, 1980. [CrossRef]
  15. V. S. Frost, J. A. Stiles, K. S. Shanmugan, and J. C. Holtzman, “A model for radar images and its application to adaptive digital filtering of multiplicative noise,” (in eng), IEEE Trans Pattern Anal Mach Intell, vol. 4, no. 2, pp. 157-66, Feb 1982. [CrossRef]
  16. Y. Yongjian and S. T. Acton, “Speckle reducing anisotropic diffusion,” IEEE Transactions on Image Processing, vol. 11, no. 11, pp. 1260-1270, 2002. [CrossRef]
  17. A. Pizurica and W. Philips, “Estimating the probability of the presence of a signal of interest in multiresolution single- and multiband image denoising,” (in eng), IEEE Trans Image Process, vol. 15, no. 3, pp. 654-65, Mar 2006. [CrossRef]
  18. P. Coupé, P. Hellier, C. Kervrann, and C. Barillot, “Nonlocal means-based speckle filtering for ultrasound images,” (in eng), IEEE Trans Image Process, vol. 18, no. 10, pp. 2221-9, Oct 2009. [CrossRef]
  19. C. Duarte-Salazar, A. Castro-Ospina, M. Becerra, and E. Delgado-Trejos, “Speckle Noise Reduction in Ultrasound Images for Improving the Metrological Evaluation of Biomedical Applications: An Overview,” IEEE Access, vol. PP, pp. 1-1, 01/17 2020. [CrossRef]
  20. S. Wu, Q. Zhu, and Y. Xie, “Evaluation of various speckle reduction filters on medical ultrasound images,” (in eng), Annu Int Conf IEEE Eng Med Biol Soc, vol. 2013, pp. 1148-51, 2013. [CrossRef]
  21. J. Lehtinen et al., “Noise2Noise: Learning Image Restoration without Clean Data,” presented at the Proceedings of the 35th International Conference on Machine Learning, Proceedings of Machine Learning Research, 2018. [Online]. Available: https://proceedings.mlr.press/v80/lehtinen18a.html.
  22. A. Pizurica, A. M. Wink, E. Vansteenkiste, W. Philips, and J. Roerdink, “A Review of Wavelet Denoising in MRI and Ultrasound Brain Imaging,” Current Medical Imaging Reviews, vol. 2, pp. 247-260, 2006. [CrossRef]
  23. C. Tian, Y. Xu, L. Fei, and K. Yan, “Deep Learning for Image Denoising: A Survey,” 2019, pp. 563-572.
  24. N. Gupta, A. P. Shukla, and S. Agarwal, “Despeckling of Medical Ultrasound Images: A Technical Review,” (in English), Int J Inf Eng Electron Bus. [CrossRef]
  25. A. Kaur and G. Dong, “A Complete Review on Image Denoising Techniques for Medical Images,” Neural Process. Lett., vol. 55, no. 6, pp. 7807–7850, 2023. [CrossRef]
  26. S. V. Mohd Sagheer and S. N. George, “A review on medical image denoising algorithms,” Biomedical Signal Processing and Control, vol. 61, p. 102036, 2020. [CrossRef]
  27. W. Cui, Z. Pan, X. Li, Y. Tang, and S. Sun, “Physical imaging model-guided deep variational despeckling framework for ultrasound images,” Knowledge-Based Systems, vol. 329, p. 114409, 2025. [CrossRef]
  28. A. Soy and V. V. Prakash, “Medical Image Denoising using Deep Convolutional Autoencoders for Ultrasound,” in 2025 International Conference on Automation and Computation (AUTOCOM), 4-6 March 2025, pp. 262-267. [CrossRef]
  29. J. Chi, J. Miao, J. H. Chen, H. Wang, X. Yu, and Y. Huang, “DSTAN: A Deformable Spatial-temporal Attention Network with Bidirectional Sequence Feature Refinement for Speckle Noise Removal in Thyroid Ultrasound Video,” (in eng), J Imaging Inform Med, vol. 37, no. 6, pp. 3264-3281, Dec 2024. [CrossRef]
  30. A. Kavand and M. Bekrani, “Speckle noise removal in medical ultrasonic image using spatial filters and DnCNN,” Multimedia Tools and Applications, vol. 83, no. 15, pp. 45903-45920, 2024/05/01 2024. [CrossRef]
  31. M. Jha, R. Gupta, and R. Saxena, “Noise cancellation of polycystic ovarian syndrome ultrasound images using robust two-dimensional fractional fourier transform filter and VGG-16 model,” International Journal of Information Technology, vol. 16, pp. 2497-2504, 2024.
  32. N. A. El-Hag, H. M. El-Hoseny, and F. Harby, “DNN-driven hybrid denoising: advancements in speckle noise reduction,” Journal of Optics, vol. 54, no. 5, pp. 3126-3135, 2025. [CrossRef]
  33. N. Reddy, C. Chitteti, S. Yesupadam, V. Desanamukula, S. S. Vellela, and N. Bommagani, “Enhanced Speckle Noise Reduction in Breast Cancer Ultrasound Imagery Using a Hybrid Deep Learning Model,” Ingénierie des Systèmes d'Information, vol. 24, pp. 1063-1071, 2023. [CrossRef]
  34. M. Khalifa, H. M. Hamza, and K. M. Hosny, “De-speckling of medical ultrasound image using metric-optimized knowledge distillation,” Scientific Reports, vol. 15, no. 1, p. 23703, 2025. [CrossRef]
  35. P. N. Devi et al., “Denoising of Medical Ultrasound Images Using Deep Learning With Channel And Spatial Attention Based Modified U-Net,” 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT), pp. 1-5, 2024.
  36. W. T. Hsu, O. Agbodike, and J. Chen, “Attentive U-Net with Physics-Informed Loss for Noise Suppression in Medical Ultrasound Images,” in 2024 10th International Conference on Applied System Innovation (ICASI), 17-21 April 2024, pp. 409-411. [CrossRef]
  37. S. Satish, N. Herald Anantha Rufus, M. Antony Freeda Rani, and R. Senthil Rama, “U-Net-Based Denoising Autoencoder Network for De-Speckling in Fetal Ultrasound Images,” in Fourth International Conference on Image Processing and Capsule Networks, Singapore: Springer Nature Singapore, 2023, pp. 323-338.
  38. P. Monkam et al., “US-Net: A lightweight network for simultaneous speckle suppression and texture enhancement in ultrasound images,” Computers in Biology and Medicine, vol. 152, p. 106385, 2023. [CrossRef]
  39. R. S. S, S. S, R. S. S. K, B. V, S. Saranya, and B. Babu, “Ultrasound Image Denoising Using Cascaded Median Filter and Autoencoder,” in 2023 4th International Conference on Smart Electronics and Communication (ICOSEC), 20-22 Sept. 2023, pp. 296-302. [CrossRef]
  40. T. Slimi, R. Ferjaoui, and A. B. Khalifa, “Ultrasound Imaging Enhancement Using Denoising AutoEncoders,” in 2025 IEEE 22nd International Multi-Conference on Systems, Signals & Devices (SSD), 17-20 Feb. 2025, pp. 209-214. [CrossRef]
  41. S. Bhute, S. Mandal, and D. Guha, “Speckle Noise Reduction in Ultrasound Images using Denoising Auto-encoder with Skip connection,” in 2024 IEEE South Asian Ultrasonics Symposium (SAUS), 27-29 March 2024, pp. 1-4. [CrossRef]
  42. Y. Jiménez-Gaona, M. J. Rodríguez-Alvarez, L. Escudero, C. Sandoval, and V. Lakshminarayanan, “Ultrasound breast images denoising using generative adversarial networks (GANs),” Intelligent Data Analysis, vol. 28, no. 6, pp. 1661-1678, 2024. [CrossRef]
  43. A. Sivaanpu et al., “A Lightweight Ultrasound Image Denoiser Using Parallel Attention Modules and Capsule Generative Adversarial Network,” Informatics in Medicine Unlocked, vol. 50, p. 101569, 2024. [CrossRef]
  44. J. Liu et al., “Speckle noise reduction for medical ultrasound images based on cycle-consistent generative adversarial network,” Biomedical Signal Processing and Control, vol. 86, p. 105150, 2023. [CrossRef]
  45. J. Gan, L. Wang, Z. Liu, and J. Wang, “Multi-scale ultrasound image denoising algorithm based on deep learning model for super-resolution reconstruction,” presented at the Proceedings of the 2023 4th International Conference on Control, Robotics and Intelligent System, Guangzhou, China, 2023. [CrossRef]
  46. Y. Chen and Z. Guo, “TranSpeckle: An edge-protected transformer for medical ultrasound image despeckling,” IET Image Processing, vol. 17, no. 14, pp. 4014-4027, 2023. [CrossRef]
  47. D. Oliveira-Saraiva et al., “Make It Less Complex: Autoencoder for Speckle Noise Removal—Application to Breast and Lung Ultrasound,” Journal of Imaging, vol. 9, no. 10, p. 217, 2023. [Online]. Available: https://www.mdpi.com/2313-433X/9/10/217.
  48. Y. Li, X. Zeng, Q. Dong, and X. Wang, “RED-MAM: A residual encoder-decoder network based on multi-attention fusion for ultrasound image denoising,” Biomedical Signal Processing and Control, vol. 79, p. 104062, 2023. [CrossRef]
  49. M. Jiang et al., “Controllable Deep Learning Denoising Model for Ultrasound Images Using Synthetic Noisy Image,” in Advances in Computer Graphics, Cham: Springer Nature Switzerland, 2024, pp. 297-308.
  50. O. Mahmoudi Mehr, M. R. Mohammadi, and M. Soryani, “Deep Learning-Based Ultrasound Image Despeckling by Noise Model Estimation,” (in eng), Iranian Journal of Electrical and Electronic Engineering, vol. 19, no. 3, pp. 1-13, 2023. [CrossRef]
  51. Y. Chen, Z. Guo, J. Yuan, X. Li, and H. Yu, “Dual-TranSpeckle: Dual-pathway transformer based encoder-decoder network for medical ultrasound image despeckling,” Computers in Biology and Medicine, vol. 173, p. 108313, 2024. [CrossRef]
  52. A. Sivaanpu et al., “Speckle Noise Reduction for Medical Ultrasound Images Using Hybrid CNN-Transformer Network,” IEEE Access, vol. 12, pp. 168607-168625, 2024. [CrossRef]
  53. Z. Bu, G. Zhou, and Y. Chen, “A Complementary Global and Local Knowledge Network for Ultrasound Denoising with Fine-grained Refinement,” 2024, pp. 1-5.
  54. B. B. Vimala et al., “Image Noise Removal in Ultrasound Breast Images Based on Hybrid Deep Learning Technique,” Sensors, vol. 23, no. 3, p. 1167, 2023. [Online]. Available: https://www.mdpi.com/1424-8220/23/3/1167.
  55. T. Slimi, A. Djeha, and A. B. Khalifa, “Medical Ultrasound Image Improvement Based on Denoising Convolutional Autoencoder,” in 2025 IEEE 22nd International Multi-Conference on Systems, Signals & Devices (SSD), 17-20 Feb. 2025, pp. 715-720. [CrossRef]
  56. C. Yu, F. Ren, S. Bao, Y. Yang, and X. Xu, “Self-supervised ultrasound image denoising based on weighted joint loss,” Digital Signal Processing, vol. 162, p. 105151, 2025. [CrossRef]
  57. C. Sun, J. Chi, H. Yu, B. Wu, Z. Li, and Y. Huang, “Self-Supervised Denoising of Thyroid Ultrasound Images Using SE-Module Enhanced U-Net with FPN,” in 2025 37th Chinese Control and Decision Conference (CCDC), 16-19 May 2025, pp. 4212-4217. [CrossRef]
  58. T.-T. Zhang, H. Shu, K.-Y. Lam, C.-Y. Chow, and A. Li, “Feature decomposition and enhancement for unsupervised medical ultrasound image denoising and instance segmentation,” Applied Intelligence, vol. 53, no. 8, pp. 9548-9561, 2023. [CrossRef]
  59. S. Goudarzi and H. Rivaz, “Deep ultrasound denoising without clean data,” in SPIE Medical Imaging. SPIE, 2023.
  60. N. Chen, Y. Zhang, C. Fan, W. Zhao, C. Wang, and H. Wang, “DiffusionClusNet: Deep Clustering-Driven Diffusion Models for Ultrasound Image Enhancement,” IEEE Transactions on Consumer Electronics, vol. 71, no. 1, pp. 1495-1503, 2025. [CrossRef]
  61. P. Wei, L. Wang, J. Gan, X. Shi, and M. Shang, “Incorporation of Structural Similarity Index and Regularization Term into Neighbor2Neighbor Unsupervised Learning Model for Efficient Ultrasound Image Data Denoising,” Applied Sciences, vol. 14, no. 17, p. 7988, 2024. [Online]. Available: https://www.mdpi.com/2076-3417/14/17/7988.
  62. M. Basile et al., “Unsupervised Learning of Speckle Removal from Real Ultrasound Acquisitions without Clean Data,” in 2024 IEEE International Symposium on Medical Measurements and Applications (MeMeA), 26-28 June 2024, pp. 1-6. [CrossRef]
  63. W. Al-Dhabyani, M. Gomaa, H. Khaled, and A. Fahmy, “Dataset of breast ultrasound images,” Data in Brief, vol. 28, p. 104863, 2020. [CrossRef]
  64. Ultrasound cases [Online] Available: https://www.ultrasoundcases.info/.
  65. Pedraza et al., DDTI Dataset: An open access database of thyroid ultrasound images. [Online]. Available: https://www.kaggle.com/datasets/dasmehdixtr/ddti-thyroid-ultrasound-images/data.
  66. P. S. Rodrigues. Breast Ultrasound Image. [Online]. Available: https://data.mendeley.com/datasets/wmy84gzngw/1.
  67. M. H. Yap et al., “Automated Breast Ultrasound Lesions Detection Using Convolutional Neural Networks,” IEEE Journal of Biomedical and Health Informatics, vol. 22, no. 4, pp. 1218-1226, 2018. [CrossRef]
  68. A. Montoya, Hasnin, kaggle446, shirzad, W. Cukierski, and yffud, Ultrasound Nerve Segmentation. [Online]. Available: https://www.kaggle.com/c/ultrasound-nerve-segmentation.
  69. S. Leclerc et al., “Deep Learning for Segmentation Using an Open Large-Scale Dataset in 2D Echocardiography,” IEEE Transactions on Medical Imaging, vol. 38, no. 9, pp. 2198-2210, 2019. [CrossRef]
  70. National Library of Medicine. MedPix. [Online]. Available: https://lhncbc.nlm.nih.gov/medpix.html.
  71. PCOS Dataset. [Online]. Available: https://figshare.com/articles/dataset/PCOS_Dataset/27682557?file=50407062.
  72. R. Sawyer-Lee, F. Gimenez, A. Hoogi, and D. Rubin, Curated Breast Imaging Subset of Digital Database for Screening Mammography (CBIS-DDSM). [CrossRef]
  73. C. Moreira, I. Amaral, I. Domingues, A. Cardoso, M. J. Cardoso, and J. S. Cardoso, “INbreast: toward a full-field digital mammographic database,” (in eng), Acad Radiol, vol. 19, no. 2, pp. 236-48, Feb 2012. [CrossRef]
  74. T. L. A. van den Heuvel, D. de Bruijn, C. L. de Korte, and B. van Ginneken, Automated measurement of fetal head circumference using 2D ultrasound images. [Online]. Available: http://doi.org/10.5281/zenodo.1322001.
  75. A. Momot. Common Carotid Artery Ultrasound Images. [CrossRef]
  76. W. Gómez-Flores, M. J. Gregorio-Calas, and W. Coelho de Albuquerque Pereira, “BUS-BRA: A breast ultrasound dataset for assessing computer-aided diagnosis systems,” (in eng), Med Phys, vol. 51, no. 4, pp. 3110-3123, Apr 2024. [CrossRef]
  77. Y. Chen, C. Zhang, L. Liu, C. Feng, C. Dong, Y. Luo, and X. Wan, Pretraining deep ultrasound image diagnosis model through video contrastive representation learning. [Online]. Available: https://opendatalab.com/OpenDataLab/US-4.
Figure 1. Examples of common noise and artifacts in ultrasound imaging and their impact on image quality. (a) Motion artifacts in cardiac ultrasound leading to blurred structures. (b) Speckle noise obscuring the bone boundary in wrist ultrasound. (c) Speckle noise and acoustic shadowing reduce the visibility of thyroid nodule margins.
Figure 2. Ultrasound images acquired with different parameters and probe settings. (a) Pancreas images obtained at low and high transducer frequencies, illustrating the trade-off between penetration depth and image resolution. (b) Lung images acquired using linear and curvilinear probes, demonstrating differences in field of view and depth coverage. (c) Liver images obtained with low and high gain settings, showing changes in brightness, signal amplification, and noise.
Figure 4. Distribution of machine learning approaches across different anatomical regions in the studies. The number of studies is shown for supervised learning (SL), semi-supervised learning (SSL), and unsupervised learning (USL) methods.
Table 1. Consolidated summary of deep learning-based US denoising and analysis studies, categorized by machine learning approach, architectural framework, anatomy and performance evaluation metrics.
Studies Machine learning paradigm Deep Learning architecture Dataset domain (Anatomy) Metrics
Cui et al. [27], Soy et al. [28], Chi et al. [29], Kavand et al. [30], Jha et al. [31], El-Hag et al. [32], Reddy et al. [33] SL CNN Breast, Thyroid, Ovary (PCOS), Carotid Artery, General US PSNR, SSIM, MSE, RMSE, NIQE, PIQE, ENL, AGM, SSI, EI
Khalifa et al. [34], Devi et al. [35], Hsu et al. [36], Satish et al. [37], Monkam et al. [38] SL U-Net Breast, Liver, Lung, Fetal (Cardiac/Head), Carotid Artery, General US PSNR, SSIM, MSE, EPI, ENL, CNR, SNR, AGM
Saranya et al. [39], Slimi et al. [40], Bhute et al. [41] SL DAE Breast, General US PSNR, SSIM, MSE
Jiménez-Gaona et al. [42], Sivaanpu et al. [43], Liu et al. [44], Gan et al. [45] SL GAN Breast, Fetal Head, General US PSNR, SSIM, MSSIM, MSE, RMSE, FOM, FSIM
Chen et al. [46], Oliveira et al. [47], Li et al. [48] SL CAE Breast, Lung, Nerve, Cardiac, Fetal Head PSNR, SSIM, RMSE
Jiang et al. [49], Mahmoudi et al. [50] SL DnCNN General US, Carotid Artery PSNR, SSIM
Chen et al. [51], Sivaanpu et al. [52], Bu et al. [53] SL Hybrid CNN + Transformer Fetal Head, Breast, Dental, Cardiac Phantom PSNR, SSIM, RMSE, MSE, NIQE, ENL, SNR, CNR, ISNR, SI
Vimala et al. [54] SL LPRNN (CNN+RNN)
Slimi et al. [55], Yu et al. [56], Sun et al. [57] SSL DAE / U-Net Breast, Thyroid, Abdominal, General US PSNR, SSIM
Zhang et al. [58], Goudarzi et al. [59] USL CNN / U-Net Nerve PSNR, SSIM, FSIM, EPI, CNR, SRE, UIQ, MSR
Chen et al. [60], Wei et al. [61], Basile et al. [62] USL N2N / VAE Liver, Breast, Abdominal, Heart, Mediastinum PSNR, SSIM, MSSIM, ENL, MSE, CNR, SNR
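Nearly every study in Table 1 reports PSNR (in dB) and SSIM. For readers unfamiliar with these figures, the sketch below shows how they are computed in plain NumPy; the helper names (`mse`, `psnr`, `global_ssim`) are ours, and the single-window SSIM shown here is a simplification of the locally windowed SSIM that published values typically use.

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two images."""
    return float(np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2))

def psnr(x, y, data_range=255.0):
    """Peak signal-to-noise ratio in dB for images with the given dynamic range."""
    e = mse(x, y)
    return float("inf") if e == 0 else 10.0 * np.log10(data_range ** 2 / e)

def global_ssim(x, y, data_range=255.0):
    """Single-window SSIM (reported values normally average SSIM over local windows)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2  # standard stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()  # covariance
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Toy example: a random "clean" frame and a mildly noisy copy.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, (64, 64)).astype(np.float64)
noisy = np.clip(clean + rng.normal(0, 5, clean.shape), 0, 255)
print(f"PSNR = {psnr(clean, noisy):.2f} dB, SSIM = {global_ssim(clean, noisy):.4f}")
```

Because PSNR is a log of the inverse MSE, the 30-45 dB range reported across the included studies corresponds to per-pixel errors of roughly 1-8 gray levels on an 8-bit scale.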
Table 2. Summary of studies performance report on public or open clinical US datasets.
Study Dataset PSNR (dB) SSIM Other metrics
Slimi et al. [55] BUS-BRA 33.82 0.7625 -
Saranya et al. [39] PICMUS 44.48 0.935 -
Khalifa et al. [34] Breast US 40.72 0.940 -
Cui et al.[27] BUID - - ENL=5.71, AGM=38.57, NIQE=4.25, PIQE=31.83
BUSI - - ENL=2.71, AGM=33.24, NIQE=4.74, PIQE=50.61
CCA - - ENL=0.76, AGM=40.27, NIQE=4.36, PIQE=64.39
US-case - - ENL=3.50, AGM=65.18, NIQE=5.38, PIQE=50.57
Chen et al. [60] US-CASE 35.19 0.90 -
Slimi et al. [40] BUS-BRA 20.60 0.81 -
Chi et al. [29] DDTI 36.82 0.93 -
Jiménez-Gaona et al. [42] BUSI 39.79 0.96 -
Wei et al. [61] BUSI 40.03 - SSI=0.80
Chen et al. [51] UNS 32.82 0.9358 SSI=0.79
CAMUS 35.29 0.9317 SSI=0.78
Kavand et al. [30] BUI + MedPix 30.50 0.97 UIQ=0.54
Jha et al.[31] PCOS 72.96 0.99 UIQ=0.23
Sivaanpu et al. [52] HC18 - 0.965 ENL=7.26, NIQE=4.61, MSE=13.905, SRE=32.61, UIQ=0.04
Sivaanpu et al. [43] HC18 33.86 0.91 ISNR=23.57dB
BUSI 34.16 0.90 ISNR=18.52dB
El-Hag et al. [32] BUSI 28.72 0.77 NIQE=4.50, MSE=157.3, SNR=40.95dB
Bhute et al. [41] BUSI 23.64 0.92 MSE=0.0048
Bu et al. [53] HC18 40.62 0.98 RMSE=2.33
Hsu et al. [36] BUSI + US-4 42.27 0.99 -
Reddy et al. [33] INBreast + CBIS-DDSM 64.44 - NIQE=0.08, MSE=0.22
Vimala et al. [54] INBreast + CBIS-DDSM 68.70 - SRE=63.8
Monkam et al. [38] HC18 - - ENL=15.71, CNR=1.10, SNR=39.32dB, SRE=27.46
BUSI - - ENL=17.04, CNR=4.20, SNR=34.54dB, SRE=17.04
CAMUS 32.77 0.87 RMSE=6.05
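Most of the studies summarized above train on synthetic noise added to clean images. As an illustration only (not any particular study's protocol), a minimal sketch of one common multiplicative speckle formulation, g = f · (1 + σn), used to synthesize noisy/clean training pairs; the function name `add_speckle` and the Gaussian multiplier are our assumptions, and individual studies substitute Rayleigh, Gamma, or scanner-specific distributions.

```python
import numpy as np

def add_speckle(img, sigma=0.2, rng=None):
    """Simulate multiplicative speckle: g = f * (1 + sigma * n), n ~ N(0, 1).
    The Gaussian multiplier is an illustrative assumption; published work
    also uses Rayleigh, Gamma, or scanner-derived noise distributions."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal(img.shape)
    return np.clip(img * (1.0 + sigma * noise), 0.0, 255.0)

# Build one synthetic (noisy, clean) training pair from a toy "clean" frame.
rng = np.random.default_rng(42)
clean = rng.integers(0, 256, (64, 64)).astype(np.float64)
speckled = add_speckle(clean, sigma=0.15, rng=rng)
```

Note that because the noise scales with intensity, bright regions are corrupted more strongly than dark ones, which is the key difference from the additive Gaussian model assumed by many generic denoisers.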
Table 3. Summary of performance report of studies on private clinical US dataset.
Study Dataset PSNR (dB) SSIM Other metrics
Saranya et al. [39] Private Fetus 44.48 0.935 -
Chen et al. [60] Private (abdominal) 32.22 0.89 -
Sun et al. [57] Private Thyroid 32.89 0.88 -
Soy et al. [28] Private Synthetic US 34.38 0.93 MSE=0.0021
Devi et al. [35] Clinical US private 32.22 0.88 MSE=0.0008, UIQ=0.65
Sivaanpu et al. [52] Private Heart Phantom - - CNR=18.78dB, MSR=3.85
Basile et al. [62] Abdominal private - - ENL=55.89, MSE=0.004, SSI=0.33, CNR=4.21dB, SNR=8.57dB
Jiang et al. [49] Breast private 23.13 0.81 -
Liu et al. [44] Private breast, heart, lymph node 38.13 - RMSE=3.25, UIQ=0.98
Vimala et al. [54] Private CTS nerve 41.27 0.97 RMSE=0.85, CNR=11.05, EPI=0.18, SRE=51.7, UIQ=0.86, MSR=1.69
Private CTS nerve 51.78 0.86 RMSE=1.69
Satish et al. [37] Private Fetal cardiac 29.07 0.86 -
Goudarzi et al. [59] Private Heart 37.27 0.90 MSE=0.006
Private Chicken breast 37.11 0.91 MSE=0.008
Private Bovine liver 31.28 0.88 MSE=0.017
Li et al. [48] Private Fetal Heart 34.31 0.88 RMSE=5.10
Gan et al. [45] Private Liver - - NIQE=0.58, PIQE=0.79, RMSE=0.39
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.
Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.