Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

DAE-GAN: Underwater Image Super-Resolution Based on Degradation-Aware Attention Enhanced Generative Adversarial Network

Version 1: Received: 3 April 2024 / Approved: 3 April 2024 / Online: 4 April 2024 (17:36:21 CEST)

How to cite: Gao, M.; Li, Z.; Wang, Q.; Fan, W. DAE-GAN: Underwater Image Super-Resolution Based on Degradation-Aware Attention Enhanced Generative Adversarial Network. Preprints 2024, 2024040331. https://doi.org/10.20944/preprints202404.0331.v1

Abstract

Underwater images often exhibit blurred detail and color distortion caused by light scattering, suspended impurities, and other factors, which obscure essential textures and make it difficult for existing super-resolution methods to identify and extract effective features for high-quality reconstruction. This work addresses that challenge with an underwater-specific super-resolution approach. First, an underwater image degradation model combining random subsampling, Gaussian blur, mixed noise, and suspended-particle simulation was built to generate a realistic synthetic dataset, so that the network learns to cope with diverse degradation factors. Second, to strengthen the network's ability to extract key features, the symmetrically structured Blind Super-Resolution Generative Adversarial Network (BSRGAN) architecture was extended: an attention mechanism based on an energy function was introduced into the generator to assess the importance of each pixel, and a weighted fusion of adversarial, reconstruction, and perceptual losses was used to improve reconstruction quality. Experimental results show that the proposed method improves Peak Signal-to-Noise Ratio (PSNR) by 0.85 dB and the Underwater Image Quality Measure (UIQM) by 0.19, noticeably enhancing perceived visual quality and demonstrating its feasibility for super-resolution applications.
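The abstract does not give the exact degradation operators, the form of the energy-based attention, or the loss weights, so the sketches below are only minimal illustrations of how such components are commonly implemented, not the authors' method. All names (degrade, EnergyAttention, generator_loss), parameter ranges, particle counts, and loss weights are hypothetical placeholders.

A minimal degradation pipeline, assuming a float32 HR image in [0, 1] and the four degradation factors named in the abstract:

```python
import cv2
import numpy as np

def degrade(hr, scale=4, rng=np.random.default_rng()):
    """Illustrative degradation: Gaussian blur, random subsampling,
    mixed noise, and a simple suspended-particle overlay."""
    img = hr.astype(np.float32)

    # Gaussian blur with a randomly chosen sigma.
    img = cv2.GaussianBlur(img, (0, 0), sigmaX=float(rng.uniform(0.5, 3.0)))

    # Random subsampling: randomly chosen interpolation kernel.
    h, w = img.shape[:2]
    interp = int(rng.choice([cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC]))
    img = cv2.resize(img, (w // scale, h // scale), interpolation=interp)

    # Mixed noise: additive Gaussian plus sparse salt-and-pepper impulses.
    img += rng.normal(0.0, rng.uniform(0.005, 0.03), img.shape).astype(np.float32)
    mask = rng.random(img.shape[:2]) < 0.002
    img[mask] = rng.random((int(mask.sum()), 3)).round()

    # Suspended particles: a few blurred bright specks composited on top.
    particles = np.zeros_like(img)
    for _ in range(int(rng.integers(5, 30))):
        y = int(rng.integers(0, img.shape[0]))
        x = int(rng.integers(0, img.shape[1]))
        particles[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2] = 1.0
    particles = cv2.GaussianBlur(particles, (0, 0), sigmaX=1.5)
    return np.clip(img + 0.3 * particles, 0.0, 1.0)
```

For the generator-side changes, one common instantiation of a pixel-wise, energy-function attention is a SimAM-style parameter-free module, shown here together with a weighted fusion of adversarial, reconstruction (L1), and perceptual losses; both are sketches under those assumptions:

```python
import torch
import torch.nn as nn

class EnergyAttention(nn.Module):
    """SimAM-style attention: each pixel is weighted by an energy term
    derived from its squared deviation from the per-channel mean."""
    def __init__(self, lam: float = 1e-4):
        super().__init__()
        self.lam = lam

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        n = h * w - 1
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)   # (t - mu)^2
        v = d.sum(dim=(2, 3), keepdim=True) / n              # channel variance
        e_inv = d / (4 * (v + self.lam)) + 0.5               # inverse energy
        return x * torch.sigmoid(e_inv)

def generator_loss(sr, hr, fake_logits, feat_extractor,
                   w_adv=5e-3, w_rec=1.0, w_per=1e-2):
    """Weighted fusion of adversarial, reconstruction, and perceptual
    losses; the weights are placeholders, not the paper's values."""
    adv = nn.functional.binary_cross_entropy_with_logits(
        fake_logits, torch.ones_like(fake_logits))
    rec = nn.functional.l1_loss(sr, hr)
    per = nn.functional.l1_loss(feat_extractor(sr), feat_extractor(hr))
    return w_adv * adv + w_rec * rec + w_per * per
```

In this sketch feat_extractor stands for any fixed feature network (e.g. VGG features) used for the perceptual term; the relative weighting of the three terms is what the abstract refers to as the weighted fusion strategy.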

Keywords

underwater image super-resolution; degradation model; generative adversarial network; attention mechanism

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
