1. Introduction
Underwater imaging plays an important role in ocean observation and marine engineering applications. However, underwater images suffer from several artifacts. While capturing underwater images, a considerable portion of the light is absorbed during its propagation through the water, resulting in color distortion [1]. Moreover, backward and forward light scattering severely degrades the contrast and details of images, which further deteriorates the performance of underwater industrial applications [2].
Therefore, underwater image enhancement, which addresses color restoration, contrast enhancement, and detail improvement, is an essential task in marine engineering and observation applications.
In the literature, many methods have been proposed for improving underwater image quality. These methods can be broadly categorized into prior-based, imaging-based, and machine/deep learning-based techniques [3]. Prior-based methods utilize the underwater image formation model (IFM) and draw priors from the degraded images. Initially, a transmission map (TM) is derived from priors such as the dark-channel prior (DCP) [4], the red-channel prior (RCP) [5], the medium-channel prior (MDP) [6], and the haze-line prior (NLD) [7]. Subsequently, the image is restored using the IFM, equipped with the TM and atmospheric light. In [8], an RCP-guided variational framework is introduced to enhance the TM, and the image is restored using the IFM. Generally, these methods depend heavily on hand-crafted priors and excel at dehazing outdoor images. However, their performance on underwater images is less satisfactory, and they struggle to correctly handle color shifts.
Imaging-based methods, in contrast to prior-based ones, do not utilize the IFM. Instead, they rely on foundational image enhancement techniques such as contrast enhancement, histogram equalization, image fusion, and depth estimation. Peng et al. [2] proposed a depth estimation technique for underwater scenes that relies on image blurriness and light absorption; this depth information is fed into the IFM to restore and enhance the underwater visuals. In another study, Ancuti et al. [9] applied a combined approach of color compensation and white balancing to the original degraded image to restore its clarity. Zhang et al. [10] introduced a strategy guided by the minimum color loss principle and a maximum attenuation map to correct color shifts. In another recent work, Zhang et al. [11] employed a Retinex-inspired color correction mechanism to eliminate color cast; that work further fuses local and global contrast-enhanced versions of the image to refine the color output. Although these approaches significantly improve the color and contrast of underwater images, they often overlook the specifics of the underwater imaging model, which can result in over-enhanced or over-saturated final images.
On the other hand, deep learning methods are mainly divided into ASM-based and non-ASM-based techniques. ASM-based methods use the atmospheric scattering model (ASM) to recover clear images from hazy ones. For instance, DehazeNet [12] by Cai et al. applies a deep architecture to estimate transmission maps, from which clear images are generated. Similarly, MSCNN [13] by Ren et al. uses a multi-scale network to learn the mapping between hazy images and their corresponding transmission maps. AOD-Net [14] by Li et al. directly produces clear images with a lightweight CNN, and DCPDN [15] by Zhang et al. leverages an innovative network architecture built around multi-level pyramid pooling to optimize dehazing performance. In contrast, non-ASM-based methods transform hazy images directly into clear ones through various structures, such as encoder-decoder, GAN-based, attention-based, knowledge transfer, and transformer-based networks. Encoder-decoder designs like the Gated Fusion Network (GFN) by Ren et al. [16] and the Gated Context Aggregation Network (GCANet) by Chen et al. [17] utilize multiple inputs and dilated convolutions to effectively reduce halo effects and enhance feature extraction. GAN-based networks such as Cycle-Dehaze by Engin et al. [18] and BPPNet by Singh et al. [19] support unpaired training and can learn complex mappings, yielding high-quality dehazing results even with small training datasets. Attention-based networks like GridDehazeNet by Liu et al. [20] and FFA-Net by Qin et al. [21] implement adaptive, attention-based techniques, providing more flexibility and dealing efficiently with non-homogeneous haze. Knowledge transfer methods like KTDN by Wu et al. [22] leverage teacher-student networks, improving performance under non-homogeneous haze by transferring the robust knowledge acquired by the teacher network. Lastly, transformer-based networks like DehazeFormer by Song et al. [23] make significant modifications to traditional transformer structures and employ techniques such as SoftReLU and RescaleNorm, achieving better dehazing performance at an efficient computational and parameter cost.
In this study, we propose a method for underwater image restoration that employs linear or non-linear mapping depending on the type of the input image. First, an input image is classified as Type-I or Type-II. Then, the Type-I image is enhanced using the Deep Line Model (DLM), while the Deep Curve Model (DCM) is employed for the Type-II image. The DLM effectively integrates color compensation and contrast adjustment in a unified process, utilizing deep lines for transformation, whereas the DCM applies higher-order curves for image enhancement. Both models utilize lightweight neural networks that learn per-pixel dynamic weights based on the input image's characteristics. The efficacy of the proposed method is evaluated through experiments on benchmark datasets using quantitative metrics such as PSNR and RMSE. The comparative analysis affirms our method's effectiveness in accurately restoring underwater images, outperforming existing techniques.
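As a rough illustration (not the authors' implementation, whose weight maps are predicted by the lightweight networks), the per-pixel line and higher-order curve transformations could be sketched as follows; the constant weight maps used here are purely hypothetical:

```python
import numpy as np

def deep_line_enhance(img, a, b):
    """Per-pixel linear (line) transform: out = a * img + b.
    `a` and `b` are per-pixel weight maps with the same shape as `img`;
    in the paper's DLM they would be predicted by a lightweight network."""
    return np.clip(a * img + b, 0.0, 1.0)

def deep_curve_enhance(img, alphas):
    """Iterated quadratic curve adjustment (one common higher-order curve
    formulation): x <- x + a * x * (1 - x), applied once per map in `alphas`."""
    x = img.astype(np.float64)
    for a in alphas:
        x = x + a * x * (1.0 - x)
    return np.clip(x, 0.0, 1.0)

# Toy usage with constant weight maps on a 2x2 three-channel "image"
img = np.full((2, 2, 3), 0.25)
line_out = deep_line_enhance(img, a=np.full_like(img, 1.2), b=np.full_like(img, 0.05))
curve_out = deep_curve_enhance(img, [np.full_like(img, 0.5)] * 2)
```

In practice, the per-pixel maps let both models brighten heavily attenuated regions more aggressively than well-exposed ones, which a single global line or curve cannot do.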
2. Motivation
Let $I_c(x)$ represent a degraded underwater color image, where $x$ denotes the coordinates of the image pixels and $c \in \{r, g, b\}$ signifies the red, green, and blue color channels, respectively. The color components of the image can thus be denoted as $I_r$, $I_g$, and $I_b$. In underwater imaging, differential color attenuation across wavelengths frequently leads to compromised visual fidelity, predominantly impacting the red channel while leaving the green comparatively unaltered [24]. Conventional restoration techniques typically adopt a sequential approach: initial color correction to balance channel disparities, followed by linear enhancement methods such as contrast stretching to mitigate the attenuation effects.
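The linear enhancement step mentioned above can be sketched as a per-channel min-max contrast stretch; this is an illustrative example, not the implementation of any specific cited method:

```python
import numpy as np

def contrast_stretch(img, eps=1e-8):
    """Per-channel linear (min-max) contrast stretching: remap each
    channel of an HxWx3 float image so its values span [0, 1]."""
    out = np.empty_like(img, dtype=np.float64)
    for c in range(img.shape[-1]):
        ch = img[..., c].astype(np.float64)
        lo, hi = ch.min(), ch.max()
        out[..., c] = (ch - lo) / (hi - lo + eps)  # eps guards flat channels
    return out

# A low-contrast toy image spanning [0.4, 0.6] is stretched to fill [0, 1]
img = np.linspace(0.4, 0.6, 12).reshape(2, 2, 3)
stretched = contrast_stretch(img)
```

Because the stretch is linear, it expands the dynamic range but cannot by itself repair a channel whose mean has drifted, which is why color compensation is applied first.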
In the literature, many methods use the mean value of each channel for color compensation [9,24,25,26,27]. This approach is grounded in the Gray World assumption, which suggests that all channels should exhibit equal mean intensities in an undistorted image [25], leading to a straightforward approach for color compensation:
$$I_c^{\mathrm{comp}}(x) = I_c(x) + \left(\bar{I}_g - \bar{I}_c\right), \quad c \in \{r, b\},$$
where $\bar{I}_r$, $\bar{I}_g$, and $\bar{I}_b$ denote the mean values of the degraded color components of the underwater image $I$.
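A minimal sketch of this additive Gray World compensation (an illustrative implementation, not the paper's code) shifts the red and blue channels so their means match that of the least attenuated green channel:

```python
import numpy as np

def gray_world_compensate(img):
    """Additive Gray World color compensation: add (mean_g - mean_c) to
    each of the red and blue channels of an HxWx3 float image in [0, 1],
    so that all three channel means coincide."""
    out = img.astype(np.float64).copy()
    mean_r = img[..., 0].mean()
    mean_g = img[..., 1].mean()
    mean_b = img[..., 2].mean()
    out[..., 0] += mean_g - mean_r  # compensate attenuated red
    out[..., 2] += mean_g - mean_b  # compensate attenuated blue
    return np.clip(out, 0.0, 1.0)

# Toy underwater-like image: weak red, strong green, moderate blue
img = np.zeros((2, 2, 3))
img[..., 0] = 0.1
img[..., 1] = 0.6
img[..., 2] = 0.3
comp = gray_world_compensate(img)
```

Note that the same global offset is added at every pixel, which is precisely why this correction can overshoot on some images, as discussed next.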
Although additive adjustments can compensate for color distortions in the red and blue channels, our study reveals that this compensation may worsen the color composition in many cases, leading to inferior quality in the restored images. As demonstrated in Figure 1, two distinct outcomes are observed: Type-I images benefit from color correction, with spectral intensities approaching the ground truth and enhancing visual quality; conversely, Type-II images experience worsened color discrepancies, resulting in suboptimal restoration. This necessitates a dual restoration approach. Our method uses a classifier to categorize images, followed by the application of the DLM for Type-I images and the DCM for Type-II images. This strategy ensures precise, adaptive restoration aligned with the specific requirements of each image category.
Author Contributions
Conceptualization, H.S., and M.M.; methodology, M.M.; software, H.S.; validation, H.S., and M.M.; formal analysis, H.S.; investigation, H.S.; resources, M.M.; data curation, H.S.; writing—original draft preparation, H.S.; writing—review and editing, M.M.; visualization, H.S.; supervision, M.M.; project administration, M.M.; funding acquisition, M.M. All authors have read and agreed to the published version of the manuscript.