Contrast Limited Adaptive Histogram Equalization Based Fusion for Underwater Image Enhancement

In order to improve contrast and restore color for underwater images captured by camera sensors, without suffering from insufficient detail and color cast, a fusion algorithm for image enhancement in different color spaces based on contrast limited adaptive histogram equalization (CLAHE) is proposed in this article. The original color image is first converted from the RGB color space to two different special color spaces: YIQ and HSI. The conversion from RGB to YIQ is a linear transformation, while the conversion from RGB to HSI is nonlinear. The algorithm then applies CLAHE separately in the YIQ and HSI color spaces to obtain two different enhanced images: the luminance component (Y) in the YIQ color space and the intensity component (I) in the HSI color space are each enhanced with the CLAHE algorithm. CLAHE has two key parameters, Block Size and Clip Limit, which mainly control the quality of the enhanced image. After that, the YIQ and HSI enhanced images are each converted back to RGB. When the red, green, and blue components are not coherent in the YIQ-RGB or HSI-RGB images, the three components are harmonized with the CLAHE algorithm in RGB space. Finally, using a four-direction Sobel edge detector within the bounded general logarithm ratio (GLR) operation, a self-adaptive weight selection nonlinear image enhancement is carried out to fuse the YIQ-RGB and HSI-RGB images into the final fused image. The enhancement fusion algorithm has two key factors, the average of the Sobel edge detector and the fusion coefficient, which determine its effect. A series of evaluation metrics, including mean, contrast, entropy, colorfulness metric (CM), mean square error (MSE) and peak signal-to-noise ratio (PSNR), are used to assess the proposed enhancement algorithm.
The experimental results show that the proposed algorithm provides more detail enhancement and higher colorfulness restoration than other existing image enhancement algorithms. The proposed algorithm effectively suppresses noise interference and reliably improves the quality of underwater images.


Introduction
In the digital image application field, images with high contrast and bright colors are a crucial prerequisite for a good understanding of real scenes, such as the detection and classification of underwater dam cracks and multitarget detection in complex environments [1,2]. Images with a higher contrast level usually display a larger degree of color scale difference than lower-contrast ones [3]. Light plays a crucial role in generating images of satisfactory quality in photography. Strong light causes an image to have a washed-out appearance; on the contrary, weak light leads to an image that is too dark to be visible. In both cases, the contrast of the image is low and its detailed textures are difficult to discern [4]. Underwater images may lose contrast and suffer degradation due to poor visibility conditions and effects such as light absorption, reflection, bending and scattering, which result in dimness and distortion [5]. Furthermore, the poor sensitivity of charge-coupled device/complementary metal-oxide-semiconductor (CCD/CMOS) sensors leads to images with excessively narrow dynamic ranges and renders their details unclear [4]. Serious disagreements therefore exist between recorded color images and direct observation of the real underwater scenes. Image enhancement is a process that makes image features show up more visibly and highlights useful information by making the best use of the colors presented on display devices; it is used to improve the quality of an image for human visual perception [6]. Therefore, it is particularly important to design effective enhancement algorithms that improve contrast and restore color for degraded underwater images.
During the last decade, a large number of enhancement algorithms have been developed for contrast enhancement of images in various applications. Effective image enhancement algorithms can be divided into two main categories [7]: (1) image restoration based on physical models, and (2) image enhancement based on image processing techniques.
For the first category, the optimal estimate of an improved image is obtained by modeling and inverting the process of image degradation. More recently, the dark channel prior (DCP) theory proposed by He et al. directly estimates depth information based on a comparison between degraded and clear images [8]. Although several improved algorithms [9][10][11] based on DCP theory have achieved significant performance, results restored from images captured in overcast environments remain unsatisfactory, especially for images with large bright or cloudy zones.
The second category of image enhancement techniques directly improves contrast and highlights details by either global or local pixel processing, regardless of the cause of color cast and image degradation.
Recently, Retinex, homomorphic and wavelet multi-scale techniques have become popular for enhancing images. These methods perform much better than traditional ones [12]. The Retinex theory was first introduced to image enhancement by Land et al. [13]. Several algorithms are based on Retinex theory, such as single-scale Retinex (SSR) [14], multi-scale Retinex (MSR) [15], multi-scale Retinex with color restoration (MSRCR) [16], and fast multi-scale Retinex (FMSR) [17]. Among them, the MSRCR method estimates the illumination of the input image using Gaussian surround filters of different scales and conducts enhancement by applying color restoration followed by linear stretching to the logarithm of the reflectance. Although MSRCR has demonstrated a strong ability to provide dynamic range compression, color restoration and detail preservation, a large number of parameters are involved and set empirically, which limits its generalization ability and often results in pseudo halos and unnatural color [18].
The classical contrast enhancement method is histogram equalization (HE), which performs well on ordinary images such as human portraits or natural scenes [19]. HE increases the contrast of an image globally by spreading out the most frequent intensity values. However, it suffers from noise amplification in relatively homogeneous regions. HE has been generalized to a local form known as adaptive histogram equalization (AHE). AHE computes a histogram for each sub-image and redistributes the brightness values accordingly; it is therefore suitable for improving the local contrast of an image and bringing out more details [19]. Some AHE algorithms have made important progress in suppressing noise and enhancing contrast. The hybrid cumulative histogram equalization (HCHE) improves the enhancement of hot objects rather than the background [20]. The gap adjustment histogram equalization solves the over-enhancement problem and alleviates feature loss in the dark regions of the image [21]. However, the problem of amplified noise in relatively homogeneous regions remains, as with global histogram equalization. To overcome this problem, contrast limited adaptive histogram equalization (CLAHE) was proposed. CLAHE is a well-known block-based processing method that avoids the over-amplification of noise in homogeneous regions seen with standard histogram equalization. CLAHE differs from standard HE in that it operates on small regions of the image, called tiles, computing a histogram for each distinct section and using them to redistribute the lightness values of the image [22][23][24].
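As a concrete illustration of the CDF-based remapping that global HE performs (and that CLAHE refines per tile with clipping), a minimal NumPy sketch; the function name is ours, not the paper's:

```python
import numpy as np

def histogram_equalize(img):
    """Global histogram equalization of an 8-bit grayscale image.

    Spreads the most frequent intensity values across the full 0-255
    range via the cumulative distribution function (CDF).
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                 # first nonzero CDF value
    # Classic HE mapping: scale the CDF to [0, 255] as a lookup table
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# A low-contrast image: values clustered in [100, 120]
rng = np.random.default_rng(0)
low = rng.integers(100, 121, size=(64, 64)).astype(np.uint8)
eq = histogram_equalize(low)
print(low.min(), low.max(), eq.min(), eq.max())  # equalized output spans 0..255
```

Note the noise-amplification problem mentioned above: in a nearly uniform region, this global mapping stretches tiny intensity fluctuations across the whole range, which is exactly what CLAHE's clip limit suppresses.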
The CLAHE enhancement algorithm can operate in different color spaces such as RGB, YIQ and HSI. In the RGB color model, a color space is defined in terms of red (R), green (G), and blue (B) components. These three components are monochrome intensity images. The RGB model is therefore an ideal tool for color generation when images are captured by a color video camera or displayed on a color monitor [25]. In the RGB color model, CLAHE can be applied to each of the three components individually, and the full-color RGB result is obtained by recombining the individual R, G, and B components [5]. Although the RGB color space is best suited to displaying color images, it is not suitable for image analysis and processing because of the high degree of correlation between the three components. In the YIQ format, image data consist of three components: luminance (Y), hue (I), and saturation (Q). The first component, luminance, represents grayscale information, while the last two components make up chrominance (color information) [3]. The HSI color model describes colors in terms of hue (H), saturation (S), and intensity (I). Intensity is the dominant descriptor of black and white; the hue and saturation levels make no difference when the intensity is at its maximum or minimum [26].
The first advantage of the YIQ and HSI formats is that grayscale information is separated from color data, so the same signal can be used for both color and black-and-white sets. The second advantage is that they take advantage of human color-response characteristics. When enhancing a color image, the hue of each pixel should not change: if the hue changes, the color changes, thereby distorting the image. For image enhancement, one needs to improve the visual quality of an image without distorting it [6].
This paper focuses on the improvement of visual quality of underwater color images, especially for those captured under the overcast or low-light conditions.To this end, we propose an improved CLAHE image enhancement based on adaptive image fusion of YIQ and HSI color spaces.The contributions of this paper can be summarized as follows: (1) It is proposed to use two different color space transformations for CLAHE enhancement: RGB-YIQ linear transformation, and RGB-HSI nonlinear transformation.
(2) It is proposed to use an improved Euclidean norm to fuse the two individual color spaces CLAHE enhancement results: YIQ-RGB and HSI-RGB images.
(3) It is proposed to use the bounded general logarithm ratio (GLR) operation with a four-direction Sobel edge detector to enhance the overall contrast of the image and obtain richer gradient details.
The remainder of this paper is organized as follows: In the following section, we introduce related works, including the CLAHE algorithm, the linear RGB-YIQ transformation, the nonlinear RGB-HSI transformation, the four-direction Sobel edge detector and the bounded GLR operation. Section 3 introduces our proposed algorithm, including CLAHE in different color spaces and the enhancement fusion of YIQ-RGB and HSI-RGB color images. Section 4 presents the experimental results and a series of evaluation metrics to show the improvements. Section 5 summarizes our work.

Related Works
In this section, we introduce the original CLAHE algorithm; the RGB, YIQ and HSI color spaces; the RGB-YIQ and RGB-HSI color space transformations; the improved Sobel edge detector; and the bounded GLR operation. The RGB-YIQ color space conversion is a linear transformation, while the RGB-HSI conversion is nonlinear. The Sobel edge detector describes the gradient information of the original image, whose value varies from pixel to pixel. The Sobel edge detector can be used to enhance the contrast of the fused image with the help of the bounded GLR operation.

CLAHE algorithm
CLAHE was originally applied to the enhancement of low-contrast medical images [23,24]. CLAHE differs from ordinary AHE in its contrast limiting: it introduces a clip limit to overcome the noise amplification problem, limiting the amplification by clipping the histogram at a predefined value before computing the cumulative distribution function (CDF). In the CLAHE technique, the input image is divided into non-overlapping contextual regions called sub-images, tiles or blocks. CLAHE has two key parameters, Block Size (BS) and Clip Limit (CL), which mainly control enhanced image quality. The image becomes brighter as CL increases, because a larger CL makes the histogram of a low-intensity input image flatter. As BS becomes larger, the dynamic range becomes larger and the contrast of the image also increases. Choosing the two parameters at the point of maximum entropy curvature produces subjectively good image quality [27].
The CLAHE method applies histogram equalization to each contextual region. The original histogram is clipped and the clipped pixels are redistributed to each gray level. The redistributed histogram differs from an ordinary histogram because each pixel intensity is limited to a selected maximum, but the enhanced image and the original image have the same minimum and maximum gray values [24,28]. The CLAHE method consists of the following steps:

Step 1: Dividing the original intensity image into non-overlapping contextual regions. The total number of image tiles is M × N; 8 × 8 is a good value for preserving the chromatic data of the image.
Step 2: Calculating the histogram of each contextual region according to the gray levels present in the region.

Step 3: Clipping the histogram of each region. Let H(i) be the number of pixels at gray level i, N_CL the clip limit, and N_gray the number of gray levels. The total number of clipped pixels is

N_total = Σ_i max( H(i) − N_CL, 0 ),

and the average number of clipped pixels to distribute to each gray level is

N_avg = N_total / N_gray.

The histogram clipping rule is given by the following statements:

H_clip(i) = N_CL,            if H(i) ≥ N_CL
H_clip(i) = N_CL,            if H(i) + N_avg ≥ N_CL
H_clip(i) = H(i) + N_avg,    otherwise.

Step 4: Redistributing the remaining pixels until they have all been distributed. The redistribution step is

S = max( 1, floor( N_gray / N_remain ) ),

where N_remain is the number of remaining clipped pixels, so S is a positive integer of at least 1. The program searches from the minimum to the maximum gray level with step S. If the number of pixels at a gray level is less than N_CL, the program distributes one pixel to that gray level. If not all pixels have been distributed when the search ends, the program calculates a new step by the same rule and starts a new search round until the remaining pixels are all distributed.
Step 5: Enhancing the intensity values in each region by the Rayleigh transform. The clipped histogram is transformed into a cumulative probability P_input(i), which is used to create the transfer function. The underwater image appears more natural when the Rayleigh distribution is used. The Rayleigh forward transform is given by

y(i) = y_min + sqrt( 2 α² · ln( 1 / ( 1 − P_input(i) ) ) ),

where y_min is the lower bound of the pixel value, and α is a scaling parameter of the Rayleigh distribution that is defined depending on each input image. In this study, the α value of the Rayleigh function is set to 0.04. The output probability density of each intensity value can be expressed as

p(y) = ( (y − y_min) / α² ) · exp( −(y − y_min)² / (2α²) ),   for y ≥ y_min.

A higher α value results in more significant contrast enhancement in the image, but also increases the saturation value and amplifies noise.

Step 6: Reducing the abrupt-change effect. The output of the transfer function in Step 5 is re-scaled using a linear contrast stretch,

y_linear = ( y − y_min ) / ( y_max − y_min ),

where y_min and y_max are the minimum and maximum values of the transfer-function output.

Step 7: Calculating the new gray-level assignment of pixels within each contextual region by bilinear interpolation between four different mappings in order to eliminate boundary artifacts.
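The clip-and-redistribute logic of Steps 3-4 and the Rayleigh transfer of Step 5 can be sketched in NumPy as follows. This is an illustrative reading of the steps above, not the authors' implementation; the function names are ours:

```python
import numpy as np

def clip_histogram(hist, clip_limit):
    """Clip a tile histogram at clip_limit and redistribute the excess
    (Steps 3-4). Assumes clip_limit * n_bins >= hist.sum(), which is the
    usual CLAHE setting (clip limit above the mean bin count); the even
    redistribution may push a few bins slightly above the limit, as in
    common implementations."""
    hist = hist.astype(np.int64).copy()
    n_bins = hist.size
    excess = int(np.maximum(hist - clip_limit, 0).sum())
    hist = np.minimum(hist, clip_limit)

    hist += excess // n_bins               # even share for every gray level
    remainder = excess % n_bins

    i = 0
    while remainder > 0:                   # one pixel at a time, low to high
        if hist[i] < clip_limit:
            hist[i] += 1
            remainder -= 1
        i = (i + 1) % n_bins
    return hist

def rayleigh_transfer(cdf, y_min=0.0, alpha=0.04):
    """Rayleigh forward transform of the clipped-histogram CDF (Step 5)."""
    cdf = np.clip(cdf, 0.0, 1.0 - 1e-9)    # avoid log of zero at cdf = 1
    return y_min + np.sqrt(2.0 * alpha**2 * np.log(1.0 / (1.0 - cdf)))

hist = np.array([500, 10, 5, 1, 0, 0, 0, 0])
clipped = clip_histogram(hist, clip_limit=100)
print(clipped, clipped.sum())              # total pixel count is preserved
```

The Rayleigh transfer is monotone increasing in the CDF, so it preserves the ordering of gray levels while reshaping their spacing toward a Rayleigh-distributed output.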

Color spaces
Color spaces provide a method for specifying, ordering and manipulating colors. The goal of a color model is to facilitate the specification of colors in a standardized way [29]. In general, a color space is a mathematical representation of a set of colors, and color spaces can be classified into three basic groups: those based on the human visual system (e.g., RGB, HSV, HSI), application-specific spaces (e.g., YCbCr, JPEG-YCbCr, YUV, YIQ) and CIE color spaces (e.g., CIELab) [30,31]. Within the first category, the most widely used color space in digital image capturing and displaying is RGB. Phenomenal colors also form part of this first category, incorporating color spaces such as HSV (hue-saturation-value) and HSI, which are simple transformations of RGB space [30]. The HSV space is more akin to the human conceptual understanding of color [32]. The second category deals with application-based color spaces. This includes CMY (cyan-magenta-yellow), used in printing applications, and TV-related color spaces such as the National Television System Committee (NTSC) YIQ, YUV and YCbCr [30]. The third category deals with the CIE color spaces. The International Commission on Illumination (CIE) specifies three color spaces: CIE*XYZ, CIE*Lab and CIE*Luv, of which CIE*Lab and CIE*Luv provide a perceptually uniform space [30].
Different color spaces usually display different color characteristics suitable for different visual tasks, such as detection, indexing, and recognition [33][34][35][36].The choice of a suitable color space for color representation remains a challenge for scientists researching color image processing [37,38].

RGB color space
The RGB model is the primary, source color space. In the RGB model, a color space is defined in terms of R, G, and B components, known as the primary colors. These three components are monochrome intensity images. In this model, a digital image consists of three planes of independent images, each of which stores the values of R, G and B. The RGB model is a hardware-oriented color model in which the R, G, and B components are equivalent and strongly correlated; thus, a change in one component affects the others. It is an ideal tool for color generation when images are captured by a color video camera or displayed on a color monitor [25].
The RGB color space is the most common and is often found in computer systems as well as television, video and so on; it is widely used in computer graphics and imaging [39]. Most color spaces have been developed for specific applications, but all come from the same concept: the trichromatic theory of the primary colors R, G and B [40]. Other color spaces are usually calculated from the RGB color space via either linear or nonlinear transformations [33][34][35][36][41]. However, RGB is not very efficient when dealing with real-world images and is not appropriate for the entire spectrum of image processing tasks [39]. Color image processing is motivated by two important factors: first, a similarity to human vision, which is fully chromatic; and second, the additional information that chromaticity contributes to the analysis of images [29].
The RGB model, while computationally convenient, is not very useful for color specification and recognition [29]. The RGB model is a perceptually nonuniform color space, and one of its limitations is that the chrominance and intensity components are not explicitly defined [42]. A human being does not recognize a color by the amounts of its R, G or B components, but by the perceptual attributes of hue, saturation and intensity [29]. Moreover, the RGB model has serious disadvantages for many types of image processing, such as enhancement, segmentation or classification. Although the RGB model is best suited to displaying color images, preliminary results show that this space is not suitable for image analysis and processing because of the high degree of correlation between the R, G and B components [29].
When enhancing a color image, the hue of each pixel should not change: if the hue changes, the color changes, thereby distorting the image [6]. All colors are seen as variable combinations of the three primaries in the RGB color model, which is usually used for representing and displaying images. Several color models that decouple luminance and chromaticity are briefly described in the following in terms of their relations with the RGB model [43]. It is therefore necessary to develop approaches that extract color features, for example with a multispace adaptive clustering algorithm, while texture features are calculated using a multichannel texture decomposition scheme.

YIQ color space
The YIQ model is the primary color system adopted by the NTSC for color television broadcasting. Like RGB, the YIQ color space is a device-dependent color space, which means the actual color you see on your monitor depends on what kind of monitor you are using and what its settings are [44]. In the NTSC format, image data consist of three components: luminance (Y), hue (I), and saturation (Q). The first component, luminance, represents grayscale information, while the last two components make up chrominance (color information) [26].
The YIQ color space is widely used in the NTSC and PAL televisions of different countries. The first advantage of this format is that grayscale information is separated from color data, so the same signal can be used for both color and black-and-white sets. The second advantage is that it takes advantage of human color-response characteristics: the eye is more sensitive to changes in the orange-blue (I) range than in the purple-green (Q) range, so less bandwidth is required for Q than for I [3].
In this color space, Y-component stands for luminance or brightness, the I-component seems to mimic mostly shifts from blue, through purple to red colors (with increasing I), and the Q-component seems to mimic mostly the value of green; the I and Q components jointly represent the chromatic attributes [44].
In addition, the NTSC YIQ representation is optimized with respect to human visual systems so that the bandwidths of the I and Q components can be reduced without noticeable loss of visual quality [45,46].
As mentioned earlier, in the YIQ color representation, the chrominance components are separated from the luminance component and as a result the shadows and local inhomogeneities are generally better modeled than in the RGB color space.Colors with high degrees of similarity in the RGB space may be difficult to distinguish, while the YIQ representation may provide a much stronger discrimination [42].Its purpose is to exploit certain characteristics of the human visual system to maximize the use of a fixed bandwidth.
The YIQ color space is defined by means of a linear transformation from the RGB color space [47]. The transformation from RGB to YIQ is given as

[ Y ]   [ 0.299   0.587   0.114 ] [ R ]
[ I ] = [ 0.596  −0.274  −0.322 ] [ G ]
[ Q ]   [ 0.211  −0.523   0.312 ] [ B ]

The decorrelation of the R, G, and B component images makes the Y, I, and Q component images complementary to each other [44].
The transformation from YIQ back to RGB is given as

[ R ]   [ 1.000   0.9562   0.6214 ] [ Y ]
[ G ] = [ 1.000  −0.2727  −0.6468 ] [ I ]
[ B ]   [ 1.000  −1.1037   1.7006 ] [ Q ]

For RGB values with a range of 0-255, Y has a range of 0-255, I has a range of 0 to ±152, and Q has a range of 0 to ±134. In the NTSC YIQ representation, the restoration of the Y component is critical because this component contains 85%-95% of the total energy and has a large bandwidth; the bandwidths of the I and Q components are much smaller than that of the Y component [48].
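The forward and backward YIQ transformations can be sketched with the standard NTSC coefficients; the helper names are ours:

```python
import numpy as np

# Standard NTSC matrices: forward (RGB -> YIQ) and its inverse (YIQ -> RGB)
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])
YIQ2RGB = np.array([[1.000,  0.9562,  0.6214],
                    [1.000, -0.2727, -0.6468],
                    [1.000, -1.1037,  1.7006]])

def rgb_to_yiq(rgb):                 # rgb: array (..., 3), channels in [0, 1]
    return rgb @ RGB2YIQ.T

def yiq_to_rgb(yiq):
    return yiq @ YIQ2RGB.T

pixel = np.array([0.5, 0.3, 0.8])
y, i, q = rgb_to_yiq(pixel)
back = yiq_to_rgb(rgb_to_yiq(pixel))
print(np.allclose(back, pixel, atol=1e-2))   # True: round trip within rounding
```

Because the two matrices are (rounded) inverses of each other, converting to YIQ and back reproduces the RGB pixel up to the rounding of the published coefficients.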

HSI color space
The HSI model is the most frequently used application-oriented color space. The HSI color space is based on human visual perception theory and is suitable for describing and interpreting color. The HSI model defines a color space in terms of hue (H), saturation (S), and intensity (I) components. It decouples achromatic information (the I component) from chromatic information (the H and S components) in a color image. Each pixel of an image represented in this space thus carries three data: hue and saturation, which provide color information, and intensity, which describes brightness. It is therefore an ideal tool for developing image-processing algorithms based on color descriptions that are natural and intuitive to humans [31,49].
The HSI color space is a very important and attractive color model for image processing applications because it represents colors similarly to how the human eye senses them [26,28]. It is an application-oriented color model, and to some extent the H, S, and I components are independent of each other, so one component can be processed separately without affecting the others [50], which significantly simplifies image analysis and processing [25]. When enhancing a color image, the hue of each pixel should not change: if the hue changes, the color changes, thereby distorting the image [6].
There are two reasons why the HSI color space is chosen for developing the CLAHE algorithm: first, compared with the RGB color space, the HSI color space is much closer to human perception of color; second, the intensity component is the weighted average of the three color channels and is less sensitive to noise [26,51].
The Hue component describes the color itself in the form of an angle between [0, 360] degrees: 0 degree means red, 120 means green, 240 means blue, 60 is yellow, and 300 is magenta.The saturation component signals how much the color is polluted with white color.The saturation range is [0, 1].The Intensity range is between [0, 1], and 0 means black, 1 means white [49].
The HSI space is calculated from the primary RGB color space via a nonlinear transformation. The conversion formulas from RGB space to HSI space are given as [49]

H = θ,           if B ≤ G
H = 360° − θ,    if B > G

S = 1 − 3 · min(R, G, B) / (R + G + B)

I = (R + G + B) / 3

where

θ = arccos{ [ (R − G) + (R − B) ] / ( 2 · sqrt( (R − G)² + (R − B)(G − B) ) ) }.

If R, G, B have been normalized to the range [0, 1], then S and I are also in the range [0, 1], and θ is the angle measured from the red axis in the HSI color space.
Conversely, the conversion formulas from HSI space back to RGB space are given for three hue sectors [26].

For the RG sector (0° ≤ H < 120°):

B = I (1 − S)
R = I [ 1 + S · cos H / cos(60° − H) ]
G = 3I − (R + B)

For the GB sector (120° ≤ H < 240°), with H′ = H − 120°:

R = I (1 − S)
G = I [ 1 + S · cos H′ / cos(60° − H′) ]
B = 3I − (R + G)

For the BR sector (240° ≤ H ≤ 360°), with H′ = H − 240°:

G = I (1 − S)
B = I [ 1 + S · cos H′ / cos(60° − H′) ]
R = 3I − (G + B)
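The forward RGB-to-HSI conversion can be sketched for a single pixel as follows; this is an illustration of the standard geometric formulas, not an optimized implementation, and the small epsilon guarding against division by zero on gray pixels is our own choice:

```python
import numpy as np

def rgb_to_hsi(r, g, b):
    """Convert one RGB pixel (each channel in [0, 1]) to HSI.

    Returns H in degrees [0, 360), S and I in [0, 1], following the
    standard arccos-based formulas for the HSI model.
    """
    eps = 1e-12                               # guards gray pixels (R = G = B)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g)**2 + (r - b) * (g - b)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = theta if b <= g else 360.0 - theta    # hue sector from B vs G
    i = (r + g + b) / 3.0                     # intensity: channel average
    s = 1.0 - min(r, g, b) / (i + eps)        # saturation: distance from gray
    return h, s, i

# Pure red: hue 0 degrees, fully saturated, intensity 1/3
h, s, i = rgb_to_hsi(1.0, 0.0, 0.0)
print(round(h, 1), round(s, 3), round(i, 3))
```

Note that intensity is the plain channel average here, which is why scaling all three channels equally (as the proposed method does when enhancing I) leaves hue and saturation unchanged.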

Sobel edge detector
The importance of edge detection arises from the fact that edges capture local features and provide useful information in an image. In images, edges are marked by discontinuities or significant variations in intensity or gray level, providing the location of object contours [52,53]. Edge detection, one of the fundamental and most important problems in low-level image processing, plays a very important role in the realization of complete vision-based understanding and monitoring systems for automatic scene analysis [54]. The quality of detected edges plays a very important role in the realization of complex automated computer and machine vision systems [53,54]. An edge is a collection of connected pixels where the intensity level changes abruptly [26]. Edges in digital images are defined as the image positions where the intensity or brightness of two neighboring pixels differs significantly [55]. Edges can usually be found in parts of an image where a transition occurs, either between different objects, different regions, or between objects and the background. In this view, gradients are effective descriptors of edges [53].
Edges provide significant and important information related to objects present in the scene.This information helps in achieving higher level objectives like segmentation, object recognition, scene analysis, and so forth [55].
For digital images, derivatives can be approximated with discrete differences, so first-order edge detectors are easy to implement and widely used. There are several methods for edge detection and extraction, such as the Sobel and Roberts operators and the Canny algorithm. The Prewitt and Sobel operators are examples of gradient-based edge detectors [56,57]. Among them, the Sobel operators are especially preferred because they are nonlinear filters with image smoothing and thus produce less fragmentary edge images [53]. The Sobel edge detector is more popular than simple gradient operators owing to its ability to counteract noise sensitivity and its easier implementation [58]. The Sobel operator is chosen in this paper because it has low computational cost and can obtain the direction of the edges.
The Sobel operator is based on computing an approximation of the gradient of the image intensity function. The original Sobel filter uses two 3 × 3 spatial masks which are convolved with the original image to calculate approximations of the gradient [55]. The Sobel edge detector [26] performs a spatial gradient measurement on an image and so emphasizes regions of high spatial frequency, which correspond to edges. Typically, it is used to find the approximate absolute gradient magnitude at each pixel of an input gray-scale image. For each pixel, the vertical and horizontal components of the gradient are obtained by convolution with two 3 × 3 spatial masks defined as

     | −1  −2  −1 |          | −1  0  1 |
S1 = |  0   0   0 |,    S2 = | −2  0  2 |
     |  1   2   1 |          | −1  0  1 |

where S1 is the vertical spatial mask in the 90° direction, while S2 is the horizontal one in the 0° direction. The accuracy of the Sobel operator for edge detection is relatively low because it uses only two masks, which detect edges in the horizontal and vertical directions. This problem can be overcome by the Sobel compass operator, which uses a larger set of masks with narrowly spaced orientations: four masks (0°, 45°, 90° and 135°), each providing edge strength along one of the four possible compass directions [26,53,55]. The other two spatial masks can be expressed as

     |  0   1  2 |          | −2  −1  0 |
S3 = | −1   0  1 |,    S4 = | −1   0  1 |
     | −2  −1  0 |          |  0   1  2 |

where S3 is the spatial mask in the 45° direction, while S4 is the one in the 135° direction.
Therefore, to find edges in all possible directions, the four masks S1-S4 are each applied to the image. Supposing Z(i, j) denotes the 3 × 3 image neighbourhood of pixel (i, j), then Z(i, j) can be expressed as

           | z(i−1, j−1)  z(i−1, j)  z(i−1, j+1) |
Z(i, j) =  | z(i,   j−1)  z(i,   j)  z(i,   j+1) |
           | z(i+1, j−1)  z(i+1, j)  z(i+1, j+1) |

where z(i, j) denotes the original gray value of pixel (i, j).
These masks compute the average gradient components across the neighbouring lines or columns. The local edge strength is defined as the gradient magnitude given by the L2 norm of the corresponding gradient vector. The gradient responses in the four directions can be respectively expressed as

g_k(i, j) = S_k ∗ Z(i, j),   k = 1, 2, 3, 4,

and the gradient image at pixel (i, j) can be defined as

g(i, j) = sqrt( g1² + g2² + g3² + g4² ).

The gradient image is normalized as

g_n(i, j) = ( g(i, j) − g_min + δ1 ) / ( g_max − g_min + δ1 + δ2 ),

where δ1 and δ2 are small positive disturbance quantities that ensure g_n ∈ (0, 1).
With this abundant gradient information, the adaptive gain of pixel (i, j) can be expressed as [59]

λ(i, j) = a · [ g_n(i, j) ]² + b,

where a and b are adjustable positive quantities that keep the average of λ in the range λ ∈ (1, 4).
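The four-direction Sobel edge strength and its normalization into (0, 1) can be sketched in NumPy; the specific disturbance constants δ1, δ2 and the toy step-edge image are illustrative choices of ours:

```python
import numpy as np

# Four 3x3 Sobel masks for the 90-, 0-, 45- and 135-degree directions
S = [np.array(m, dtype=float) for m in (
    [[-1, -2, -1], [ 0, 0, 0], [ 1,  2, 1]],   # S1: vertical (90 deg)
    [[-1,  0,  1], [-2, 0, 2], [-1,  0, 1]],   # S2: horizontal (0 deg)
    [[ 0,  1,  2], [-1, 0, 1], [-2, -1, 0]],   # S3: 45 deg diagonal
    [[-2, -1,  0], [-1, 0, 1], [ 0,  1, 2]],   # S4: 135 deg diagonal
)]

def edge_strength(img):
    """L2 norm of the four directional responses at each interior pixel,
    min-max normalized into the open interval (0, 1)."""
    h, w = img.shape
    g = np.zeros((h - 2, w - 2))
    for mask in S:
        # Correlate the mask with every 3x3 neighbourhood via slicing
        resp = sum(mask[a, b] * img[a:h - 2 + a, b:w - 2 + b]
                   for a in range(3) for b in range(3))
        g += resp ** 2
    g = np.sqrt(g)
    d1, d2 = 1e-6, 1e-6   # small disturbances keep the result inside (0, 1)
    return (g - g.min() + d1) / (g.max() - g.min() + d1 + d2)

img = np.zeros((8, 8))
img[:, 4:] = 1.0                       # vertical step edge
gn = edge_strength(img)
print(gn.shape, gn.min() > 0.0, gn.max() < 1.0)
```

Squaring the responses makes the sign of each mask irrelevant, so correlation and true convolution give the same edge strength here.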

GLR Model in Bounded Operation
In bounded operations, the value domains of input and output are closed ranges, which solves the problem of overstepping the boundary. Three GLR models in bounded operation are introduced in this section: addition, subtraction and multiplication [59]. If the gray value of the image is denoted I(i, j), the normalized gray value is given by

x = ( 2 I(i, j) − 255 ) / 255,

which maps 8-bit gray values into (−1, 1). The symbols ⊕, ◎ and ⊗ denote the addition, subtraction and multiplication operations of the GLR model, defined as

x1 ⊕ x2 = ( x1 + x2 ) / ( 1 + x1 x2 )

x1 ◎ x2 = ( x1 − x2 ) / ( 1 − x1 x2 )

r ⊗ x = [ (1 + x)^r − (1 − x)^r ] / [ (1 + x)^r + (1 − x)^r ]

where x1 and x2 are two input channel signals and r is an arbitrary real number. The three GLR model operations are presented in Figure 1. The addition and subtraction operations are inverses of each other in the GLR model. These two operations can adjust the brightness of the image in either the low-value or the high-value gray segment, but the adjustments are not symmetrical between the two segments.
In the multiplication operation of the GLR model, under the condition r > 1, pixel values near the zero point of the GLR model (x = 0) are stretched, while pixel values far away from the zero point are compressed [59]. This multiplication operation can adjust the brightness of the image in both the low-value and the high-value gray segments, and the adjustments are symmetrical between the two segments, which is very different from the addition and subtraction operations above. This effect cannot be achieved with the traditional multiplication operation. The GLR operations are bounded operations with closure, and they solve the problem of overstepping the boundary, which makes the details of the enhanced image clearer and the overall contrast higher.
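A sketch of the bounded GLR operations, assuming the common tanh-style log-ratio forms consistent with the properties described here (closure on (−1, 1), addition and subtraction inverse to each other, stretching near zero for r > 1); the function names are ours:

```python
import numpy as np

def glr_add(x1, x2):
    """Bounded addition: the result stays in (-1, 1) for inputs in (-1, 1)."""
    return (x1 + x2) / (1.0 + x1 * x2)

def glr_sub(x1, x2):
    """Bounded subtraction, the inverse of glr_add."""
    return (x1 - x2) / (1.0 - x1 * x2)

def glr_mul(r, x):
    """Bounded scalar multiplication; for r > 1, values near x = 0 are
    stretched while values near the boundaries are compressed."""
    p, q = (1.0 + x)**r, (1.0 - x)**r
    return (p - q) / (p + q)

a, b = 0.6, 0.9
s = glr_add(a, b)
print(abs(s) < 1.0)                      # True: closure on (-1, 1)
print(np.isclose(glr_sub(s, b), a))      # True: subtraction inverts addition
print(glr_mul(2.0, 0.1) > 0.1)           # True: r > 1 stretches near zero
```

These forms are the hyperbolic-tangent addition and scaling laws (x = tanh u), which is why closure and invertibility hold exactly rather than by clipping.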

Proposed Algorithm
An algorithm to enhance underwater images captured by CCD/CMOS camera sensors has to improve contrast and restore chromatic information without suffering from color cast or deficient detail enhancement. In this algorithm, the underwater image is first converted from RGB color space to YIQ color space with a linear transformation and to HSI color space with a nonlinear transformation. The chromatic information (hue and saturation) and the brightness information are independent in the YIQ and HSI color spaces. Secondly, the brightness information is used to enhance the contrast with Rayleigh CLAHE, while the chromatic information is preserved. The luminance component (Y) of the YIQ image is enhanced with Rayleigh CLAHE to obtain the improved luminance component (Y1), and the intensity component (I) of the HSI image is enhanced with Rayleigh CLAHE to obtain the improved intensity component (I1). Then, the enhanced YIQ-space and HSI-space images are transformed back to RGB space to obtain the enhanced YIQ-RGB and HSI-RGB images. When the three components of red, green, and blue are not coherent in the YIQ-RGB or HSI-RGB images, the three components have to be harmonized. Finally, the YIQ-RGB and HSI-RGB images are combined into the enhanced fusion RGB image via an adaptive Euclidean norm using the GLR multiplication operation with the Sobel edge detector. The pipeline of our proposed algorithm is shown in Figure 2. The algorithm of CLAHE in RGB color space is not very complicated, and a more coherent and chromatic image can be achieved in the end. This step is useful for harmonizing the color image when the three components R, G and B of the original image are seriously unbalanced, but its enhancement effects in terms of contrast and information entropy are very limited.

CLAHE in YIQ color space
The YIQ color space is defined by a linear transformation from the RGB color space. In the YIQ model, image data consist of three components: Y, I, and Q. The first component, Y, represents gray-scale information, while the last two components make up the chrominance (color information). Because the YIQ representation is optimized with respect to the human visual system, the YIQ color space is widely used in the NTSC television systems of different countries [26].
The algorithm of CLAHE in the YIQ color space includes the following steps:
Step 1: The three components R, G and B of the RGB image are normalized to the range [0, 1].
Step 2: Linear transformation from RGB color space to YIQ color space:
  Y = 0.299 R + 0.587 G + 0.114 B,
  I = 0.596 R - 0.274 G - 0.322 B,
  Q = 0.211 R - 0.523 G + 0.312 B.
Step 3: The luminance component Y is enhanced with CLAHE to obtain the improved component Y1.
Step 4: Inverse linear transformation from the enhanced YIQ image back to RGB color space.
Step 5: The normalized RGB image is mapped back to the range [0, 255] (multiplication by 255).
Step 6: The final output RGB image is assembled. The CLAHE-enhanced output RGB image obtained in the YIQ color space is defined as the YIQ-RGB image, and its three components are denoted R1, G1 and B1.
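Steps 2 and 4 above can be sketched as a per-pixel conversion pair. This is an illustrative Python sketch, not the paper's MATLAB implementation; the inverse coefficients are the standard rounded NTSC values, so the round trip is exact only to about two decimal places.

```python
def rgb_to_yiq(r, g, b):
    """Forward linear transform (Step 2); inputs normalized to [0, 1]."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q

def yiq_to_rgb(y, i, q):
    """Inverse transform (Step 4); coefficients are the standard rounded
    inverse of the NTSC matrix, so round trips are approximate."""
    r = y + 0.956 * i + 0.621 * q
    g = y - 0.272 * i - 0.647 * q
    b = y - 1.106 * i + 1.703 * q
    return r, g, b
```

A useful sanity check is that any gray pixel (r = g = b) maps to zero chrominance, since the I and Q coefficient rows each sum to zero; CLAHE on Y therefore changes brightness without shifting hue.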

CLAHE in HSI color space
The HSI space is calculated from the primary RGB color space via nonlinear transformation.HSI color space is based on the human visual perception theory and is suitable for describing, and interpreting color.HSI model defines a color space in terms of H, S, and I components.It decouples achromatic information (I component) from chromatic information (H and S components) in a color image [49].Compared with the RGB color space, the HSI color space is much closer to human being's perception to color.On the other hand, the intensity component is the weighted average of three color channels and is less sensitive to noise [51].So the HSI model is the most frequently used application-oriented color space.
The algorithm of CLAHE in the HSI color space includes the following steps:
Step 1: The three components R, G and B of the RGB image are normalized to the range [0, 1].
Step 2: Nonlinear transformation from RGB color space to HSI color space:
  I = (R + G + B) / 3,
  S = 1 - 3·min(R, G, B) / (R + G + B),
  H = θ if B ≤ G, otherwise H = 360° - θ, where
  θ = arccos{ [ (R - G) + (R - B) ] / [ 2·sqrt( (R - G)^2 + (R - B)(G - B) ) ] }.
Step 3: The intensity component I is enhanced with CLAHE to obtain the improved component I1.
Step 4: Inverse nonlinear transformation from the enhanced HSI image back to RGB color space.
Step 5: The normalized RGB image is mapped back to the range [0, 255].
Step 6: The final output RGB image is assembled. The CLAHE-enhanced output RGB image obtained in the HSI color space is defined as the HSI-RGB image, and its three components are denoted R2, G2 and B2.
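The nonlinear forward transform of Step 2 can be sketched per pixel as follows. This is an illustrative Python sketch using the standard geometric HSI formulas; the convention of returning H = 0 for achromatic pixels (where hue is undefined) is an assumption, as is the clamping of the arccos argument against floating-point round-off.

```python
import math

def rgb_to_hsi(r, g, b):
    """Nonlinear RGB -> HSI (Step 2); inputs in [0, 1], H in degrees."""
    i = (r + g + b) / 3.0
    # S = 1 - 3*min(R, G, B)/(R + G + B); black pixels get S = 0 by convention.
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0  # achromatic: hue undefined, 0 by convention (assumption)
    else:
        # clamp against tiny floating-point overshoot before acos
        theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        h = theta if b <= g else 360.0 - theta
    return h, s, i
```

Because CLAHE is applied only to I while H and S are carried through unchanged, the decoupling shown here is what preserves the chromatic information during enhancement.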

Enhancement fusion of YIQ-RGB and HSI-RGB images
Both CLAHE-enhanced images, YIQ-RGB and HSI-RGB, are integrated using a Euclidean norm [5], and the fused image is then enhanced by the GLR multiplication operation with the Sobel edge detector. The CLAHE enhancement fusion includes the following steps:
Step 1: The fused image in RGB color space is calculated with a Euclidean norm, e.g. R_F = sqrt( γ·R1^2 + (1 - γ)·R2^2 ) (and likewise for G and B), where γ is the fusion coefficient of the image fusion and γ is in the range [0, 1].
Step 2: The 4-direction Sobel edge detector is applied to the fused image, where λ(x, y) is the adaptive gain in the Sobel edge detector of pixel (x, y); its average over the image is denoted λ.
Step 3: The GLR multiplication operation with exponent λ(x, y) is applied to the normalized fused image.
Step 4: The RGB image is mapped back to the range [0, 255].
Step 5: The final output RGB image is assembled. The three components of the CLAHE-enhanced fusion image are R_out, G_out and B_out, which combine into the final output RGB image RGB_out.
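The per-pixel core of the fusion stage can be sketched as follows. This is an illustrative Python sketch: the weighted quadratic-mean form of the Euclidean fusion is an assumption consistent with the text's description, and glr_mul is the standard log-ratio multiplication attributed to reference [59].

```python
import math

def fuse_pixel(c1, c2, gamma):
    """Euclidean-norm fusion of one channel value from the YIQ-RGB image (c1)
    and the HSI-RGB image (c2); gamma in [0, 1] weights the two inputs.
    The weighted quadratic-mean form is an assumption."""
    return math.sqrt(gamma * c1 ** 2 + (1.0 - gamma) * c2 ** 2)

def glr_mul(r, x):
    """GLR multiplication used to sharpen the fused result (x in (0, 1))."""
    num = x ** r
    return num / (num + (1.0 - x) ** r)

def enhance_pixel(c1, c2, gamma, lam):
    """Fuse the two inputs, then apply GLR multiplication with the
    adaptive Sobel gain lam of this pixel."""
    return glr_mul(lam, fuse_pixel(c1, c2, gamma))
```

With lam = 1 the GLR step is the identity, so only edge pixels (where the Sobel gain exceeds 1) receive extra contrast stretching, which matches the adaptive behavior described in the text.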

Simulation Results and Discussions
To evaluate the proposed CLAHE fusion algorithm quantitatively, simulation experiments on different underwater images were carried out.

Quantitative Metrics
In order to demonstrate the performance of the proposed CLAHE enhancement fusion algorithm, it is tested on different underwater sensing images. The proposed algorithm and other existing algorithms, such as He's DCP, MSR, MSRCR, RGB-CLAHE, YIQ-CLAHE and HSI-CLAHE, are implemented in MATLAB (MATLAB 7.11, release 2010b) on a machine with an Intel Core i3 processor and 4 GB of RAM. A series of quantitative metrics, namely Mean, Contrast, Entropy and the colorfulness metric (CM), for a single enhanced color image were used to assess the enhancement algorithms. Mean is the average brightness of the enhanced image. Higher values of Contrast, Entropy and CM imply that the visual quality of the enhanced image is good. These four quantitative metrics are defined in Eqs. (47) to (50).
Mean = (μ_R + μ_G + μ_B) / 3, where μ_R, μ_G and μ_B are the means of the improved image in the three components R, G and B.
CM is a no-reference image quality metric suggested by Susstrunk and Winkler [60]; it measures quality in terms of color enhancement. The metric is defined in the RGB color space as follows. Let the three components of a color image be denoted R, G and B, respectively [61]. Consider α = R - G and β = (R + G)/2 - B; then the colorfulness of the image is defined as

CM = sqrt( σ_α^2 + σ_β^2 ) + 0.3·sqrt( μ_α^2 + μ_β^2 ),

where σ_α and σ_β are the standard deviations of α and β, respectively, and μ_α and μ_β are their means. The mean square error (MSE) and peak signal-to-noise ratio (PSNR) are two error metrics used to compare the quality of the improved underwater images. The MSE represents the cumulative squared error between the improved image and the original image, whereas the PSNR represents a measure of the peak error. A good method produces a low MSE and a high PSNR [5].
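The colorfulness metric can be sketched over a flat list of pixels. This illustrative Python sketch uses the standard opponent-channel definitions α = R - G and β = (R + G)/2 - B together with the usual 0.3 weighting; treating the image as a flat pixel list rather than a 2-D array is a simplification for clarity.

```python
import math

def colorfulness(pixels):
    """No-reference colorfulness metric over a list of (R, G, B) pixels:
    CM = sqrt(sd_a^2 + sd_b^2) + 0.3 * sqrt(mu_a^2 + mu_b^2)."""
    alphas = [r - g for r, g, b in pixels]
    betas = [(r + g) / 2.0 - b for r, g, b in pixels]

    def stats(vals):
        # population mean and standard deviation
        mu = sum(vals) / len(vals)
        var = sum((v - mu) ** 2 for v in vals) / len(vals)
        return mu, math.sqrt(var)

    mu_a, sd_a = stats(alphas)
    mu_b, sd_b = stats(betas)
    return math.sqrt(sd_a ** 2 + sd_b ** 2) + 0.3 * math.sqrt(mu_a ** 2 + mu_b ** 2)
```

Any grayscale image scores exactly zero (both opponent channels vanish pixel-wise), which is why a rise in CM after enhancement indicates genuine color restoration rather than a brightness change.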
The MSE is calculated as

MSE = ( 1 / (H·W) ) · Σ_{i=1}^{H} Σ_{j=1}^{W} [ I_1(i, j) - I_0(i, j) ]^2,   (51)

where I_1 and I_0 denote the improved image and the original image, respectively. The two images must be of the same size H × W.
To calculate the PSNR, we use the MSE in Eq. (51). The PSNR is defined as

PSNR = 10·log10( (L - 1)^2 / MSE ),

where L is the number of gray levels of the image (L = 256 for an 8-bit image). In general, an improved image is acceptable to human perception if its PSNR > 30 dB.

Three original underwater images were chosen for this enhancement algorithm. They are shown in Figure 3, and their characteristics are presented in Table 1; the characteristics include image size, mean, contrast, entropy and CM. The three original underwater images are the landscape wall, the power remains, and the coral branches. Underwater images normally exhibit a high percentage of blue, followed by green and red; therefore, most underwater images appear bluish or greenish [62], since blue and green are the dominant color channels forming the overall image color. Red is the weakest color channel, and its percentage is generally lower than those of the other two channels. The image appears greenish in Figure 3(a), but bluish in Figures 3(b) and 3(c). The characteristics of the 3 original images can be described as follows [62,63]:
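The two error metrics can be sketched directly from Eq. (51) and the PSNR definition above. This illustrative Python sketch operates on nested-list gray images; identical images are given infinite PSNR by convention, since the MSE is zero.

```python
import math

def mse(img1, img0):
    """Mean squared error between two equally sized H x W gray images,
    per Eq. (51)."""
    h, w = len(img0), len(img0[0])
    return sum((img1[i][j] - img0[i][j]) ** 2
               for i in range(h) for j in range(w)) / (h * w)

def psnr(img1, img0, levels=256):
    """PSNR in dB: 10 * log10((L - 1)^2 / MSE), with L = 256 for 8-bit images.
    Returns infinity for identical images (zero MSE)."""
    err = mse(img1, img0)
    return float("inf") if err == 0 else 10.0 * math.log10((levels - 1) ** 2 / err)
```

The worst possible 8-bit case, a uniform 255-level error, yields exactly 0 dB, which makes the 30 dB acceptability threshold quoted above easy to place on the scale.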

Experimental Original Images
(a) The landscape wall image. The landscape wall is a typical underwater construction; this image was taken at the archaeological site of Baia (Naples, Italy) at a depth of about 5 m. Two fish are swimming around the wall, but they can hardly be distinguished from the background because both are of almost the same color. The color cast is unusually serious, and the image appears greenish. The brightness is good. The contrast and entropy are both the highest among the 3 images, but almost all the details are submerged in the greenish cast.
(b) The power remains image. The power remains lie on the ocean bottom, and a diver is trying to enter the cockpit. The mean of the image is very low, so the image looks somewhat dark. The wheel hubs of the power can hardly be recognized in the degraded image. The image provides less detail information than the landscape wall image, since its contrast and entropy are both medium among the 3 images.

Enhancement results of landscape wall image
The different enhancement results of the original landscape wall image are shown in Figure 4.

Enhancement results of power remains image
The different enhancement results of the original power remains image are shown in Figure 8. The enhancement algorithms include DCP, MSR, MSRCR, RGB-CLAHE, YIQ-CLAHE and HSI-CLAHE. The 6 quantitative metrics of the enhancement results for the image in Figure 8 are shown in Table 4. The quantitative metrics include mean, contrast, entropy, CM, MSE and PSNR.
The results of the proposed enhancement algorithm for the original power remains image with different CL and BS values are shown in Figure 9. The 6 quantitative metrics of our proposed enhancement results according to Figure 9 are shown in Table 5. The relationships of Contrast, Entropy and CM vs. the average of the Sobel detector λ for the fused CLAHE power remains image (BS=8*8, CL=0.008,

Enhancement results of coral branches image
The different enhancement results of the original coral branches image are shown in Figure 12. The enhancement algorithms include DCP, MSR, MSRCR, RGB-CLAHE, YIQ-CLAHE and HSI-CLAHE. The 6 quantitative metrics of the enhancement results for the image in Figure 12 are shown in Table 6. The quantitative metrics include mean, contrast, entropy, CM, MSE and PSNR.
The results of the proposed enhancement algorithm for the original coral branches image with different CL and BS values are shown in Figure 13. The 6 quantitative metrics of our proposed enhancement results according to Figure 13 are shown in Table 7.

Discussions
The enhancement effect of the DCP algorithm for underwater images is very limited, especially in terms of contrast improvement. The MSR algorithm can improve contrast only for original images with low contrast, and it may introduce serious color cast. The MSRCR algorithm can improve contrast and entropy and restore color, but further improvement is difficult. The MSR and MSRCR algorithms suffer from noise amplification in local regions, which may lead to serious color mottling. The CLAHE-YIQ and CLAHE-HSI algorithms produce better enhancement effects than the preceding three algorithms in terms of contrast and entropy improvement. The CLAHE-RGB algorithm produces a higher PSNR for human perception than the others except DCP.
The enhanced image becomes brighter, and its contrast, entropy, and CM increase as CL is increased, because the input image has very low intensity and a larger CL makes its histogram flatter. As BS grows, the dynamic range becomes larger and the contrast of the image also increases, but the entropy and CM decrease. The image quality depends mainly on CL rather than BS.
The contrast, entropy and CM of the enhancement fusion image grow as the average Sobel edge detector increases, but the MSE may exhibit a valley (minimum). The contrast and CM of the enhancement fusion image grow as the fusion coefficient increases, but the Entropy may exhibit a peak (maximum). The average Sobel edge detector and the fusion coefficient must therefore be chosen carefully so that the enhancement fusion image attains the largest Contrast, Entropy and CM and the smallest MSE.
In summary, there are two key parameters in the CLAHE algorithm, BS and CL, and another two key parameters in the fusion enhancement algorithm, the average Sobel edge detector and the fusion coefficient. These four key parameters affect the quality of the final CLAHE enhancement fusion image and should be chosen within reasonable ranges. The quantitative metrics are integrated factors for assessing the enhanced image, and they should be considered as a whole rather than individually.

Conclusions
Contrast improvement and color restoration are important but difficult tasks for underwater image applications. Underwater images may lose contrast and suffer from degradation because of poor visibility conditions and effects such as light absorption, reflection, bending, and scattering, which lead to dimness and distortion. Existing image enhancement algorithms may not be able to improve contrast and restore color efficiently for underwater images. Thus, this paper proposes a CLAHE enhancement fusion algorithm for underwater images. The proposed algorithm consists of four steps: conversion from RGB to the YIQ and HSI color spaces, CLAHE enhancement in the YIQ and HSI color spaces, conversion from YIQ and HSI back to RGB color space, and fusion of the two improved RGB images with a Euclidean norm and the GLR operation. Based on experimental results obtained by processing various underwater images with different mean, contrast and entropy, contrast improvement and color restoration are effectively achieved by the proposed algorithm, which outperforms existing state-of-the-art image enhancement algorithms in visual performance and quantitative evaluation.
The main contributions of the proposed algorithm are: two different color space transformations for CLAHE enhancement (RGB-YIQ and RGB-HSI), an improved Euclidean norm to fuse the two individual color-space CLAHE images, and an improved 4-direction Sobel edge detector combined with the GLR operation. Four key parameters should be chosen to achieve high contrast and entropy in the final CLAHE enhancement fusion image: BS and CL in the CLAHE algorithm, and the average Sobel edge detector and the fusion coefficient in the fusion enhancement algorithm. The effectiveness of the image enhancement is demonstrated by the objective quality metrics. For an underwater image with high contrast and entropy, the contrast and entropy were improved by at least 131.25% and 2.36%; for an image with low contrast and entropy, these two ratios were 2495.52% and 24.66%, respectively. These results indicate that our algorithm provides underwater image enhancement of the highest quality among the compared methods.
The proposed algorithm is applicable to degraded underwater images and other remote sensing images for visual enhancement of contrast and entropy. However, its main limitation is that it is sometimes more time-consuming than existing algorithms, and its PSNR is below 30 dB, which is not fully acceptable to human perception. Therefore, our future work will focus on accelerating the CLAHE enhancement fusion algorithm, and on optimizing the CLAHE fusion algorithm for underwater images with uneven illumination.

Figure 2 .
Figure 2. The pipeline of the proposed algorithm. (In the contrast metric, P_{d,θ} is the gray-level co-occurrence matrix (GLCM) of the image, L is the number of gray levels (L = 256 for an 8-bit image), d is the distance between two pixels (d = 1), and θ_k = (k - 1) × 45° is the direction between the two pixels.)

Figure 3 .
Figure 3. The original underwater images: (a) the landscape wall image; (b) the power remains image; (c) the coral branches image.

Figure 4 .
Figure 4. The different enhancement results of the original landscape wall image. The enhancement algorithms include DCP, MSR, MSRCR, RGB-CLAHE, YIQ-CLAHE and HSI-CLAHE. The 6 quantitative metrics of the enhancement results for the image in Figure 4 are shown in Table 2; the metrics include mean, contrast, entropy, CM, MSE and PSNR. Since the contrast and entropy are both very high, traditional image enhancement algorithms such as DCP, MSR and MSRCR may have a really weak effect on this original underwater image. The results of the proposed enhancement algorithm for the original landscape wall image with different CL and BS values are shown in Figure 5. The 6 quantitative metrics of our proposed enhancement results according to Figure 5 are shown in Table 3. The relationships of Contrast, Entropy and CM vs. the average of the Sobel detector λ for the fused CLAHE landscape wall image (BS=8*8, CL=0.006, γ = 0.57) are shown in Figure 6. The relationships of Contrast, Entropy and CM vs. the fusion coefficient γ for the fused CLAHE landscape wall image (BS=8*8, CL=0.006, λ = 1.1420) are shown in Figure 7.

Figure 5 .
Figure 5. The proposed algorithm results for the original landscape wall image (γ = 0.57).

Figure 7 .
Figure 7. The relationships of Contrast, Entropy and CM vs. γ for the fused CLAHE landscape wall image (BS=8*8, CL=0.006, λ = 1.1420); the γ at which Contrast, Entropy and CM are largest and MSE is smallest is the best choice for the enhancement fusion image.

Figure 9 .
Figure 9.The proposed algorithm results for original power remains image(

Figure 13 .
Figure 13.The proposed algorithm results for original coral branches image(

Figure 15 .
Figure 15. The relationships of Contrast, Entropy and CM vs. γ for the fused CLAHE coral branches image (BS=8*8, CL=0.012, λ = 1.0073); the γ at which Contrast, Entropy and CM are largest and MSE is smallest is the best choice for the enhancement fusion image.

Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 14 March 2017 doi:10.20944/preprints201703.0086.v1

Table 1 .
The 5 quantitative metrics of the original images.

Table 3 .
The 6 quantitative metrics of the proposed algorithms for the landscape wall image in Figure 5.

Table 4 .
The 6 quantitative metrics of the enhancement results for the power remains image in Figure 8.

Table 5 .
The 6 quantitative metrics of the proposed algorithms for the power remains image in Figure 9.

Table 6 .
The 6 quantitative metrics of the enhancement results for the coral branches image in Figure 12.

Table 7 .
The 6 quantitative metrics of the proposed algorithms for the coral branches image in Figure 13.