ARTICLE | doi:10.20944/preprints202209.0127.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics
Keywords: Image defogging; visual enhancement evaluation; edge detection; deep neural networks; autonomous systems
Online: 8 September 2022 (15:37:03 CEST)
Fog, haze, and smoke are common atmospheric phenomena that dramatically reduce the overall visibility of a scene, critically affecting features such as illumination, contrast, and the contours of objects. This loss of visibility degrades the performance of computer vision algorithms such as pattern recognition and segmentation, several of which are central to decision-making in the security and autonomous vehicle industries. Several dehazing methods have been proposed; however, to the best of our knowledge, all existing metrics either compare the defogged image to a ground-truth image to evaluate the defogging algorithm, or require parameters to be estimated through physical models. This hinders progress in the field, since obtaining proper ground-truth images is costly and time-consuming, and physical parameters depend strongly on the scene conditions. This paper tackles this issue by proposing a contour-based metric for image-defogging evaluation that does not need a ground-truth image: it requires only the original hazy RGB image and the RGB image after the defogging procedure. A comparison of the proposed metric with the metrics used in the NTIRE 2018 defogging challenge shows that it performs comparably to conventional metrics in general situations.
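The abstract does not define the metric itself, so the following is only a hypothetical sketch of the general no-reference, contour-based idea: compare the edge content of the hazy input against that of the dehazed output, with no ground-truth image involved. The function names, the Sobel operator, and the ratio-based score are all assumptions for illustration, not the authors' actual metric.

```python
# Hypothetical sketch (NOT the paper's metric): compare contour content of a
# hazy image and its dehazed version via summed Sobel gradient magnitudes.
# Images are represented as lists of rows of grayscale intensities in [0, 255].

def sobel_edge_strength(gray):
    """Sum of Sobel gradient magnitudes over the interior pixels of `gray`."""
    h, w = len(gray), len(gray[0])
    total = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel responses at (x, y).
            gx = (gray[y-1][x+1] + 2*gray[y][x+1] + gray[y+1][x+1]
                  - gray[y-1][x-1] - 2*gray[y][x-1] - gray[y+1][x-1])
            gy = (gray[y+1][x-1] + 2*gray[y+1][x] + gray[y+1][x+1]
                  - gray[y-1][x-1] - 2*gray[y-1][x] - gray[y-1][x+1])
            total += (gx * gx + gy * gy) ** 0.5
    return total

def contour_gain(hazy_gray, dehazed_gray):
    """Ratio of edge strength after vs. before defogging (>1: more contours)."""
    return sobel_edge_strength(dehazed_gray) / sobel_edge_strength(hazy_gray)

# Toy example: a smooth (hazy) intensity ramp vs. a sharp (dehazed) step edge.
hazy = [[100, 110, 120, 130, 140]] * 5
dehazed = [[0, 0, 0, 255, 255]] * 5
print(contour_gain(hazy, dehazed) > 1.0)  # True: contours strengthened
```

A score above 1 indicates that the defogging procedure recovered contour information; since both inputs are the images themselves, no ground truth or physical scene parameters are required, which is the property the paper emphasizes.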