Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Unsupervised Low-Light Image Enhancement via Virtual Diffraction Information in Frequency Domain

Version 1: Received: 12 June 2023 / Approved: 12 June 2023 / Online: 12 June 2023 (07:31:34 CEST)

A peer-reviewed article of this preprint also exists.

Zhang, X.; Qin, H.; Yu, Y.; Yan, X.; Yang, S.; Wang, G. Unsupervised Low-Light Image Enhancement via Virtual Diffraction Information in Frequency Domain. Remote Sens. 2023, 15, 3580.

Abstract

With the advent of deep learning, significant progress has been made in low-light image enhancement. However, deep learning methods require large amounts of paired training data, which are difficult to capture in real-world scenarios. To address this limitation, this paper presents a novel unsupervised low-light image enhancement method, which is the first to introduce frequency-domain image features into the low-light enhancement task. Our work is inspired by treating a digital image as a spatially varying, metaphorical “field of light”: physical processes such as diffraction and coherent detection act on this field, and their influence is mapped back onto the original image space through a frequency-domain to spatial-domain transformation (the inverse Fourier transform). However, the mathematical model derived from this physical process still requires complex manual parameter tuning for different scene conditions to achieve the best adjustment. We therefore propose a dual-branch convolutional network that estimates pixel-wise and high-order spatial interactions to dynamically adjust the range of the frequency features of a given low-light image. Guided by the frequency features of the “field of light” and the parameter-estimation network, our method enables dynamic enhancement of low-light images. Extensive experiments show that our method compares favorably with state-of-the-art unsupervised methods and approaches the performance of state-of-the-art supervised methods both qualitatively and quantitatively. At the same time, its lightweight network design allows extremely fast inference (nearly 150 FPS on an NVIDIA 3090 Ti GPU for an image of size 600×400×3). Furthermore, the potential benefits of our method for object detection in the dark are discussed.
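As a rough illustration of the frequency-domain pipeline sketched above (spatial image → Fourier transform → virtual diffraction-style modulation → inverse Fourier transform → intensity), the following minimal Python/NumPy snippet applies a Fresnel-like transfer function to each channel of an image. This is not the authors' model: the paper's specific diffraction formulation, the coherent-detection step, and the learned dual-branch parameter estimation are not reproduced here, and the constants z, lam, and dx are hypothetical placeholders.

```python
import numpy as np

def virtual_diffraction_sketch(img, z=0.5, lam=0.5e-6, dx=1e-5):
    """Illustrative frequency-domain manipulation inspired by the abstract.

    NOTE: not the authors' method; this only makes the generic
    FFT -> frequency-domain modulation -> inverse FFT round trip concrete.
    Assumes an 8-bit, 3-channel input; z, lam, dx are placeholder constants.
    """
    img = img.astype(np.float64) / 255.0              # normalize to [0, 1]
    h, w = img.shape[:2]
    out = np.empty_like(img)

    # Spatial-frequency grids for an angular-spectrum-style phase factor.
    fx = np.fft.fftfreq(w, d=dx)
    fy = np.fft.fftfreq(h, d=dx)
    FX, FY = np.meshgrid(fx, fy)

    # Fresnel-like transfer function acting as the "virtual diffraction".
    H = np.exp(-1j * np.pi * lam * z * (FX**2 + FY**2))

    for c in range(img.shape[2]):
        F = np.fft.fft2(img[..., c])                  # to frequency domain
        field = np.fft.ifft2(F * H)                   # modulate, back to spatial domain
        out[..., c] = np.abs(field)                   # intensity (stand-in for coherent detection)

    # Simple global rescaling as a stand-in for the learned dynamic-range adjustment.
    out = out / (out.max() + 1e-8)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)
```

In the paper, the adjustment of the frequency features is predicted per image by the dual-branch network rather than fixed by hand-chosen constants; the sketch above only illustrates the frequency-domain round trip that underlies the approach.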

Keywords

Low-light Image Enhancement; Unsupervised Learning; Physics-inspired Computer Vision

Subject

Computer Science and Mathematics, Computer Vision and Graphics
