This version is not peer-reviewed.
Submitted: 25 February 2023
Posted: 28 February 2023
Title of Study | Keywords | Outcome/Significance |
---|---|---|
Modeling Scene Illumination Color for Computer Vision and Image Reproduction: A survey of computational approaches (Barnard, 1998) | Illumination modeling, image reproduction, and image enhancement | Discusses how progress in modeling scene illumination translates into progress in computer vision, image enhancement, and image reproduction; also examines the nature of image formation and acquisition and gives an overview of the computational approaches. |
Computational Color Constancy: Survey and Experiments (Gijsenij et al., 2011) | Statistical and learning-based color constancy methods | Various publicly available methods for computational color constancy, some of which are considered state-of-the-art, are evaluated on two data sets: the SFU (Simon Fraser University) Grey-Ball set and the Color Checker set. |
Machine learning in indoor visible light positioning systems: A review (Tran and Ha, 2022) | Machine learning; illumination-based positioning algorithms | In-depth discussion of articles published during the past five years in terms of proposed algorithm, space (2D/3D), experiment or simulation method, positioning accuracy, type of collected data, type of optical receiver, and number of transmitters. |
Deep Neural Models for Illumination Estimation and Relighting: A Survey (Einabadi et al., 2021) | Deep learning; illumination estimation and relighting | Discusses the main characteristics of current deep learning methods, datasets, and possible future trends for illumination estimation and relighting. |
Deep learning for monocular depth estimation: A review (Ming et al., 2021) | Deep learning; monocular depth estimation | Summarizes and categorizes deep learning models for monocular depth estimation; introduces publicly available datasets and the corresponding evaluation metrics; compares and discusses the novelties and performance of these methods. |
Single image depth estimation: An overview (Mertan et al., 2022) | Illumination-based depth estimation; Single image depth estimation problem. | Investigations into the mechanisms, principles, and failure cases of contemporary solutions for depth estimation. |
Title of Study | Methodology/Technique used | Performance metric | Outcome/Significance |
---|---|---|---|
Machine learning approach to color constancy (Agarwal et al., 2007) | Ridge regression and support vector regression | Uncertainty analysis | The shorter training time and single-parameter optimization offer potential for real-time video tracking applications |
Convolutional Color Constancy (Barron, 2015) | Discriminative learning using convolutional neural networks and structured prediction | Angular error (mean, median, tri-mean, error for lowest/highest 25% of predictions) | The model improves performance over standard benchmarks (e.g., White-Patch, Grey-World) by nearly 40% |
CNN-Based Illumination Estimation with Semantic Information (Choi et al., 2020) | CNN with a new pooling layer that distinguishes useful data from noisy data, efficiently removing noise during learning and evaluation | Mean angular error (mean, median, tri-mean, error for lowest/highest 25% of predictions) | Takes computational color constancy to higher accuracy and efficiency through a novel pooling method; results show the proposed network outperforms its conventional counterparts in estimation accuracy |
Color Constancy Using CNNs (Bianco et al., 2015) | CNN (max pooling, one fully connected layer, three output nodes) | Angular error (minimum, 10th percentile, median, 90th percentile, maximum) (comparison across methods) | Integrates feature learning and regression into one optimization process, leading to a more effective model for estimating scene illumination; improves the stability of local illuminant estimation |
Deep Learning-Based Computational Color Constancy with Convoluted Mixture of Deep Experts (CMoDE) Fusion Technique (Choi and Yun, 2020) | CMoDE fusion technique; multi-stream deep neural network (MSDNN) | Angular error (mean, median, tri-mean, mean of best/worst 25%) (comparison across methods) | The CMoDE-based DCNN brings significant progress in both computing-resource efficiency and illuminant-estimation accuracy |
Fast Fourier Color Constancy (Barron and Tsai, 2017) | Fast Fourier color constancy in the frequency domain | Angular error (mean, median, tri-mean, best/worst 25%) | Operating in the frequency domain, the method produces error rates 13–20% lower than the previous state-of-the-art while being 250–3000× faster |
Color Constancy by Deep Learning (Lou et al., 2015) | DNN-based regression | Angular error (mean, median, standard deviation) | Outperforms the state-of-the-art by 9%; in cross-dataset validation, reduces the median angular error by 35%; the algorithm runs at more than 100 fps during testing |
As-Projective-As-Possible Bias Correction for Illumination Estimation Algorithms (Afifi et al., 2019) | Improves the accuracy of fast statistical-based algorithms by applying a post-estimate bias-correction function that transforms the estimated R, G, B vector to lie closer to the correct solution | Angular error (median, tri-mean, best/worst 25%) | Proposes an as-projective-as-possible (APAP) projective transform that locally adapts to the input R, G, B vector and is effective compared with state-of-the-art statistical methods |
Robust channel-wise illumination estimation (Laakom et al., 2021) | Efficient CNN | Angular error (mean, median, tri-mean, best/worst 25%) | The method substantially reduces the number of parameters needed to solve the task by up to 90% while achieving competitive experimental results compared to state-of-the-art methods |
Deep Outdoor Illumination Estimation (Hold-Geoffroy et al., 2017) | CNN for outdoor illumination estimation | MSE; scale-invariant MSE; per-color scale-invariant MSE | An extensive evaluation on both the panorama dataset and captured HDR environment maps shows significantly superior performance |
Fast Spatially-Varying Indoor Lighting Estimation (Garon et al., 2019) | CNN for indoor illumination estimation | MSE; mean absolute error | Achieves lower lighting-estimation errors and is preferred by users over state-of-the-art models |
Monte Carlo Dropout Ensembles for Robust Illumination Estimation (Laakom et al., 2020) | Monte Carlo dropout | Angular error (mean, median, tri-mean, best/worst 25%) | The proposed framework achieves state-of-the-art performance on the INTEL-TAU dataset |
Very Deep Learning-Based Illumination Estimation Approach with Cascading Residual Network Architecture (CRNA) (Choi and Yun, 2021) | Cascading residual network architecture (CRNA) incorporating ResNet and a cascading mechanism into a deep convolutional neural network (DCNN) | Angular error (mean, median, tri-mean, best/worst 25%) (comparison across methods) | Delivers more stable and robust results; comparative experiments on different datasets indicate generalization potential for deep learning models across applications |
Effective Learning-Based Illuminant Estimation Using Simple Features (Cheng et al., 2015) | A learning-based method built on four simple color features, combined with an ensemble of regression trees to estimate the illumination | Angular error (mean, median, tri-mean, best/worst 25%) | Develops a learning-based illumination estimation method that retains the running time of statistical methods |
On deep learning techniques to boost monocular depth estimation for autonomous navigation (de Queiroz Mendes et al., 2021) | A lightweight, fast, supervised CNN architecture combined with novel feature extraction models designed for real-world autonomous navigation | Scale-invariant error, absolute relative difference, squared relative difference, Log10, mean absolute error, linear RMSE, and log RMSE, with indoor and outdoor ablation studies | Determines optimal training conditions using different deep learning techniques, as well as optimized network structures that enable high-quality predictions in reduced processing time |
Learning HDR illumination from LDR panorama images (Jin et al., 2021) | CNN combined with physical modelling | MSE loss function | Results show that the method predicts accurate spherical harmonic coefficients, and the recovered luminance is realistic |
LISU: Low-light indoor scene understanding with joint learning of reflectance restoration (Zhang et al., 2022) | Novel CNN-based cascade network for semantic segmentation in low-light indoor environments | Overall accuracy, mean accuracy, mean intersection over union (mIoU) | Compared with other CNN-based segmentation frameworks, including the state-of-the-art DeepLab v3+, on the proposed real data set in terms of mIoU; experimental results also show that semantic information supports the restoration of a sharper reflectance map, further improving segmentation |
GeoRec: Geometry-enhanced semantic 3D reconstruction of RGB-D indoor scenes (Huan et al., 2022) | Geometry-enhanced multi-task learning network | Mean chamfer distance error, mean average precision (mAP), mean intersection over union (mIoU) | With the parsed scene semantics and geometries, GeoRec reconstructs an indoor scene by placing reconstructed object mesh models, guided by 3D object detection results, in the estimated layout cuboid |
DeepLight: light source estimation for augmented reality using deep learning (Kán and Kaufmann, 2019) | Residual neural network (ResNet) | Angular error | An end-to-end AR system is presented that estimates a directional light source from a single RGB-D camera and integrates this light estimate into AR rendering |
Deep Spherical Gaussian Illumination Estimation for Indoor Scene (Li et al., 2019) | CNN with an extra glossy loss function | Peak signal-to-noise ratio (PSNR) and structural similarity | The proposed approach outperforms the state-of-the-art both qualitatively and quantitatively |
GMLight: Lighting Estimation via Geometric Distribution Approximation (Zhan et al., 2022) | Regression network with spherical convolution and a generative projector for progressive guidance in illumination generation | RMSE and scale-invariant RMSE | GMLight achieves accurate illumination estimation and superior fidelity in relighting for 3D object insertion |
Deep Graph Learning for Spatially Varying Indoor Lighting Prediction (Bai et al., 2022) | A new lighting model (dubbed DSGLight) based on depth-augmented spherical Gaussians (SG) and a graph convolutional network (GCN) | PSNR and qualitative analysis | DSGLight combines learning and physical models, encoding both direct lighting and indirect environmental lighting more faithfully and compactly |
Outdoor illumination estimation via all convolutional neural networks (Zhang et al., 2021) | CNN | Angular error | Pruning and quantization are used to compress the network, significantly reducing the number of network parameters and the storage space with only a slight loss of precision |
An illumination estimation algorithm based on outdoor scene classification (Li et al., 2020) | Support vector machine (SVM) classifiers with an optimization algorithm | Angular error (mean, median) and reproduction angular error | Achieves state-of-the-art results in outdoor environments |
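Most of the color constancy rows above report summary statistics (mean, median, tri-mean, best/worst 25%) of the angular error between the estimated and ground-truth illuminant RGB vectors, and the last row also mentions the reproduction angular error. A minimal sketch of these metrics under the standard definitions, assuming nonzero illuminant channels (the function names are illustrative, not taken from any surveyed paper):

```python
import numpy as np

def angular_error(est, gt):
    """Recovery angular error (degrees) between estimated and true illuminant RGB vectors."""
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def reproduction_angular_error(est, gt):
    """Angle between a white surface corrected with the estimated illuminant
    and the ideal achromatic response (1, 1, 1)."""
    est = np.asarray(est, dtype=float)  # assumed nonzero per channel
    gt = np.asarray(gt, dtype=float)
    return angular_error(gt / est, np.ones(3))

def summary_stats(errors):
    """Summary statistics commonly reported in the surveyed papers."""
    e = np.sort(np.asarray(errors, dtype=float))
    q = len(e) // 4
    return {
        "mean": e.mean(),
        "median": float(np.median(e)),
        # Tri-mean weights the median twice against the two quartiles.
        "trimean": (np.percentile(e, 25) + 2 * np.median(e) + np.percentile(e, 75)) / 4,
        "best25": e[:q].mean() if q else e.mean(),   # avg over lowest quartile
        "worst25": e[-q:].mean() if q else e.mean(),  # avg over highest quartile
    }
```

Because both errors compare only vector directions, they are invariant to the overall intensity of the estimate, which is why they are the standard choice for illuminant estimation benchmarks.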
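The indoor scene understanding and relighting rows instead report image and segmentation quality metrics such as PSNR and mean intersection over union (mIoU). A hedged sketch of the standard definitions (helper names are illustrative, not from the surveyed papers):

```python
import numpy as np

def psnr(img, ref, max_val=1.0):
    """Peak signal-to-noise ratio in dB between an image and a reference."""
    img = np.asarray(img, dtype=float)
    ref = np.asarray(ref, dtype=float)
    mse = np.mean((img - ref) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10 * np.log10(max_val ** 2 / mse))

def mean_iou(pred, gt, num_classes):
    """Mean intersection over union across classes; classes absent from
    both prediction and ground truth are ignored."""
    pred = np.asarray(pred)
    gt = np.asarray(gt)
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union:
            ious.append(inter / union)
    return float(np.mean(ious))
```

PSNR rewards pixel-wise fidelity of a rendered or restored image, while mIoU averages per-class overlap, so a segmentation cannot score well by only predicting the dominant class.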