Preprint Article, Version 1 (preserved in Portico). This version is not peer-reviewed.

An Adaptive Refinement Scheme for Depth Estimation Networks

Version 1: Received: 17 October 2022 / Approved: 27 October 2022 / Online: 27 October 2022 (03:25:23 CEST)

A peer-reviewed article of this Preprint also exists.

Naeini, A.A.; Sheikholeslami, M.M.; Sohn, G. An Adaptive Refinement Scheme for Depth Estimation Networks. Sensors 2022, 22, 9755.

Abstract

Deep learning, specifically the supervised approach, has proved to be a breakthrough in depth prediction. However, the generalization ability of deep networks is still limited, and they cannot maintain satisfactory performance on some inputs. Addressing a similar problem in the segmentation field, a scheme (f-BRS) has been proposed to refine predictions at inference time. f-BRS adapts intermediate activations to each input by using user clicks as sparse labels. Given the similarity between user clicks and sparse depth maps, this paper extends f-BRS to depth prediction. Our experiments show that f-BRS, fused with a depth estimation baseline, becomes trapped in local optima and fails to improve the network predictions. To resolve this, we propose a double-stage adaptive refinement scheme (DARS). In the first stage, a Delaunay-based correction module significantly improves the depth generated by a baseline network. In the second stage, a particle swarm optimizer (PSO) refines the estimate by fine-tuning the f-BRS parameters, namely scales and biases. DARS is evaluated on an outdoor benchmark, KITTI, and an indoor benchmark, NYUv2; in both cases the network is pre-trained on KITTI. The proposed scheme outperforms rival methods on both datasets.
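
To make the two stages concrete, the Python sketch below illustrates (i) a Delaunay-based correction that interpolates the prediction residuals at sparse depth samples over the whole image, and (ii) a particle swarm search over a scale and a bias fitted against the sparse labels. This is a minimal sketch under stated assumptions, not the authors' implementation: the function names (delaunay_correction, pso_scale_bias), the PSO hyper-parameters, the use of scipy.interpolate.LinearNDInterpolator as the Delaunay triangulation mechanism, and especially the placement of the scale/bias on the final depth map are illustrative; the paper tunes f-BRS scales and biases on intermediate activations inside the network.

    # Hypothetical sketch of the two DARS stages described in the abstract.
    # Assumes: `pred` is a dense H x W depth prediction, and (ys, xs, d) are
    # the pixel coordinates and values of sparse ground-truth depth samples.
    import numpy as np
    from scipy.interpolate import LinearNDInterpolator

    def delaunay_correction(pred, ys, xs, d):
        """Stage 1: interpolate sparse residuals over a Delaunay triangulation."""
        residual = d - pred[ys, xs]                 # error at the sparse points
        interp = LinearNDInterpolator(              # piecewise-linear interpolation
            np.stack([ys, xs], axis=1), residual,   # over the Delaunay triangulation
            fill_value=0.0)                         # no correction outside the hull
        yy, xx = np.mgrid[0:pred.shape[0], 0:pred.shape[1]]
        return pred + interp(yy, xx)                # corrected dense depth

    def pso_scale_bias(depth, ys, xs, d, n_particles=20, iters=50):
        """Stage 2 (simplified): PSO over a single global (scale, bias) pair.

        Stand-in for tuning the f-BRS scales/biases on intermediate activations.
        """
        rng = np.random.default_rng(0)
        pos = rng.normal([1.0, 0.0], 0.1, size=(n_particles, 2))  # (s, b) particles
        vel = np.zeros_like(pos)

        def loss(p):                                # RMSE against the sparse labels
            s, b = p
            return np.sqrt(np.mean((s * depth[ys, xs] + b - d) ** 2))

        pbest = pos.copy()
        pbest_f = np.array([loss(p) for p in pos])
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, 1))
            # standard inertia + cognitive + social velocity update
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos += vel
            f = np.array([loss(p) for p in pos])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = pos[improved], f[improved]
            gbest = pbest[pbest_f.argmin()].copy()
        s, b = gbest
        return s * depth + b

Chained together (refined = pso_scale_bias(delaunay_correction(pred, ys, xs, d), ys, xs, d)), the first stage removes most of the spatially varying error at the sparse samples, leaving the swarm a low-dimensional, smoother landscape, which is consistent with the abstract's motivation for splitting the refinement into two stages.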

Keywords

depth estimation; optimization; deep learning

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
