Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Enhancing View Synthesis with Depth-Guided Neural Radiance Fields and Improved Depth Completion

Version 1 : Received: 19 December 2023 / Approved: 19 December 2023 / Online: 19 December 2023 (14:12:29 CET)

A peer-reviewed article of this preprint also exists.

Wang, B., Zhang, D., Su, Y., & Zhang, H. (2024). Enhancing View Synthesis with Depth-Guided Neural Radiance Fields and Improved Depth Completion. Sensors, 24(6), 1919.

Abstract

Neural radiance fields (NeRF) encode a scene in a neural representation, achieving photo-realistic rendering of novel views. However, NeRF has notable limitations. A significant drawback is that it renders only surface colors and does not capture the underlying surface geometry. Furthermore, training NeRF is exceedingly time-consuming. We propose Depth-NeRF to address these issues. Specifically, our approach employs a fast depth completion algorithm to denoise and complete the depth maps produced by RGB-D cameras. Leveraging this dense depth information, the improved depth maps guide NeRF's sampling points to be distributed closer to the scene's surface. Furthermore, we optimize the network structure of NeRF and incorporate depth information to constrain the optimization process, ensuring that the ray's termination distribution is consistent with the scene's geometry. Compared to NeRF, our method accelerates training by 18% and significantly reduces the RMSE between the rendered scene depth and the ground-truth depth, indicating that our method better captures the geometric information of the scene. With these improvements, we can train the NeRF model more efficiently and achieve more accurate rendering results.
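The two ideas sketched in the abstract, biasing ray samples toward a completed depth prior and penalizing ray-termination depth that disagrees with it, can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's implementation: the 50/50 uniform/Gaussian split, the `sigma` value, and the squared-error depth term are hypothetical choices for exposition.

```python
import numpy as np

def depth_guided_samples(t_near, t_far, depth_prior, n_uniform=32, n_depth=32,
                         sigma=0.05, rng=None):
    # Hypothetical depth-guided sampling: half the samples are drawn uniformly
    # over [t_near, t_far] (as in vanilla NeRF), half from a Gaussian centred
    # on the completed depth map's value for this ray, concentrating samples
    # near the scene surface. The split and sigma are illustrative assumptions.
    rng = rng or np.random.default_rng(0)
    t_uni = rng.uniform(t_near, t_far, n_uniform)
    t_dep = rng.normal(depth_prior, sigma, n_depth)
    return np.sort(np.clip(np.concatenate([t_uni, t_dep]), t_near, t_far))

def depth_loss(weights, t_samples, depth_prior):
    # Hypothetical depth-supervision term: the expected ray-termination
    # distance under the volume-rendering weights should match the prior,
    # tying the ray's termination distribution to the scene geometry.
    expected_depth = np.sum(weights * t_samples)
    return (expected_depth - depth_prior) ** 2
```

In a training loop, `depth_loss` would be added (with some weighting factor) to the usual photometric loss, and `depth_guided_samples` would replace or augment NeRF's stratified coarse sampling for rays whose depth prior is valid.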

Keywords

NeRF; volume rendering; view synthesis; image-based rendering; depth priors; rendering accelerations

Subject

Computer Science and Mathematics, Computer Vision and Graphics

