Preprint · Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Automatic Point Cloud Colorization of Ground-based LiDAR Data Using Video Imagery Without POS

Version 1 : Received: 26 April 2023 / Approved: 27 April 2023 / Online: 27 April 2023 (03:54:51 CEST)

A peer-reviewed article of this Preprint also exists.

Xu, J.; Yao, C.; Ma, H.; Qian, C.; Wang, J. Automatic Point Cloud Colorization of Ground-Based LiDAR Data Using Video Imagery without Position and Orientation System. Remote Sens. 2023, 15, 2658.

Abstract

With the continuous development of 3D city modeling, traditional close-range photogrammetry is limited by complex processing procedures and incomplete 3D depth information, and cannot meet high-precision modeling requirements. Integrating LiDAR and a camera in a mobile measurement system offers an effective alternative: LiDAR quickly and accurately acquires the 3D spatial coordinates of target objects, while optical imagery contributes rich color information. Combined, the two can serve many fields, such as streetscape modeling, archaeology, and digital city construction. However, current integrated mobile measurement systems typically require cameras, laser scanners, a POS, and an IMU, so hardware costs are high and system integration is complex. In this paper we therefore propose a simple ground-based mobile measurement system composed of a LiDAR and a GoPro camera, without a POS, providing a more convenient and reliable way to automatically obtain 3D point clouds with spectral information. Automatic point cloud colorization from video imagery comprises four steps:

(1) Establishing radial and tangential distortion models to correct the video images.

(2) Establishing a registration method based on normalized Zernike moments to obtain the exterior orientation elements. Normalized Zernike moments are region-based shape descriptors that capture image features across multiple orders, even for low-quality video frames. Registration based on normalized Zernike moments achieves an accuracy of 0.5-1 pixel, far better than registration based on the collinearity equation alone.

(3) Establishing relative orientation between adjacent video frames based on essential matrix decomposition and nonlinear optimization. Corresponding points are selected with the SURF algorithm under a distance restriction, followed by RANSAC, which improves their reliability. The results indicate that the relative orientation is accurate and converges to good solutions even for stereo pairs with large rotation angles and displacements, so the method has good applicability.

(4) Because the video imagery suffers from significant motion blur and boundary distortion, a point cloud coloring method based on a Gaussian distribution with a central-region restriction is adopted: only pixels within the central region of a frame are considered valid for coloring, and each point is then colored with the mean of the Gaussian fitted to its color set.

Experimental results show that the coloring accuracy between the video imagery and the point cloud data is high, meeting the requirements of applications such as tunnel inspection, street-view modeling, and 3D urban modeling.
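The sketches below illustrate the four steps of the pipeline. They are minimal approximations under stated assumptions, not the authors' implementation; parameter values and helper names are placeholders. Step (1) can be read as the standard Brown model for radial (k1, k2) and tangential (p1, p2) distortion; the paper's calibrated coefficients are not reproduced here, so they are left as inputs. A minimal sketch in Python with NumPy:

```python
import numpy as np

def distort(xy, k1, k2, p1, p2):
    """Forward Brown model: ideal normalized coordinates -> distorted ones.
    xy: (..., 2) array; k1, k2 radial and p1, p2 tangential coefficients."""
    x, y = xy[..., 0], xy[..., 1]
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    # Tangential (decentering) terms
    dx = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    dy = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return np.stack([x * radial + dx, y * radial + dy], axis=-1)

def undistort(xy_d, k1, k2, p1, p2, iters=5):
    """Invert the forward model by fixed-point iteration: the correction
    is re-evaluated at the current estimate of the ideal coordinates."""
    xy = xy_d.copy()
    for _ in range(iters):
        delta = distort(xy, k1, k2, p1, p2) - xy
        xy = xy_d - delta
    return xy
```

Image correction then amounts to applying undistort to the normalized coordinates of every pixel and resampling.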
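Step (2) relies on Zernike moments as region-based shape descriptors. The sketch below computes a single moment A_nm of a grayscale patch mapped onto the unit disk; the magnitude |A_nm| is rotation-invariant, and dividing by |A_00| is one common normalization convention, assumed here for illustration. The discretization is illustrative, not the paper's exact formulation:

```python
import numpy as np
from math import factorial

def radial_poly(rho, n, m):
    """Zernike radial polynomial R_nm(rho); requires |m| <= n, n - |m| even."""
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s) * factorial((n + m) // 2 - s)
                * factorial((n - m) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    return R

def zernike_moment(patch, n, m):
    """Discrete Zernike moment A_nm of a square grayscale patch.
    |A_nm| is rotation-invariant; |A_nm| / |A_00| gives a normalized
    descriptor (an assumed normalization, for illustration only)."""
    h, w = patch.shape
    y, x = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    inside = rho <= 1.0                                  # unit disk only
    V = radial_poly(rho, n, m) * np.exp(1j * m * theta)  # basis V_nm
    dA = (2.0 / h) * (2.0 / w)                           # pixel area element
    return (n + 1) / np.pi * np.sum(patch[inside] * np.conj(V[inside])) * dA
```

Matching descriptor vectors built from several orders (n, m) between the video frame and a rendered view of the point cloud yields the correspondences from which the exterior orientation elements are solved.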
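Step (3) pairs SURF matching under a distance restriction with RANSAC and essential matrix decomposition. A sketch using OpenCV follows; note that SURF ships only in the opencv-contrib build (ORB is a patent-free substitute), and the thresholds (Hessian 400, ratio 0.7, RANSAC threshold 1.0 px) are illustrative defaults, not values from the paper. The subsequent nonlinear refinement of R and t that the paper describes is omitted:

```python
import cv2
import numpy as np

def relative_orientation(img1, img2, K):
    """Estimate the relative pose (R, t up to scale) between adjacent
    video frames, given the calibrated camera matrix K."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)

    # Distance restriction: Lowe's ratio test discards ambiguous matches
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
            good.append(pair[0])

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC rejects remaining outliers while fitting the essential matrix
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    # Decompose E; the cheirality check selects the physically valid R, t
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```

In practice the recovered pose would then seed a nonlinear least-squares refinement over the reprojection error, which is the optimization stage the abstract refers to.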
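Step (4) restricts valid color samples to the central region of each frame and assigns each point the mean of a Gaussian fitted to its accumulated color set. In the sketch below, the projection of points into frames is assumed to have been done already; center_frac and the k-sigma rejection threshold are hypothetical parameters, not values from the paper:

```python
import numpy as np

def colorize(point_samples, img_w, img_h, center_frac=0.6, k=2.0):
    """Assign one RGB value to a point from its multi-frame color samples.
    point_samples: iterable of (u, v, rgb) observations for a single point,
    where (u, v) is the pixel the point projected to in some frame."""
    cx, cy = img_w / 2.0, img_h / 2.0
    half_w, half_h = center_frac * img_w / 2.0, center_frac * img_h / 2.0

    # Central-region restriction: drop samples near the blurred, distorted edges
    colors = np.array([rgb for u, v, rgb in point_samples
                       if abs(u - cx) <= half_w and abs(v - cy) <= half_h])
    if len(colors) == 0:
        return None  # no valid observation; leave the point uncolored

    # Model each channel as Gaussian; reject outliers beyond k sigma, keep mean
    mu, sigma = colors.mean(axis=0), colors.std(axis=0) + 1e-6
    inliers = np.all(np.abs(colors - mu) <= k * sigma, axis=1)
    return colors[inliers].mean(axis=0) if inliers.any() else mu
```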

Keywords

camera calibration; registration; normalized Zernike moments; corresponding point matching; essential matrix; relative orientation; absolute orientation; point cloud coloring

Subject

Environmental and Earth Sciences, Remote Sensing
