Preprint Article, Version 1 (not peer-reviewed; preserved in Portico)

Towards Interpretable Camera and LiDAR data fusion for Unmanned Autonomous Vehicles Localisation

Version 1: Received: 15 September 2022 / Approved: 19 September 2022 / Online: 19 September 2022 (10:27:42 CEST)

How to cite: Tibebu, H.; De-Silva, V.; Artaud, C.; Pina, R.; Shi, X. Towards Interpretable Camera and LiDAR data fusion for Unmanned Autonomous Vehicles Localisation. Preprints 2022, 2022090276. https://doi.org/10.20944/preprints202209.0276.v1

Abstract

Recent deep learning frameworks have attracted strong research interest for ego-motion estimation, as they demonstrate superior results compared to geometric approaches. However, owing to the lack of multimodal datasets, most of these studies have focused on estimation from a single sensor. To overcome this limitation, we collect a unique multimodal dataset named LboroAV2 using multiple sensors, including a camera, Light Detection and Ranging (LiDAR), ultrasound, an e-compass and a rotary encoder. We also propose an end-to-end deep learning architecture that fuses RGB images and LiDAR laser scans for odometry. The proposed method comprises a convolutional encoder, a compressed representation and a recurrent neural network. Besides performing feature extraction and outlier rejection, the convolutional encoder produces a compressed representation that is used to visualise the network's learning process and to pass useful sequential information. The recurrent neural network uses this compressed sequential data to learn the relation between consecutive time steps. We experiment on and evaluate our approach with the LboroAV2 and KITTI VO datasets. In addition to allowing the network's learning process to be visualised, our approach achieves superior results compared to other similar methods. The code for the proposed architecture is released on GitHub and is publicly accessible.
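
The abstract describes the pipeline only at a high level. The sketch below is not the authors' released code; it is a minimal PyTorch illustration of that pipeline, assuming a per-modality convolutional encoder, a concatenated compressed representation, an LSTM over consecutive time steps, and a 6-DoF relative-pose output. All layer sizes, input shapes (including rasterising the laser scan into a one-channel grid) and names are illustrative assumptions.

# Minimal sketch (not the authors' released code) of the fusion pipeline
# described in the abstract: a convolutional encoder per modality, a shared
# compressed representation, and a recurrent network over consecutive steps.
# Layer sizes, input shapes and the 6-DoF output are illustrative assumptions.
import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """Convolutional encoder that compresses one sensor stream to a flat code."""

    def __init__(self, in_channels: int, code_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # -> (B, 128, 1, 1)
        )
        self.fc = nn.Linear(128, code_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(x).flatten(1))


class FusionOdometryNet(nn.Module):
    """Fuses RGB and LiDAR codes, then models temporal context with an LSTM."""

    def __init__(self, code_dim: int = 256, hidden_dim: int = 512):
        super().__init__()
        self.rgb_enc = ModalityEncoder(in_channels=3, code_dim=code_dim)
        # Assumes the laser scan is rasterised into a 1-channel 2-D grid.
        self.lidar_enc = ModalityEncoder(in_channels=1, code_dim=code_dim)
        self.lstm = nn.LSTM(2 * code_dim, hidden_dim, batch_first=True)
        self.pose_head = nn.Linear(hidden_dim, 6)  # 6-DoF relative pose

    def forward(self, rgb_seq: torch.Tensor, lidar_seq: torch.Tensor):
        # rgb_seq: (B, T, 3, H, W); lidar_seq: (B, T, 1, H, W)
        B, T = rgb_seq.shape[:2]
        rgb_codes = self.rgb_enc(rgb_seq.flatten(0, 1)).view(B, T, -1)
        lidar_codes = self.lidar_enc(lidar_seq.flatten(0, 1)).view(B, T, -1)
        # The concatenated codes play the role of the "compressed
        # representation"; they can also be inspected for interpretability.
        fused = torch.cat([rgb_codes, lidar_codes], dim=-1)
        out, _ = self.lstm(fused)
        return self.pose_head(out)  # (B, T, 6): one relative pose per step


if __name__ == "__main__":
    net = FusionOdometryNet()
    rgb = torch.randn(2, 4, 3, 64, 192)   # dummy image sequence
    scan = torch.randn(2, 4, 1, 64, 192)  # dummy rasterised scan sequence
    print(net(rgb, scan).shape)           # torch.Size([2, 4, 6])

Concatenation is only one plausible fusion choice here; the paper itself should be consulted for how the compressed representation is actually formed and visualised.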

Keywords

Sensor fusion; Camera and LiDAR fusion; Odometry; Explainable AI

Subject

Computer Science and Mathematics, Computer Science
