Richter, S.; Wang, Y.; Beck, J.; Wirges, S.; Stiller, C. Semantic Evidential Grid Mapping Using Monocular and Stereo Cameras. Sensors 2021, 21, 3380.
Accurately estimating the current state of local traffic scenes is a key problem in the development of software components for automated vehicles. In addition to details on free space, drivability, and static and dynamic traffic participants, the desired representation may also include semantic information. Multi-layer grid maps allow all of this information to be combined in a common representation. However, most existing grid mapping approaches only process range sensor measurements, such as lidar and radar, and solely model occupancy without semantic states. To add sensor redundancy and diversity, it is desirable to integrate vision-based sensor setups into a common grid map representation. In this work, we present a semantic evidential grid mapping pipeline that estimates eight semantic classes and is designed for straightforward fusion with range sensor data. Unlike other publications, our representation explicitly models uncertainties in the evidential model. We present results of our grid mapping pipeline for both a monocular vision setup and a stereo vision setup. Our maps are accurate and dense due to the incorporation of a disparity- or depth-based ground surface estimation into the inverse perspective mapping. We conclude by providing a detailed quantitative evaluation on real traffic scenarios from the KITTI odometry benchmark and demonstrating the advantages over other semantic grid mapping approaches.
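The evidential fusion the abstract refers to is typically realized with Dempster's rule of combination, where each grid cell carries belief masses over sets of hypotheses and measurements from different sensors are fused cell-wise. The following is a minimal illustrative sketch of that rule, not the paper's implementation: the function name `combine`, the two-class frame `{free, occupied}`, and the example mass values are all assumptions for demonstration (the paper's pipeline uses eight semantic classes).

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset hypotheses to belief masses
    (each summing to 1). Returns the normalized combined masses.
    """
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:  # agreeing evidence accumulates on the intersection
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:      # disjoint hypotheses produce conflict mass
            conflict += ma * mb
    norm = 1.0 - conflict  # redistribute the non-conflicting mass
    return {h: v / norm for h, v in combined.items()}

# Hypothetical two-class frame of discernment for one grid cell.
FREE, OCC = frozenset({"free"}), frozenset({"occupied"})
UNKNOWN = FREE | OCC  # full ignorance: mass on the whole frame

# Two camera measurements that both weakly favor "free".
m_cam1 = {FREE: 0.6, OCC: 0.1, UNKNOWN: 0.3}
m_cam2 = {FREE: 0.5, OCC: 0.2, UNKNOWN: 0.3}
fused = combine(m_cam1, m_cam2)
```

Fusing the two measurements sharpens the belief in "free" while shrinking the ignorance mass, which is the behavior that lets such a grid map express "no information yet" separately from "conflicting information".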
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.