Recently, new semantic segmentation and object detection methods have been proposed for direct processing of three-dimensional (3D) LiDAR sensor point clouds. LiDAR can produce
highly accurate and detailed 3D maps of natural and man-made environments and is used for sensing
in many contexts due to its ability to capture more information and its robustness to dynamic changes
in the environment. In addition, the cost of LiDAR sensors has decreased in recent years, which is
an important factor for many application scenarios. The challenge with 3D LiDAR sensors such as the
Velodyne HDL-64E S2 is that they output large volumes of 3D data, up to 1,300,000 points per second,
which is difficult to process in real time with the complex algorithms and models required for efficient
semantic segmentation. Most existing approaches are either only suitable for relatively small point
clouds or rely on computationally intensive sampling techniques to reduce their size. As a result,
most of these methods cannot run in real time in realistic field robotics scenarios, which limits
their practical use. Systematic point selection is a possible way to reduce
the amount of data to be processed, but although it is memory and computationally efficient, it selects
only a small subset of points, which may cause important geometric features to be missed. To address this problem, we propose a new approach to semantic segmentation in forestry that uses a systematic sampling method in which the local neighbors of each point are retained to preserve geometric detail. Our approach processes up to 1 million points in a single pass and outperforms the state of the art in efficient semantic segmentation on large datasets such as Semantic3D. We also present a preliminary study of segmentation performance with LiDAR-only data, i.e., intensity values from the LiDAR sensor without RGB values, for semi-autonomous robot perception.
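To illustrate the general idea of systematic sampling with neighbor retention, the following is a minimal sketch, not the authors' implementation: every `stride`-th point is selected as an anchor, and each anchor's k nearest neighbors are kept as well, so that local geometric structure survives the downsampling. The function name, parameters, and the brute-force nearest-neighbor search are all illustrative assumptions; a real pipeline would use a spatial index such as a KD-tree at this scale.

```python
import numpy as np

def systematic_sample_with_neighbors(points, stride=100, k=16):
    """Systematic sampling that retains each anchor's local neighborhood.

    Illustrative sketch only: selects every `stride`-th point as an
    anchor, then keeps the k nearest neighbors of every anchor so that
    local geometric detail around each selected point is preserved.
    """
    # Systematic selection: keep every `stride`-th point as an anchor.
    anchors = points[::stride]
    # Brute-force squared distances from each anchor to all points
    # (a KD-tree would replace this for million-point clouds).
    d2 = ((anchors[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    # Indices of the k nearest points per anchor (unordered within the k).
    nn = np.argpartition(d2, k, axis=1)[:, :k]
    # Union of all local neighborhoods, without duplicates.
    keep = np.unique(nn.ravel())
    return points[keep], keep

# Usage sketch on a synthetic cloud.
rng = np.random.default_rng(0)
cloud = rng.random((20_000, 3)).astype(np.float32)
subset, idx = systematic_sample_with_neighbors(cloud, stride=200, k=16)
```

Compared with plain systematic selection, the retained subset is larger (up to anchors × k points) but carries the local context each downstream per-point feature extractor needs.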