Preprint Article, Version 1 (not peer-reviewed; preserved in Portico)

PointNet++ Network with Contextual Feature and Mutual Learning for Point Sets

Version 1 : Received: 12 March 2024 / Approved: 12 March 2024 / Online: 13 March 2024 (10:18:11 CET)

How to cite: Hu, X.; Xie, X. PointNet++ Network with Contextual Feature and Mutual Learning for Point Sets. Preprints 2024, 2024030743. https://doi.org/10.20944/preprints202403.0743.v1

Abstract

Object classification and part segmentation are active research topics in computer vision, and a considerable number of studies have applied deep learning to 3D point clouds. However, the sparsity of point clouds makes effective feature learning challenging. Recently, a variety of Transformers have been adopted to improve point cloud processing and have shown great potential; nevertheless, stacking large numbers of Transformer layers tends to incur substantial computational and memory costs. PointNet++ is one of the most influential neural architectures for point cloud understanding. Although its accuracy has been largely surpassed by more recent networks, this does not mean that PointNet++ has exhausted its potential. This paper therefore offers two major contributions that significantly improve PointNet++ performance. First, we introduce a novel contextual feature extraction (CFE) block that substantially enhances the feature extraction capability of the PointNet++ network. Second, to further enhance feature fusion, we seamlessly integrate a mutual learning (ML) block into the network architecture. By embedding these two blocks within each layer of the network, we not only enrich the network's functionality but also make it more robust and adaptable. Experiments were conducted on the S3DIS (6-fold cross-validation) and ModelNet40 datasets, achieving 86.5% and 92.7% accuracy, respectively, which shows that our model is comparable to, or even better than, most existing methods for classification and segmentation tasks.
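The abstract describes embedding a CFE block and an ML block into each PointNet++ layer but gives no implementation details. The following is a minimal, hypothetical PyTorch sketch of how such blocks could be wrapped around a PointNet++-style set-abstraction stage. Every module name, layer size, and the gated-fusion rule here are assumptions made for illustration only, not the authors' actual design.

```python
# Hypothetical sketch: a CFE block (local self-attention for context) and an
# ML block (gated fusion of two branches) inserted into a PointNet++-style
# set-abstraction stage. All names and choices are illustrative assumptions.
import torch
import torch.nn as nn


class ContextualFeatureExtraction(nn.Module):
    """Assumed CFE block: single-head self-attention over the points of a
    local group, so each point feature is enriched with group context."""

    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Linear(channels, channels)
        self.k = nn.Linear(channels, channels)
        self.v = nn.Linear(channels, channels)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_points, channels)
        attn = torch.softmax(
            self.q(x) @ self.k(x).transpose(1, 2) / x.shape[-1] ** 0.5, dim=-1
        )
        return self.norm(x + attn @ self.v(x))  # residual + contextual term


class MutualLearning(nn.Module):
    """Assumed ML block: two feature branches exchange information through a
    learned gate before being merged."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * channels, channels), nn.Sigmoid())

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([a, b], dim=-1))
        return g * a + (1.0 - g) * b  # convex combination of the two branches


class EnhancedSetAbstraction(nn.Module):
    """Set-abstraction stage with CFE and ML applied after the shared MLP."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_channels, out_channels), nn.ReLU(),
            nn.Linear(out_channels, out_channels), nn.ReLU(),
        )
        self.cfe = ContextualFeatureExtraction(out_channels)
        self.ml = MutualLearning(out_channels)

    def forward(self, grouped: torch.Tensor) -> torch.Tensor:
        # grouped: (batch, num_groups, group_size, in_channels)
        local = self.mlp(grouped)                # per-point local features
        context = self.cfe(local.flatten(0, 1))  # contextual enrichment
        context = context.view_as(local)
        fused = self.ml(local, context)          # mutual-learning fusion
        return fused.max(dim=2).values           # max-pool over each group


if __name__ == "__main__":
    stage = EnhancedSetAbstraction(in_channels=6, out_channels=64)
    dummy = torch.randn(2, 128, 32, 6)  # 2 clouds, 128 groups of 32 points
    print(stage(dummy).shape)           # torch.Size([2, 128, 64])
```

The sketch keeps the standard PointNet++ pattern (shared MLP followed by max-pooling over each local group) and simply interposes the two assumed blocks between those steps, which is one plausible reading of "embedding these two blocks within each layer of the network."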

Keywords

point cloud; part segmentation; deep learning; self-attention; object classification

Subject

Computer Science and Mathematics, Computer Vision and Graphics
