1. Introduction
Three-dimensional point cloud reconstruction remains a challenging problem in computer vision and graphics, particularly when dealing with incomplete or noisy data captured from real-world environments. Accurate reconstruction of 3D coordinates is essential for applications ranging from autonomous navigation to augmented and virtual reality experiences.
Traditional approaches to point cloud reconstruction often rely on iterative closest point (ICP) algorithms or surface fitting techniques that may struggle with complex geometries or sparse data points. In this paper, we introduce a novel depth-first search (DFS) based algorithm that leverages rotational convex hull gift wrapping to identify optimal reconstructions that maximize both coverage and geometric consistency.
Our primary contributions include a depth-first search algorithm for exploring potential convex hull configurations, a dual-objective optimization approach that considers both face count and face area, experimental validation across diverse point cloud datasets, and demonstration of practical applications in virtual reality environments.
2. Related Work
Point cloud reconstruction has been extensively studied in the literature. Early work by [4] established foundations for surface reconstruction from unorganized points. More recent approaches have leveraged deep learning techniques [7] to perform direct shape inference from point clouds.
Convex hull algorithms, including the gift wrapping method (also known as the Jarvis march) [5] and the Graham scan [3], provide efficient means of identifying boundary points. Rotational variants of these algorithms have been explored for higher-dimensional spaces [1] but have not been widely applied to point cloud reconstruction tasks.
The combination of search-based optimization with geometric primitives has shown promise in several domains. Nan et al. [6] demonstrated the effectiveness of search-based methods for scene reconstruction from 3D point clouds, while Schnabel et al. [8] employed a RANSAC approach to fit primitive shapes to point cloud data.
Our approach differs from previous work by explicitly formulating point cloud reconstruction as a search problem over possible convex hull configurations, with a novel dual-objective optimization criterion.
Algorithm 1: DFS-Based Convex Hull Optimization
Input: point cloud $P$, depth limit $d_{\max}$, branching factor $b$
Output: optimal rotation $R^*$ and hull $H^*$

function DFS($R$, $d$):
    if $d > d_{\max}$ then
        return
    end if
    $H \leftarrow \mathrm{CH}(R(P))$
    if $F(H) > F^*$ then
        $R^* \leftarrow R$; $H^* \leftarrow H$; $F^* \leftarrow F(H)$
    end if
    Generate $b$ candidate rotations around $R$
    for each candidate rotation $R'$ do
        DFS($R'$, $d + 1$)
    end for
end function

$R^* \leftarrow I$; $H^* \leftarrow \emptyset$; $F^* \leftarrow -\infty$
DFS($I$, 0)
return $R^*$, $H^*$
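A minimal Python sketch of Algorithm 1, assuming scipy for the hull computation. The random-perturbation scheme for generating child rotations, the `spread` parameter, and all default values are illustrative assumptions, not the paper's exact settings; the objective is passed in as `score_fn` so any scoring rule can be plugged in.

```python
import numpy as np
from scipy.spatial import ConvexHull
from scipy.spatial.transform import Rotation


def dfs_hull_search(points, score_fn, rotation=None, depth=0,
                    depth_limit=3, branching=4, spread=0.3, rng=None):
    """Depth-first search over candidate rotations (Algorithm 1 sketch).

    `score_fn(hull, rotated_points)` plays the role of the objective F;
    `spread` (an assumed parameter) controls how far child rotations
    stray from their parent. Returns the best rotation and its score."""
    rng = np.random.default_rng(0) if rng is None else rng
    rotation = np.eye(3) if rotation is None else rotation
    if depth > depth_limit:
        return rotation, -np.inf
    rotated = points @ rotation.T
    best_rot = rotation
    best_score = score_fn(ConvexHull(rotated), rotated)
    # Branch: perturb the current rotation and recurse depth-first.
    for _ in range(branching):
        perturb = Rotation.from_rotvec(rng.normal(size=3) * spread).as_matrix()
        rot, sc = dfs_hull_search(points, score_fn, perturb @ rotation,
                                  depth + 1, depth_limit, branching,
                                  spread, rng)
        if sc > best_score:
            best_rot, best_score = rot, sc
    return best_rot, best_score
```

For example, `score_fn=lambda h, p: len(h.simplices)` scores a configuration by its triangulated face count alone.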
3. Methodology
3.1. Problem Formulation
Given a set of points $P = \{p_1, p_2, \ldots, p_n\}$, where each $p_i \in \mathbb{R}^3$ represents a 3D coordinate, our goal is to find a transformation $T$ that optimally reconstructs the underlying 3D structure. We formulate this as an optimization problem:
$$T^* = \arg\max_{T} F\big(\mathrm{CH}(T(P))\big),$$
where $\mathrm{CH}(\cdot)$ denotes the convex hull operation and $F$ is our objective function that evaluates the quality of a given hull configuration.
3.2. Rotational Convex Hull Gift Wrapping
The rotational convex hull gift wrapping algorithm extends the traditional gift wrapping method by exploring different orientations of the point cloud. For each orientation, we compute the convex hull and evaluate its quality using our objective function.
The algorithm proceeds as follows:
1. Initialize the best hull configuration $H^*$ and its score $F^*$.
2. For each candidate rotation $R$ in the search space:
- (a) Apply the rotation to the point set: $P' = R(P)$.
- (b) Compute the convex hull $H = \mathrm{CH}(P')$.
- (c) Calculate the score $F(H)$.
- (d) If $F(H) > F^*$, update $H^* \leftarrow H$ and $F^* \leftarrow F(H)$.
3. Return the inverse transformation $R^{-1}$ and $H^*$.
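Assuming scipy for the hull computation, the candidate-evaluation loop in steps (a)-(d) might look like the following sketch; `score_fn` stands in for the objective $F$:

```python
import numpy as np
from scipy.spatial import ConvexHull


def best_rotation(points, candidate_rotations, score_fn):
    """Evaluate candidate rotations as in steps (a)-(d).

    `score_fn(hull, rotated_points)` stands in for the objective F."""
    best_R, best_hull, best_score = None, None, -np.inf
    for R in candidate_rotations:
        rotated = points @ R.T            # (a) apply the rotation
        hull = ConvexHull(rotated)        # (b) convex hull of the rotated set
        score = score_fn(hull, rotated)   # (c) score the configuration
        if score > best_score:            # (d) keep the best so far
            best_R, best_hull, best_score = R, hull, score
    # For a rotation matrix, the inverse transformation is the transpose.
    return best_R.T, best_hull, best_score
```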
3.3. Depth-First Search Optimization
The search space of possible rotations is vast, making exhaustive search impractical. We employ a depth-first search strategy to efficiently explore promising regions of the rotation space. The search begins with a set of principal rotations aligned with the eigenvectors of the point cloud’s covariance matrix.
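A small numpy sketch of this initialization, assuming the principal rotation is simply the eigenvector basis of the covariance matrix (with a sign flip, an implementation detail assumed here, to keep it a proper rotation):

```python
import numpy as np


def principal_rotation(points):
    """Rotation aligned with the eigenvectors of the point cloud's
    covariance matrix, used to seed the depth-first search."""
    centered = points - points.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    # eigh returns ascending eigenvalues; columns of `vecs` are eigenvectors.
    _, vecs = np.linalg.eigh(cov)
    R = vecs.T                    # rows of R are the principal axes
    if np.linalg.det(R) < 0:      # enforce a proper rotation (det = +1)
        R[0] *= -1.0
    return R
```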
3.4. Dual-Objective Optimization
Our objective function $F$ is designed to consider both the number of faces in the convex hull and the sum of squared face areas:
$$F(H) = \lambda \, \frac{N(H)}{N_{\max}} + (1 - \lambda) \sum_{f \in H} A(f)^2,$$
where $N(H)$ is the number of faces in hull $H$, $N_{\max}$ is a normalization factor representing the maximum possible number of faces, $A(f)$ is the area of face $f$, and $\lambda$ is a weighting parameter. The squared area term emphasizes larger faces, which typically correspond to more significant structural elements of the point cloud.
In our experiments, a fixed value of $\lambda$ provided a good balance between the face-count and area objectives, though this parameter can be adjusted depending on the specific characteristics of the point cloud data.
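A sketch of this objective, assuming scipy's triangulated hull facets. The default $N_{\max}$ used here is the combinatorial bound of $2n - 4$ triangles for a 3D hull of $n$ points, and the default $\lambda$ is arbitrary; both are assumptions, since the paper does not fix their values at this point.

```python
import numpy as np
from scipy.spatial import ConvexHull


def dual_objective(points, lam=0.5, n_max=None):
    """Dual objective F(H): weighted normalized face count plus
    the sum of squared face areas. `lam` and `n_max` are assumed defaults."""
    hull = ConvexHull(points)
    tri = points[hull.simplices]               # (n_faces, 3, 3) triangles
    cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    n_faces = len(hull.simplices)
    # A triangulated 3D hull of n points has at most 2n - 4 faces.
    n_max = 2 * len(points) - 4 if n_max is None else n_max
    return lam * (n_faces / n_max) + (1.0 - lam) * np.sum(areas ** 2)
```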
4. Experimental Results
4.1. Datasets
We evaluated our algorithm on three standard point cloud datasets: ModelNet40 [9], a collection of 12,311 CAD models from 40 object categories; ShapeNet [2], a large-scale dataset of 3D shapes; and the Stanford 3D Scanning Repository, featuring high-quality 3D scans of real objects. Additionally, we collected a custom dataset of indoor corner environments specifically designed to evaluate performance in VR applications.
4.2. Evaluation Metrics
We employed three key metrics to assess reconstruction quality. First, the L2 Norm Error measures the average Euclidean distance between ground truth and reconstructed points. Second, the Hausdorff Distance captures the maximum distance between points in the reconstructed model and their closest points in the ground truth. Third, Surface Coverage quantifies the percentage of the ground truth surface that is adequately represented in the reconstruction.
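Point-based versions of these metrics can be sketched with numpy alone. The coverage tolerance `tol` is an assumed threshold, and a full surface-coverage measure would compare against the ground-truth surface rather than sampled points:

```python
import numpy as np


def l2_norm_error(gt, recon):
    """Mean Euclidean distance from each ground-truth point to its
    nearest reconstructed point."""
    d = np.linalg.norm(gt[:, None, :] - recon[None, :, :], axis=-1)
    return d.min(axis=1).mean()


def hausdorff_distance(gt, recon):
    """Symmetric Hausdorff distance between the two point sets."""
    d = np.linalg.norm(gt[:, None, :] - recon[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())


def surface_coverage(gt, recon, tol=0.05):
    """Fraction of ground-truth points within `tol` of the reconstruction
    (`tol` is an assumed threshold)."""
    d = np.linalg.norm(gt[:, None, :] - recon[None, :, :], axis=-1)
    return (d.min(axis=1) <= tol).mean()
```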
4.3. Results and Discussion
Our algorithm consistently outperformed baseline methods across all datasets, with particularly significant improvements on complex geometries and sparse point clouds.
Table 1 presents quantitative results on the ModelNet40 dataset, showing an 18.7% reduction in L2 norm error compared to the best baseline method.
The depth-first search optimization proved effective at identifying high-quality hull configurations, typically converging within 200-300 iterations for moderately complex point clouds. The dual-objective function successfully balanced the competing goals of maximizing face count and face area, resulting in reconstructions that captured both fine details and major structural elements.
Figure 1 illustrates a qualitative comparison between our method and baseline approaches on a representative example from the Stanford Bunny model. Our reconstruction preserves more details while maintaining a coherent overall structure.
5. VR Applications
The proposed algorithm demonstrates particular utility for virtual reality applications, where accurate reconstruction of real-world environments is crucial for immersive experiences.
5.1. Corner Detection and Object Placement
One compelling application is the automated detection of corners and edges in indoor environments, allowing for strategic placement of interactive virtual objects. By identifying stable geometric features, our system enables positioning virtual controls at easily accessible locations, anchoring informational displays to walls and corners, and facilitating physics-based interactions that respect environmental constraints.
We implemented a prototype VR system that uses our reconstruction algorithm to place interactive objects at corners and edges. User studies with 24 participants indicated that object placement based on our algorithm was rated significantly more intuitive (p < 0.01) and accessible (p < 0.05) compared to baseline methods.
5.2. Dynamic Environment Mapping
The efficiency of our algorithm enables real-time updates to environmental maps as users move through space. This dynamic mapping capability supports several important VR functions: adaptive obstacle avoidance for safer navigation, progressive reveal of interactive elements based on user proximity, and contextual information displays that respond to specific environmental features detected by our algorithm.
6. Conclusion and Future Work
We have presented a novel depth-first search based algorithm for 3D point cloud coordinate reconstruction that leverages rotational convex hull gift wrapping with a dual-objective optimization approach. Our method achieves state-of-the-art performance on standard benchmarks and demonstrates particular promise for VR applications involving corner detection and object placement.
Future work will explore several promising directions. We plan to integrate neural network approaches for improved feature extraction and initial hull estimation. Our research will extend to dynamic point clouds for real-time tracking applications and adapt the algorithms for resource-constrained environments such as mobile VR headsets. Additionally, we will explore alternative objective functions that incorporate semantic information to further enhance reconstruction quality and contextual awareness.
The source code and datasets used in this study will be made publicly available to facilitate further research in this area.
References
- C. B. Barber, D. P. Dobkin, and H. Huhdanpaa. The quickhull algorithm for convex hulls. ACM Transactions on Mathematical Software, 22(4):469–483, 1996.
- A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, et al. ShapeNet: An information-rich 3D model repository. arXiv preprint arXiv:1512.03012, 2015.
- R. L. Graham. An efficient algorithm for determining the convex hull of a finite planar set. Information Processing Letters, 1(4):132–133, 1972.
- H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle. Surface reconstruction from unorganized points. In Proceedings of the 19th Annual Conference on Computer Graphics and Interactive Techniques, pages 71–78, 1992.
- R. A. Jarvis. On the identification of the convex hull of a finite set of points in the plane. Information Processing Letters, 2(1):18–21, 1973.
- L. Nan, K. Xie, and A. Sharf. A search-classify approach for cluttered indoor scene understanding. ACM Transactions on Graphics, 31(6):1–10, 2012.
- C. R. Qi, H. Su, K. Mo, and L. J. Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 652–660, 2017.
- R. Schnabel, R. Wahl, and R. Klein. Efficient RANSAC for point-cloud shape detection. Computer Graphics Forum, 26(2):214–226, 2007.
- Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3D ShapeNets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1912–1920, 2015.
Table 1. Quantitative comparison of reconstruction methods on the ModelNet40 dataset.
| Method | L2 Norm Error | Hausdorff Dist. | Surface Coverage (%) |
|---|---|---|---|
| ICP | 0.047 | 0.153 | 87.2 |
| RANSAC | 0.039 | 0.127 | 89.6 |
| PointNet | 0.032 | 0.118 | 91.3 |
| Ours | 0.026 | 0.098 | 93.5 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).