Submitted: 22 December 2024
Posted: 24 December 2024
Abstract
Keywords:
1. Introduction
2. Materials and Methods
2.1. Experimental Setup

2.2. Camera Settings
2.3. Data Collection
2.3.1. Capturing Techniques
2.3.2. Scanning Methods Comparison
2.4. Comparative Analysis Criteria
3. Results
3.1. Orientation Impact on 3D Reconstruction Quality
3.2. Effect of Walking Speed on 3D Reconstruction Quality
3.3. Layering Technique for Enhanced 3D Reconstruction
- All three reconstructions (one, three, and five layers) provided clear visibility of the tabletop.
- The single-layer scan produced the sharpest 3D reconstruction with minimal noise.
- The three-layer and five-layer scans, while acceptable, were slightly less sharp than the single-layer scan.
- However, the single-layer scan failed to capture the underside and sides of the table, as well as the legs of the chairs.
- Both the three-layer and five-layer scans successfully captured the underside of the table and the chair legs.
- The three-layer scan visualized the sides and underside of the table better than the five-layer scan.
- The ceiling in the single-layer scan appeared pitch black: no data was captured there, so the reconstruction algorithm filled the region with black.
- Overall, the difference between the three-layer and five-layer scans was minimal, with the three-layer scan being slightly sharper; a sketch of a multi-layer capture path follows this list.
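To make the layering idea concrete, the sketch below generates camera waypoints on one, three, or five horizontal rings around a scene. This is an illustrative reconstruction of the technique, not the authors' capture tooling: the function name, orbit radius, and height range are all assumed values.

```python
import math

def layered_orbit_waypoints(center, radius, n_layers,
                            points_per_layer=24,
                            min_height=0.5, max_height=2.0):
    """Camera positions on `n_layers` horizontal rings around `center`.

    With a single layer every camera sits at mid-height, which is why a
    one-layer scan can miss the underside of a table: no viewpoint ever
    looks up from below the tabletop. Heights and radius are assumptions
    for illustration only.
    """
    cx, cy, cz = center
    if n_layers == 1:
        heights = [(min_height + max_height) / 2.0]
    else:
        step = (max_height - min_height) / (n_layers - 1)
        heights = [min_height + i * step for i in range(n_layers)]
    waypoints = []
    for h in heights:
        for k in range(points_per_layer):
            theta = 2.0 * math.pi * k / points_per_layer
            waypoints.append((cx + radius * math.cos(theta),
                              cy + radius * math.sin(theta),
                              cz + h))
    return waypoints

# Compare viewpoint counts for the three layering setups in Section 3.3.
for layers in (1, 3, 5):
    n = len(layered_orbit_waypoints((0.0, 0.0, 0.0), 3.0, layers))
    print(f"{layers} layer(s): {n} viewpoints")
```

More layers buy vertical coverage at the cost of capture time, consistent with the finding that three layers recover the undersides a single layer misses while five layers add little beyond three.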
3.4. Optimal Scan Method
3.4.1. Summary of Methods
3.4.2. Method 5
3.4.3. Methods 8 and 10


4. Discussion
4.1. Comparison with Literature
4.2. Potential Applications
4.3. Limitations and Future Research
4.3.1. Limitations
4.3.2. Future Research
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Scoring rubric used for the comparative analysis (1 = lowest, 5 = highest):

| Criteria | Score 1 | Score 2 | Score 3 | Score 4 | Score 5 |
|---|---|---|---|---|---|
| Noise | Too much noise is present, and nothing can be seen | Too much noise is present, but the room is visible | Some noise is present; however, the outline of the room is still visible | Almost no noise is present, and the room is quite clear | No noise is present |
| Details (focus on the blue bottle, walls, whiteboard, TV, and TCI coffee mug) | The reconstruction appears pixelated, yet it is discernible that an object should be present in that location | The reconstruction is pixelated, but the object type (e.g., table, chair, paper) can still be discerned | Object types are easily identifiable | The object can be accurately identified, including brand information | Extremely detailed; there is no discernible difference between the model and the video |
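For reference, the rubric can be encoded as plain data so that scores assigned during the comparison can be cross-checked programmatically. The structure and helper below are our own illustration, with abbreviated wording, not tooling from the study.

```python
# 1-5 rubric from the table above, abbreviated; our own encoding.
RUBRIC = {
    "noise": {
        1: "too much noise; nothing can be seen",
        2: "too much noise, but the room is visible",
        3: "some noise; room outline still visible",
        4: "almost no noise; room quite clear",
        5: "no noise",
    },
    "details": {
        1: "pixelated; an object is only discernibly present",
        2: "pixelated, but the object type can be discerned",
        3: "object types easily identifiable",
        4: "objects identifiable down to brand information",
        5: "no discernible difference between model and video",
    },
}

def describe(criterion: str, score: int) -> str:
    """Return the rubric wording behind a criterion/score pair."""
    return RUBRIC[criterion][score]

print(describe("details", 4))  # -> objects identifiable down to brand information
```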

Noise and detail scores assigned to each scan method:

| Scan Method | Noise | Details |
|---|---|---|
| 1 | 3 | 4 |
| 2 | 2 | 3 |
| 3 | 2 | 2 |
| 4 | 4 | 3 |
| 5 | 4 | 4 |
| 6 | 3 | 3 |
| 7 | 2 | 4 |
| 8 | 4 | 4 |
| 9 | 2 | 3 |
| 10 | 4 | 5 |
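One simple way to turn these per-criterion scores into an overall ranking is an equal-weight sum per method. The weighting is our assumption; the paper's own selection in Section 3.4 may weigh the criteria differently.

```python
# (noise, details) scores per scan method, copied from the table above.
scores = {
    1: (3, 4), 2: (2, 3), 3: (2, 2), 4: (4, 3), 5: (4, 4),
    6: (3, 3), 7: (2, 4), 8: (4, 4), 9: (2, 3), 10: (4, 5),
}

# Equal weights for noise and details -- an assumption, not a rule
# prescribed by the rubric itself.
ranked = sorted(scores.items(),
                key=lambda kv: kv[1][0] + kv[1][1],
                reverse=True)
for method, (noise, details) in ranked[:3]:
    print(f"Method {method}: noise={noise}, details={details}, "
          f"total={noise + details}")
# Methods 10, 5, and 8 lead, matching those singled out in Section 3.4.
```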
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).