Submitted: 22 January 2026
Posted: 23 January 2026
Abstract
Keywords:
1. Introduction
2. Materials and Methods
2.1. Samples and Study Environments
2.2. Experimental Design and Baseline Comparison
2.3. Measurement Procedures and Quality Control
2.4. Data Processing and Model Formulation
2.5. Implementation and Reproducibility
3. Results and Discussion
3.1. Task Success and Path Efficiency
3.2. Planning Time and Replanning Behavior
3.3. Performance in Interactive Navigation Scenarios
3.4. Comparison with Prior Work and Limitations
4. Conclusion
References


Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).