Preprint
Review

This version is not peer-reviewed.

A Survey-Driven Framework for Autonomous Mobile Robot Navigation Systems: The Perception–Cognition–Operation (PCO) Approach

Submitted:

05 March 2026

Posted:

06 March 2026


Abstract
This paper introduces a novel theoretical framework for classifying Autonomous Mobile Robots (AMRs) into three hierarchical layers: Perception, Cognition, and Operation. Unlike prior hardware-centric taxonomies, our approach, grounded in a structured review of seminal works, foundational methodologies, and state-of-the-art advances, explicitly integrates locomotion mechanisms (wheeled, legged), application domains (industrial, agricultural), and autonomy levels with navigation strategies. The framework unifies terrestrial navigation techniques into a cohesive taxonomy, clarifying modular boundaries and interdependencies. Serving as both a conceptual guide and educational tool, it empowers researchers to evaluate trade-offs in sensor configurations, decision-making algorithms, and trajectory execution under real-world constraints. A comparative analysis positions this framework against established navigation architectures, highlighting its role as a high-level reference design for modular implementations. By bridging theoretical principles with system optimization, the framework enhances interoperability across robotic platforms. Ultimately, this work delivers a practical design atlas, structuring the end-to-end pipeline of autonomous navigation to guide researchers and practitioners in selecting algorithms suited to their specific robotic platforms and mission requirements.
Subject: 
Engineering - Other

1. Introduction

Autonomous Mobile Robots (AMRs) are becoming increasingly essential across various sectors, helping humans perform complex, hazardous, or repetitive tasks. Originally developed to improve productivity and safety in industrial settings, and initially focused on path planning for industrial manipulators [1], AMRs now use advanced algorithms to navigate without collisions. This expansion enables their operation in diverse and dynamic environments, extending beyond industrial settings [2,3]. Despite considerable advances, existing AMR navigation strategies are often focused on specific domains: terrestrial, aerial, or aquatic. These strategies typically adopt layered approaches from perception to control, each tailored to distinct operational environments, such as industrial settings [4], uneven terrains [5,6], and underwater exploration [7,8]. There is no unified framework that can be seamlessly integrated across all domains, a gap that this work aims to address.
Emerging modular self-reconfigurable robots, which can change morphology to tackle diverse tasks and terrains [9], further underline the need for platform-agnostic navigation frameworks such as the Perception–Cognition–Operation stack proposed here.
By adopting modular packages, the proposed classification improves the reusability and interoperability of components, facilitating easier integration across all domains of autonomous navigation [10,11].
This paper introduces a comprehensive classification system aimed at streamlining the various aspects of autonomous navigation. The system acts as a foundational framework, organizing the intricate relationships between phases, modules, and layers. It improves the comprehension and execution of autonomous navigation strategies, offering clear insights and, ultimately, a complete set of tools for practitioners to choose the best solution for a wide range of operational scenarios.
Here, we place particular emphasis on terrestrial robots, while retaining the applicability of the classification to broader AMR domains. We also explore how autonomy levels, inspired by the SAE J3016 guidelines, intersect with sensor selection, path-planning approaches, and control mechanisms.
Paper organization. Section 2 introduces a multi-axis taxonomy of Autonomous Mobile Robots, spanning locomotion types, application domains and autonomy levels. Section 3 details the structured literature screening strategy and corpus construction criteria adopted to assemble the reference base used in this study. Building on this, Section 4 classifies ground robots by environment (indoor, hybrid, outdoor) and mission context. Section 5 presents the proposed Perception–Cognition–Operation (PCO) framework, while Section 6 surveys the key sensors and algorithms that populate each layer. Section 7 critically benchmarks mainstream navigation frameworks against the PCO architecture, highlighting relative strengths, gaps and integration pathways. Emerging trends, such as multi-modal locomotion, collaborative mapping and AI-driven decision-making, are explored in Section 8. Finally, Section 9 summarizes the main findings and outlines future research avenues to advance autonomous navigation.

2. Autonomous Mobile Robots

Autonomous mobile robots (AMRs) are conventionally analyzed with respect to three interdependent axes: locomotion mechanisms, application domains, and autonomy levels. This tripartite framework is grounded in foundational references like the Springer Handbook of Robotics [12], which provides detailed typologies based on motion strategy and system architecture. Jahn et al. [13] extend this framework by linking robot capabilities with implementation requirements, while Ben-Ari and Mondada [14] emphasize how operational environments, structured (predictable) versus unstructured (dynamic), directly influence system design and functional expectations. Together, these perspectives enable a multidimensional analysis that guides sensor placement, control strategies, and algorithmic complexity, as detailed in Section 6.

2.1. Locomotion Mechanisms

The physical mode of locomotion shapes how robots interact with the environment and determines their suitability for specific terrains and missions. Each type reflects trade-offs between mobility, energy consumption, and mechanical complexity.
Wheeled Systems:
  • Differential Drive: Common in structured indoor spaces such as factories and warehouses, offering mechanical simplicity and efficient planar navigation [15].
  • Omnidirectional Wheels: Provide enhanced maneuverability in confined spaces, supporting applications in logistics and healthcare settings [16].
  • Tracked Bases: Used in rugged or mixed terrains such as search-and-rescue missions, prioritizing stability over speed [17].
Legged Systems:
  • Quadrupeds: Enable dynamic gait control over unstructured ground, with notable deployment in defense and exploration tasks [18].
  • Bipeds: Retain anthropomorphic benefits but are generally confined to research laboratories due to current stability limitations [19,20,21].
  • General advantages: On highly rugged terrain, legs enable true point-to-point mobility, clearing steps and minimizing soil compaction, albeit at the price of higher energy consumption and more complex control [22].
Biomimetic Hybrids:
  • Octopod-Inspired: Combine rolling and climbing locomotion to traverse debris-laden or obstacle-rich environments [23].
  • Articulated Configurations: Snake-like and inchworm robots address confined inspection tasks, such as pipeline assessment or offshore infrastructure monitoring [24,25].
As noted by Sostero [26], wheeled configurations dominate indoor and flat industrial settings, while legged and hybrid systems provide more advantages in uneven or semi-structured outdoor environments.
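The kinematic trade-offs of the dominant wheeled category can be made concrete with the standard unicycle model for a differential-drive base. The sketch below (hypothetical function and parameter names, not tied to any cited platform) converts left/right wheel speeds into a pose update:

```python
import math

def diff_drive_step(x, y, theta, v_left, v_right, wheel_base, dt):
    """One Euler-integration step of the unicycle model for a
    differential-drive robot (illustrative parameter names)."""
    v = (v_right + v_left) / 2.0             # forward velocity
    omega = (v_right - v_left) / wheel_base  # turning rate
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Equal wheel speeds yield straight-line motion along the heading;
# opposite speeds yield rotation in place.
x, y, theta = diff_drive_step(0.0, 0.0, 0.0, 1.0, 1.0, 0.5, 1.0)
```

The same two-input structure (linear and angular velocity) is what makes differential drive attractive for planar indoor navigation: planners can reason directly in (v, omega) space.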

2.2. Application Domains

Robotic deployment varies significantly by sector, depending on mission goals, environmental complexity, and system constraints. The following categories reflect the most prominent areas of implementation:
Industrial Automation:
  • Structural Inspection: Robots reduce human exposure to risk by automating infrastructure evaluations [27].
  • Smart Logistics: Material handling systems improve warehouse throughput and efficiency [28].
Agricultural Robotics:
  • Precision Farming: RTK-GNSS-based platforms enable centimeter-level crop row alignment and optimized planting [29].
  • Selective Harvesting: Vision-guided mechanisms automate yield-enhancing tasks like fruit detection and grasping [30].
Defense & Rescue:
  • Subterranean Exploration: Autonomous mapping in GNSS-denied environments supports tactical reconnaissance [31].
  • Hazardous Response: Robots equipped with adaptive control schemes assist in the recovery of casualties and scanning of disaster areas [32].
These sectors exemplify the structured/unstructured operational dichotomy highlighted in the taxonomy by Ben-Ari and Mondada [14], and are further explored in Section 4.

2.3. Autonomy Metrics

Autonomy reflects the ability of a robot to operate with minimal external intervention. It is assessed using both theoretical models and practical validation strategies:
  • Theoretical Frameworks: ALFUS [33] decomposes autonomy into mission complexity, environmental difficulty, and human independence. The SQuAL model [34] provides a structured autonomy rating based on decision granularity and developmental maturity.
  • Empirical Metrics: Pittman [35] introduces a metric based on edit distance to track real-time compliance with SAE J3016 levels. Hwang [36] expands this by evaluating autonomy “enablers,” offering new techniques to assess architectural robustness independently of the target system.
  • Human-Robot Collaboration: Beer et al. [37] propose the LORA (Levels of Robot Autonomy) scale, which maps autonomy onto the classic SENSE-PLAN-ACT cycle, distinguishing supervisory from full autonomy. Gervasi et al. [38] improve this by integrating adaptivity, training, and decision authority into a multidimensional framework that assesses collaborative fluency.
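Edit-distance-based compliance metrics of the kind Pittman proposes [35] rest on the classical Levenshtein distance between a planned and an executed behavior sequence. A minimal dynamic-programming sketch of that underlying idea (illustrative action names, not the cited metric itself):

```python
def edit_distance(a, b):
    """Levenshtein distance between two sequences via dynamic programming."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                        # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j                        # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[m][n]

planned  = ["sense", "plan", "act", "act"]
executed = ["sense", "act", "act"]   # one planned step was skipped
deviation = edit_distance(planned, executed)
```

A larger deviation between the expected and observed action traces then signals lower compliance with the declared autonomy level.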

2.4. Synthesis: Taxonomy-Implementation Nexus

The interaction among locomotion, application, and autonomy becomes evident in robotic system design, further discussed in the Perception-Cognition-Operation framework (Section 5):
  • Locomotion choice influences sensor layout and physical capabilities.
  • Application context defines mapping, localization, and mission profiles.
  • Autonomy level impacts planning architecture and the required adaptability.
Although autonomy levels remain a common yardstick for robot intelligence, the first design driver is usually where the robot must operate. We therefore adopt a three-layer perspective: Environment, Application, and Locomotion (E-A-L).
Autonomy remains an attribute of the Application level, as it depends on the cognitive abilities required for each task (see Figure 1).
Figure 1. Dual-axis pyramid that links design layers to the information flow in autonomous mobile robots.
Vertically, each layer is influenced by the one below: an indoor scenario favors dense vision-based localization, a cleaning mission calls for mid-level autonomy and task-oriented planning, and the selected skid-steer chassis determines the kinematic controller. Horizontally, perception supplies the maps and state estimates required by cognition, which generates waypoints and commands executed by operation. Taken together, the two axes reconcile physical design with the algorithmic flow, serving as a reference map for the review developed in Section 4.

3. Methodology for Structured Literature Screening

To support the development of the proposed PCO framework, this study adopted a structured, concept-driven literature review approach. The reference base was assembled through searches in major academic sources, including ScienceDirect, IEEE Xplore, arXiv, Scopus, Web of Science, and Google Scholar, combined with iterative manual selection, citation chaining from relevant works, and the inclusion of seminal references considered foundational to the field.
The screening process prioritized studies directly related to autonomous navigation for ground mobile robots, with aerial and underwater robotics excluded when outside the scope of the proposed framework. In addition to topical relevance, the selected works were curated to preserve a balanced corpus containing seminal contributions, well-established and highly cited studies, and recent state-of-the-art publications representative of current research trends.
The final reference corpus comprises 263 works covering wheeled, tracked, legged, and biomimetic systems. This corpus supports the comparative discussion and the mapping of robotic systems onto the E-A-L framework introduced in Section 2, which in turn underpins the PCO synthesis developed throughout the following sections.
Each subsequent subsection discusses the prevailing sensing, planning, and control strategies for indoor, hybrid, and outdoor missions, highlighting how application requirements modulate autonomy and ultimately influence the choice of locomotion design.

4. Bibliographic Review and Classification of Mobile Robots

In this section, we categorize ground-based mobile robots, often referred to as Unmanned Ground Vehicles (UGVs), by reviewing their sensor configurations, algorithmic approaches, locomotion methods, and existing robotics frameworks. We revisit and extend previous classifications through the unified Environment-Application-Locomotion (E-A-L) perspective, structured clearly into three concentric layers:
(i) Environment Layer - the outer boundary conditions
  • Indoor: highly structured and controlled;
  • Hybrid: partially structured, partially unstructured;
  • Outdoor: largely unstructured and open.
(ii) Application Layer - the mission context and its implicit autonomy demands. Grouping real-world use-cases (industrial inspection, agriculture, urban delivery, etc.) by environment clarifies the sensing, control, and navigation solutions most often adopted in practice.
(iii) Locomotion Layer - the mechanical means selected to meet those demands. Although the overall diagram (Figure 2) lists wheeled, tracked and legged options, detailed mechanical design lies beyond the scope of this review and does not modify the environment-centered taxonomy above.
Figure 2. Sunburst Chart of Mobile Robots: Distribution by environment, application, and locomotion types (wheeled, tracked, legged). Estimations based on the synthesis of the selected reference corpus, industry trends (IFR World Robotics Report [39]), and comparative analysis of locomotion features [40].

4.1. Indoor Environments

Indoor environments generally offer predictable boundaries, such as walls and corridors, yet present challenges, including narrow passageways, limited GNSS signals, and human-robot interaction. Within this category, we identify several key application domains:

4.1.1. Industrial Inspection

Robotic systems designed for industrial inspection often navigate confined metallic surfaces and vertical structures (e.g., tanks and pipelines). Karelics [41], for example, offers radar-equipped robots and autonomy software that detect structural anomalies in metal surfaces. Dissanayake [42] introduced a chain-climbing robot equipped with ultrasonic NDT sensors, enhancing inspection safety and efficiency. Meanwhile, Bui et al. [27] proposed a steel-bridge climbing platform with a magnetic array-based distance sensor that enables inch-worm adhesion and autonomous navigation. Furthermore, Eldemiry et al. [25] presented an autonomous solution using 3D LiDAR and ground-penetrating radar (GPR) to inspect bridges, achieving both surface and subsurface mapping for damage detection (e.g. corrosion). Although these structures can be outdoors, the navigation itself relies on fully structured surfaces and does not require external signals (e.g. GPS). Since we focus on algorithms rather than mechanical or environmental effects (like adverse weather), we classify this solution as “Indoor” for algorithmic purposes.

4.1.2. Logistics

In warehouses and large factories, wheeled platforms are widely used for material handling and delivery. Automated Guided Vehicles (AGVs) and Autonomous Transport Robots (ATRs) rely on predefined pathways and short-range sensors. Gueaieb and Miah [43] introduced an RFID-based fuzzy logic system for corridor navigation, while Montiel et al. [44] fused artificial potential fields (APF) with a bacterial evolutionary algorithm to enhance local obstacle avoidance. Logistics applications are also not restricted to traditional indoor settings; for instance, Shen et al. [45] proposed a sensor-network-based navigation system for baggage-handling robots in international airports, improving efficiency in complex indoor environments. Recent innovations in indoor logistics include air-bearing-based robotic platforms, significantly reducing wheel load and friction, thus improving efficiency and lowering maintenance requirements in heavy-duty internal logistics operations [46].

4.1.3. Service & Assistance

Indoor service robots deployed in corporate or household environments require precise navigation and robust human-robot interaction. Recent advances focus on sensor fusion, adaptive planning, and multi-modal perception for improved assistance capabilities.
For household applications, autonomous vacuum cleaners have become widely adopted, with models like Roomba and Bobsweep leading the market. Recent research has explored low-cost mapping algorithms for these robots, improving obstacle avoidance and efficiency [47].
In structured indoor environments, Toyota’s Human Support Robot (HSR) [48] integrates visual and force sensors to assist individuals with mobility impairments, performing tasks such as fetching objects and monitoring environments. Similarly, mobile hospital robots like Aethon’s TUG [49] automate logistics in medical facilities, handling medication delivery, linen transport, and lab sample distribution with autonomous elevator operation.
Meanwhile, advances in bipedal robotics have enabled service robots to operate in more complex and dynamic settings. Karkowski et al. [50,51] proposed an improved A* footstep planner integrated with 3D segmentation and prediction maps for real-time adaptation in cluttered indoor scenes. Similarly, [52] applied a self-adaptive differential evolution approach to optimize the static force capability of humanoid robots, improving their ability to push and pull objects in constrained environments. Their method explores kinematic redundancy and torque distribution to maximize interaction forces while ensuring stability, highlighting the potential of bioinspired optimization in legged robot control. Ruscelli et al. present COMAN+, a torque-controlled humanoid that combines compliant series-elastic actuators with the XBot2 middleware and the CartesI/O whole-body controller, enabling safe manipulation and bipedal locomotion in indoor assistance tasks [53]. One of the most advanced examples is Boston Dynamics’ Atlas, which leverages whole-body control and deep reinforcement learning to navigate complex terrains and perform dexterous manipulation tasks. Although primarily a research platform, Atlas showcases the future of humanoid robots in assistance and industrial applications. These modern service robots integrate autonomy and human collaboration, enhancing efficiency in structured indoor spaces.
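Footstep planners of the kind cited above build on classical A* search over a discretized environment. A minimal grid-based A* sketch (hypothetical 4-connected occupancy grid with a Manhattan heuristic, not the cited footstep planner):

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* search on a 4-connected occupancy grid (0 = free, 1 = blocked),
    with the admissible Manhattan-distance heuristic."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()                 # tiebreaker keeps heap entries comparable
    open_set = [(h(start), next(tie), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue                        # already expanded via a cheaper route
        came_from[cur] = parent
        if cur == goal:                     # reconstruct path by walking parents
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(
                        open_set, (ng + h((nr, nc)), next(tie), ng, (nr, nc), cur))
    return None                             # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))          # routes around the blocked middle row
```

Footstep variants replace grid cells with discrete stance transitions and fold balance constraints into the successor generation, but the search skeleton is the same.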

4.2. Outdoor Environments

Outdoor environments are diverse and often unstructured, which poses significant challenges for perception, planning, and control. Here, we identify several prominent application domains.

4.2.1. Agriculture

Field robots in agriculture perform tasks such as planting, weeding, and harvesting in semi-structured farmlands. These robots typically rely on RTK-GNSS for precise positioning and LiDAR for obstacle detection. Redhead et al. [28] introduced the AgBot concept, a fleet of cooperative agricultural robots designed for large-scale autonomous farming. These lightweight, unmanned machines are developed to mitigate soil compaction and improve productivity through distributed operations.
Li et al. [29] reviewed autonomous agricultural vehicle guidance systems, highlighting the integration of GPS, machine vision, and inertial sensors for robust navigation. Meanwhile, Lopes et al. [30] applied deep learning-based instance segmentation using YOLOv8 for enhanced autonomous navigation in sugarcane fields, demonstrating significant improvements in obstacle avoidance and route optimization.
These advances demonstrate the increasing role of robotic systems in precision agriculture, leveraging sensor fusion and AI to enhance field operations while minimizing environmental impact.

4.2.2. Surveillance & Defense

In surveillance and defense applications, terrestrial robots must operate in hostile outdoor environments characterized by dynamic obstacles and harsh conditions. Quadruped platforms, such as Boston Dynamics’ Spot, employ multi-sensor fusion to navigate unpredictable terrains and detect potential threats. In this context, precise force modeling and torque distribution have been shown to enhance dynamic stability [54]. Moreover, hexapod designs that leverage multiple contact points to improve stability have been explored to achieve robust locomotion in unstructured environments [55,56].
Recent advances in land-robot technologies further underscore the importance of integrating cognitive systems into military and defense applications. As demonstrated by Sanaullah et al. [18], the incorporation of advanced sensor arrays, including high-definition cameras, thermal sensors, and LiDAR, enables ground robots to autonomously generate detailed environmental maps, recognize threats, and perform real-time decision-making. Complementing these developments, Ghute et al. [57] presented a design for a military surveillance robot that uses Arduino and Raspberry Pi-based platforms, coupled with a suite of sensors for enhanced video transmission and obstacle detection. Together, these innovations highlight the synergy between robust mechanical design and cognitive integration, significantly enhancing the operational reliability and situational awareness of terrestrial robotic systems in complex combat scenarios.

4.2.3. Remote Exploration

Exploration in remote regions requires robots to handle extreme and heterogeneous conditions. In polar terrains, advanced sensor fusion and specialized mobility strategies are essential to navigate icy, GNSS-challenged environments, addressing issues such as sensor freezing and energy management, as demonstrated by the polar rover developed by He et al. [58]. Ant-inspired navigation approaches have also shown promise in extreme outdoor settings; for example, Dupeyroux et al. [59] demonstrated that a hexapod robot can achieve precise homing using minimalistic, bioinspired sensors. Long-duration explorations in challenging environments have been validated by platforms such as Nomad [60], which successfully traversed the Atacama Desert. Furthermore, in remote areas characterized by rugged terrain, such as those found in the Amazon forest, robots must maintain effective traction control. For example, the Hybrid Environmental Robot (RAH) described by Silva [61] is an amphibious mobile robot designed primarily for terrestrial navigation, demonstrating significant adaptability on rough and heterogeneous ground. Moreover, innovative mobility strategies have been developed for polar operations. Luo et al. [62] introduced a polar robot that uses scalable wing sailing and snowboarding to enhance energy-efficient mobility, and Lever et al. [63] presented a polar rover equipped with ski and track mechanisms for autonomous GPR surveys.

4.2.4. Urban Autonomous Vehicles

Autonomous vehicles operating on highways require high-precision localization, real-time perception, and advanced motion planning to ensure safety and efficiency in high-speed environments. These systems integrate multi-sensor fusion, including GNSS, LiDAR, Radar, and vision-based methods, to achieve robust localization and obstacle avoidance.
Kuutti et al. [64] conducted a comprehensive review of localization techniques for autonomous vehicles, highlighting the importance of multi-sensor fusion and cooperative localization strategies. Similarly, Kuwata et al. [65] developed real-time motion planning algorithms for autonomous vehicles, ensuring safe and efficient trajectory execution in dynamic highway scenarios.
Li et al. [66] explored multi-vehicle cooperative local mapping, demonstrating how vehicle-to-vehicle (V2V) communication improves situational awareness and mapping accuracy, crucial for autonomous platooning systems. Yang et al. [67] introduced deep convolutional control strategies, significantly improving AVs’ ability to perform real-time path planning and collision avoidance.
These advances collectively contribute to the development of autonomous trucking, highway platooning, and cooperative driving systems, enabling safer and more efficient transportation solutions.
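At the core of the multi-sensor localization pipelines surveyed above is the recursive weighting of estimates by their uncertainty. A one-dimensional Kalman-style update (illustrative variances, e.g. odometry fused with GNSS along one axis, not a full vehicle localization stack) captures the principle:

```python
def fuse(est, var_est, meas, var_meas):
    """One scalar Kalman update: blend a prior estimate with a new
    measurement, each weighted by its inverse variance."""
    k = var_est / (var_est + var_meas)   # Kalman gain in [0, 1]
    fused = est + k * (meas - est)       # pulled toward the more certain source
    fused_var = (1.0 - k) * var_est      # uncertainty always shrinks
    return fused, fused_var

# Odometry says x = 10.0 m (variance 4.0); GNSS says x = 12.0 m (variance 1.0).
x, var = fuse(10.0, 4.0, 12.0, 1.0)
```

The fused estimate lands nearer the lower-variance GNSS reading, and the resulting variance is smaller than either input's, which is the property that makes stacking LiDAR, Radar, and vision measurements worthwhile.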

4.3. Hybrid Environments

Hybrid environments exhibit characteristics of both structured and unstructured settings, requiring robots to adapt across varying conditions. Two notable application domains in such contexts are:

4.3.1. Rescue Operations

Rescue operations often require ground robots capable of quickly accessing and traversing unstable, irregular terrains under harsh, unpredictable conditions. Hybrid robotic systems for rescue tasks combine multiple modes of locomotion to maximize mobility in environments that blend indoor and outdoor features. For example, kangaroo-inspired robots use hopping mechanisms to achieve dynamic, energy-efficient control in challenging terrains [68,69], while octopod systems that integrate walking, rolling, and climbing modes offer versatile solutions to access debris-laden or partially collapsed areas [23,70]. These advances highlight the potential of hybrid platforms to improve rescue operations in scenarios where conventional wheeled or tracked vehicles might be inadequate.
Early real-world deployments, such as those during the World Trade Center disaster, demonstrated the feasibility of using teleoperated, tracked systems to penetrate narrow voids and navigate complex rubble environments, providing critical situational awareness to rescue teams [71]. Comprehensive surveys have mapped the evolution of rescue robotics [72], emphasizing improvements in locomotion, sensor integration, and human-robot interaction. Moreover, specialized studies reveal tailored solutions: mine rescue robots designed for confined spaces and hazardous environments [73] and integrated systems developed in Japan that combine climbing and tracked locomotion to overcome obstacles in urban disaster scenarios [74]. Together, these developments underscore the promise of modern tracked platforms, often enhanced with climbing capabilities, as decisive, energy-efficient tools in terrestrial rescue operations.

4.3.2. Delivery Systems

Terrestrial delivery robots are increasingly deployed in hybrid environments, such as beaches, shopping malls, and airports, where they must navigate semi-structured terrains and dynamic human interactions. In coastal tourist areas, solar-powered UGVs deliver chilled beverages and pre-packaged meals, using LiDAR and sand-resistant locomotion to efficiently avoid obstacles [75], while integrating customer satisfaction models in route optimization frameworks for tourist hubs [76].
In indoor and outdoor spaces, such as airport terminals and large malls, UGVs leverage multi-robot coordination and real-time pedestrian tracking to dynamically adjust routes for efficient parcel delivery, while ensuring regulatory compliance (e.g. speed limits and GDPR through encrypted data transmission and teleoperation safeguards) [77,78]. Furthermore, Wei et al. [79] developed a deep reinforcement learning approach with heuristic corrections for UGV navigation that significantly enhances collision avoidance and path efficiency in dynamic, unstructured settings, a key improvement for robust autonomous logistics in crowded or hybrid indoor-outdoor areas.
Recent advances in terrain-specific designs, such as reinforced treads for sandy conditions, silent propulsion for indoor use, and predictive crowd navigation algorithms, further enable UGVs to bridge last-mile logistics gaps where traditional vehicles face access restrictions or spatial constraints [80].

4.3.3. Subterranean Exploration

Subterranean exploration embodies an inherent duality. On the one hand, it shares several characteristics commonly associated with indoor environments: confined spaces, lack of GPS, and limited illumination. On the other hand, it involves the irregular terrain and topographic variability often found in unstructured outdoor settings. This confluence of features demands robust navigation and mapping strategies that integrate advanced sensor-fusion techniques and adaptive algorithms. The significance of these subterranean scenarios was highlighted by the DARPA (Defense Advanced Research Projects Agency) Subterranean (SubT) Challenge, which accelerated the development of technologies capable of operating under adverse conditions, such as dust, smoke, and poor lighting, thus compelling robotics teams to innovate in terms of mobility, perception, and communication.
Recent studies underscore this integrative perspective. For example, Chung et al. [81] discuss systems engineered to endure degraded subterranean conditions, whereas Tranzatto et al. [31] provide a comprehensive overview of the CERBERUS system. This framework employs hybrid platforms, including the quadrupedal ANYmal C SubT, adapted for humid and dusty environments, and tracked units designed to extend network coverage, to conduct exploration, mapping, and artifact detection. Together, these solutions highlight how combining diverse mobility modes and advanced sensors can effectively overcome the distinctive obstacles posed by subterranean domains, paving the way for continued advances in autonomous robotics.

4.4. Locomotion and Upper Layers Overview

Mobile robots exhibit various locomotion modes: wheeled, tracked, legged, and biomimetic, which are intrinsically linked to their operating environments, applications, and specific tasks. Figure 2 consolidates insights from the structured synthesis of a 263-reference corpus, together with trend observations from industry reports such as the International Federation of Robotics World Robotics 2021 report [39], as well as the comparative analysis provided by Bruzzone and Quaglia [40]. These references highlight that wheeled platforms typically excel in structured or semi-structured environments due to high speed and energy efficiency, whereas tracked, legged, or hybrid systems offer enhanced mobility and obstacle-crossing capabilities in unstructured or challenging terrains, albeit with increased mechanical and control complexity.
Figure 2 results from an effort to map locomotion types according to specific applications and environments, given the scarcity of previous comprehensive studies addressing this precise correlation. It synthesizes insights from available literature and industry reports, acknowledging explicitly that the distribution depicted is inherently dynamic, evolving in response to technological advancements, design preferences, and specific application demands. Factors such as mobility features (speed, obstacle-crossing, climbing capabilities, and energy efficiency, as outlined by Bruzzone and Quaglia [40]) significantly influence the choice of locomotion type. As discussed in Section 8, emerging trends indicate an increasing adoption of hybrid and multi-modal locomotion solutions, as well as variations within each locomotion type, such as articulated passive frames in wheeled robots, independently controlled tracks, and adaptive dynamic gait modeling in legged systems.

4.5. Robotics Frameworks

Several autonomous navigation frameworks have been proposed, each characterized by distinct emphases, specializations, or application contexts. To categorize these approaches, we propose a structured classification:

4.5.1. Domain-Specific Frameworks

Domain-specific frameworks are tailored explicitly for particular robotic domains or tasks. Raja et al. [82] propose an architecture for handling complex urban intersections. Apollo [83] and Autoware [84,85] offer comprehensive autonomous driving solutions. Liu et al. [86] integrate classical and reinforcement learning techniques for robust lunar rover navigation. CLARAty, the reusable two-layer architecture adopted in NASA’s Mars Exploration Rovers [87], exemplifies the same category for space robotics. In the maritime domain, MOOS-IvP delivers a nested-autonomy scheme for AUVs and USVs [88]. ARMAR-6, a high-performance humanoid that relies on the ArmarX software stack to fuse 3-D perception, bimanual manipulation, and mobile base navigation for collaborative industrial tasks, illustrates a modern domain-specific framework tailored to humanoid assistance [89].
Youakim et al. [90] present a detailed software architecture for the Girona 500 AUV, emphasizing sensor fusion and precise navigation. Gonzalez et al. [91] propose a supervisory control-based framework optimized for Industry 4.0 environments. Similarly, the ROS2 Navigation Stack (Nav2) [92] provides robust navigation functionalities for industrial robots.

4.5.2. Application-Oriented Frameworks (Highly Specific)

Application-oriented frameworks target specific functionalities or operational conditions. Alam et al. [93] introduce an approach using fiducial markers and particle filters for reliable indoor localization. Sandeep et al. [94] propose CANTAV, emphasizing cloud-based navigation and traffic management.

4.5.3. Generalist Frameworks

Generalist frameworks are designed for broad applicability across various robotic platforms. Muñoz et al. [95] employ layered abstractions to rapidly validate navigation algorithms. Goodwin [96] develops a unified framework encompassing software, hardware, environment, and user interactions, facilitating reusability and modularity across robotic systems. In addition, NVIDIA Isaac offers a GPU-accelerated SDK and physics-accurate simulation that targets both manipulators and mobile robots [97]. Open-source autopilots such as ArduPilot/PX4 further extend generality to aerial, ground and marine vehicles through a unified code-base [98].
Engine-agnostic stacks are also emerging: EAGERx exposes a graph-based API that runs unchanged on PyBullet, Gazebo and real hardware, providing multirate synchronisation, delay simulation and domain-randomisation for sim-to-real RL [99].

4.5.4. Abstract and Complex Frameworks

Abstract and complex frameworks provide high-level, hierarchical frameworks offering conceptual rather than practical guides. The BERRA System by Örebäck [100] combines reactive, task execution, and deliberative layers in a complex hierarchical design. Fleury et al. [101] propose a global architecture conceptualizing task planning and functional modules without detailed implementation guidelines.

4.5.5. Ambiguous or Intricate Representations

Frameworks with ambiguous or intricate representations are characterized by unclear or overly complicated inter-module connections. Examples include UML-based implementation diagrams by Örebäck [100], often illustrating complex interconnections challenging to interpret, and automatically generated hierarchical structures [100] frequently introducing unnecessary complexity.

4.5.6. Benchmarking-Focused Frameworks

Benchmarking-focused frameworks are dedicated primarily to evaluating and comparing navigation algorithms. Ugwoke et al. [102] develop a comprehensive simulation platform to evaluate classical, heuristic, and metaheuristic planning algorithms. For co-simulation and HIL/SIL testing of automated-driving stacks, IPG CarMaker offers an integrated framework linking vehicle, environment and sensor models [103].

4.5.7. Frameworks Focused on Specific Technical Aspects

Frameworks focused on specific technical aspects concentrate on particular elements such as decision-making, behavior trees, or computing architectures. Wang et al. [104] employ hierarchical state machines for efficient decision-making in autonomous vehicles. Godin [105] presents ArMoR, a minimalist architecture that emphasizes simplicity and modularity, making it well suited to educational and experimental contexts. Axelsson et al. [106] propose a participatory co-design framework explicitly tailored for social-robot applications. Mingo Hoffman et al. introduce OpenSoT, an open-source prioritized-QP library that delivers hierarchical whole-body control for legged, mobile-manipulator, and humanoid robots [107]. Similar to the other task-oriented frameworks, its scope is confined to the control layer, leaving perception, mapping, and navigation to external stacks.
Despite the significant diversity of frameworks presented, a notable gap remains: most approaches address only specific tasks or domains and often lack clear modular boundaries and well-defined interdependencies. Furthermore, the comprehensive integration of hardware, software, and environmental interactions across varying levels of autonomy is still limited. Our review therefore highlights the need for a unified, modular, and pedagogically oriented framework that systematically links Environment–Application–Locomotion (E-A-L) characteristics with Perception–Cognition–Operation (PCO) modules. Addressing this gap can provide clearer educational guidance, simplify algorithm interchangeability, and enhance interoperability across diverse robotic platforms and environments.
4.5.8. Trajectory-Optimisation Toolkits
Beyond full-blown navigation stacks, several open-source SDKs focus on a single step of the PCO pipeline. Horizon [108] wraps CasADi, Pinocchio, and state-of-the-art SQP/iLQR solvers into a lightweight API that lets users transcribe and solve optimal-control problems (walking, jumping, or agile base motions) for platforms such as Spot®, TALOS, or Centauro in just a few lines of Python. In the path-planning realm, the Open Motion Planning Library (OMPL) [109] plays an analogous role, offering a large collection of sampling-based planners (RRT, PRM, KPIECE, FMT) through a common interface readily embedded in ROS or proprietary stacks. These toolkits exemplify the growing trend toward high-modularity "building blocks" that can be slotted into broader frameworks rather than replacing them outright.

5. Proposed Unified Architecture (Framework)

In this section, we introduce a layered architecture that organizes autonomous navigation into three main layers: Perception, Cognition, and Operation. This framework consolidates prior studies from the literature, which traditionally divided navigation into five phases (Phases I-V) [10,110,111]. By grouping these phases into three cohesive layers, we emphasize the interdependencies between sensing, planning, and control. Figure 3 illustrates this conceptual mapping, adapted from established classifications in the field.
Figure 3. Unified architecture for autonomous mobile robot navigation, structured in three hierarchical layers: Perception, Cognition, and Operation. The framework integrates and renames the original Phases II and III from prior literature to improve clarity. Additionally, it explicitly visualizes the influence of system design constraints: locomotion type, application domain, and autonomy level, on each functional layer. Each module is enriched with representative hardware or software components to support interpretability and modular analysis.

5.1. Overview of the Three Layers

The proposed framework organizes autonomous navigation into three hierarchical layers (Figure 3), while the vertical E-A-L coupling is conceptually summarized in Figure 1:
  • Perception Layer (Phase I: Environment Perception, Self-Location, Data Processing)
  • Cognition Layer (Phases II–III: Path Planning and Obstacle Avoidance)
  • Operation Layer (Phases IV–V: Motion Control and Trajectory Execution).
Each layer interfaces with the vertical axis of the pyramid (Environment–Application–Locomotion), creating a cohesive design paradigm:
  • Perception ↔ Environment: Sensor selection and perception algorithms are dictated by environmental constraints (indoor/outdoor/hybrid).
  • Cognition ↔ Application: Task complexity and autonomy requirements drive planning strategies.
  • Operation ↔ Locomotion: Control mechanisms are tailored to the robot’s kinematic design (wheeled/legged/tracked).

5.2. Perception Layer

The Perception Layer encompasses the complete sensing and interpretation pipeline, corresponding to Phase I of the navigation process. As detailed in Figure 3, this phase comprises four interconnected sub-phases:
1a. Environment Sensing: LiDAR, cameras (RGB-D, thermal), IMUs, radar, and other sensors gather raw data.
1b. Self-Localization: Pose estimation via GNSS, SLAM, or dead-reckoning odometry.
1c. Data Fusion and Filtering: Kalman/Bayesian filters and HDR algorithms reduce noise and extract features.
1d. Mapping: 2-D or 3-D grid representations built from fused data.
This layer realises the Environment ↔ Perception coupling (Figure 1), where sensor selection and algorithms are dictated by environmental constraints (indoor/hybrid/outdoor). Indoor robots thus rely on dense vision and proximity sensing, whereas outdoor platforms add GNSS and robust LiDAR (see Table 1). The output of this layer (maps, state estimates) directly feeds the Cognition Layer.
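The fusion step in sub-phase 1c can be sketched with a minimal one-dimensional Kalman filter that blends a dead-reckoned position estimate with a noisy range-derived measurement. The variable names and noise values below are illustrative assumptions, not taken from any cited system:

```python
def kalman_1d(x, p, u, z, q=0.05, r=0.4):
    """One predict/update cycle of a 1-D Kalman filter.

    x, p : prior state estimate and its variance
    u    : odometry increment (control input)
    z    : range-derived position measurement
    q, r : process and measurement noise variances (assumed values)
    """
    # Predict: propagate the state with odometry, inflating uncertainty.
    x_pred = x + u
    p_pred = p + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Example: robot believes it is at 0.0 m, moves 1.0 m, then measures 1.2 m.
x, p = 0.0, 1.0
x, p = kalman_1d(x, p, u=1.0, z=1.2)
```

The fused estimate lands between the odometry prediction (1.0 m) and the measurement (1.2 m), with reduced variance, which is precisely the noise-suppression role sub-phase 1c plays before mapping.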
Table 1. Phase I – Well-established Sensors
Sensor (Category) Indoor Hybrid Outdoor
General Sensor Overview (Survey) [2,112] [113]
Exteroceptive Sensors
Acoustic-Based Sensors
Audio Sensor (Microphone, Speaker) [57,114]
Ultrasonic Sensor [47,49,115,116,117,118,119,120,121,122,123] [80,124,125] [42,57,64,126,127]
Cooperative Localization Devices
Automatic Dependent Surveillance-Broadcast (ADS-B), Zigbee, Wireless [128] [66]
PetriNet Model [129]
Wireless Router (V2V Communication) [66]
Electromagnetic Waves Based Devices
Force Sensing Resistor (FSR) [122]
Global Positioning System (GPS) and/or DGPS [72,80,130] [11,18,29,58,62,63,64,66,78,114,126,131,132,133,134,135,136,137,138,139,140,141,142,143]
Ground Penetrating Radar (GPR) [25] [63,132,142]
Joint Position Sensor [122]
Pressure Sensor [20]
Radar [124,130,144] [64,126,131,133,134,137,138,141]
Ultra Wide Band (UWB) [45] [64]
WiFi [49,123,145]
Ground Beacon-based Locators
Radio Frequency Identification (RFID) [43,49,52,146]
Optical and Laser-based Sensors
Distance Measurement Sensor [25,27] [61,74]
Ground Sensors (Reflectivity) [14]
Infrared Sensor / Thermal Camera [49,123,147,148] [71,73,129,149] [57]
Laser Scanner (2D) [4,49,120,128,150,151,152,153,154,155,156,157,158,159,160] [71,77,79,125,161] [18,29,66,126,131,134,136]
LiDAR (3D) [25,27,159,160,162,163] [72,130,149,164,165] [6,29,64,114,131,134,141,142,166]
Optical Particle Counter (OPC) [132]
Optical Velocity Sensor (Corsys) [135]
Panospheric Camera (360 view) [30,60]
Scientific Sensors
Toxic Gas Sensor, Aethalometer, Wind [62,73,132]
Visual Sensors
Kinect, 3D Depth Camera [4,19,50,151,157,163,167] [73,125,165] [5,20,168]
RGB Camera [21,48,118,119,120,121,163,169,170,171] [3,27,71,72,77,80,124,130,165,172,173,174] [18,29,30,64,74,78,114,126,131,133,134,137,138,139,141,142,166,168,175,176]
UV Solar Based [59]
Proprioceptive Sensors
Geo-referencing Systems
Current Sensor (Motor) [135]
Haptic Sensor [177] [57]
Force/Torque Sensor [53]
Inertial & Attitude Sensors (INS & AHRS: IMU, Gyroscopes, Accelerometers, Compass, Magnetometers, Altimeter) [2,19,27,53,121,122,123,145,170] [20,72,130,144,145] [5,6,11,18,29,42,57,58,62,64,78,127,131,133,134,136,139,140,176]
Self Localization Apparatus (for Dead Reckoning Estimation)
Encoders (Odometer, Encoder, Optical Encoder) [19,53,118,119,123,151,156,178] [27,61,72,129,172,173,174] [29,58,66,127,139,176]

5.3. Cognition Layer

The Cognition Layer (Phases II–III) aligns with application requirements (Figure 1):
1. Path Planning: Industrial robots often rely on deterministic methods such as A* for structured navigation, whereas agricultural and off-road robots more frequently employ sampling-based planners such as RRT to cope with uneven terrain and partial observability. Thus, task complexity directly affects the balance between global and local planning granularity [179].
2. Obstacle Avoidance: While path planning defines feasible routes, obstacle avoidance addresses immediate threats in dynamic scenes. Depending on autonomy requirements, this may range from reactive methods, such as potential fields, to more adaptive learning-based approaches (Table 6) [1].
The Application ↔ Cognition link ensures mission-driven adaptability, balancing computational efficiency with task performance [179]. Within the proposed framework, autonomy level is treated as a task-dependent input to the Cognition Layer: lower levels favor supervisory and reactive strategies, whereas higher levels require greater planning depth, tighter perception–planning integration, and broader decision authority. This interpretation is consistent with the autonomy literature discussed in Section 2.3, where ALFUS and SQuAL frame autonomy in terms of mission complexity and decision granularity [33,34], while Pittman and Hwang support viewing autonomy as an architectural driver rather than only a classification target [35,36].
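The deterministic end of this spectrum can be illustrated with a compact A* search over a 4-connected occupancy grid. The grid, unit step cost, and Manhattan heuristic below are generic illustrations rather than any specific cited implementation:

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected grid; cells with value 1 are obstacles.
    Returns the path as a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_set:
        _, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

Because the heuristic is admissible, the returned route is cost-optimal; swapping the priority queue's expansion order or the heuristic yields the Dijkstra and greedy variants catalogued in Table 5.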

5.4. Operation Layer

The Operation Layer integrates Phase IV (Motion Control) and Phase V (Trajectory Execution) to enact navigation commands, embodying the Locomotion ↔ Operation coupling (Figure 1):
  • Motion Control: translates planned trajectories into hardware commands. Wheeled and tracked systems employ PID or Model Predictive Control (MPC) for precise speed and steering regulation [65,139], while legged robots utilize whole-body controllers with gait adaptation for dynamic stability on rough terrain [54,56]. Machine learning approaches, particularly deep reinforcement learning, are increasingly deployed to adaptively regulate heading and steering in uncertain environments, enabling wheeled platforms to autonomously optimize traction on slippery surfaces [180] and legged systems to learn complex locomotion policies through high-dimensional continuous control [181].
  • Trajectory Execution: is adapted to environmental constraints [10]. In structured environments, offline dead reckoning ensures repeatable paths [49]. In unstructured settings, real-time Model Predictive Path Integral (MPPI) handles slippage at high speeds [182]; for hazardous missions (e.g., subterranean exploration), episodic execution enables teleoperation switches when autonomy limits are exceeded [183].
This Locomotion ↔ Operation coupling highlights hardware-algorithm co-design: Ackermann steering constraints in wheeled robots demand curvature-compliant paths, while legged systems prioritize foothold safety over speed. Industrial robots optimize for precision [184], whereas rescue-oriented hybrid platforms sacrifice efficiency for obstacle negotiability [81].
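For the wheeled case, the PID option above can be sketched as a minimal heading regulator driving a simplistic unicycle model toward a reference heading. The gains are illustrative assumptions and would require tuning on real hardware:

```python
class PID:
    """Minimal PID regulator (illustrative gains, not tuned for a real robot)."""
    def __init__(self, kp=1.2, ki=0.1, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Steer the heading toward a 0.5 rad reference over a 30 s simulation.
pid, heading, dt = PID(), 0.0, 0.1
for _ in range(300):
    omega = pid.step(0.5 - heading, dt)  # angular-velocity command
    heading += omega * dt                # simplistic kinematic update
```

The same structure generalizes to speed and steering loops on tracked bases, whereas the whole-body controllers cited for legged platforms replace this scalar loop with constrained optimization over all joints.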

5.5. Advantages of the Unified Framework

By grouping Phases I–V into these three layers, we emphasize:
  • Clear separation of sensor/data processing tasks (Perception) from algorithmic decision-making (Cognition) and real-world execution (Operation).
  • Better interoperability: modules in each layer can be replaced or upgraded (e.g. switching from A* to RRT*, or from PID to MPC) without overhauling the entire system.
  • Easier mapping to different autonomy levels, since each layer can support more or fewer features as required.
This architectural view offers a simpler blueprint for practitioners aiming to design or analyze terrestrial AMRs, ensuring that changes in one layer, such as adding a new sensor or adopting a new control strategy, are logically encapsulated.

6. Sensors and Algorithms for Terrestrial Robots

In Section 5, we introduced a three-layer architecture (Perception, Cognition, Operation) to organize the tasks involved in autonomous navigation. These layers can be further detailed into five sequential yet interdependent phases (Phases I–V), each targeting a specific subset of sensing, planning, or control functions. In this section, we provide an in-depth look at the key sensors and algorithmic strategies relevant to each phase, referencing the environment-based considerations (Section 4).

6.1. Phase I: Environment Perception, Self-Location, and Data Processing

The Perception Layer discussed earlier corresponds to Phase I. Autonomous robots must acquire raw sensor data and fuse it into meaningful representations (e.g. occupancy grids and point clouds). Table 1 highlights well-established sensor technologies crucial for indoor, outdoor, and hybrid domains.
Table 1 presents a broad overview of sensor utilization in robotic applications. A key observation is the predominant use of inertial sensors (IMU, AHRS) and GPS in outdoor robotics, where precise geo-referencing and dead reckoning are essential. GPR appears primarily in specialized applications like subterranean and polar exploration, while radar is mainly found in outdoor settings. Wireless localization technologies are clearly segmented, with RFID, Zigbee, and ADS-B used in indoor environments, whereas WiFi-based localization is confined to enclosed spaces.
Regarding LiDAR technology, a clear shift is observed: 2D LiDAR sensors remain common in older and cost-sensitive applications, particularly for indoor navigation where affordability and simplicity are prioritized. In contrast, 3D LiDAR is increasingly prevalent in modern and complex robotic systems, especially in outdoor environments where detailed environmental representation is crucial.
Figure 4. Normalized sensor usage heatmap by category and environment. Each cell represents the relative frequency (0–1) of grouped sensor entries across indoor, hybrid, and outdoor applications. This figure visualizes trends and does not imply absolute counts.
Thermal and infrared cameras are predominantly found in military, security, and industrial inspection applications, spanning both indoor and outdoor settings. Their ability to detect heat signatures makes them particularly valuable in surveillance, search-and-rescue, and hazardous environment monitoring, where traditional optical sensors may struggle.
The increasing integration of RGB cameras across various environments, particularly in outdoor robotics with machine learning-based perception techniques (e.g. YOLO), further highlights evolving trends in robotic sensing.
Table 2. Phase I – Perception (a) Object Detection, (b) Sensor Fusion & Data Processing
a Object Detection
Method Indoor Hybrid Outdoor
LiDAR disparity / gap extraction [161]
Camera-based Detection
Boundary Extraction [158]
Canny Edged Detection [174]
CNN-Based Multi-Object [162] [149]
Color Marker-based Recognition [119]
Edge Based Terrain Classification [131]
Faster R-CNN [133]
Haar Cascade Classifier [174]
Hough Transform (Lane Detection) [72] [29,176]
Image Processing and Enhancement [112]
Online Boosting and Haar-like Features [151]
Point Cloud–based Detection [133]
Single Shot Detectors (YOLO/SSD) [30,133,166]
SVM-based Mobility Hazard [135]
b Sensor Fusion & Data Processing
Filter (Category) Indoor Hybrid Outdoor
General Sensor Fusion Overview [2]
Probabilistic Fusion Filters
Extended Kalman Filter (EKF) [120,121,150,151,157] [130] [64,131]
Kalman Filter [121,145] [72] [29,64]
Particle Filter (PF) [115,121] [1] [64,131]
Unscented Kalman Filter (UKF) [1]
Other Fusion Methods
Information Matrix Fusion [126]
Iterative Closest Point (ICP) [164]
Multi-Level Sensor Fusion [166]
SVD + ICP [158]
Track-to-Track Fusion (T2TF) [126]
Vector Auto-Regressive (VAR) Prediction [185]
Vision-Based Fusion Filters
Bayesian-based Filters [121,146,186] [64]
Complementary Filter State Estimator [53,171]
Dempster–Shafer [121] [64]
Extended H∞ Filter (EHF) [187]
Gaussian-based Filters [188]
High Dynamic Range (HDR) [175]
Deep Learning–based Fusion Filters
CNN-based Sensor Fusion [180]
Hierarchical NN Fusion [123]
LSTM-based Predictive Fusion [189]
Optimization-based State Estimation
Genetic Algorithm (GA) [170]
The latest advancements in sensor technology and perception algorithms are primarily driven by the demanding requirements of extreme applications, such as autonomous racing. High-performance platforms, including Roborace and the Indy Autonomous Challenge (IAC), have catalyzed unprecedented developments in sensor fusion, real-time data processing, and robust failure-mode management. Under conditions characterized by intense vibrations, extreme temperatures, high velocities, and challenging lighting, these environments significantly accelerate hardware-software co-evolution, pushing the limits of algorithmic precision, reliability, and adaptability [130].
The analysis of detection strategies highlights a strong presence of deep learning-based methods, such as YOLO, particularly in outdoor applications where robust multi-object recognition is required. Traditional methods, such as edge-based techniques and Hough Transform, remain relevant in structured environments, especially for lane detection and feature extraction.
In sensor fusion, probabilistic approaches like Extended Kalman Filters (EKF) and Particle Filters (PF) are the most widely adopted across different environments due to their effectiveness in multi-sensor integration and state estimation. Kalman-based filters appear frequently in indoor and hybrid applications, while Particle Filters are more common in outdoor settings where handling non-Gaussian noise is crucial. Bayesian-based filters and Dempster-Shafer methods are mainly used in vision-based fusion tasks, whereas high-level integration methods, such as Track-to-Track Fusion and Information Matrix Fusion, are found in specialized applications requiring multi-source data synchronization.
Recent surveys report a gradual transition from classical probabilistic filters (EKF, UKF, PF) to deep-learning based fusion, often organised as hierarchical CNN/LSTM blocks that act on coarse sensor cues before refining pose estimates [190]. This shift mirrors the Environment ↔ Perception link in Figure 1: highly occluded indoor scenes or texture-poor outdoor areas increasingly favour neural fusion, whereas well-structured settings still benefit from lightweight Kalman or particle schemes.
Table 3 highlights self-localization and mapping methods.
SLAM-based localization is widespread, particularly in indoor environments where external positioning references are limited. Monte Carlo Localization (MCL) and Markov Localization are frequently used for probabilistic position estimation. In mapping, occupancy grid mapping remains predominant indoors, while Octomap and voxel grid methods appear in hybrid and outdoor scenarios due to their flexibility in 3D representation. Recently, grid-merging techniques have been employed to optimize mapping, while collaborative localization approaches, including V2V and V2I, are being explored to enhance position estimation accuracy.
Several contemporary navigation architectures incorporate a concise prediction micro-phase logically positioned between Perception (Section 5.2) and Cognition (Section 5.3). This micro-phase transforms the fused detections produced in Phase I into short-horizon probabilistic forecasts of nearby agents or obstacles (e.g., position–velocity distributions). These forecasts are forwarded to the path/trajectory planner, enabling it to reason about imminent collisions rather than relying exclusively on the current scene. Such prediction mechanisms have been adopted in general-purpose frameworks, for instance, the Nav2 prototype that combines Vector Auto-Regressive (VAR) forecasts with MPC-based tracking [185], as well as in high-speed, safety-critical domains such as autonomous racing, where sensor parameters are actively tuned to deliver millisecond-level forecast updates [130].
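The essence of this prediction micro-phase can be sketched with a constant-velocity forecaster, a deliberately simplified stand-in for the VAR models cited above; the sampling period and horizon length are illustrative assumptions:

```python
def predict_horizon(track, dt=0.1, steps=5):
    """Constant-velocity forecast of an obstacle track.

    track : list of (x, y) positions sampled every `dt` seconds
    Returns `steps` future (x, y) positions for the planner to avoid.
    """
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt   # finite-difference velocity
    return [(x1 + vx * dt * k, y1 + vy * dt * k) for k in range(1, steps + 1)]

# A pedestrian observed at two instants, walking along +x at 1 m/s.
future = predict_horizon([(0.0, 0.0), (0.1, 0.0)])
```

A planner consuming `future` can then inflate cost around the forecast cells rather than only around the obstacle's current position.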
Table 3. Phase I – Perception (c) Self-Localization, (d) Mapping
c Self-Localization
Method Indoor Hybrid Outdoor
Visual Place Recognition: survey [191]
Ant-inspired PI-Full mode [59]
Evidence-Grids Continuous [192]
Kalman Filter-based Localization [131]
Landmark Localization [112]
Markov Localization [120]
Monte Carlo Localization (MCL) [120,156] [131]
NDT-based LiDAR Localization [187]
RFID-based Localization [43]
SLAM-based Localization [25,115,157,160] [140,142]
Vehicle-to-Vehicle (V2V) [64]
d Mapping
Method Indoor Hybrid Outdoor
C-SLAMMODT: Cooperative Factor-Graph SLAM [193]
Centralized Map Builder [162]
Color-Depth Map [151]
Continuous Metric Mapping [146]
Elevation Map [19] [20] [194]
Feature Map-Based Framework [133]
Local Perceptual Space (LPS) [118]
Merging Occupancy Grid [158] [66]
NDT-based LiDAR Mapping [187]
Occupancy Grid Mapping [120,150,152,169,195] [30,112]
Octomap (3D Mapping) [159]
Uncertainty Map [160]
SLAM-based Mapping (e.g. Gmapping) [163] [112] [18,196]
Stereo ORB-SLAM2 [168]
Voxel Grid based [4] [131,166]

6.2. Phase IIa: Path Planning: Graph Construction

Once the robot has a preliminary map or fused data from Phase I, the Cognition Layer refines it into graph structures or potential fields for planning feasible paths. Table 4 lists popular map-building methods.
Indoor robots typically rely on grid or mesh-based approaches, leveraging corridor geometry, while outdoor robots favor approximate decomposition or layered maps for large, unstructured spaces. Probabilistic roadmaps and potential fields are preferred in hybrid scenarios, whereas genetic algorithms and bioinspired methods are mainly used for optimization. Digital maps and lane marking-based approaches are common in structured outdoor environments.
Table 4. Phase IIa – Path Planning: Graph Construction
Method Indoor Hybrid Outdoor
General Path Planning Overview (Survey) [113]
Classical, Heuristic and Meta-heuristic Planners (Survey) [102]
Genetic Algorithms
Bioinspired based [59]
Chaotic + Co-evolutionary GA (Swarm robots) [197] [197]
Genetic Algorithm for Map Merging Optimization [66]
Particle Swarm Optimization (PSO) [124]
Graph Search Maps
3D Delaunay Triangulation [168]
Boundary Planning [126]
Breadth-First Search (BFS) [10] [113]
Convex Feasible Region Mapping [21]
Depth-First Search (DFS) [10]
Exact Cell Decomposition [124,179]
Free-space Volume Extraction [168]
Grid-based Path Planning [195] [20] [11,58,132]
Lattice based Graph [152] [42,137,198]
Probabilistic Roadmaps [179]
Rapid Exploring Random Tree (RRT) based [27] [136]
Uncertainty Frontier Map (UM) [160]
Visibility Graphs [179]
Voronoi Diagram [116] [124,147,179]
Others Methods of Map Building
3D Segmentation [19,50]
Digital Map [126]
Elastic Bubble Band [199] [92] [200]
Elevation Map based [60,194]
Fast Marching Tree (FMT*) [3]
Gaussian Mixture Model (GMM) [131]
Hierarchical Finite State Machine (HFSM) [138]
Hybrid Walking Pattern Generator [53]
Lane Marking Based Mapping [176]
NF1 Algorithm [154]
Probabilistic Roadmap (PRM) [3] [5]
Support Vector Machine (SVM) [135]
SuperVoxel Graph [5,6]
Uncertainty Map (UM) [160]
Potential Field Maps
Artificial Potential Field [147,179]
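Grid-based graph construction from Table 4 amounts to enumerating traversable neighbor pairs over an occupancy grid. The 4-connected free-cell adjacency below is a generic sketch, not any cited method's implementation:

```python
def grid_to_graph(grid):
    """Build an adjacency list over free cells (value 0) of an occupancy grid,
    connecting 4-neighbors; obstacle cells (value 1) are excluded."""
    rows, cols = len(grid), len(grid[0])
    graph = {}
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0:
                graph[(r, c)] = [
                    (r + dr, c + dc)
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= r + dr < rows and 0 <= c + dc < cols
                    and grid[r + dr][c + dc] == 0
                ]
    return graph

graph = grid_to_graph([[0, 0], [1, 0]])
```

The resulting adjacency structure is exactly what the Phase IIb search algorithms of the next subsection traverse; lattice, visibility-graph, and Voronoi constructions differ only in how nodes and edges are generated.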

6.3. Phase IIb: Path Planning: Graph Search Algorithms

After constructing a suitable map, the robot must compute an optimal (or near-optimal) route. Table 5 shows widely adopted path-planning algorithms.
Table 5. Phase IIb – Path Planning: Graph Search Algorithms
Method Indoor Hybrid Outdoor
General Path Planning Overview (Survey) [113]
Derived Algorithms from the previous graph search methods
Bacterial Potential Field [44]
High Autonomous Driving (HAD) Algorithms [138]
Multi-criteria Path Fusion Planner [60]
Particle Swarm Optimization (PSO) [141]
Potential Field based Algorithms [186]
Deterministic Graph Search
A* based Algorithms [4,19,25,50,116,152,195,201] [3,20,92,112,173] [5,42,58,131,137,141,176]
D* based Algorithms [120,150,156,202] [3] [5,126]
Dijkstra’s based Algorithms [185] [112,201] [141]
GPS based Coverage Approach [132,143]
Greedy and Heuristic Quadratic Programming (GH-QP) [21]
Smac Planner [173]
State Lattice Search [92,173] [198]
Utility-Based Decision Making [138]
Genetic/Evolutionary Based Algorithms
Firefly Algorithm (FA) [129]
Genetic Algorithm (GA) [76] [141]
Randomized Graph Search
OMPL and SBO Planners [109] [203]
Probabilistic Roadmap [141]
Rapidly Exploring Random Tree (RRT) based [45,160] [3] [136,141]
Spider Monkey Optimization (ISMO) [122]
Wall Follow & Random Walk [47]
A* and its variants appear in all environments, reflecting their reliability and adaptability. RRT-based planners are prominent in outdoor settings due to their ability to handle high-dimensional spaces. State lattice search and D*-based algorithms are frequently used in hybrid and outdoor environments, ensuring structured yet dynamic path planning. Genetic algorithms and particle swarm optimization are applied mainly to route optimization rather than to initial path generation.
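The sampling-based side of the table can be illustrated with a minimal 2-D RRT that grows a tree toward random samples in an empty square workspace. Step size, workspace bounds, goal bias, and the fixed seed are illustrative assumptions, and collision checking is omitted:

```python
import math
import random

def rrt(start, goal, step=0.5, goal_tol=0.5, max_iters=2000, seed=1):
    """Grow a tree from `start` toward `goal` in an empty 10x10 workspace.
    Returns the node reached within `goal_tol` of the goal, or None."""
    random.seed(seed)
    nodes = [start]
    for _ in range(max_iters):
        # Sample a random point, biased occasionally toward the goal.
        sample = goal if random.random() < 0.1 else (
            random.uniform(0, 10), random.uniform(0, 10))
        # Extend the nearest tree node one step toward the sample.
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        nodes.append(new)
        if math.dist(new, goal) <= goal_tol:
            return new
    return None

reached = rrt((0.0, 0.0), (9.0, 9.0))
```

Unlike A*, the tree covers continuous space without an explicit grid, which is why RRT variants dominate the high-dimensional outdoor entries of Table 5; RRT* adds rewiring to recover asymptotic optimality.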

Selecting the “Best” Planner Is a Mission-Specific Trade-Off

Although Table 4 and Table 5 catalogue the path–planning methods most frequently adopted for terrestrial robots, no single algorithm is universally superior. Instead, the selection must reflect the dominant requirements of the target Operational Design Domain (ODD), whether that is computational frugality on resource-constrained hardware, trajectory smoothness for passenger comfort, energy efficiency in long-duration agricultural tasks, or millisecond responsiveness on high-speed racing platforms.
Carvalho et al. [204] provide an empirical illustration of these trade-offs: in a controlled urban simulation they benchmarked several widely used planners and observed that improvements in safety margins or ride comfort often came at the expense of increased path length, energy consumption, or CPU load. Their findings underscore that planner selection is not a matter of choosing the “most advanced” algorithm, but of matching the algorithm’s cost envelope to the specific performance priorities and risk tolerance of the intended application.
In some frameworks, an intermediate phase between path planning (Section 6.3) and motion control (Section 6.5) is included, primarily focused on velocity optimization. Such methods are essential in real-time, high-speed, or safety-critical applications [182], and are also employed in general-purpose navigation frameworks like the ROS 2 Navigation Stack (Nav2) [185].
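A common instance of this velocity-optimization step is a trapezoidal profile that accelerates, cruises, and decelerates within fixed limits along a geometric path. The speed and acceleration limits and the path length below are illustrative assumptions:

```python
def trapezoid_velocity(s, total, v_max=1.0, a_max=0.5):
    """Speed command at arc-length s along a path of length `total`,
    following a trapezoidal (accelerate / cruise / decelerate) profile."""
    ramp = v_max * v_max / (2 * a_max)      # distance needed to reach v_max
    if 2 * ramp > total:                    # triangular case: v_max unreachable
        ramp = total / 2
    if s < ramp:                            # acceleration phase
        return (2 * a_max * s) ** 0.5
    if s > total - ramp:                    # deceleration phase
        return (2 * a_max * (total - s)) ** 0.5
    return v_max                            # cruise phase

# 10 m path: speed is zero at both ends and v_max in the middle.
profile = [trapezoid_velocity(s, 10.0) for s in (0.0, 1.0, 5.0, 9.0, 10.0)]
```

Time-optimal and jerk-limited schemes used in high-speed applications refine this same idea by reshaping the ramps rather than replacing the structure.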

6.4. Phase III: Obstacle Avoidance and Trap Landscapes

Even with a planned path, dynamic or unknown obstacles may appear. Phase III addresses collision avoidance, guaranteeing safe local navigation. Table 6 surveys established Collision Avoidance System (CAS) algorithms.
Table 6. Phase III: Obstacle Avoidance and Trap Landscape, CASs.
STRATEGIES CASs Indoor Hybrid Outdoor
Anti-target Approach Laws
Cone’s Geometry-based Calculated Rule [124]
Piecewise Continuous Bezier Curves [136]
Visibility Constraints-based Space Carving [168]
Genetic based Algorithms
Artificial Neural Networks [112]
BeeClust Algorithm [112]
Biological Approach (incl. Cockroach-Inspired Neural Escape Circuit) [127]
Evolutionary Behavior based on Genetic Programming [147,170]
Geometrical Methods
Collision Cone [144]
η 3-Spline [151]
Foot Collision Check [19,195] [20]
GPS-Based Path Correction [132]
GH-QP (Greedy and Heuristic QP in Convex Regions) [21]
Hybrid Regression Analysis-ISMO [122]
Markov Random Fields (MRF) [131]
Occupancy Likelihood-Based Merging [66]
SuperVoxel-based Cost Model [5]
Traditional Algorithms
Boundary Following (i.e. walls) [112,147]
Bug Algorithms [205,206,207] [147] [141]
Curvature Velocities Techniques (CVM) [208]
DWA + Elastic Band [154] [141]
Dynamic Windows Approaches [4,117] [124] [137,209]
Elastic Band Concept [4,25]
Follow the Gap (FTG) [161]
Machine Learning based [72,79,210] [29]
Nearness Diagram [155,156] [125]
Reactive Methods [147]
Vector Field Histogram (VFH) based algorithms [211] [124,147,212] [18]
Virtual Force Field (VFF) Methods
Costmap Segmentation based [30]
Dynamic Cost Map Refinement [131]
ML based Obstacle Detection via Haar Cascade Classifier [174]
Potential Field (Gradient Based) Methods [1,116,118,170,186,213] [3] [11,42,141]
Potential field-based methods are widely used in indoor settings due to their efficiency in structured environments. Machine learning-based obstacle avoidance is more common in hybrid and outdoor settings, leveraging data-driven adaptation. Traditional methods like vector field histograms and dynamic window approaches are versatile and can be applied across multiple environments. Outdoor applications also frequently employ spline optimization techniques, such as Bezier curves, for smooth obstacle avoidance. Autonomous vehicles, in particular, rely on these techniques to ensure smoother motion, reduce jerk, and improve passenger comfort during trajectory execution.
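To make the gradient-based family concrete, the sketch below implements a classic attractive and repulsive potential-field step in the spirit of [1]. Gains, the influence radius, and the scenario are illustrative values, and the well-known local-minimum ("trap") problem is deliberately left unhandled.

```python
import math

def potential_field_step(pos, goal, obstacles,
                         k_att=1.0, k_rep=100.0, d0=2.0, step=0.05):
    """One gradient-descent step on an artificial potential field.

    The goal exerts an attractive force; each obstacle within the
    influence radius d0 adds a repulsive force. The step direction is
    normalized so the robot advances a fixed distance per iteration.
    """
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:
            # Standard repulsive gradient: grows sharply as d -> 0.
            mag = k_rep * (1.0 / d - 1.0 / d0) / d**3
            fx += mag * dx
            fy += mag * dy
    norm = math.hypot(fx, fy) or 1.0
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)

# Robot slides around an off-axis obstacle on its way to the goal.
pos, goal = (0.0, 0.0), (10.0, 0.0)
for _ in range(400):
    pos = potential_field_step(pos, goal, obstacles=[(5.0, 0.4)])
```

If the obstacle were placed exactly on the start-goal line, the attractive and repulsive forces could balance, which is the trap landscape that motivates the escape strategies listed in Table 6.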

6.5. Phase IV: Motion Control and Robot Relocation

Phase IV translates planning outcomes into actuator commands for velocity, steering, or leg movement. Table 7 summarizes representative controllers.
Table 7. Phase IV: Motion Control and Robot Relocation.
Controllers Indoor Hybrid Outdoor
Behavior Based Controllers
Fuzzy Logic [21,43,151,167,174] [112,214]
Motion Generator / Shape Corrector [125]
Rotation Shim [92]
FTG Heuristic (model–free) [161]
Control-Theory Based Controllers
DCC, ACC, LCGA [138]
Robust/Optimal State-Feedback (LQR / H / LMI) [171,215,216]
Active / Optimised Disturbance-Rejection [217] [61]
Solar-Adaptive Speed [132]
Hybrid Controllers
Image-Segmentation Path-Following [172]
MPPI [218] [92] [139]
Linear Controllers
Lane Detection + Sliding Mode [147]
Lateral & Longitudinal PID [126]
Preview LQR [176]
Whole-Body QP [53]
PID (Pose / Velocity) [20,27,112,164] [18,42,58,135]
Machine Learning
CNN [133]
MobileNet [166]
Neural-Network (generic) [151] [127]
Reinforcement Learning [20]
Nonlinear Controllers
Bio-Inspired [116]
Dining Philosopher [122]
Exact Feedback Linearisation (FBL / Backstepping) [219,220]
Gradient-Based Speed & Steering [118]
iLQR [108,221]
Loop-Closure Pose Optimisation [168]
Lyapunov-Based [170]
Sliding-Mode Family (SMC / VG-NTSMC) [222,223]
MPC [185] [3,72,147,182] [5,6,11,18,131,137]
MSaDE-Static Force Opt. [52]
Nonlinear Optimal SDRE [213]
Optimized Sail Assistance [62]
Passivity-Based Formation / Tracking [224]
Pure Pursuit [218,225] [92] [11,18,114,226]
Rate / Nonlinear Pos. Mapping [177]
SC Impedance (SCIC) [53]
State Lattice Policy [198]
Time Elastic Band [4,218]
PID and fuzzy logic controllers dominate structured indoor environments due to their simplicity. In contrast, outdoor robots require more advanced control methods, such as Model Predictive Control (MPC) and nonlinear techniques, to handle dynamic and high-speed conditions. Bioinspired models appear less frequently due to their complexity and lower determinism. Machine learning-based controllers, while still emerging, are being explored for adaptive control and robust navigation.
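As an example of the geometric trackers listed above, a minimal pure-pursuit steering law for a bicycle-model robot can be sketched as follows; the lookahead, wheelbase, and speed values are arbitrary illustrations, not taken from any cited implementation.

```python
import math

def pure_pursuit_cmd(pose, path, lookahead=1.0, wheelbase=0.5, v=1.0):
    """Pure-pursuit tracking: chase the first waypoint at least one
    lookahead distance away, steering along the arc that reaches it.

    pose = (x, y, heading) in world frame; returns (speed, steering angle)
    for a bicycle model. Illustrative sketch, not a production tracker.
    """
    x, y, yaw = pose
    # Pick the target: first waypoint beyond the lookahead circle.
    target = path[-1]
    for wx, wy in path:
        if math.hypot(wx - x, wy - y) >= lookahead:
            target = (wx, wy)
            break
    # Lateral offset of the target in the robot frame.
    dx, dy = target[0] - x, target[1] - y
    ty = -math.sin(yaw) * dx + math.cos(yaw) * dy
    ld = math.hypot(dx, dy)
    curvature = 2.0 * ty / (ld * ld) if ld > 1e-9 else 0.0
    steer = math.atan(wheelbase * curvature)
    return v, steer

# A robot on a straight path, already aligned, needs zero steering.
v, steer = pure_pursuit_cmd((0.0, 0.0, 0.0),
                            [(i * 0.5, 0.0) for i in range(10)])
```

The lookahead distance is the key tuning knob: short values track tightly but oscillate, long values smooth the motion but cut corners.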
A recent study by Schena [218] highlights the merits of automated, repeatable benchmarking frameworks for motion-control evaluation. Using a standardized test bench, the work quantifies how alternative controllers excel in distinct regions of the performance space, whether trajectory-tracking accuracy, obstacle-clearance margins, energetic efficiency, or computational latency. The results underscore that no controller is universally superior across Operational Design Domains (ODDs); rather, the optimal choice depends on the mission’s dominant requirements. Robust benchmarking infrastructures are therefore indispensable, providing objective data that exposes these trade-offs and enables practitioners to align controller selection with application-specific priorities.

6.6. Phase V: Trajectory Execution

Finally, Phase V enacts the chosen trajectory, either via offline (predefined) or online (adaptive) methods. Table 8 lists examples of trajectory-execution algorithms:
Table 8. Phase V: Trajectory Execution Methods.
Method/Algorithm Indoor Hybrid Outdoor
Episodic Planning (Deferred Execution) [21,119,154] [14] [63]
Hybrid Mode Switching (Autonomous-Manual Transitions) [227] [172] [18,78,138]
Integrated Planning and Execution (Continuous Replanning) [4,19,25,116,118,152,169,170,213] [20,72,182] [5,11,44,66,132,141,168,176]
Offline Trajectory Execution (Predefined Paths or Teleoperation) [164,177] [74] [42,135,200]
Real-Time Reactive Trajectory Execution (Local Adjustments) [53,171,195] [3,49,122,125,182] [30,126,127,131,137,166,209]
Integrated Planning and Execution (Continuous Replanning) refers to methods involving frequent updates to planned trajectories in response to significant environmental or mission-level changes. In contrast, Real-Time Reactive Trajectory Execution (Local Adjustments) typically handles immediate, localized adjustments to trajectories for obstacle avoidance and path tracking in highly dynamic scenarios.
Episodic and offline planning methods are commonly employed in structured environments where predefined routes or event-triggered decision-making suffice, as in museum robots and tour guides. Fully autonomous systems depend on integrated planning and real-time reactive trajectory execution for continuous adaptation to dynamic conditions. Outdoor robots frequently utilize these real-time capabilities to respond promptly to unpredictable terrains and hazards. In high-risk applications such as defense and rescue, Hybrid Mode Switching facilitates smooth transitions between autonomous and manual control, ensuring flexibility in mission-critical scenarios.
Recent industrial AMRs combine a velocity multiplexer with a ROS-based finite-state machine to arbitrate seamlessly between execution modes. The Q-CONPASS prototype [227], for instance, employs high-priority safety controllers that override lane-following or teleoperation commands, alongside velocity smoothing in densely populated areas. Such priority-based arbitration exemplifies modern implementations of Hybrid Mode Switching, ensuring uninterrupted transitions among autonomous, shared, and manual control modes within operational workflows.
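The arbitration pattern described above can be sketched as a small priority-based velocity multiplexer. Channel names, priorities, and the timeout below are illustrative only and are not taken from the Q-CONPASS implementation [227].

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TwistCmd:
    linear: float   # forward speed, m/s
    angular: float  # yaw rate, rad/s

class VelocityMux:
    """Lowest priority number wins; channels not refreshed within the
    timeout are treated as stale and ignored (a common safety default)."""
    def __init__(self, timeout: float = 0.5):
        self.timeout = timeout
        self.channels = {}  # name -> (priority, stamp, cmd)

    def publish(self, name: str, priority: int, stamp: float, cmd: TwistCmd):
        self.channels[name] = (priority, stamp, cmd)

    def select(self, now: float) -> Optional[TwistCmd]:
        live = [(p, c) for p, t, c in self.channels.values()
                if now - t <= self.timeout]
        return min(live, key=lambda pc: pc[0])[1] if live else None

mux = VelocityMux()
mux.publish("lane_following", priority=2, stamp=0.0, cmd=TwistCmd(0.8, 0.0))
mux.publish("teleop",         priority=1, stamp=0.0, cmd=TwistCmd(0.3, 0.2))
mux.publish("safety_stop",    priority=0, stamp=0.0, cmd=TwistCmd(0.0, 0.0))
active = mux.select(now=0.1)   # the safety controller overrides both others
later = mux.select(now=1.0)    # every channel is stale, so no command wins
```

Mode switching (autonomous, shared, manual) then reduces to which channels are allowed to publish, while the priority ordering keeps safety overrides intact in every mode.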

Complementary Surveys by PCO Phase

Up to this point we have detailed representative algorithms. Readers who need broader surveys to compare methods within each PCO phase or to follow research trends are encouraged to consult Table 9. Each entry points to a recent, high-impact review mapped to its corresponding module.
Table 9. Complementary Surveys by PCO Layer/Phase.
Layer : Phase (module) Survey References
Perception : Detection & Self-Localization (Ia+Ib) [2,228]
Perception : Mapping & SLAM (Id) [229]
Perception : Sensor Fusion & Data Processing (Ic) [121,230,231,232]
Cognition : Graph Representation Builder (IIa) [233]
Cognition : Route Search Module (IIb) [3,102,113,124,234,235]
Cognition : Obstacle Avoidance – Reactive Module (IIc) [236]
Cognition : Decision-Making (III) (Adaptive Behaviour Selector) [237,238]
Operation : Motion Control (IV) [239,240]
Operation : Trajectory Execution (V) [241]
Perception–Cognition : Prediction (Ic→II) [242]
Perception–Operation : Sensor–Control Integration (I→IV) [243]
Cognition–Operation : Task & Motion Planning (II→IV) [244]
Perception–Cognition–Operation : End-to-end DL navigation [245]

6.7. Summary of Phases vs. Layers

The relationship between perception, cognition, and operation layers structures how autonomous robots process sensor data, plan actions, and execute trajectories. Indoor robots often depend on structured localization techniques such as SLAM and occupancy grids, where ultrasonic sensors, 2D LiDAR, and depth cameras are generally sufficient for navigation and obstacle avoidance in controlled environments. In contrast, outdoor robotic systems, particularly those operating in complex or safety-critical scenarios, require more detailed environmental perception. These systems rely on a combination of GNSS, 3D LiDAR, radar, and machine learning-based perception using RGB cameras to enhance situational awareness and enable safe operation in unstructured and dynamic environments.
By integrating perception-driven data acquisition with cognitive decision-making and operational execution, this framework highlights the interdependencies between sensing technologies, algorithmic strategies, and control paradigms. The growing demand for autonomy underscores the need for adaptive multi-modal sensor fusion, hybrid planning approaches, and flexible control mechanisms, ensuring reliable robotic operation across diverse environments.

7. Discussion and Comparison

Section 5 introduced the Perception–Cognition–Operation (PCO) framework and detailed its internal workflow. We now compare the proposed framework with mobile-robot navigation architectures from the literature. This section fulfils two complementary goals:
1.
Benchmarking insight: Table 10 details key architectural properties of representative frameworks, ranging from minimalist finite-state machines to large-scale autonomous-driving stacks, thereby clarifying the design space in which PCO operates.
2.
Conceptual mapping: Figure 5 provides a visual taxonomy based on level of abstraction and domain adaptability. By anchoring each quadrant with well-known examples (e.g., Autoware, Apollo, Nav2), the plot helps researchers infer where unreviewed or future frameworks might fall and, importantly, underscores PCO’s role as a high-level reference design intended to guide forthcoming functional implementations.
Together, these comparisons highlight an opportunity for the scientific community: instantiating PCO as runnable code and rigorously benchmarking it against the alternatives listed in this paper will provide a shared baseline for quantitative evaluation, an essential step to accelerate reproducible research in autonomous mobile robots.
Table 10. Concise comparison of representative navigation frameworks by clusters presented in Figure 5. Bold cells mark the highest score per row.
Criteria Decision patterns (FSM, BT) Academic / conceptual (ArMoR, AuRA, TCA, NASREM, CLARAty) Generalist / Control SDKs (MoveIt, Nav2, Isaac, EAGERx, OpenSoT) Domain stacks (Autoware, Apollo, Waymo, CarMaker, ArmarX) Cross-domain pilots (ArduPilot/PX4, MOOS-IvP) Proposed PCO
General architecture State graph / tree Layered or hybrid concepts Plugin-based ROS / GPU SDK Large multi-module monolith Real-time autopilot core Three orthogonal layers
Structural modularity Low Moderate High High High High
Domain adaptability Low–Moderate Low High Low Moderate–High High
Scalability Poor–Good Moderate High High High High
Ease of reuse / config. Limited Low (concept only) High (launch + plugins) Moderate (heavy setup) Moderate (parameter files) High (clear APIs)
Typical scope Toy demos, game AI Research prototypes, rovers Arms, AMRs, factories L4/L5 road vehicles UAVs, AUVs, UGVs Reference design for multiple domains
The synthesis in Table 10 and the taxonomy plotted in Figure 5 stem directly from the classification survey presented in Section 4.5.
Figure 5. Representative robotics frameworks mapped by structural modularity (X-axis) and domain adaptability (Y-axis). Colors denote: decision patterns (red/orange), academic or historic architectures (yellow/grey), general-purpose SDKs (pink), domain-specific stacks (magenta/cyan), and the proposed PCO concept (green, bold).
The comparison in Table 10 and the conceptual map in Figure 5 indicate that the PCO model bridges the gap between highly specialised industrial stacks (e.g. Autoware, Apollo) and low-level symbolic controllers (FSM, BT). By clearly decoupling perception, reasoning, and actuation, PCO offers a promising reference blueprint for both research prototypes and production-grade systems.
Future work could fruitfully explore three directions: (i) publishing open-source implementations of PCO across heterogeneous robot-middleware platforms; (ii) benchmarking those implementations against the alternatives surveyed here; and (iii) reporting quantitative evidence on cross-domain transferability, maintenance effort, and real-time performance. Such results would clarify the practical benefits of a layered navigation architecture and accelerate its adoption in autonomous mobile robotics.

9. Conclusion

This survey consolidates the state of the art in terrestrial robot navigation through the proposed Perception–Cognition–Operation (PCO) framework. By synthesizing a structured corpus of 263 references and cross-referencing sensing configurations, planning strategies, control methods, and environmental domains, the study provides a structured and platform-aware reference for analysing autonomous navigation systems. In contrast to fragmented or domain-specific formulations, the proposed architecture organizes the navigation pipeline into modular functional boundaries that help practitioners balance robustness, cost, adaptability, and algorithmic complexity in real-world applications.
The rise of modular self-reconfigurable robotic platforms further reinforces the need for hardware-agnostic navigation stacks [262]. In this context, the PCO architecture contributes a clearer design perspective by explicitly linking the algorithmic flow of autonomous navigation to the Environment–Application–Locomotion (E-A-L) axis. This integration helps reconcile physical platform constraints with sensing, planning, and execution requirements, offering a practical design guide for researchers and engineers working across indoor, hybrid, and outdoor domains.
Beyond summarizing prior work, this review highlights how the layered structure of the PCO framework supports interoperability and algorithm interchangeability within well-defined modular boundaries. Such organization enables the substitution or upgrade of individual modules, for example, replacing classical probabilistic fusion with deep-learning-based approaches, or switching planning and control strategies, without requiring a complete restructuring of the navigation architecture. In this sense, the framework serves not only as a survey synthesis, but also as a pedagogically oriented blueprint for the co-design of future autonomous robotic systems.

9.1. Limitations and Future Work

This review intentionally remains focused on algorithmic organization and literature synthesis. It does not benchmark hardware components such as motors, batteries, ECUs, or communication buses, and it does not yet address computational-resource constraints in detail, including CPU/GPU load, latency, or real-time scheduling trade-offs. Likewise, the proposed three-layer architecture remains theoretical at this stage, and its full end-to-end validation on physical robotic platforms is left for future work.
These scope boundaries define a clear roadmap for the next phase of research. Future investigations should validate the full PCO stack experimentally on real robots, including assessments of real-time performance, computational efficiency, and robustness under field conditions. Additional extensions should incorporate application-level tags through automated text-mining pipelines, enabling the taxonomy to evolve dynamically as new literature emerges. The framework can also be broadened to cover truly heterogeneous and cooperative systems, including UAV–UGV teams and other multi-robot configurations, where distributed optimisation, shared mapping, and decentralised task allocation become central architectural requirements [263]. In this direction, swarm-oriented and co-evolutionary planning strategies [197] provide a promising basis for examining how PCO modules may scale from single-platform navigation to resilient, cooperative autonomy in complex and mission-critical environments.

Author Contributions

Conceptualization, M.V.L.d.C. and R.S.; methodology, M.V.L.d.C. and R.S.; investigation, M.V.L.d.C.; formal analysis, M.V.L.d.C., L.R. and J.J.; visualization, M.V.L.d.C., L.R. and J.J.; writing, original draft preparation, M.V.L.d.C.; writing, review and editing, M.V.L.d.C., L.R., J.J. and R.S. R.S. proposed the architecture and the environment/scenario/application based study that supported the comparative atlas. L.R. and J.J. contributed to the benchmarking of the frameworks, including the comparative tables, figures, and targeted content refinement. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior-Brazil (CAPES)-Finance Code 001, and in part by the Fundação de Amparo à Pesquisa e Inovação do Estado de Santa Catarina (FAPESC).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

During the preparation of this manuscript, the authors used OpenAI’s ChatGPT to support language refinement, proofreading, and improvements in grammatical flow and writing clarity. The authors reviewed and edited all outputs and take full responsibility for the content of this publication. No AI-generated text was used to produce the scientific arguments, technical analysis, figures, or conclusions of this article; the tool was used exclusively to assist with writing refinement.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Khatib, O. Real-time obstacle avoidance for manipulators and mobile robots. In Autonomous robot vehicles; Springer, 1986; pp. 396–404.
  2. Panigrahi, P.K.; Bisoy, S.K. Localization strategies for autonomous mobile robots: A review. J. King Saud Univ., Comp. & Info. Sci. 2022, 34, 6019–6039. [CrossRef]
  3. Sánchez-Ibáñez, J.R.; Pérez-del Pulgar, C.J.; García-Cerezo, A. Path planning for autonomous mobile robots: A review. Sensors 2021, 21, 7898. [CrossRef]
  4. Macenski, S.; Martin, F.; White, R.; Ginés Clavero, J. The Marathon 2: A Navigation System. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.
  5. Atas, F.; Grimstad, L.; Cielniak, G. Evaluation of sampling-based optimizing planners for outdoor robot navigation. arXiv preprint arXiv:2103.13666 2021.
  6. Atas, F.; Cielniak, G.; Grimstad, L. Navigating in 3D Uneven Environments through Supervoxels and Nonlinear MPC. In Proceedings of the 2023 ECMR. IEEE, 2023, pp. 1–8.
  7. Jalal, F.; Nasir, F. Underwater navigation, localization and path planning for autonomous vehicles: A review. In Proceedings of the 2021 International Bhurban Conference on Applied Sciences and Technologies (IBCAST). IEEE, 2021, pp. 817–828.
  8. Yang, J.; Huo, J.; Xi, M.; He, J.; Li, Z.; Song, H.H. A time-saving path planning scheme for autonomous underwater vehicles with complex underwater conditions. IEEE Internet of Things Journal 2022, 10, 1001–1013. [CrossRef]
  9. Yim, M.; Shen, W.M.; Salemi, B.; Rus, D.; Moll, M.; Lipson, H.; Klavins, E.; Chirikjian, G.S. Modular self-reconfigurable robot systems [grand challenges of robotics]. IEEE Robotics & Automation Magazine 2007, 14, 43–52. [CrossRef]
  10. Siegwart, R.; Nourbakhsh, I.R.; Scaramuzza, D.; Arkin, R.C. Introduction to autonomous mobile robots; 2011.
  11. Hajjaj, S.S.H.; Sahari, K.S.M. Bringing ROS to agriculture automation: hardware abstraction of agriculture machinery. Int. J. Appl. Eng. Res. 2017, 12, 311–316.
  12. Siciliano, B. Springer Handbook of Robotics; Springer-Verlag, 2008.
  13. Jahn, U.; Heß, D.; Stampa, M.; Sutorma, A.; Röhrig, C.; Schulz, P.; Wolff, C. A taxonomy for mobile robots: Types, applications, capabilities, implementations, requirements, and challenges. Robotics 2020, 9, 109. [CrossRef]
  14. Ben-Ari, M.; Mondada, F. Robots and their applications. Elements of Robotics 2018, pp. 1–20.
  15. Bach, S.H.; Khoi, P.B.; Yi, S.Y. Global UWB system: A high-accuracy mobile robot localization system with tightly coupled integration. IEEE Internet of Things Journal 2024, 11, 16618–16626. [CrossRef]
  16. Aydınocak, E.U. Robotics systems and healthcare logistics. In Health 4.0 and medical supply chain; Springer, 2023; pp. 79–96.
  17. Edlinger, R.; Föls, C.; Nüchter, A. An innovative pick-up and transport robot system for casualty evacuation. In Proceedings of the 2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). IEEE, 2022, pp. 67–73.
  18. Sanaullah, M.; Akhtaruzzaman, M.; Hossain, M.A. Land-robot technologies: The integration of cognitive systems in military and defense. NDC E-JOURNAL 2022, 2, 123–156.
  19. Kim, J.H.; Shin, Y.H.; Jeong, H.; Oh, J.H.; Park, H.W. Real time humanoid footstep planning with foot angle difference consideration for cost-to-go heuristic. In Proceedings of the 2023 20th International Conference on Ubiquitous Robots (UR). IEEE, 2023, pp. 92–99.
  20. Wang, S.; Piao, S.; Leng, X.; He, Z. Learning 3D bipedal walking with planned footsteps and Fourier series periodic gait planning. Sensors 2023, 23, 1873. [CrossRef]
  21. Gao, Z.; Chen, X.; Yu, Z.; Li, C.; Han, L.; Zhang, R. Global footstep planning with greedy and heuristic optimization guided by velocity for biped robot. Expert Systems with Applications 2024, 238, 121798. [CrossRef]
  22. Garcia, E.; Jimenez, M.A.; De Santos, P.G.; Armada, M. The evolution of robotics research. IEEE Robotics & Automation Magazine 2007, 14, 90–103.
  23. Sun, H.; Wei, C.; Yao, Y.a.; Wu, J. Analysis and Experiment of a Bioinspired Multimode Octopod Robot. Chinese Journal of Mechanical Engineering 2023, 36, 142. [CrossRef]
  24. Santos, H.F.; Perondi, E.A.; Wentz, A.V.; da Silva Junior, A.L.; Barone, D.A.; Galassi, M.; de Castro, B.B.; dos Reis, N.R.; Basso, E.D.; Pereira Pinto, H.L.; et al. Annelida, a Robot for Removing Hydrate and Paraffin Plugs in Offshore Flexible Lines: Development and Experimental Trials. SPE Production & Operations 2020, 35, 641–653. [CrossRef]
  25. Eldemiry, A.; Muddassir, M.; Zayed, T. Autonomous Data Acquisition of Ground Penetrating Radar (GPR) Using LiDAR-based Mobile Robot. In Proceedings of the ISARC. Proceedings of the International Symposium on Automation and Robotics in Construction. IAARC Publications, 2024, Vol. 41, pp. 206–212.
  26. Sostero, M. Automation and robots in services: review of data and taxonomy 2020.
  27. Bui, H.D.; Nguyen, S.; Billah, U.H.; Le, C.; Tavakkoli, A.; La, H.M. Control framework for a hybrid-steel bridge inspection robot. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020, pp. 2585–2591.
  28. Redhead, F.; Snow, S.; Vyas, D.; Bawden, O.; Russell, R.; Perez, T.; Brereton, M. Bringing the farmer perspective to agricultural robots. In Proceedings of the Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, 2015, pp. 1067–1072.
  29. Li, M.; Imou, K.; Wakabayashi, K.; Yokoyama, S. Review of research on agricultural vehicle autonomous guidance. International Journal of Agricultural and biological engineering 2009, 2, 1–16.
  30. de Avila Lopes, R.; de Carvalho, M.V.L.; Kitani, E.; de Assis Zampirolli, F.; Yoshioka, L.; Junior, L.A.C.; Ibusuki, U. Deep Learning-Based Instance Segmentation for Enhanced Navigation of Agricultural Vehicles. Revista de Informática Teórica e Aplicada 2025, 32, 136–142. [CrossRef]
  31. Tranzatto, M.; Miki, T.; Dharmadhikari, M.; Bernreiter, L.; Kulkarni, M.; Mascarich, F.; Andersson, O.; Khattak, S.; Hutter, M.; Siegwart, R.; et al. Cerberus in the darpa subterranean challenge. Science Robotics 2022, 7, eabp9742. [CrossRef]
  32. Kono, H.; Isayama, S.; Koshiji, F.; Watanabe, K.; Suzuki, H. Automatic Flipper Control for Crawler Type Rescue Robot using Reinforcement Learning. International Journal of Advanced Computer Science & Applications 2024, 15. [CrossRef]
  33. Huang, H.M.; Pavek, K.; Novak, B.; Albus, J.; Messin, E. A framework for autonomy levels for unmanned systems (ALFUS). Proceedings of the AUVSI’s unmanned systems North America 2005, pp. 849–863.
  34. Meakin, M. Quantifying Turing: a systems approach to quantitatively assessing the degree of autonomy of any system. Journal of Unmanned Vehicle Systems 2021, 9, 219–233. [CrossRef]
  35. Pittman, J.M. A Measure for Level of Autonomy Based on Observable System Behavior. arXiv preprint arXiv:2407.14975 2024.
  36. Hwang, G.J.; Katre, A.; Hart, K.M.; Rea, C.A. Analysis techniques of autonomy framework metrics for autonomous developers. In Proceedings of the Autonomous Systems: Sensors, Processing and Security for Ground, Air, Sea and Space Vehicles and Infrastructure 2022. SPIE, 2022, Vol. 12115, pp. 124–136.
  37. Beer, J.; Fisk, A.; Rogers, W. Toward a Framework for Levels of Robot Autonomy in Human–Robot Interaction. J. of Human–Robot Interaction 2014, 3. [CrossRef]
  38. Gervasi, R.; Mastrogiacomo, L.; Franceschini, F. A conceptual framework to evaluate human-robot collaboration. The International Journal of Advanced Manufacturing Technology 2020, 108, 841–865. [CrossRef]
  39. International Federation of Robotics. World Robotics 2021. https://ifr.org/, 2021. Accessed: 2 Mar. 2025.
  40. Bruzzone, L.; Quaglia, G. Locomotion systems for ground mobile robots in unstructured environments. Mechanical sciences 2012, 3, 49–62. [CrossRef]
  41. Karelics Oy. Karelics Radar Inspection Robot. https://karelics.fi/radar-inspections/. Accessed: 2 Mar. 2025.
  42. Dissanayake, M. Development of a Chain Climbing Robot and an Automated Ultrasound Inspection System for Mooring Chain Integrity Assessment. PhD thesis, London South Bank University, 2018.
  43. Gueaieb, W.; Miah, M.S. An intelligent mobile robot navigation technique using RFID technology. IEEE Transactions on Instrumentation and Measurement 2008, 57, 1908–1917. [CrossRef]
  44. Montiel, O.; Orozco-Rosas, U.; Sepúlveda, R. Path planning for mobile robots using Bacterial Potential Field for avoiding static and dynamic obstacles. Expert Systems with Applications 2015, 42, 5177–5191. [CrossRef]
  45. Shen, K.; Li, C.; Xu, D.; Wu, W.; Wan, H. Sensor-network-based navigation of delivery robot for baggage handling in international airport. International Journal of Advanced Robotic Systems 2020, 17, 1729881420944734. [CrossRef]
  46. Kasurinen, M. Mobile Robots in Indoor Logistics, 2017.
  47. Ong, R.; Azir, K.K. Low cost autonomous robot cleaner using mapping algorithm based on internet of things (IoT). In Proceedings of the IOP conference series: materials science and engineering. IOP Publishing, 2020, Vol. 767, p. 012071.
  48. Yamaguchi, U.; Saito, F.; Ikeda, K.; Yamamoto, T. HSR, human support robot as research and development platform. In Proceedings of the The Abstracts of the international conference on advanced mechatronics: toward evolutionary fusion of IT and mechatronics: ICAM 2015.6. The Japan Society of Mechanical Engineers, 2015, pp. 39–40. [CrossRef]
  49. Bloss, R. Mobile hospital robots cure numerous logistic needs. Industrial Robot: An International Journal 2011, 38, 567–571. [CrossRef]
  50. Karkowski, P.; Oßwald, S.; Bennewitz, M. Real-time footstep planning in 3D environments. In Proceedings of the 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids). IEEE, 2016, pp. 69–74.
  51. Karkowski, P.; Bennewitz, M. Prediction maps for real-time 3d footstep planning in dynamic environments. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019, pp. 2517–2523.
  52. Pierezan, J.; Freire, R.Z.; Weihmann, L.; Reynoso-Meza, G.; dos Santos Coelho, L. Static force capability optimization of humanoids robots based on modified self-adaptive differential evolution. Computers & Operations Research 2017, 84, 205–215. [CrossRef]
  53. Ruscelli, F.; Rossini, L.; Hoffman, E.M.; Baccelliere, L.; Laurenzi, A.; Muratore, L.; Antonucci, D.; Cordasco, S.; Tsagarakis, N.G. Design and Control of the Humanoid Robot COMAN+: Hardware Capabilities and Software Implementations. IEEE Robotics & Automation Magazine 2024. [CrossRef]
  54. Khorshidi, S.; Dawood, M.; Nederkorn, B.; Bennewitz, M.; Khadiv, M. Physically-Consistent Parameter Identification of Robots in Contact. arXiv preprint arXiv:2409.09850 2024.
  55. Bayer, J. Autonomous Exploration of Unknown Rough Terrain with Hexapod Walking Robot. In Proceedings of the Conference on Intelligent Robots and Systems (IROS), 2016, pp. 2859–2866.
  56. Azayev, T.; Zimmerman, K. Blind hexapod locomotion in complex terrain with gait adaptation using deep reinforcement learning and classification. Journal of Intelligent & Robotic Systems 2020, 99, 659–671. [CrossRef]
  57. Ghute, M.S.; Kamble, K.P.; Korde, M. Design of military surveillance robot. In Proceedings of the 2018 First International Conference on Secure Cyber Computing and Communication (ICSCCC). IEEE, 2018, pp. 270–272.
  58. He, Y.; Chen, C.; Bu, C.; Han, J. A polar rover for large-scale scientific surveys: design, implementation and field test results. International Journal of Advanced Robotic Systems 2015, 12, 145. [CrossRef]
  59. Dupeyroux, J.; Serres, J.R.; Viollet, S. AntBot: A six-legged walking robot able to home like desert ants in outdoor environments. Science Robotics 2019, 4, eaau0307. [CrossRef]
  60. Wettergreen, D.; Bapna, D.; Maimone, M.; Thomas, G. Developing Nomad for robotic exploration of the Atacama Desert. Robotics and Autonomous Systems 1999, 26, 127–148. [CrossRef]
  61. Silva, A.F.B. Modelagem de Sistemas Robóticos Móveis para Controle de Tração em Terrenos Acidentados. PhD thesis, M. Sc. Thesis, Mech. Eng. Dept., Pontifical Catholic University of Rio de …, 2007.
  62. Luo, Y.; Liu, G.; Guo, L.; Zhu, Y.; Zhao, J. Scalable Wing Sailing and Snowboarding Enhance Efficient and Energy-Saving Mobility of Polar Robot. IEEE/ASME Transactions on Mechatronics 2024. [CrossRef]
  63. Lever, J.H.; Delaney, A.J.; Ray, L.E.; Trautmann, E.; Barna, L.A.; Burzynski, A.M. Autonomous gpr surveys using the polar rover yeti. Journal of Field Robotics 2013, 30, 194–215. [CrossRef]
  64. Kuutti, S.; Fallah, S.; Katsaros, K.; Dianati, M.; Mccullough, F.; Mouzakitis, A. A survey of the state-of-the-art localization techniques and their potentials for autonomous vehicle applications. IEEE Internet of Things Journal 2018, 5, 829–846. [CrossRef]
  65. Kuwata, Y.; Teo, J.; Fiore, G.; Karaman, S.; Frazzoli, E.; How, J.P. Real-time motion planning with applications to autonomous urban driving. IEEE Transactions on Control Systems Technology 2009, 17, 1105–1118. [CrossRef]
  66. Li, H.; Tsukada, M.; Nashashibi, F.; Parent, M. Multivehicle cooperative local mapping: A methodology based on occupancy grid map merging. IEEE Transactions on Intelligent Transportation Systems 2014, 15, 2089–2100. [CrossRef]
  67. Yang, S.; Wang, W.; Liu, C.; Deng, W. Scene Understanding in Deep Learning-Based End-to-End Controllers for Autonomous Vehicles. IEEE Transactions on Systems, Man, and Cybernetics: Systems 2019, 49, 53–63. [CrossRef]
  68. Liu, G.H.; Lin, H.Y.; Lin, H.Y.; Chen, S.T.; Lin, P.C. A bio-inspired hopping kangaroo robot with an active tail. Journal of Bionic Engineering 2014, 11, 541–555. [CrossRef]
  69. Yoshimura, S.; Suzuki, T.; Bando, M.; Yuzaki, S.; Kawaharazuka, K.; Okada, K.; Inaba, M. Design method of a Kangaroo robot with high power legs and an articulated soft tail. In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2023, pp. 6631–6638.
  70. Grzelczyk, D.; Awrejcewicz, J. Dynamics, stability analysis and control of a mammal-like octopod robot driven by different central pattern generators. Journal of Computational Applied Mechanics 2019, 50, 76–89.
  71. Murphy, R.R. Trial by fire [rescue robots]. IEEE Robotics & Automation Magazine 2004, 11, 50–61.
  72. Delmerico, J.; Mintchev, S.; Giusti, A.; Gromov, B.; Melo, K.; Horvat, T.; Cadena, C.; Hutter, M.; Ijspeert, A.; Floreano, D.; et al. The current state and future outlook of rescue robotics. Journal of Field Robotics 2019, 36, 1171–1191. [CrossRef]
  73. Reddy, A.H.; Kalyan, B.; Murthy, C.S. Mine rescue robot system–a review. Procedia Earth and Planetary Science 2015, 11, 457–462. [CrossRef]
  74. Matsuno, F.; Tadokoro, S. Rescue robots and systems in Japan. In Proceedings of the 2004 IEEE International Conference on Robotics and Biomimetics. IEEE, 2004, pp. 12–20.
  75. Navarro, A.S.; Monteiro, C.M.; Cardeira, C.B. A mobile robot vending machine for beaches based on consumers’ preferences and multivariate methods. Procedia-Social and Behavioral Sciences 2015, 175, 122–129. [CrossRef]
  76. Ko, Y.K.; Park, J.H.; Ko, Y.D. A development of optimal algorithm for integrated operation of UGVs and UAVs for goods delivery at tourist destinations. Applied Sciences 2022, 12, 10396.
  77. Shiomi, M.; Kanda, T.; Glas, D.F.; Satake, S.; Ishiguro, H.; Hagita, N. Field trial of networked social robots in a shopping mall. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2009, pp. 2846–2853.
  78. Hoffmann, T.; Prause, G. On the regulatory framework for last-mile delivery robots. Machines 2018, 6, 33. [CrossRef]
  79. Wei, C.; Li, Y.; Ouyang, Y.; Ji, Z. Deep reinforcement learning with heuristic corrections for UGV navigation. Journal of Intelligent & Robotic Systems 2023, 109, 18. [CrossRef]
  80. Srinivas, S.; Ramachandiran, S.; Rajendran, S. Autonomous robot-driven deliveries: A review of recent developments and future directions. Transportation Research Part E: Logistics and Transportation Review 2022, 165, 102834. [CrossRef]
  81. Chung, T.H.; Orekhov, V.; Maio, A. Into the robotic depths: Analysis and insights from the darpa subterranean challenge. Annual Review of Control, Robotics, and Autonomous Systems 2023, 6, 477–502. [CrossRef]
  82. Raja, G.; Raja, K.; Kanagarathinam, M.R.; Needhidevan, J.; Vasudevan, P. Advanced Decision Making and Motion Planning Framework for Autonomous Navigation in Unsignalized Intersections. IEEE Access 2024. [CrossRef]
  83. Fan, H.; Zhu, F.; Liu, C.; Zhang, L.; Zhuang, L.; Li, D.; Zhu, W.; Hu, J.; Li, H.; Kong, Q. Baidu Apollo EM Motion Planner. arXiv preprint, 2018.
  84. Würsching, G.; Mascetta, T.; Lin, Y.; Althoff, M. Simplifying Sim-to-Real Transfer in Autonomous Driving: Coupling Autoware with the CommonRoad Motion Planning Framework. In Proceedings of the 2024 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2024, pp. 1462–1469.
  85. Kato, S.; Tokunaga, S.; Maruyama, Y.; Maeda, S.; Hirabayashi, M.; Kitsukawa, Y.; Monrroy, A.; Ando, T.; Fujii, Y.; Azumi, T. Autoware on board: Enabling autonomous vehicles with embedded systems. In Proceedings of the 2018 ACM/IEEE 9th International Conference on Cyber-Physical Systems (ICCPS). IEEE, 2018, pp. 287–296.
  86. Liu, W.; Wan, G.; Liu, J.; Cong, D. Path Planning for Lunar Rovers in Dynamic Environments: An Autonomous Navigation Framework Enhanced by Digital Twin-Based A*-D3QN. Aerospace 2025, 12, 517. [CrossRef]
  87. Nesnas, I.A.; Simmons, R.; Gaines, D.; Kunz, C.; Diaz-Calderon, A.; Estlin, T.; Madison, R.; Guineau, J.; McHenry, M.; Shu, I.H.; et al. CLARAty: Challenges and steps toward reusable robotic software. International Journal of Advanced Robotic Systems 2006, 3, 5. [CrossRef]
  88. Benjamin, M.R.; Schmidt, H.; Newman, P.M.; Leonard, J.J. Nested autonomy for unmanned marine vehicles with MOOS-IvP. Journal of Field Robotics 2010, 27, 834–875. [CrossRef]
  89. Asfour, T.; Waechter, M.; Kaul, L.; Rader, S.; Weiner, P.; Ottenhaus, S.; Grimm, R.; Zhou, Y.; Grotz, M.; Paus, F. Armar-6: A high-performance humanoid for human-robot collaboration in real-world scenarios. IEEE Robotics & Automation Magazine 2019, 26, 108–121. [CrossRef]
  90. Youakim, D.; Ridao, P.; Palomeras, N.; Spadafora, F.; Ribas, D.; Muzzupappa, M. MoveIt!: autonomous underwater free-floating manipulation. IEEE Robotics & Automation Magazine 2017, 24, 41–51. [CrossRef]
  91. Gonzalez, A.G.; Alves, M.V.; Viana, G.S.; Carvalho, L.K.; Basilio, J.C. Supervisory control-based navigation architecture: a new framework for autonomous robots in industry 4.0 environments. IEEE Transactions on Industrial Informatics 2017, 14, 1732–1743. [CrossRef]
  92. Macenski, S.; Moore, T.; Lu, D.V.; Merzlyakov, A.; Ferguson, M. From the desks of ROS maintainers: A survey of modern & capable mobile robotics algorithms in the robot operating system 2. Robotics and Autonomous Systems 2023, 168, 104493. [CrossRef]
  93. Alam, M.S.; Gullu, A.I.; Gunes, A. Fiducial Markers and Particle Filter Based Localization and Navigation Framework for an Autonomous Mobile Robot. SN Computer Science 2024, 5, 748. [CrossRef]
  94. Sandeep, P.; Yerragudi, V.; Gangadhar, N. CANTAV: A Cloud Centric Framework for Navigation and Control of Autonomous Road Vehicles. In Proceedings of the 2017 IEEE International Conference on Cloud Computing in Emerging Markets (CCEM). IEEE, 2017, pp. 99–106.
  95. Muñoz-Bañón, M.Á.; del Pino, I.; Candelas, F.A.; Torres, F. Framework for fast experimental testing of autonomous navigation algorithms. Applied Sciences 2019, 9, 1997. [CrossRef]
  96. Goodwin, J.R.; Winfield, A. A unified design framework for mobile robot systems. PhD thesis, University of the West of England, Bristol, 2008.
  97. Kainova, T.D. Overview of the accelerated platform for robotics and artificial intelligence NVIDIA Isaac. In Proceedings of the 2023 Seminar on Information Computing and Processing (ICP). IEEE, 2023, pp. 89–93.
  98. Saadat, N.; Sharif, M.M.M. Application framework for forest surveillance and data acquisition using unmanned aerial vehicle system. In Proceedings of the 2017 International Conference on Engineering Technology and Technopreneurship (ICE2T). IEEE, 2017, pp. 1–6.
  99. van der Heijden, B.; Luijkx, J.; Ferranti, L.; Kober, J.; Babuska, R. Engine Agnostic Graph Environments for Robotics (EAGERx): A Graph-Based Framework for Sim2real Robot Learning. IEEE Robotics & Automation Magazine 2024. [CrossRef]
  100. Orebäck, A. A component framework for autonomous mobile robots. PhD thesis, Dept. of Numerical Analysis and Computer Science, 2004.
  101. Alami, R.; Chatila, R.; Fleury, S.; Ghallab, M.; Ingrand, F. An architecture for autonomy. The International Journal of Robotics Research 1998, 17, 315–337. [CrossRef]
  102. Ugwoke, K.C.; Nnanna, N.A.; Abdullahi, S.E.Y. Simulation-based review of classical, heuristic, and metaheuristic path planning algorithms. Scientific Reports 2025, 15, 12643.
  103. Nalic, D.; Pandurevic, A.; Eichberger, A.; Rogic, B. Design and implementation of a co-simulation framework for testing of automated driving systems. Sustainability 2020, 12, 10476. [CrossRef]
  104. Wang, X.; Qi, X.; Wang, P.; Yang, J. Decision making framework for autonomous vehicles driving behavior in complex scenarios via hierarchical state machine. Autonomous Intelligent Systems 2021, 1, 1–12. [CrossRef]
  105. Godin, A. A simple architecture for modular robots.
  106. Axelsson, M.; Oliveira, R.; Racca, M.; Kyrki, V. Social robot co-design canvases: A participatory design framework. ACM Transactions on Human-Robot Interaction (THRI) 2021, 11, 1–39. [CrossRef]
  107. Hoffman, E.M.; Laurenzi, A.; Tsagarakis, N.G. The Open Stack of Tasks library: OpenSoT: A software dedicated to hierarchical whole-body control of robots subject to constraints. IEEE Robotics & Automation Magazine 2024. [CrossRef]
  108. Ruscelli, F.; Laurenzi, A.; Tsagarakis, N.G.; Mingo Hoffman, E. Horizon: A trajectory optimization framework for robotic systems. Frontiers in Robotics and AI 2022, 9, 899025. [CrossRef]
  109. Sucan, I.A.; Moll, M.; Kavraki, L.E. The open motion planning library. IEEE Robotics & Automation Magazine 2012, 19, 72–82. [CrossRef]
  110. Kunchev, V.; Jain, L.; Ivancevic, V.; Finn, A. Path planning and obstacle avoidance for autonomous mobile robots: A review. In Proceedings of the International Conference on Knowledge-Based and Intelligent Information and Engineering Systems. Springer, 2006, pp. 537–544.
  111. Pham, H.; Smolka, S.A.; Stoller, S.D.; Phan, D.; Yang, J. A survey on unmanned aerial vehicle collision avoidance systems. arXiv preprint arXiv:1508.07723 2015.
  112. Ben-Ari, M.; Mondada, F. Elements of Robotics; Springer Nature, 2017.
  113. Carvalho, M.V.L.d.; et al. A review of ROS-based autonomous driving platforms to carry out automated driving functions, 2022.
  114. Giribet, J.; Mas, I.; Roca, A.; Marzik, G.; Torre, G.; Castro, C.R.G. Base de Datos para Conducción Autónoma: Sensores y Sincronización [A database for autonomous driving: sensors and synchronization]. Apertura (f stop), 1, 1–8.
  115. Yang, P. Efficient particle filter algorithm for ultrasonic sensor-based 2D range-only simultaneous localisation and mapping application. IET Wireless Sensor Systems 2012, 2, 394–401. [CrossRef]
  116. Sgorbissa, A.; Zaccaria, R. Planning and obstacle avoidance in mobile robotics. Robotics and Autonomous Systems 2012, 60, 628–638. [CrossRef]
  117. Fox, D.; Burgard, W.; Thrun, S. The dynamic window approach to collision avoidance. IEEE Robotics & Automation Magazine 1997, 4, 23–33. [CrossRef]
  118. Konolige, K. A gradient method for realtime robot control. In Proceedings of the 2000 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2000). IEEE, 2000, Vol. 1, pp. 639–646.
  119. Nourbakhsh, I.R.; Bobenage, J.; Grange, S.; Lutz, R.; Meyer, R.; Soto, A. An affective mobile robot educator with a full-time job. Artificial Intelligence 1999, 114, 95–124. [CrossRef]
  120. Cechinel, A.K.; et al. Desenvolvimento de um sistema de logística para um robô móvel hospitalar utilizando mapas de grade [Development of a logistics system for a hospital mobile robot using grid maps], 2018.
  121. Alatise, M.B.; Hancke, G.P. A review on challenges of autonomous mobile robot and sensor fusion methods. IEEE Access 2020, 8, 39830–39846. [CrossRef]
  122. Kashyap, A.K.; Parhi, D.R. Multi-objective trajectory planning of humanoid robot using hybrid controller for multi-target problem in complex terrain. Expert Systems with Applications 2021, 179, 115110. [CrossRef]
  123. Magrin, C.E.; Todt, E. Multi-sensor fusion method based on artificial neural network for mobile robot self-localization. In Proceedings of the 2019 Latin American Robotics Symposium (LARS), 2019 Brazilian Symposium on Robotics (SBR) and 2019 Workshop on Robotics in Education (WRE). IEEE, 2019, pp. 138–143.
  124. Raja, P.; Pugazhenthi, S. Optimal path planning of mobile robots: A review. International Journal of Physical Sciences 2012, 7, 1314–1320.
  125. Minguez, J.; Montano, L.; Khatib, O. Reactive collision avoidance for navigation with dynamic constraints. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2002, Vol. 1, pp. 588–594.
  126. Aeberhard, M.; Rauch, S.; Bahram, M.; Tanzmeister, G.; Thomas, J.; Pilat, Y.; Homm, F.; Huber, W.; Kaempchen, N. Experience, results and lessons learned from automated driving on Germany’s highways. IEEE Intelligent Transportation Systems Magazine 2015, 7, 42–57. [CrossRef]
  127. Chen, C.T.; Quinn, R.D.; Ritzmann, R.E. A crash avoidance system based upon the cockroach escape response circuit. In Proceedings of the International Conference on Robotics and Automation. IEEE, 1997, Vol. 3, pp. 2007–2012.
  128. Li, H.; Savkin, A.V. An algorithm for safe navigation of mobile robots by a sensor network in dynamic cluttered industrial environments. Robotics and Computer-Integrated Manufacturing 2018, 54, 65–82. [CrossRef]
  129. Patle, B.; Pandey, A.; Jagadeesh, A.; Parhi, D. Path planning in uncertain environment by using firefly algorithm. Defence Technology 2018, 14, 691–701. [CrossRef]
  130. Mar, M.; Chellapandi, V.P.; Yuan, L.; Wang, Z.; Dietz, E. Advanced Sensor Configurations for High-Speed Autonomous Racing Vehicles. IEEE Journal of Selected Areas in Sensors 2025. [CrossRef]
  131. Buehler, M.; Iagnemma, K.; Singh, S. The 2005 DARPA Grand Challenge: The Great Robot Race; Vol. 36, Springer Science & Business Media, 2007.
  132. Ray, L.; Adolph, A.; Morlock, A.; Walker, B.; Albert, M.; Lever, J.H.; Dibb, J. Autonomous rover for polar science support and remote sensing. In Proceedings of the 2014 IEEE Geoscience and Remote Sensing Symposium. IEEE, 2014, pp. 4101–4104.
  133. Yan, Z.; Li, J.; Wu, Y.; Zhang, G. A Real-Time Path Planning Algorithm for AUV in Unknown Underwater Environment Based on Combining PSO and Waypoint Guidance. Sensors 2019, 19, 20. [CrossRef]
  134. Noh, S. Decision-Making Framework for Autonomous Driving at Road Intersections: Safeguarding Against Collision, Overly Conservative Behavior, and Violation Vehicles. IEEE Transactions on Industrial Electronics 2019, 66, 3275–3286. [CrossRef]
  135. Trautmann, E.; Ray, L.; Lever, J. Development of an autonomous robot for ground penetrating radar surveys of polar ice. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2009, pp. 1685–1690.
  136. Yoon, S.; Lee, D.; Jung, J.; Shim, D.H. Spline-based RRT Using Piecewise Continuous Collision-checking Algorithm for Car-like Vehicles. Journal of Intelligent & Robotic Systems 2018, 90, 537–549. [CrossRef]
  137. Ferguson, D.; Howard, T.M.; Likhachev, M. Motion planning in urban environments: Part ii. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2008, pp. 1070–1076.
  138. Bahram, M.; Ghandeharioun, Z.; Zahn, P.; Baur, M.; Huber, W.; Busch, F. Microscopic traffic simulation based evaluation of highly automated driving on highways. In Proceedings of the 17th International IEEE Conference on Intelligent Transportation Systems (ITSC). IEEE, 2014, pp. 1752–1757.
  139. Williams, G.; Drews, P.; Goldfain, B.; Rehg, J.M.; Theodorou, E.A. Aggressive driving with model predictive path integral control. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016, pp. 1433–1440.
  140. Bresson, G.; Alsayed, Z.; Yu, L.; Glaser, S. Simultaneous localization and mapping: A survey of current trends in autonomous driving. IEEE Transactions on Intelligent Vehicles 2017, 2, 194–220. [CrossRef]
  141. Wang, N.; Li, X.; Zhang, K.; Wang, J.; Xie, D. A survey on path planning for autonomous ground vehicles in unstructured environments. Machines 2024, 12, 31. [CrossRef]
  142. Lee, K.; Lin, W.H.; Javed, T.; Madhusudhan, S.; Sher, B.; Feng, C. Roofus: Learning-based Robotic Moisture Mapping on Flat Rooftops with Ground Penetrating Radar. In Proceedings of the 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2024, pp. 11773–11780.
  143. Rossmadl, A.; Gandorfer, M.; Kopfinger, S.; Busboom, A. Autonomous robotics in agriculture–a preliminary techno-economic evaluation of a mechanical weeding system. In Proceedings of the ISR Europe 2023; 56th International Symposium on Robotics. VDE, 2023, pp. 405–411.
  144. Chakravarthy, A.; Ghose, D. Obstacle avoidance in a dynamic environment: A collision cone approach. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans 1998, 28, 562–574. [CrossRef]
  145. Venkatnarayan, R.H.; Shahzad, M. Enhancing indoor inertial odometry with wifi. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 2019, 3, 1–27. [CrossRef]
  146. Zhang, J.; Lyu, Y.; Patton, J.; Periaswamy, S.C.; Roppel, T. BFVP: A probabilistic UHF RFID tag localization algorithm using Bayesian filter and a variable power RFID model. IEEE Transactions on Industrial Electronics 2018, 65, 8250–8259. [CrossRef]
  147. Hoy, M.; Matveev, A.S.; Savkin, A.V. Algorithms for collision-free navigation of mobile robots in complex cluttered environments: a survey. Robotica 2015, 33, 463–497. [CrossRef]
  148. Almasri, M.M.; Alajlan, A.M.; Elleithy, K.M. Trajectory planning and collision avoidance algorithm for mobile robotics system. IEEE Sensors Journal 2016, 16, 5021–5028. [CrossRef]
  149. Rodríguez, D.A.; Tafur, C.L.; Daza, P.F.M.; Vidales, J.A.V.; Rincón, J.C.D. Inspection of aircrafts and airports using UAS: A review. Results in Engineering 2024, p. 102330. [CrossRef]
  150. Iturrate, I.; Antelis, J.M.; Kubler, A.; Minguez, J. A noninvasive brain-actuated wheelchair based on a P300 neurophysiological protocol and automated navigation. IEEE Transactions on Robotics 2009, 25, 614–627. [CrossRef]
  151. Xiao, H.; Li, Z.; Yang, C.; Yuan, W.; Wang, L. RGB-D sensor-based visual target detection and tracking for an intelligent wheelchair robot in indoors environments. International Journal of Control, Automation and Systems 2015, 13, 521–529. [CrossRef]
  152. Rufli, M.; Ferguson, D.; Siegwart, R. Smooth path planning in constrained environments. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation. IEEE, 2009, pp. 3780–3785.
  153. Schlegel, C. Fast local obstacle avoidance under kinematic and dynamic constraints for a mobile robot. In Proceedings of the 1998 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 1998, Vol. 1, pp. 594–599.
  154. Philippsen, R.; Siegwart, R. Smooth and efficient obstacle avoidance for a tour guide robot. In Proceedings of the 2003 IEEE International Conference on Robotics and Automation, Taipei, Taiwan. IEEE, 2003, Vol. 1, pp. 446–451.
  155. Minguez, J.; Montano, L. Nearness diagram (ND) navigation: collision avoidance in troublesome scenarios. IEEE Transactions on Robotics and Automation 2004, 20, 45–59. [CrossRef]
  156. Cechinel, A.K.; Perez, A.L.F.; Plentz, P.D.; De Pieri, E.R. Autonomous mobile robot using distance and priority as logistics task cost. In Proceedings of the IECON 2020 The 46th Annual Conference of the IEEE Industrial Electronics Society. IEEE, 2020, pp. 569–574.
  157. Schneider, D.G.; Stemmer, M.R. Sistema de localização de um robô móvel baseado em filtro de Kalman estendido para SLAM com Kinect em ambientes internos [Localization system for a mobile robot based on an extended Kalman filter for SLAM with Kinect in indoor environments]. In Proceedings of the Congresso Brasileiro de Automática (CBA), 2019, Vol. 1.
  158. Nievas, M.; Araguás, G.; Paz, C.J. Fusión de mapas mediante encuentros frecuentes para la exploración multirobot [Map merging through frequent encounters for multi-robot exploration].
  159. Rico, F.M.; Hernández, J.M.G.; Pérez-Rodríguez, R.; Peña-Narvaez, J.D.; Gómez-Jacinto, A.G. Open source robot localization for nonplanar environments. Journal of Field Robotics 2024, 41, 1922–1939. [CrossRef]
  160. Sansoni, S.; Gimenez, J.; Castro, G.; Tosetti, S.; Capraro, F. Optimizing exploration with a new uncertainty framework for active SLAM systems. Robotics and Autonomous Systems 2025, 193, 105059. [CrossRef]
  161. Preto, F.Z.; Neto, A.C.; Arronte, C.; Freitas, C.V.G.; Marazia, F.R.; Angélico, B.A. Follow-the-Gap Control for Fast and Safe Autonomous Driving on F1Tenth Virtual Circuit. In Proceedings of the XVII Simpósio Brasileiro de Automação Inteligente (SBAI), Sociedade Brasileira de Automática, Brazil, 2025. In press.
  162. Schneider, D.G.; Stemmer, M.R. CNN-Based Multi-Object Detection and Segmentation in 3D LiDAR Data for Dynamic Industrial Environments. Robotics 2024, 13, 174. [CrossRef]
  163. Udugama, B. Mini Bot 3D: A ROS-based Gazebo simulation. arXiv preprint arXiv:2302.06368 2023.
  164. Bedín, S.; Civera, J.; Nitsche, M. Teach and Repeat con calibración odométrica para robots omnidireccionales con sensor LiDAR [Teach and Repeat with odometric calibration for omnidirectional robots with a LiDAR sensor].
  165. Zhu, K.; Zhang, T. Deep reinforcement learning based mobile robot navigation: A review. Tsinghua Science and Technology 2021, 26, 674–691. [CrossRef]
  166. Mendez, J.; Molina, M.; Rodriguez, N.; Cuellar, M.P.; Morales, D.P. Camera-LiDAR multi-level sensor fusion for target detection at the network edge. Sensors 2021, 21, 3992. [CrossRef]
  167. Schneider, D.G.; da Silva, L.L.; Diehl, P.; Leite, A.H.R.; Bastos, G.S. Robot navigation by gesture recognition with ros and kinect. In Proceedings of the 2015 12th Latin American Robotics Symposium and 2015 3rd Brazilian Symposium on Robotics (LARS-SBR). IEEE, 2015, pp. 145–150.
  168. Ling, Y.; Shen, S. Building maps for autonomous navigation using sparse visual SLAM features. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017, pp. 1374–1381.
  169. Snape, J.; Van Den Berg, J.; Guy, S.J.; Manocha, D. The hybrid reciprocal velocity obstacle. IEEE Transactions on Robotics 2011, 27, 696–706. [CrossRef]
  170. Clemente, E.; Meza-Sánchez, M.; Bugarin, E.; Aguilar-Bustos, A.Y. Adaptive behaviors in autonomous navigation with collision avoidance and bounded velocity of an omnidirectional mobile robot. Journal of Intelligent & Robotic Systems 2018, 92, 359–380. [CrossRef]
  171. Liu, D.; Yang, F.; Liao, X.; Lyu, X. DIABLO: A 6-DoF Wheeled Bipedal Robot Composed Entirely of Direct-Drive Joints. In Proceedings of the 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2024, pp. 3605–3612.
  172. De Cristóforis, P.; Nitsche, M.; Krajník, T.; Pire, T.; Mejail, M. Hybrid vision-based navigation for mobile robots in mixed indoor/outdoor environments. Pattern Recognit. Lett. 2015, 53, 118–128. [CrossRef]
  173. Macenski, S.; Booker, M.; Wallace, J. Open-Source, Cost-Aware Kinematically Feasible Planning for Mobile and Surface Robotics. arXiv preprint arXiv:2401.13078 2024.
  174. Singh, R.; Bera, T.K.; Chatti, N. A real-time obstacle avoidance and path tracking strategy for a mobile robot using machine-learning and vision-based approach. Simulation 2022, 98, 789–805. [CrossRef]
  175. Paul, N.; Chung, C. Application of HDR algorithms to solve direct sunlight problems when autonomous vehicles using machine vision systems are driving into sun. Computers in Industry 2018, 98, 192–196. [CrossRef]
  176. Zhang, X.; Zhu, X. Autonomous path tracking control of intelligent electric vehicles based on lane detection and optimal preview method. Expert Systems with Applications 2019, 121, 38–48. [CrossRef]
  177. Chicaiza, F.A.; Slawinski, E.; Mut, V. Teleoperación de un Manipulador Dual Móvil con Torso [Teleoperation of a mobile dual manipulator with a torso].
  178. Liu, Y.; Li, J.; Li, Z. An indoor navigation control strategy for a brain-actuated mobile robot. In Proceedings of the 2018 3rd International Conference on Advanced Robotics and Mechatronics (ICARM). IEEE, 2018, pp. 13–18.
  179. Gasparetto, A.; Boscariol, P.; Lanzutti, A.; Vidoni, R. Path planning and trajectory planning algorithms: A general overview. Motion and Operation Planning of Robotic Systems: Background and Practical Approaches 2015, pp. 3–27.
  180. Wei, S.; Zefran, M. Smooth path planning and control for mobile robots. In Proceedings of the 2005 IEEE Networking, Sensing and Control. IEEE, 2005, pp. 894–899.
  181. Duan, Y.; Chen, X.; Houthooft, R.; Schulman, J.; Abbeel, P. Benchmarking deep reinforcement learning for continuous control. In Proceedings of the International conference on machine learning. PMLR, 2016, pp. 1329–1338.
  182. Herrmann, T.; Wischnewski, A.; Hermansdorfer, L.; Betz, J.; Lienkamp, M. Real-time adaptive velocity optimization for autonomous electric cars at the limits of handling. IEEE Transactions on Intelligent Vehicles 2020, 6, 665–677. [CrossRef]
  183. Bayer, J.; Čížek, P.; Faigl, J. Autonomous multi-robot exploration with ground vehicles in DARPA Subterranean Challenge finals. Field Robotics 2023, 3, 266–300.
  184. Chung, H.; Ojeda, L.; Borenstein, J. Accurate mobile robot dead-reckoning with a precision-calibrated fiber-optic gyroscope. IEEE Transactions on Robotics and Automation 2001, 17, 80–84.
  185. Schöneberg, E.; Schröder, M.; Görges, D.; Schotten, H.D. Trajectory Planning with Model Predictive Control for Obstacle Avoidance Considering Prediction Uncertainty. arXiv preprint arXiv:2504.19193 2025. [CrossRef]
  186. Borenstein, J.; Koren, Y. Real-time obstacle avoidance for fast mobile robots. IEEE Transactions on Systems, Man, and Cybernetics 1989, 19, 1179–1187. [CrossRef]
  187. Schratter, M.; Zubaca, J.; Mautner-Lassnig, K.; Renzler, T.; Kirchengast, M.; Loigge, S.; Stolz, M.; Watzenig, D. LiDAR-based mapping and localization for autonomous racing. In Proceedings of the ICRA Workshop on Opportunities and Challenges of Autonomous Racing, 2021, pp. 1–6.
  188. Kwok, N.M.; Ha, Q.P.; Huang, S.; Dissanayake, G.; Fang, G. Mobile robot localization and mapping using a Gaussian sum filter. International Journal of Control, Automation and Systems 2007.
  189. Karle, P.; Török, F.; Geisslinger, M.; Lienkamp, M. Mixnet: Physics constrained deep neural motion prediction for autonomous racing. IEEE Access 2023, 11, 85914–85926. [CrossRef]
  190. Liu, Y.; Wang, S.; Xie, Y.; Xiong, T.; Wu, M. A review of sensing technologies for indoor autonomous mobile robots. Sensors 2024, 24, 1222. [CrossRef]
  191. Schubert, S.; Neubert, P.; Garg, S.; Milford, M.; Fischer, T. Visual Place Recognition: A Tutorial [Tutorial]. IEEE Robotics & Automation Magazine 2023, 31, 139–153. [CrossRef]
  192. Schultz, A.C.; Adams, W. Continuous localization using evidence grids. In Proceedings of the 1998 IEEE International Conference on Robotics and Automation. IEEE, 1998, Vol. 4, pp. 2833–2839.
  193. Fang, S.; Li, H. Multi-vehicle cooperative simultaneous LiDAR SLAM and object tracking in dynamic environments. IEEE Transactions on Intelligent Transportation Systems 2024, 25, 11411–11421. [CrossRef]
  194. Atas, F.; Cielniak, G.; Grimstad, L. Elevation state-space: Surfel-based navigation in uneven environments for mobile robots. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2022, pp. 5715–5721.
  195. Missura, M.; Bennewitz, M. Fast footstep planning with Aborting A*. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021, pp. 2964–2970.
  196. Nobis, F.; Betz, J.; Hermansdorfer, L.; Lienkamp, M. Autonomous racing: A comparison of SLAM algorithms for large scale outdoor environments. In Proceedings of the 2019 3rd International Conference on Virtual and Augmented Reality Simulations, 2019, pp. 82–89.
  197. Stolfi, D.H.; Brust, M.R.; Danoy, G.; Bouvry, P. UAV-UGV-UMV multi-swarms for cooperative surveillance. Frontiers in Robotics and AI 2021, 8, 616950. [CrossRef]
  198. Pivtoraiko, M.; Knepper, R.A.; Kelly, A. Differentially constrained mobile robot motion planning in state lattices. Journal of Field Robotics 2009, 26, 308–333. [CrossRef]
  199. Quinlan, S.; Khatib, O. Elastic bands: Connecting path planning and control. In Proceedings of the 1993 IEEE International Conference on Robotics and Automation. IEEE, 1993, pp. 802–807.
  200. Khatib, M.; Jaouni, H.; Chatila, R.; Laumond, J.P. Dynamic path modification for car-like nonholonomic mobile robots. In Proceedings of the International Conference on Robotics and Automation. IEEE, 1997, Vol. 4, pp. 2920–2925.
  201. Karamitsos, G.; Bechtsis, D.; Tsolakis, N.; Vlachos, D. Assessing Path Planning Algorithms of Mobile Robots: A ROS-Based Simulation Framework. In Disruptive Technologies and Optimization Towards Industry 4.0 Logistics; Springer, 2024; pp. 139–160.
  202. Kunz Cechinel, A.; De Pieri, E.R. Centralized multi-robot logistic system: An approach using the island model genetic algorithm as task scheduler. International Journal of Advanced Robotic Systems 2024, 21, 17298806241279595.
  203. Atas, F.; Cielniak, G.; Grimstad, L. Benchmark of sampling-based optimizing planners for outdoor robot navigation. In Proceedings of the International Conference on Intelligent Autonomous Systems. Springer, 2022, pp. 231–243.
  204. De Carvalho, M.V.; Simoni, R.; Yoshioka, L.R.; FJ Filho, J.; Kawakami, B.M. A Performance Evaluation of Open Source Autonomous Driving Frameworks: Case Studies of Apollo and Autoware. IEEE Access 2025.
  205. Kamon, I.; Rivlin, E.; Rimon, E. A new range-sensor based globally convergent navigation algorithm for mobile robots. In Proceedings of the IEEE International Conference on Robotics and Automation. IEEE, 1996, Vol. 1, pp. 429–435.
  206. Lumelsky, V.J.; Skewis, T. Incorporating range sensing in the robot navigation function. IEEE Transactions on Systems, Man, and Cybernetics 1990, 20, 1058–1069. [CrossRef]
  207. Lumelsky, V.J.; Stepanov, A.A. Path-planning strategies for a point mobile automaton moving amidst unknown obstacles of arbitrary shape. Algorithmica 1987, 2, 403–430. [CrossRef]
  208. Simmons, R. The curvature-velocity method for local obstacle avoidance. In Proceedings of the IEEE International Conference on Robotics and Automation. IEEE, 1996, Vol. 4, pp. 3375–3382.
  209. Brock, O.; Khatib, O. High-speed navigation using the global dynamic window approach. In Proceedings of the 1999 IEEE International Conference on Robotics and Automation. IEEE, 1999, Vol. 1, pp. 341–346.
  210. Medina-Santiago, A.; Morales-Rosales, L.A.; Hernández-Gracidas, C.A.; Algredo-Badillo, I.; Pano-Azucena, A.D.; Orozco Torres, J.A. Reactive obstacle–avoidance systems for wheeled mobile robots based on artificial intelligence. Applied Sciences 2021, 11, 6468. [CrossRef]
  211. Ulrich, I.; Borenstein, J. VFH+: Reliable obstacle avoidance for fast mobile robots. In Proceedings of the 1998 IEEE International Conference on Robotics and Automation. IEEE, 1998, Vol. 2, pp. 1572–1577.
  212. Ulrich, I.; Borenstein, J. VFH*: Local obstacle avoidance with look-ahead verification. In Proceedings of the 2000 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2000, Vol. 3, pp. 2505–2511.
  213. Rostami, S.M.H.; Sangaiah, A.K.; Wang, J.; Kim, H.j. Real-time obstacle avoidance of mobile robots using state-dependent Riccati equation approach. EURASIP Journal on Image and Video Processing 2018, 2018, 79. [CrossRef]
  214. Borrero, G.H.; Becker, M.; Archila, J.F.; Bonito, R. Fuzzy control strategy for the adjustment of the front steering angle of a 4WSD agricultural mobile robot. In Proceedings of the 2012 7th Colombian Computing Congress (CCC). IEEE, 2012, pp. 1–6.
  215. Lafmejani, A.S.; Farivarnejad, H.; Berman, S. H∞-optimal tracking controller for three-wheeled omnidirectional mobile robots with uncertain dynamics. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020, pp. 7587–7594.
  216. Gonzalez, R.; Fiacchini, M.; Alamo, T.; Guzman, J.L.; Rodriguez, F. Adaptive control for a mobile robot under slip conditions using an LMI-based approach. European Journal of Control 2010, 16, 144–155. [CrossRef]
  217. Zhu, Y.; Huang, Y.; Su, J.; Pu, C. Active disturbance rejection control for wheeled mobile robots with parametric uncertainties. IFAC-PapersOnLine 2020, 53, 1355–1360. [CrossRef]
  218. Schena, F. Development of an automated benchmark for the analysis of Nav2 controllers. PhD thesis, Politecnico di Torino, 2024.
  219. Kabanov, A. Feedback linearized trajectory-tracking control of a mobile robot. In Proceedings of the MATEC Web of Conferences. EDP Sciences, 2017, Vol. 129, p. 03029. [CrossRef]
  220. Zhang, K.; Chai, B.; Tan, M. Optimal enhanced backstepping method for trajectory tracking control of the wheeled mobile robot. Optimal Control Applications and Methods 2024, 45, 2762–2790. [CrossRef]
  221. Tassa, Y.; Erez, T.; Todorov, E. Synthesis and stabilization of complex behaviors through online trajectory optimization. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2012, pp. 4906–4913.
  222. Ibrahim, A.E.S.B. Wheeled Mobile Robot Trajectory Tracking using Sliding Mode Control. Journal of Computer Science 2016, 12, 48–55. [CrossRef]
  223. Zhang, J.; Li, S.; Meng, H.; Li, Z.; Sun, Z. Variable gain based composite trajectory tracking control for 4-wheel skid-steering mobile robots with unknown disturbances. Control Engineering Practice 2023, 132, 105428. [CrossRef]
  224. Li, N.; Borja, P.; Scherpen, J.M.; Van Der Schaft, A.; Mahony, R. Passivity-based trajectory tracking and formation control of nonholonomic wheeled robots without velocity measurements. IEEE Transactions on Automatic Control 2023, 68, 7951–7957. [CrossRef]
  225. Macenski, S.; Singh, S.; Martin, F.; Gines, J. Regulated Pure Pursuit for Robot Path Tracking. Autonomous Robots 2023. [CrossRef]
  226. Chen, Y.; Zhu, J.J. Pure Pursuit Guidance for Car-Like Ground Vehicle Trajectory Tracking. In Proceedings of the ASME 2017 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2017, p. V002T21A015.
  227. Sidiropoulos, A.; Konstantinidis, D.; Karamanos, X.; Mastos, T.; Apostolou, K.; Chatzis, T.; Papaspyropoulou, M.; Marini, K.; Karamitsos, G.; Theodoridou, C.; et al. A Novel Autonomous Robotic Vehicle-Based System for Real-Time Production and Safety Control in Industrial Environments. Computers 2025, 14, 188. [CrossRef]
  228. Schubert, S.; Neubert, P.; Garg, S.; Milford, M.; Fischer, T. Visual Place Recognition: A Tutorial. IEEE Robotics & Automation Magazine 2024, 30, 139–154. [CrossRef]
  229. Irshad, M.Z.; Comi, M.; Lin, Y.C.; Heppert, N.; Valada, A.; Ambrus, R.; Kira, Z.; Tremblay, J. Neural Fields in Robotics: A Survey. arXiv preprint arXiv:2410.20220 2024.
  230. Luo, R.C.; Chang, C.C. Multisensor fusion and integration: A review on approaches and its applications in mechatronics. IEEE Transactions on Industrial Informatics 2011, 8, 49–60. [CrossRef]
  231. Ruan, S.; Wang, R.; Shen, X.; Liu, H.; Xiao, B.; Shi, J.; Zhang, K.; Huang, Z.; Liu, Y.; Chen, E.; et al. A Survey of Multi-sensor Fusion Perception for Embodied AI: Background, Methods, Challenges and Prospects. arXiv preprint arXiv:2506.19769 2025.
  232. Panduru, K.; Walsh, J.; et al. Exploring the Unseen: A Survey of Multi-Sensor Fusion and the Role of Explainable AI (XAI) in Autonomous Vehicles. Sensors (Basel, Switzerland) 2025, 25, 856. [CrossRef]
  233. Raychaudhuri, S.; Chang, A.X. Semantic Mapping in Indoor Embodied AI–A Comprehensive Survey and Future Directions. arXiv preprint arXiv:2501.05750 2025.
  234. Karur, K.; Sharma, N.; Dharmatti, C.; Siegel, J.E. A Survey of Path Planning Algorithms for Mobile Robots. Vehicles 2021, 3, 448–468. [CrossRef]
  235. Qin, H.; Shao, S.; Wang, T.; Yu, X.; Jiang, Y.; Cao, Z. Path planning for mobile robots using bacterial algorithms. Drones 2023, 7, 211.
  236. Yang, G.; An, L.; Zhao, C. Collision/Obstacle Avoidance Coordination of Multi-Robot Systems: A Survey. Actuators 2025, 14, 85. [CrossRef]
  237. Lauri, M.; Hsu, D.; Pajarinen, J. Partially Observable Markov Decision Processes in Robotics: A Survey. IEEE Transactions on Robotics 2022, 39, 21–40. [CrossRef]
  238. Iovino, M.; Scukins, E.; Styrud, J.; Ögren, P.; Smith, C. A survey of behavior trees in robotics and ai. Robotics and Autonomous Systems 2022, 154, 104096. [CrossRef]
  239. Xiao, X.; Liu, B.; Warnell, G.; Stone, P. A Survey of Motion Control for Mobile Robot Navigation Using Machine Learning. In Proceedings of the AAAI Spring Symposium Series, 2021.
  240. Rybczak, M.; Popowniak, N.; Lazarowska, A. A Survey of Machine Learning Approaches for Mobile Robot Control. Robotics 2024, 13, 12. [CrossRef]
  241. Nascimento, T.P.; Dórea, C.E.; Gonçalves, L.M.G. Nonholonomic mobile robots’ trajectory tracking model predictive control: a survey. Robotica 2018, 36, 676–696. [CrossRef]
  242. Dal’Col, L.; Oliveira, M.; Santos, V. Joint Perception and Prediction for Autonomous Driving: A Survey. IEEE Transactions on Intelligent Transportation Systems 2024. arXiv:2412.14088. [CrossRef]
  243. Luo, J.; Zhou, X.; Zeng, C.; Jiang, Y.; Qi, W.; Xiang, K.; Pang, M.; Tang, B. Robotics perception and control: Key technologies and applications. Micromachines 2024, 15, 531. [CrossRef]
  244. Guo, H.; Wu, F.; Qin, Y.; Li, R.; Li, K.; Li, K. Recent trends in task and motion planning for robotics: A survey. ACM Computing Surveys 2023, 55, 1–36. [CrossRef]
  245. Golroudbari, A.A.; Sabour, M.H. Recent advancements in deep learning applications and methods for autonomous navigation: A comprehensive review. arXiv preprint arXiv:2302.11089 2023.
  246. Jeon, H.; Park, K.; Sun, J.Y.; Kim, H.Y. Particle-armored liquid robots. Science Advances 2025, 11, eadt5888. [CrossRef]
  247. Zhai, Y.; Yan, J.; De Boer, A.; Faber, M.; Gupta, R.; Tolley, M.T. Monolithic Desktop Digital Fabrication of Autonomous Walking Robots. Advanced Intelligent Systems 2025, p. 2400876.
  248. Tkachenko, E.; Merkulov, D.; Pelevina, D.; Turkov, V.; Vinogradova, A.; Naletova, V. Mathematical model of a mobile robot with a magnetizable material in a uniform alternating magnetic field. Meccanica 2023, 58, 357–369. [CrossRef]
  249. Ramirez, J.P.; Hamaza, S. Multimodal locomotion: next generation aerial–terrestrial mobile robotics. Advanced Intelligent Systems 2023, p. 2300327. [CrossRef]
  250. Low, K.; Hu, T.; Mohammed, S.; Tangorra, J.; Kovac, M. Perspectives on biologically inspired hybrid and multi-modal locomotion. Bioinspiration & biomimetics 2015, 10, 020301. [CrossRef]
  251. Swaminathan, N.; Reddy, S.R.P.; RajaShekara, K.; Haran, K.S. Flying cars and eVTOLs—Technology advancements, powertrain architectures, and design. IEEE Transactions on Transportation Electrification 2022, 8, 4105–4117. [CrossRef]
  252. Daler, L.; Mintchev, S.; Stefanini, C.; Floreano, D. A bioinspired multi-modal flying and walking robot. Bioinspiration & biomimetics 2015, 10, 016005. [CrossRef]
  253. Zhang, R.; Wu, Y.; Zhang, L.; Xu, C.; Gao, F. Tie: An autonomous and adaptive terrestrial-aerial quadrotor. arXiv preprint arXiv:2109.04706 2021.
  254. Fagundes-Júnior, L.A.; Barcelos, C.O.; Silvatti, A.P.; Brandão, A.S. UAV–UGV Formation for Delivery Missions: A Practical Case Study. Drones 2025, 9, 48. [CrossRef]
  255. Alzu’bi, H.; Mansour, I.; Rawashdeh, O. Loon copter: Implementation of a hybrid unmanned aquatic–aerial quadcopter with active buoyancy control. Journal of field Robotics 2018, 35, 764–778. [CrossRef]
  256. Chernousko, F. Locomotion principles for mobile robotic systems. Procedia Computer Science 2017, 103, 613–617. [CrossRef]
  257. Shamsuddoha, M.; Nasir, T.; Fawaaz, M.S. Humanoid Robots like Tesla Optimus and the Future of Supply Chains: Enhancing Efficiency, Sustainability, and Workforce Dynamics. Automation 2025, 6, 9. [CrossRef]
  258. Kawasaki Heavy Industries. Showcasing CORLEO – A New Type of Futuristic, Off-Road Personal Mobility Vehicle. https://global.kawasaki.com/en/corp/newsroom/news/detail/?f=20250403_9193, 2025. Press release, accessed on 5 July 2025.
  259. Tang, Y.; Zhao, C.; Wang, J.; Zhang, C.; Sun, Q.; Zheng, W.X.; Du, W.; Qian, F.; Kurths, J. Perception and navigation in autonomous systems in the era of learning: A survey. IEEE Transactions on Neural Networks and Learning Systems 2022, 34, 9604–9624. [CrossRef]
  260. Musa, M.A.; Mashori, S. Solar Powered Autonomous RC Robot. Progress in Engineering Application and Technology 2023, 4, 133–144.
  261. Cha, Y.; Hong, S. Energy harvesting from walking motion of a humanoid robot using a piezoelectric composite. Smart Materials and Structures 2016, 25, 10LT01. [CrossRef]
  262. Seo, J.; Paik, J.; Yim, M. Modular reconfigurable robotics. Annual Review of Control, Robotics, and Autonomous Systems 2019, 2, 63–88.
  263. Shorinwa, O.; Halsted, T.; Yu, J.; Schwager, M. Distributed optimization methods for multi-robot systems: Part 1—a tutorial [tutorial]. IEEE Robotics & Automation Magazine 2024, 31, 121–138. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.