Preprint
Article

This version is not peer-reviewed. A peer-reviewed article of this preprint also exists.

Data Fusion Applied to the LBBA Algorithm to Improve the Localization of Mobile Robots

Submitted: 25 November 2024
Posted: 27 November 2024

Abstract
Bioinspired optimization algorithms derive their effectiveness from the number of particles used to find the global minimum or maximum of a problem. However, as the number of potential solutions increases, the computational effort also increases due to the exponential expansion of the calculations required to evaluate the objective functions of each particle throughout the algorithms. Another fundamental aspect to be considered is the algorithm's ability to avoid local optima efficiently and not compromise the results, leading to failure to achieve an ideal or acceptable outcome. This article discusses a solution to this challenge in the context of localizing mobile robots in known environments, proposing sensory data fusion applied to the Leader-Based Bat Algorithm (LBBA). This new approach exploits the algorithm's ability to work with leader bats, whose function is to influence specific groups (colonies), employing information from a sensor (compass) to assist in the distribution of particles (bats) on the map during the search process. The idea is to allow precise localization of Unmanned Ground Vehicles (UGV) on known maps when a motion capture system is unavailable, allowing such robots to be controlled based on good position feedback.

1. Introduction

The use of autonomous robots, particularly Unmanned Ground Vehicles (UGVs), has dramatically accelerated in recent years, generating new navigation and control challenges. Among them, achieving accurate localization remains a cornerstone for successful autonomous operations across various environments [1,2]. High-resolution location data, typically provided by motion capture systems, is crucial for controlling the robot's movement and is unavailable in many situations. Cheaper solutions use other advanced sensing technologies, such as Light Detection and Ranging (LIDAR). However, the computational demands of processing high volumes of sensor data often exceed the capabilities of mobile robotic platforms, especially in resource-constrained settings [3,4].
Significant strides have been made in robot control at the Laboratory of Intelligent Autonomous Robots (LAB-AIR), associated with the Department of Electrical Engineering of the Federal University of Espírito Santo (UFES), Brazil. The laboratory's infrastructure, based on the OptiTrack motion capture system, supports precise experiments. Nonetheless, the impossibility of using such systems in external environments underscores the urgent need for alternative localization methods that are both cost-effective and robust [5,6,7].
This paper addresses these challenges by developing and validating a novel Leader-Based Bat Algorithm (LBBA) [8] for mobile robot localization. The idea underlying such a proposal is to use sensory data fusion to enhance the accuracy and robustness of location estimates. Moreover, the proposed approach capitalizes on the inherent strengths of bio-inspired optimization algorithms — specifically, the bat algorithm’s ability to navigate complex landscapes using echolocation-like mechanisms. By incorporating readings from a digital compass, the LBBA optimizes the distribution of computational resources, thereby reducing the complexity of the localization task and improving operational efficiency in known environments.
The experiments aim to locate a ground robot controlled to navigate on a pre-existing map [4,7,9,10], optimizing performance through sensory data fusion. Therefore, this article contributes significantly to applied research in mobile robot localization, offering a viable alternative for environments where advanced and costly motion capture systems are unavailable. The suggested methodology opens new avenues for autonomous exploration in outdoor environments, thus being accessible to a broader range of users.
Summarizing, the contribution of this work is twofold: first, to introduce a modified bat algorithm incorporating leader dynamics to guide the swarm towards optimal solutions, effectively minimizing the localization error in complex environments; and second, to demonstrate the practicality of the proposed approach through a series of experiments that not only validate the algorithm under controlled laboratory conditions but also explore its applicability in realistic scenarios.
To address such topics, the remainder of this paper is organized as follows: Section 2 reviews related works and situates our contributions within the state of the art; Section 3 presents the problem formulation and the theoretical underpinnings of our approach; Section 4 presents the experimental setup and results, comparing the performance of our algorithm against that of some traditional methods; and Section 5 summarizes the main findings and some potential directions for future research.

2. Related Works

Mobile robot localization has experienced extensive developments driven by the escalating demand for more accurate and autonomous systems in different environments. As technologies advance, integrating various sensors and algorithms becomes crucial to enhance the accuracy and reliability of localization systems. In [1], for instance, the challenges and solutions related to integrating manipulators on autonomous robots, including Unmanned Aerial Vehicles (UAVs), Unmanned Underwater Vehicles (UUVs), and UGVs, are discussed. Such work highlights how additional mechanisms like manipulators can affect performance through increased weight, slow convergence, and errors in path planning, offering a detailed analysis of these robots' dynamic models and performance limitations.
The seminal book entitled Probabilistic Robotics [11] introduces various algorithms and theoretical underpinnings for stochastic systems in robotics. It provides a comprehensive foundation for probabilistic approaches in robotics, emphasizing robustness and accuracy in the face of uncertainty, making it a cornerstone reference in the field. Following this line, the Bat Algorithm (BA), a new metaheuristic based on the echolocation behavior of bats, was proposed in [12,13]. It combines the advantages of existing algorithms, such as particle swarm optimization and harmony search. Through detailed formulation and implementation explanations, the author demonstrates that BA offers improved performance compared to traditional methods like genetic algorithms and particle swarm optimization, thus providing a promising tool for solving complex optimization problems.
In addition, recent advances have focused on mixing multiple sensory inputs to enhance localization precision. Techniques such as Simultaneous Localization and Mapping (SLAM) [14,15] have become fundamental in this area, allowing robots to construct a map of an unknown environment while simultaneously keeping track of their location within that map. These techniques are particularly beneficial in GPS-denied environments, such as indoor or dense urban areas, where traditional GPS signals are unreliable.
Algorithms based on Sequential Monte Carlo Optimization [16], commonly called Particle Filters, are more robust than Kalman Filters because they maintain the full posterior probability distribution of the problem, which can accommodate multiple hypotheses and non-Gaussian shapes. Since robot localization often involves multiple hypotheses, bio-inspired algorithms built on the same sampling principle are particularly suitable for localization objectives. Specifically, the Bat Algorithm (BA) demonstrates rapid convergence to solutions [13], and its variants, such as the Leader-Based Bat Algorithm (LBBA) [8], offer enhanced performance and the possibility of data fusion.
Integrating data fusion techniques with bio-inspired algorithms represents a promising direction in robotics research. By combining the strengths of each approach, namely robust statistical foundations and adaptive, nature-inspired mechanisms, researchers aim to develop localization systems that provide high accuracy, are computationally efficient, and also adaptable to environmental changes. This work goes further on these foundations, proposing an LBBA version incorporating digital compass data to optimize the localization process. This innovative approach reduces computational overhead and accelerates convergence by leveraging leader dynamics that guide the swarm toward the most probable solutions, thus enhancing accuracy.

3. Problem Formulation

The problem of mobile robot localization involves determining a robot’s precise position and orientation within a given environment. This is critical for ensuring effective navigation and task execution in known and unknown settings. Indeed, the localization problem can be formally defined within the state estimation framework, where the goal is to estimate the robot’s state vector based on noisy sensor measurements.
This work focuses on the localization of a unicycle-type robot, also called a differential robot. Due to its simplicity and effectiveness in various applications, this robot is likely the most common ground robot. The objective is to estimate the robot’s pose using measurements from a LIDAR sensor and a digital compass.

3.1. Uncertainty and Sensor Fusion

Localization involves dealing with uncertainties, often of different natures, and with distinct probability density functions. These uncertainties affect all aspects of the experiments, from sensor measurements to the robot’s displacement when executing control laws. They also affect the environment with which the robot interacts, deviating it from its original path and changing the environmental configuration with static and dynamic obstacles. To address these uncertainties, we employ a probabilistic approach that models the robot’s state as a probability distribution. The key challenge is updating this distribution based on new sensor measurements. Specifically, we use a particle filter approach, where a set of particles represents the posterior distribution of the state, each with an associated weight. These particles are propagated through the state space according to the robot’s motion model and updated using sensor measurements.
The state estimation process can be divided into prediction and update steps. The prediction step uses the robot's motion model to propagate each particle to a new position, which corresponds to taking

$$x_{t+1}^i = f(x_t^i, u_t) + w_t^i,$$

where $x_t^i$ is the state of particle $i$ at time $t$, $f(x_t^i, u_t)$ is the state transition function, $u_t$ represents the control inputs, and $w_t^i$ is the process noise for particle $i$, typically modeled as Gaussian with covariance $Q$ due to the Central Limit Theorem (CLT).
In the update step, sensor measurements are used to update the weights of each particle. The measurement model is given by

$$z_t = h(x_t) + v_t,$$

where $h(x_t)$ is the measurement function and $v_t$ is the measurement noise, typically Gaussian with covariance $R$, also due to the CLT.
The weight of each particle is updated based on the likelihood of the measurement given the particle's state, that is,

$$w_t^i = w_{t-1}^i \cdot p(z_t \mid x_t^i),$$

where $w_t^i$ is the weight of particle $i$ at time $t$ and $p(z_t \mid x_t^i)$ is the likelihood of the measurement $z_t$ given the state $x_t^i$ of the particle.
The likelihood is typically modeled as a Gaussian probability density function based on the sensor measurements, reflecting the discrepancy between the actual sensor reading and the predicted measurement from the particle’s state. The weight w t i represents the degree of importance of particle i. A higher weight means that the sensor measurements from the particle’s state closely match the actual sensor measurements, suggesting that the particle’s state is similar to the robot’s true state. Therefore, particles with higher weights are more likely to be in locations similar to that of the robot within the environment.
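The prediction and weight-update steps can be sketched as follows. This is a minimal one-dimensional illustration assuming additive motion and direct observation models ($f(x,u) = x + u$, $h(x) = x$); it is not the exact implementation used in this work:

```cpp
#include <cassert>
#include <cmath>
#include <random>
#include <vector>

const double kPi = 3.14159265358979323846;

// One particle: a candidate 1-D state with an importance weight.
struct Particle {
    double x;  // hypothesized state (e.g., position along one axis)
    double w;  // importance weight
};

// Gaussian probability density, used as the measurement likelihood p(z | x).
double gaussian_pdf(double z, double mean, double var) {
    return std::exp(-0.5 * (z - mean) * (z - mean) / var)
           / std::sqrt(2.0 * kPi * var);
}

// Prediction step: propagate each particle through a simple motion model
// f(x, u) = x + u, adding Gaussian process noise with variance q.
void predict(std::vector<Particle>& ps, double u, double q, std::mt19937& rng) {
    std::normal_distribution<double> noise(0.0, std::sqrt(q));
    for (auto& p : ps) p.x += u + noise(rng);
}

// Update step: reweight each particle by the likelihood of measurement z,
// assuming a direct observation model h(x) = x with noise variance r,
// then normalize so the weights sum to one.
void update(std::vector<Particle>& ps, double z, double r) {
    double sum = 0.0;
    for (auto& p : ps) { p.w *= gaussian_pdf(z, p.x, r); sum += p.w; }
    for (auto& p : ps) p.w /= sum;
}
```

A particle whose predicted measurement is close to the actual reading ends up with a higher normalized weight, mirroring the intuition described above.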
To model the uncertainties in sensor measurements, we use standard probabilistic formulations. The robot's position and orientation uncertainty are modeled through error covariances. The likelihood function for sensor data fusion is defined as

$$p(z \mid x) = \mathcal{N}(z;\, h(x),\, R),$$

where $z$ is the measurement vector, $h(x)$ is the measurement function relating the state $x$ to the measurements $z$, and $R$ is the sensor noise covariance matrix.
The prediction and correction steps of the particle filter are updated as new measurements are received, adjusting the probability distribution of the robot's state according to

$$p(x_{t+1} \mid z_{1:t}) = \int p(x_{t+1} \mid x_t)\, p(x_t \mid z_{1:t})\, dx_t,$$

with

$$p(x_{t+1} \mid z_{1:t+1}) \propto p(z_{t+1} \mid x_{t+1})\, p(x_{t+1} \mid z_{1:t}),$$

where $p(x_{t+1} \mid x_t)$ is the state transition probability and $p(z_{t+1} \mid x_{t+1})$ is the measurement likelihood.
The posterior distribution of the state $p(x_t \mid z_{1:t})$ is approximated by the weighted set of particles, or

$$p(x_t \mid z_{1:t}) \approx \sum_{i=1}^{N} w_t^i\, \delta(x_t - x_t^i),$$

where $N$ is the number of particles and $\delta$ is the Dirac delta function. While the individual likelihoods $p(z_t \mid x_t^i)$ are typically modeled as Gaussians due to the Central Limit Theorem, the combination of these likelihoods in the particle filter framework does not necessarily result in a Gaussian posterior. Instead, this approach approximates the true posterior distribution of the robot's state, which can have any shape, thus capturing the full posterior and allowing for multiple hypotheses and non-Gaussian distributions.
Unlike Bayesian processes that often assume a Gaussian posterior, the final posterior distribution in Monte Carlo methods, such as the Bat Algorithm, is not necessarily Gaussian. One of the main advantages of using Monte Carlo-based methods is their ability to approximate the full posterior distribution, which can have arbitrary shapes and accommodate multiple hypotheses. This flexibility allows for a more accurate representation of the uncertainties and the robot state in complex, dynamic environments.
Resampling is performed to prevent particle depletion and ensure that particles with higher weights are more likely to be selected for the next iteration. This step is critical, aiming to maintain a diverse and representative set of particles, which helps to track the posterior distribution over time accurately.
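A common way to implement this resampling step is low-variance (systematic) resampling; the sketch below is one standard formulation, not necessarily the exact variant used in this work:

```cpp
#include <cassert>
#include <random>
#include <vector>

// Low-variance (systematic) resampling: draws a single random offset and
// walks through the cumulative weights, so high-weight particles are
// duplicated while low-weight particles tend to be dropped.
// `weights` must be normalized (sum to 1); returns the selected indices.
std::vector<int> systematic_resample(const std::vector<double>& weights,
                                     std::mt19937& rng) {
    const int n = static_cast<int>(weights.size());
    std::uniform_real_distribution<double> u01(0.0, 1.0 / n);
    const double start = u01(rng);

    std::vector<int> indices(n);
    double cumulative = weights[0];
    int i = 0;
    for (int j = 0; j < n; ++j) {
        const double pointer = start + static_cast<double>(j) / n;
        while (pointer > cumulative && i < n - 1) cumulative += weights[++i];
        indices[j] = i;
    }
    return indices;
}
```

Because the pointers are evenly spaced, this variant introduces less Monte Carlo variance than drawing $N$ independent samples, which helps keep the particle set diverse.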

3.2. Leader-Based Bat Algorithm (LBBA)

Bats’ echolocation behavior inspires the Leader-Based Bat Algorithm (LBBA). Virtual bats explore the search space to find the best solutions. Each bat represents a candidate solution for the robot’s pose. The algorithm iterates through a series of movements and updates to refine these solutions.
Key components of the LBBA include:
  • Initialization: The algorithm starts with a population of bats, each one with a random initial position and velocity.
  • Leader Selection: A subset of bats is designated leaders based on their fitness values. Leaders influence the movements of other bats, guiding the swarm towards promising areas of the search space.
  • Movement Update: Bats update their positions and velocities based on the current best solutions and the influence of the leaders. The updating equations are
    $$f_i = f_{\min} + (f_{\max} - f_{\min})\,\beta,$$
    $$v_i^t = v_i^{t-1} + (x_i^{t-1} - x^*)\, f_i,$$
    and
    $$x_i^t = x_i^{t-1} + v_i^t,$$
    where $v_i^t$ and $x_i^t$ are the velocity and position of bat $i$ at time $t$, $x^*$ is the current best solution, $f_i$ is the frequency factor, and $\beta$ is a random vector drawn from a uniform distribution in $[0, 1]$.
  • Evaluation: Each bat’s position is evaluated using a fitness function that measures the discrepancy between the estimated pose and the actual sensor measurements.
  • Termination: The algorithm iterates until a stopping criterion is met, such as a maximum number of iterations or a convergence threshold.
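The movement-update step above can be sketched as follows. This minimal 2-D version covers only the frequency, velocity, and position updates; leader influence, loudness, and pulse-rate handling are omitted:

```cpp
#include <cassert>
#include <random>
#include <vector>

// A virtual bat: a 2-D candidate pose estimate with a velocity.
struct Bat {
    double x[2];  // position
    double v[2];  // velocity
};

// One movement update of the bat algorithm:
//   f_i   = f_min + (f_max - f_min) * beta,  beta ~ U(0, 1)
//   v_i^t = v_i^{t-1} + (x_i^{t-1} - x*) * f_i
//   x_i^t = x_i^{t-1} + v_i^t
void move_bats(std::vector<Bat>& bats, const double best[2],
               double f_min, double f_max, std::mt19937& rng) {
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    for (auto& b : bats) {
        const double f = f_min + (f_max - f_min) * u01(rng);  // frequency f_i
        for (int d = 0; d < 2; ++d) {
            b.v[d] += (b.x[d] - best[d]) * f;
            b.x[d] += b.v[d];
        }
    }
}
```

The random frequency factor varies the step size per bat, which is what lets the swarm balance exploration against convergence toward the best solution.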
Integrating the LBBA with sensor fusion techniques enables the efficient combination of LIDAR and compass data, for instance, to improve localization accuracy and robustness. The focus of this work is precisely the fusion of data provided by the LIDAR sensor with the robot orientation provided by a digital compass, and the results shown ahead validate this proposal.
LBBA is a non-parametric filter with a finite number of individuals, each corresponding to approximately one region in the search space. Non-parametric filters are well-adapted to represent complex multi-modal beliefs, such as in the case of mobile robot localization. Similar to other bio-inspired algorithms, the Leader-Based Bat Algorithm (LBBA) represents the a priori state of individuals by a set of random samples. This characteristic enables the simultaneous representation of multiple optimal localizations and allows the algorithm to explore a much more extensive space of distributions than Gaussian models [11].
The optimization process through the standard Bat Algorithm (BA) uses both the best global solution $g$ and the best individual solution $x$. The reason for employing $x$ is to increase diversity in the search for the globally optimal solution [17].
Bats can be territorial, especially regarding their resting places or shelters. This territoriality can vary significantly depending on the bat species. Some species tend to be more solitary and defend their territories against other bats, while others are quite social and live in large colonies, showing less individual territorial behavior.
Fruit bats, for instance, may defend feeding territories rich in resources, such as fruit trees, against other bats to ensure a constant food supply. On the other hand, bats living in large colonies, like those inhabiting caves, may exhibit more cooperative behavior within their colonies, although still showing territoriality against bats from rival colonies or other species.
Territoriality in bats can also be observed in the defense of resting places against intruders, in competition for mates during the mating season, and in the protection of offspring. The methods used to defend territories can include vocalizations, aggressive physical behavior, and scent marking. Considering that the leadership proposal characterizes the LBBA algorithm, creating a metric to represent scent marking (the leader’s area of influence) on the map becomes crucial.
To address this issue, the relationship between the total area of the map and the area of interest that the actual robot would cover was established. The proposal consists of finding the number of mobile bases necessary to cover the entire map area where the localization mission will be carried out, since this footprint is the part of the robot that effectively contributes to covering the map.
Thus, in the laboratory tests reported ahead, the total area of the map was defined as 6 m², and the area of interest covered by each robot was adopted as 0.1519 m², the footprint of the P3-DX robot used in the experiments reported here. The number of robots needed to cover the map is then determined by dividing the total map area $A_{map}$ by the area of interest $A_{robot}$ covered by the robotic structure, which means

$$N_{leaders} = \frac{A_{map}}{A_{robot}}.$$
The result is rounded to the nearest integer greater than or equal to the calculated value. In the proposed test scenario, nine leaders would be necessary to achieve complete and uniform coverage of the map area. This parameter is not linked to the quantity of the bat population since this constitutes the operational sphere of swarm algorithms. Thus, it significantly contributes to improving the LBBA algorithm, considering that the number of bat leaders is a constant that seeks to be related to the bio-inspired characteristics of bats. It also reduces the number of parameters to be adjusted in a non-empirical manner.
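As a sketch, the leader count described above can be obtained with a ceiling division. The areas below are illustrative placeholders, not the values used in the laboratory tests:

```cpp
#include <cassert>
#include <cmath>

// Number of leader bats: the ratio between the total map area and the
// footprint ("area of interest") covered by one robot, rounded up to the
// nearest integer so the whole map is covered.
int leader_count(double map_area_m2, double robot_area_m2) {
    return static_cast<int>(std::ceil(map_area_m2 / robot_area_m2));
}
```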
The proposal to use several leaders stimulated the development of different colonies simultaneously. This aims to decrease flight randomness in the search for the global optimum, making it more efficient to scan the solution space.
Initially, the LBBA establishes the number of bats, $N$, used in the optimization process, following

$$M = \{\, n_i = (x_i, v_i) \mid i \in \mathbb{N},\; 0 < i \leq N \,\},$$

where $M$ represents the set of individuals $n_i$, each one having two characterizing parameters, the position $x_i$ and the speed $v_i$.
Among the $N$ bats, some are chosen as leaders during the algorithm execution. The number of leaders is defined according to the complexity of the environment. Bats fly randomly with a velocity $v_i$ at a certain position $x_i$, varying the loudness $A_i$ to search for prey and the rate $r_i$ of their emitted pulses in a range of $[0, 1]$, depending on the proximity of the target. The loudness is assumed to vary from a large positive $A_0$ to a minimum constant value $A_{min}$. For each position, it is necessary to compute an objective function $O(x_i)$, given by

$$O(x_i) = \frac{m_i - m}{m},$$

where the parameter $m_i$ represents the measurement performed by the simulated sensor of the $i$-th individual and $m$ is the measurement performed by the real robot sensor. The bats with the best weights become the leaders. The optimal value function for the $i$-th individual, $f_i$, plus its velocity and position, are dynamically updated by the algorithm. The parameters $f_{\max}$ and $f_{\min}$ represent the maximum and minimum frequency of the pulse emission rate, respectively, which are predefined variables. The velocity and position vectors are generated as

$$f_i = f_{\min} + (f_{\max} - f_{\min})\,\beta, \qquad
v_i^t = v_i^{t-1} + (x_i^{t-1} - x^*)\, f_i, \qquad
x_i^t = x_i^{t-1} + v_i^t,$$

where the parameter $\beta$ represents the change of the pulse rate.
The above equations embody the strategy that the closest leader of each colony influences the search for the best position. The distance of each bat to the leaders is given by

$$dist = \sqrt{X^2 + Y^2},$$

where

$$X = Leader.x - Bat.x, \qquad Y = Leader.y - Bat.y.$$
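The distance computation above, and the selection of the closest leader that follows from it, can be sketched as below (a minimal illustration; the names are hypothetical):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Point { double x, y; };

// Euclidean distance between a bat and a leader:
// dist = sqrt((Leader.x - Bat.x)^2 + (Leader.y - Bat.y)^2)
double dist(const Point& leader, const Point& bat) {
    const double dx = leader.x - bat.x;
    const double dy = leader.y - bat.y;
    return std::sqrt(dx * dx + dy * dy);
}

// Index of the leader closest to the given bat, i.e., the leader whose
// colony influences this bat's movement update.
int closest_leader(const std::vector<Point>& leaders, const Point& bat) {
    int best = 0;
    for (int i = 1; i < static_cast<int>(leaders.size()); ++i)
        if (dist(leaders[i], bat) < dist(leaders[best], bat)) best = i;
    return best;
}
```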

3.3. The proposed LBBA Algorithm

This study describes an experiment on the localization of mobile robots employing a differential robot that utilizes an RPLIDAR A2M12 sensor capable of measuring distances over 360° at a high sampling rate. This allows for quick and efficient measurements, making it suitable for real-time applications like autonomous robot navigation. The robot used in the experiment was the AGILE X LIMO platform, configured as a differential-drive one (see Figure 1), equipped with the LIDAR sensor.
The experiment employed foam mat tiles to simulate the characteristics of a mapped environment. These tiles enabled the creation of various test scenarios inside the laboratory, allowing for significant versatility in configuring scenarios and making it easier to conduct experiments under different conditions.
One of the challenges faced was the high computational complexity resulting from the large volume of range measurements provided by the LIDAR sensor. To improve computational efficiency, the proposed localization algorithm was implemented in C++. A digital compass is also incorporated to provide precise information about the robot’s orientation on the map to improve localization accuracy. This integration allowed a more accurate determination of the robot’s orientation in the controlled environment. The fusion of LIDAR sensor data with the orientation information provided by the compass significantly improved the robot’s localization capabilities in complex scenarios. Moreover, this experiment offers valuable insights into the effectiveness of mobile robot localization in controlled environments, underscoring the importance of optimizing sensor fusion and sensing techniques to achieve accurate and efficient results.
Figure 2 elucidates the basic idea behind the proposed LBBA algorithm. Initially, the bat population is randomly spread throughout the established limits. In this example, the blue bats are the leaders. At each iteration, the positions are updated according to the algorithm. This leads to an improvement in the direction of the target, as shown in the right part (first and second iterations). Notice that the leaders are either in ambiguous positions or very close to the actual mobile robot position. This figure also shows that the population inside the region of interest (ROI) of the search space is instigated to follow the respective leader. This characteristic is extremely useful in complex environments and avoids "traps" into which the standard BA algorithm could fall.
Figure 3 reproduces the scenario within the Laboratory of Autonomous Intelligent Robots (Lab-AIR) facility, featuring the experiment with the LIMO robot navigating the mapped environment, mimicking the accomplishment of a mission inside a warehouse. The robot navigates around obstacles, including shelves, other robots, walls, and columns, some of which can be potentially hazardous. The robot autonomously traverses the known map, highlighting the importance of obtaining the most accurate position of the robot for trajectory control during the mission. However, wheel slippage, for example, can introduce uncertainties in the robot’s current position within the map. In such cases, one can provide accurate robot localization using the OptiTrack motion capture system, available in the Lab-AIR facility. However, this work aims to develop a localization strategy capable of determining the positions of the robot inside the laboratory in situations where a motion capture system is not available, a real possibility in most working environments.
Developing an efficient technique to locate the robot on a known map in a controlled environment is extremely important for research. For instance, it becomes necessary when GPS signals are unavailable or imprecise. Furthermore, the increasing use of drones for inspection and indoor monitoring highlights the relevance of this development [18]. Some proposals, such as that of [18], bring a practical approach to trajectory control, demonstrating that the model can be applied to complex trajectories. However, the main limitation lies in the dependence on a processing rate that requires specific hardware. Effectively, in natural and large-scale environments, this need can represent a significant challenge when adopting this solution.
This work aims to add an orientation sensor capable of not only providing the robot’s orientation within a known map but also reducing the number of calculations required in the optimization algorithm. Thus, greater precision and speed in estimating the robot’s position on the map as a whole will be obtained, whether for developing a SLAM in constructing a map or pursuing a trajectory, such as a lemniscate one.
Bioinspired algorithms are characterized by spreading particles randomly throughout the known map. Note that the robot’s orientation directs the particles in the distribution process. However, the robot adopted in the experiment does not have a compass to provide information about its orientation. Then, the OptiTrack installation available in the test arena is used to get the robot orientation. Since the robot’s orientation is known, the particles are distributed according to this orientation, which limits particle dispersion, as illustrated in Figure 4.
As the robot moves, the simulated agents, represented by bats, must follow this motion synchronized in linear and angular velocity. The digital compass plays an essential role in orientation, aiding in estimating the bats’ position within the known map. The simulation shown in Figure 5 demonstrates that, after an incremental movement of the robot, the bats begin their "flight" toward the leaders, highlighting the interaction and alignment with the leader robot’s movements.
To clarify the central idea of the proposed LBBA, the corresponding flowchart is presented in Figure 6. Initially, the algorithm defines the objective function and the search parameters. Then, the particles are scattered on the map, oriented according to the robot orientation, as illustrated in Figure 4. If the stopping criterion is met, i.e., the objective function threshold or the maximum number of iterations is reached, the LBBA returns the global solution. Otherwise, it recalculates the objective function, repeating this process until the criterion is fulfilled.
To further emphasize its importance, the digital compass is integrated into the LBBA methodology to reduce the number of particles needed for accurate localization. The compass precisely estimates the robot's orientation, allowing the algorithm to disregard particles with significantly deviating orientations. This reduction in the search space enhances the efficiency and speed of the localization process while maintaining a high level of accuracy by focusing on the most probable poses. The integration of compass data with LIDAR measurements thus exemplifies the synergy of sensor fusion techniques in improving the overall performance of mobile robot localization systems, enabling the algorithm to operate with fewer computational resources while still delivering robust and precise localization. In summary, this results in:
  • Reduction of Search Space: It limits the possible orientations of the robot, allowing for a more efficient and faster search.
  • Improved Accuracy: The precise orientation from the compass helps better align the localization estimates with reality, increasing the algorithm’s accuracy.
  • Reduction in Computational Complexity: Fewer particles are needed, reducing the computational load and allowing real-time use of the algorithm.
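A minimal sketch of this compass-based pruning is given below; the particle representation and the tolerance parameter are illustrative assumptions rather than the exact implementation used in this work:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

const double kPi = 3.14159265358979323846;

// A particle hypothesis: planar pose (x, y, theta), theta in radians.
struct Pose {
    double x, y, theta;
};

// Smallest signed difference between two angles, mapped into (-pi, pi].
double angle_diff(double a, double b) {
    double d = std::fmod(a - b, 2.0 * kPi);
    if (d > kPi)   d -= 2.0 * kPi;
    if (d <= -kPi) d += 2.0 * kPi;
    return d;
}

// Keep only the particles whose heading lies within `tol` radians of the
// compass reading; the rest are discarded, shrinking the search space.
std::vector<Pose> gate_by_compass(const std::vector<Pose>& particles,
                                  double compass_theta, double tol) {
    std::vector<Pose> kept;
    for (const auto& p : particles)
        if (std::abs(angle_diff(p.theta, compass_theta)) <= tol)
            kept.push_back(p);
    return kept;
}
```

Wrapping the angular difference into $(-\pi, \pi]$ ensures the gate behaves correctly when the compass reading is near the $\pm\pi$ discontinuity.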

4. Results

In the experiments and simulations presented in this section, different scenarios were used to find the best configuration for locating a mobile robot in a controlled environment. Cameras from the OptiTrack system were used in the experiments as a ground-truth reference, enabling the analysis of the effectiveness of the LBBA algorithm and providing reliable information for the robot’s (LIMO) orientation, emulating a digital compass.

4.1. Performance Associated to the Real Tests

This section describes the experiments conducted with the LIMO robot in a controlled environment, utilizing the proposed LBBA algorithm. The main objectives were to validate the accuracy of the global localization algorithm and evaluate the effectiveness of trajectory-tracking control in autonomous navigation tasks. The robot was equipped with a LiDAR sensor capable of performing 180 measurements per scan. In addition, the OptiTrack installation was used as a ground-truth reference to validate the localization and trajectory-tracking control. Additionally, a digital compass, mimicked by the OptiTrack system, was incorporated, helping to reduce the randomness in the algorithm's estimates and guiding the particles toward a more accurate orientation. This approach improved the efficiency of the localization process and reduced the computational load.
The experiments were conducted on a previously known map containing obstacles arranged to avoid excessive ambiguities. Initially, the robot had no information about its exact location in the environment. The LBBA algorithm was used to perform global localization, and the tests demonstrated its effectiveness in quickly determining the robot's initial position. This step is critical, as the mission only begins when the global localization returns a reliable position estimate. Figure 7 describes the test arena, consisting of a Linux PC running ROS Melodic, which establishes Wi-Fi communication between the control computer and the robot, with the control signals sent to the vehicles at a rate of 30 Hz. As for the controller adopted, it was the kinematic one discussed in [19].
The mission began after the global localization phase; from then on, the local localization algorithm was applied. This algorithm focuses on a probability region along the robot’s movement, optimizing the use of particles in the area where the robot is located. This procedure allowed the robot to maintain a suitable frequency of position updates for trajectory control during the missions.
The experiments used two types of trajectories: an oval circuit and a lemniscate-shaped trajectory. The lemniscate, or Bernoulli curve, is characterized by its figure-eight shape and is widely used in robotics experiments to test the ability to control smooth and precise movements. The continuous changes in direction and curvature make this trajectory challenging and highly effective for validating the performance of control algorithms. Figure 8 and Figure 9 illustrate both trajectories followed by the robot during the tests. In the second case, a human operator used a joystick to drive the robot entirely off the trajectory at two different points along the elliptical circuit, marked as cyan squares, and one can see that the localization system provided the information necessary for the controller to guide the robot back to the trajectory.
In addition to the two trajectory-tracking tasks, the robot also performed a positioning mission, in which it was programmed to reach five waypoints distributed throughout the map (Figure 10).
Using the LBBA algorithm for position estimation was crucial for trajectory control and positioning success. Instead of spreading particles across the entire map, the local localization strategy allowed the use of a smaller number of particles without compromising the accuracy of the estimates. Figure 11 shows the flowchart of the localization algorithm, detailing the global and local localization steps.
The flowchart describes a robotic navigation system built on ROS (Robot Operating System), comprising several nodes responsible for the localization and control of the robot. The process begins with the activation of all nodes: Global Location (Node 1), Robot Control (Node 2), Local Location (Node 3), and Ground Truth (Node 4). Initially, the Global Location node estimates the robot’s position over the entire map. If the estimated position is considered acceptable, the robot begins its movement towards the desired path, with this movement managed by the Robot Control node. From this moment on, the Local Location node becomes responsible for continuously estimating the robot’s position during navigation. The Ground Truth node provides the robot’s orientation to the localization system, as well as the robot’s position, which is used to compare the estimated positions and to assist in correcting the robot’s orientation within the working environment. This feedback loop ensures accurate navigation and continuous adjustments as the robot moves across the map.
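As a rough illustration, the hand-off between the global and local localization phases can be sketched as a small state machine. This is plain Python, not actual ROS code; the class name and the acceptance threshold are illustrative assumptions, not part of the implementation described in the paper.

```python
class LocalizationPipeline:
    """Toy state machine mirroring the flowchart: the global localization
    node runs until the pose estimate is acceptable; afterwards the local
    localization node refines the pose while the control node acts on it."""

    def __init__(self, accept_fitness=0.9):
        self.accept = accept_fitness  # assumed acceptance threshold
        self.phase = "global"

    def step(self, fitness):
        # Switch to local localization once the global estimate is reliable;
        # the switch is one-way, as in the flowchart.
        if self.phase == "global" and fitness >= self.accept:
            self.phase = "local"
        return self.phase
```

The one-way transition captures the key property of the flowchart: the mission (and hence the controller) only starts once the global estimate is deemed reliable.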
The results obtained were validated by the OptiTrack system, which provided the ground truth for comparison with the estimates generated by the proposed LBBA algorithm. The close correspondence between the estimated trajectory and the real trajectory demonstrated the algorithm’s accuracy, as measured by OptiTrack, which is known for its high accuracy. Figure 8 and Figure 9 present an overlay of the trajectories estimated by the LBBA and the real trajectories provided by the OptiTrack system.
The figures demonstrate the accuracy of the estimates with respect to the reference values (ground truth). Tools such as error graphs and boxplots illustrate the distribution of errors and evaluate the performance of the localization system. The estimates tend to converge to the reference values over time, as evidenced by trajectory graphs that compare the estimated trajectory with the real one. Indeed, visual analysis of the overlapping line graphs allows us to verify the progressive approximation of the estimates to the ground truth. In the experiment related to the lemniscate trajectory, the localization algorithm demonstrated the ability to estimate the position accurately and efficiently over 15 minutes.
On the other hand, in the experiment in which the robot follows an elliptical trajectory, in addition to proving the ability to follow the projected path, the robustness of the method was evaluated. A manual intervention with a joystick was applied during the experiment, forcing the LIMO robot to deviate from the desired trajectory. The system, however, demonstrated the ability to recover and resume the trajectory efficiently, validating the robustness of the proposed algorithm.
The accuracy of mobile robot position estimation is a crucial factor directly impacting performance in tasks related to online trajectory planning, motion control, and environmental interaction. This study performs a quantitative analysis of the position estimation error of the LIMO robot in three different types of trajectories: elliptical, waypoints, and lemniscate. The results are presented in the form of error graphs over time, providing a critical assessment of the robustness of the navigation system and the challenges encountered in each type of trajectory (Figure 12).
The analysis was performed by comparing the position estimated by the robot’s navigation system with the real position obtained from high-precision sensors. The errors were recorded over time, and the results were visualized in graphs with a trend line representing the average error. This approach allowed a detailed analysis of the system’s accuracy in different navigation contexts.
The elliptical trajectory, characterized by continuous and smooth curves, presented a moderate variation in error over time. The maximum error recorded was approximately 0.2 meters, with more pronounced peaks observed around 50 seconds (top graph). These peaks can be attributed to the system’s difficulty in maintaining high accuracy on curves, where slight deviations in the estimation of the angular orientation end up propagating to the linear position error. The average error on this trajectory was 0.03 meters, showcasing a good performance of the estimation algorithm.
The waypoint navigation presented a distinct error behavior, with more pronounced fluctuations in the first 15 seconds. The maximum error recorded was approximately 0.1 meters, with higher peaks between 30 and 35 seconds, which coincide with transition moments between different waypoints (center graph). This suggests the navigation system faces challenges during the robot’s reorientation at these transition points. The average error along this navigation was 0.02 meters, the lowest among the three experiments analyzed, indicating that the robot could perform accurate position corrections throughout most of the route.
The lemniscate trajectory, a figure-of-eight pattern, introduced greater complexity due to the continuous variation in curvature and direction. The maximum error recorded was similar to that of the elliptical trajectory, around 0.2 meters, but presented more frequent fluctuations over time (lower graph). The average error was 0.03 meters, which reflects the ability of the navigation system to maintain acceptable overall accuracy, even on a trajectory with continuous orientation changes.
The results show that the performance of the LIMO robot’s position estimation system varies according to the complexity of the trajectory. Trajectories such as the elliptical and the lemniscate, which involve continuous changes in curvature, presented higher error peaks, suggesting that the estimation and control algorithms are more sensitive to abrupt variations in angular orientation and linear velocity. This behavior indicates the need for information fusion with the compass sensor, especially in scenarios with more complex trajectories, such as those analyzed in this experiment.
Although Table 1 and Table 2 indicate that the filtered algorithm presents a higher error compared to the algorithm without filtering, the use of filtered data provided better conditions for controlling the robot during the missions. This behavior can be justified by the ability of the filters to smooth the sensory data, removing rapid oscillations and noise, which could induce undesired behaviors in the controller. Although the absolute numerical error is higher, data smoothing allows the controller to generate more consistent and stable movement commands, resulting in more efficient dynamic behavior. Filtering also reduces the controller’s sensitivity to small fluctuations, which, in the case of raw data, could result in abrupt and ineffective corrections, highlighting the advantage of using filtered data to improve control performance.
The analysis was based on the Root Mean Square Error (RMSE), which measures the difference between the estimated values and the real values (ground truth) over time and is a metric widely used to evaluate the accuracy of localization algorithms. The RMSE is defined as

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\mathrm{Estimate}_i - \mathrm{GroundTruth}_i\right)^2},$$

where $n$ is the number of samples, $\mathrm{Estimate}_i$ are the estimated values, and $\mathrm{GroundTruth}_i$ are the true values. The RMSE thus quantifies the typical magnitude of the estimation error. Although the filtered data had higher RMSE values, it provided smoother and more predictable estimates, which resulted in improved robot control.
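For reference, the metric can be computed directly from the two sequences of values; a minimal sketch in Python (the function name is ours):

```python
import math

def rmse(estimates, ground_truth):
    """Root Mean Square Error between estimated and true values."""
    n = len(estimates)
    return math.sqrt(
        sum((e - g) ** 2 for e, g in zip(estimates, ground_truth)) / n
    )

# Example: errors of 3 m and 4 m over two samples give sqrt(25/2) ≈ 3.54 m.
```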
Additionally, filters help mitigate temporary error spikes that can severely impact robot performance if not corrected properly. By providing a more stable and predictable data set, the controller can predict the robot’s movement more effectively, facilitating smoother tracking of complex trajectories, such as lemniscate and elliptical. In this way, despite a more significant numerical error, the filtered data provides greater overall stability to the control system, allowing the robot to maintain robust and more efficient performance throughout the mission.
Figure 13 shows the analysis of the errors of the localization algorithm in a lemniscate trajectory experiment, comparing the estimates with the real position of the robot. The distribution of errors, illustrated by the boxplot, demonstrates a significant concentration of values close to zero, as indicated by the interquartile range. The median of the errors, represented by the red line inside the box, is slightly below the center, suggesting a slight asymmetry in the data. The narrow width of the interquartile range indicates that most of the errors remain within a narrow range, reflecting the algorithm’s consistency most of the time. However, many outliers are observed above the box, revealing larger errors at specific moments. These atypical values indicate that, although the algorithm’s overall performance is accurate, there are situations in which the robot’s localization error increases considerably. Furthermore, the whiskers, which show the variability of errors without considering outliers, are relatively short. This reinforces that most errors remain within a small range, with occasional exceptions that indicate deviations in the system’s behavior at certain moments of the experiment.
The histogram shows the Euclidean error distribution along the lemniscate trajectory. Analysis of this graph (Figure 14) reveals that most of the errors are concentrated in low values, with the highest frequency of occurrences around errors close to zero, which indicates an excellent overall performance of the localization algorithm.
A sharp drop in frequency is observed as the error increases, with few values above 0.05 m, suggesting that most of the algorithm’s estimates are quite accurate. A small number of larger errors, reaching values around 0.15 m or more, do occur, but they are rare.
The greater concentration around small errors, combined with the low frequency of larger errors, reinforces the idea that the algorithm performs well in most cases, with a few exceptions where the error is more pronounced. This asymmetric distribution, with a long tail on the right, suggests that the most significant deviations are sporadic and may be related to specific conditions of the experiment, such as noise or temporary limitations in the localization process. Therefore, the histogram confirms that the localization algorithm has high accuracy most of the time, with few moments of significant error.
Therefore, one can conclude that all experiments were successfully conducted, demonstrating that the LBBA algorithm is effective for localization and trajectory control in known environments, establishing itself as a promising tool for applications in autonomous robotics. Figure 15 shows the LIMO robot in action during the experimental tests.

4.2. Performance Evaluation Through Simulated Tests

In this section, we compare the performance of the Leader-Based Bat Algorithm (LBBA) proposed here against that of two other bioinspired algorithms, the Manta Ray Foraging Optimization (MRFO) and the Black Widow Optimization (BWO), both of which are also applicable to mobile robot localization. Each of these three algorithms is based on natural behaviors and is designed to handle complex global optimization problems in search spaces with multiple local optima, which makes them particularly suitable for the robotic localization problem. While MRFO and BWO yielded strong simulation results, LBBA outperformed them in accuracy and robustness, indicating that it may offer distinct advantages in this context.
Essentially, LBBA is an extension of the Bat Algorithm (BA). The difference, as detailed in this text, is the inclusion of multiple leaders that direct the swarm to specific regions of the search space. This feature provides a distributed exploration framework, efficiently covering larger areas and avoiding traps in local optima. In robotic localization, the algorithm can obtain more accurate estimates of the robot’s initial position by exploring different regions simultaneously and adjusting itself to avoid suboptimal solutions that could result in significant errors. In turn, MRFO, inspired by the foraging behavior of manta rays, incorporates three distinct strategies, namely chain, cyclone, and somersault, that seek a balance between global exploration and intensive exploitation of promising regions. Each strategy uses structured moves to ensure that the algorithm can switch between a global search and localized exploration, adapting to the configuration of the search space. This makes MRFO suitable for situations where adaptation to new regions is critical. However, this approach may require more particles to maintain accuracy in restricted areas, such as a Region of Interest (ROI) [20]. The BWO, in turn, is based on the reproductive and cannibalistic behavior of the black widow spider, where low-fitness individuals are eliminated to optimize exploration. This process of discarding unpromising hypotheses improves the algorithm’s convergence rate, which helps reduce the initial calculation time in complex environments [21]. However, the dynamics of cannibalism, by rapidly reducing particle diversity, can limit its adaptability in restricted areas, such as an ROI, making it difficult to adjust the robot’s position accurately.
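To make the multi-leader idea concrete, a heavily simplified, one-dimensional sketch of one LBBA update step is shown below. Loudness, pulse rate, and the full leader-selection logic of the actual algorithm [8] are omitted, and all names and data structures are our own illustrative assumptions.

```python
import random

def lbba_step(colonies, fmin=0.0, fmax=1.0):
    """One simplified LBBA iteration: within each colony, bats are pulled
    toward that colony's leader (its best member) instead of a single
    global best, which keeps several regions under exploration at once."""
    for colony in colonies:
        leader = min(colony, key=lambda b: b["cost"])    # best bat leads
        for bat in colony:
            f = fmin + (fmax - fmin) * random.random()   # random frequency
            bat["v"] += (bat["x"] - leader["x"]) * f     # BA velocity rule
            bat["x"] += bat["v"]                         # position update
    return colonies
```

Because each colony updates against its own leader, distant regions of the map can be searched in parallel, which is the property exploited for global localization.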
Accurate and rapid robot localization in a mapped environment initially requires a global estimate of its position. For this task, particles are randomly distributed throughout the map, allowing the algorithm to compare the readings of a real sensor with simulated readings to identify the robot’s position. However, this initial phase requires a significant number of particles, which increases the computational cost. At this point, the robustness of LBBA in exploring multiple regions with independent leaders proves advantageous, ensuring broad coverage and rapid convergence to an accurate initial estimate.
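The particle evaluation mentioned above, comparing a real scan with a scan simulated from each particle's pose, reduces in its simplest form to a distance between two range vectors. A hypothetical sketch follows; the ray-casting routine that would produce the simulated scan from the map is assumed to exist elsewhere, and the function name is ours.

```python
def scan_fitness(real_scan, simulated_scan):
    """Mean absolute difference between the real LiDAR ranges and the
    ranges simulated from a particle's pose; lower values mean the
    particle's pose explains the real measurements better."""
    assert len(real_scan) == len(simulated_scan)
    return sum(abs(r - s) for r, s in zip(real_scan, simulated_scan)) / len(real_scan)
```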
After obtaining a reliable initial position, the algorithm switches to a local localization procedure based on the definition of an ROI, a circle centered on the last estimated robot position, with a radius of 20 centimeters, inside which the robot is supposed to be. This refinement significantly reduces the number of particles required since the algorithm focuses the search on the area with the highest probability of containing the robot. This improves computational efficiency, allowing for precise localization in real-time without requiring the entire map to be explored again.
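Restricting the search to the ROI amounts to drawing new particles inside a disc around the last estimate. A possible sketch under that reading (the function name and the particle count are our assumptions; the 20 cm radius comes from the text):

```python
import math
import random

def sample_roi(center, radius=0.20, n=50):
    """Draw n particle positions uniformly inside the ROI: a disc of the
    given radius (20 cm here) centered on the last estimated position."""
    particles = []
    cx, cy = center
    for _ in range(n):
        r = radius * math.sqrt(random.random())  # sqrt gives uniform area density
        a = 2.0 * math.pi * random.random()
        particles.append((cx + r * math.cos(a), cy + r * math.sin(a)))
    return particles
```

The square root on the radial draw is what makes the samples uniform over the disc's area rather than clustered near the center.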
A relevant aspect contributing to the accuracy and robustness of localization is the integration of a compass, which assists in orientation and avoids unfavorable positioning hypotheses. Data fusion between the compass and other sensors (LIDAR and odometry) reduces variability and the need for more particles, improving the algorithm’s reliability and enabling localization with lower computational cost. This feature is especially useful in ROI, where orientation is more sensitive to deviations.
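In its simplest form, this compass fusion constrains each particle's heading to the measured heading plus a small noise term, rather than sampling orientations over the full circle. A sketch under that assumption (names and the noise level are ours):

```python
import random

def apply_compass(particles, compass_theta, sigma=0.05):
    """Overwrite each particle's heading with the compass reading plus
    small Gaussian noise (sigma in radians), so the search effort is
    spent on position rather than orientation."""
    for p in particles:
        p["theta"] = compass_theta + random.gauss(0.0, sigma)
    return particles
```

Collapsing one of the three pose dimensions in this way is what allows the same accuracy to be reached with fewer particles.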

4.3. Algorithm Performance

With its multi-leader structure, the LBBA algorithm proposed here is highly effective in both the global localization and ROI phases. Its ability to quickly explore large regions and focus on a specific region allows for high accuracy with fewer particles throughout the robot’s navigation. As proposed here, integrating a compass and using multiple leaders ensure additional robustness, minimizing calculation time without compromising accuracy.
While effective in the global localization phase, the MRFO requires more particles in the ROI phase to maintain good accuracy. This is because its cyclone-based and somersault-based approaches, while excellent for initial exploration, may not be as adaptable in the ROI, where more localized and less cyclical movements are preferable for continuous and accurate robot localization.
The BWO algorithm has proven fast in global localization by eliminating low-fitness particles, facilitating efficient initial convergence. However, as the robot moves through the ROI, the reduction in diversity imposed by cannibalism limits its ability to adapt to small variations, which can compromise accuracy in areas with multiple potential solutions nearby.
The comparison among these three algorithms is based on three criteria, namely:
  • Convergence Speed in Number of Epochs: The number of epochs required for each algorithm to converge to the final solution, that is, the estimated x, y robot position and its orientation θ , was measured. This criterion is important for assessing how quickly the algorithm reaches a viable solution, especially relevant for real-time applications and embedded systems.
  • Final Solution Quality: The solution quality was assessed by measuring the standard deviation of the difference between the algorithm’s estimated position and the robot’s true position. The standard deviation reflects the precision level of the algorithm’s estimate, with lower values indicating a closer approximation to the robot’s true position. This metric is particularly crucial in autonomous navigation systems, where high precision is necessary to avoid undesirable deviations.
  • Computational Time: The execution time was measured in seconds for each method, allowing the assessment of computational efficiency. The same hardware was adopted in each case to allow a fair comparison. This criterion is essential as it indicates the computational cost of each algorithm, a metric especially relevant in systems with limited processing resources.
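The three criteria above can be measured with a small harness like the one below. The `run_algorithm` interface, returning an estimated position and the number of epochs used, is an assumption made for illustration, not the actual benchmarking code used in the experiments.

```python
import math
import statistics
import time

def evaluate(run_algorithm, true_pose, trials=10):
    """Measure the three comparison criteria for one localization algorithm:
    mean epochs to converge, standard deviation of the position error,
    and mean wall-clock time per run."""
    epochs, errors, times = [], [], []
    for _ in range(trials):
        t0 = time.perf_counter()
        est_pose, n_epochs = run_algorithm()  # assumed interface
        times.append(time.perf_counter() - t0)
        epochs.append(n_epochs)
        errors.append(math.dist(est_pose, true_pose))
    return (statistics.mean(epochs),
            statistics.stdev(errors),
            statistics.mean(times))
```

Running the same harness on the same hardware for each algorithm is what makes the computational-time comparison fair.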

4.4. Results and Discussion

Figure 16, Figure 17 and Figure 18 show the performance of the proposed LBBA, the MRFO, and the BWO algorithms, respectively. Each figure presents the algorithm’s performance in the three criteria described in the previous subsection, using three plots: the first shows the convergence speed in epochs, the second shows the standard deviation of the error between the estimated robot position/orientation and the real one, and the third shows the computation time to calculate the robot’s position, always considering the number of particles as a parameter.
The results show that our LBBA algorithm performed better when the three criteria are considered together. The algorithm demonstrated fast convergence, high computational efficiency, and good final solution quality, proving to be a robust and practical choice for robot localization applications. Specifically, the LBBA stood out for its simplicity of implementation and lower number of computational steps without compromising the final solution’s precision. Notice that regarding the error standard deviation, the BWO algorithm outperforms the LBBA, but its convergence is worse, and so is the computation time needed to obtain a solution. The MRFO algorithm, in turn, performs well in terms of convergence speed and computation time but shows a larger error standard deviation than the BWO and LBBA algorithms. Therefore, considering the overall performance, the LBBA algorithm proposed here is an excellent tool for robot localization during navigation.
Therefore, the LBBA algorithm proposed here stands out as a choice for embedded systems in mobile robotics due to its balance between rapid convergence and sufficient accuracy, achieved with a low computational cost. Although the BWO algorithm demonstrates superior accuracy, it demands a significantly higher computational time, making it less viable for real-time applications with limited processing power and battery capacity. The MRFO algorithm, on the other hand, achieves faster convergence with fewer particles but at the expense of precision, which is critical for reliable localization. The LBBA successfully bridges these two extremes, providing an efficient solution that combines adequate accuracy with computational efficiency, thus meeting the stringent requirements of embedded systems, where processing speed and resource optimization are essential.

5. Conclusion

This work presented a detailed analysis of the fusion of LIDAR sensor data and a digital compass for position estimation of a LIMO robot, showing significant gains in efficiency and accuracy. A video of the experiment is available at https://youtu.be/QhTW7dMyVK4. The main conclusion of this study is that integrating these sensors allowed a substantial reduction in the number of particles used in the Leader-Based Bat Algorithm (LBBA), which in turn reduced the computational load. This optimization considerably shortened the time required to obtain the global localization of the robot, making the discussed methodology a viable and effective tool for control experiments that require precise localization in real time.
In addition, the results demonstrated that, although the navigation system maintains a high average accuracy in different trajectories, the geometric complexity of these trajectories can cause considerable error peaks. This behavior suggests the need to improve the localization algorithm, with machine learning techniques being a promising possibility. These techniques could be explored to predict and dynamically adjust localization errors, mainly when accomplishing more complex trajectories. Likewise, adopting advanced adaptive control techniques can contribute to mitigating these peaks, improving the performance of autonomous navigation in challenging scenarios.
Another highlight was the real-world testing of the implementation, written in C++, which proved effective in controlling the robot exclusively using the position estimates provided by the localization algorithm. Embedding these codes in the robots could bring additional processing gains and reduce packet loss in the network, which is crucial for applications that demand low latency and high reliability in the communication between the control computer and the robot.
Integrating a UAV (drone) with ground vehicles is a perspective for continuing this study. This approach would allow investigation of the robot’s localization in three dimensions, not limited to the XY plane of the known map. The fusion of data provided by additional location sensors would also contribute to increasing the accuracy and robustness of the system, especially in more complex three-dimensional environments.
In summary, this work contributes significantly to the advancement of localization techniques for mobile robots, demonstrating that intelligent sensor fusion and optimized control algorithms are promising strategies for improving the performance and efficiency of autonomous navigation systems in dynamic environments.

Short Biography of Authors

Wolmar Araujo-Neto holds a Bachelor’s degree in Control and Automation Engineering from the Federal University of Ouro Preto (2012), a Master’s degree in Electrical Engineering from the Federal University of Juiz de Fora (2014), and a Ph.D. in Electrical Engineering from the same university (2019). He is currently a postdoctoral intern at the Federal University of Espírito Santo. He has experience in the field of Electrical Engineering, with an emphasis on Electronic Process Control and Feedback, and works mainly on topics such as robotics, control, artificial intelligence, optimization, PLC (Programmable Logic Controller), Petri nets, automata, energy efficiency, sustainability, and environmental comfort.
Leonardo Olivi is currently a professor in the Electrical Engineering - Robotics and Automation program at the Federal University of Juiz de Fora (UFJF, 2014) and in the Graduate Program in Built Environment (PROAC, UFJF, 2024). He holds a degree in Control and Automation Engineering from the Federal University of Ouro Preto (UFOP, 2006), a master’s degree in Electrical Engineering with a focus on Automation and Control of Dynamic Systems from the University of São Paulo (USP, 2009), and a PhD in Electrical Engineering specializing in Automation and Mobile Robotics from the State University of Campinas (UNICAMP, 2014). He coordinated the Electrical Engineering - Robotics and Automation program at UFJF from 2015 to 2018 and 2020 to 2023, and he coordinated the Laboratory of Robotics and Automation (LABRA) from 2018 to 2020. His main areas of expertise include mobile robotics and robotic manipulators, control of dynamic processes, stochastic process filtering, and artificial intelligence.
Daniel Villa is a professor in the Department of Electrical Engineering at the Federal University of Espírito Santo (UFES). He holds a degree in Electrical Engineering from the Federal University of Viçosa (UFV), where he conducted research in control and automation. In 2017, he completed a master’s degree in Agricultural Engineering at UFV, focusing on synthesizing and implementing controllers for precision agriculture. In 2022, he earned his Ph.D. in Electrical Engineering from UFES, specializing in adaptive and sliding mode control applied to aerial robots, particularly for cargo transportation. He is an expert in robotics, control, and automation, with research interests that include control of multi-robot systems, nonlinear control, optimal control, and state estimation.
Mário Sarcinelli-Filho received the B.S. degree in Electrical Engineering from Federal University of Espírito Santo, Brazil, in 1979, and the M.Sc. and Ph.D. degrees, also in Electrical Engineering, from Federal University of Rio de Janeiro, Brazil, in 1983 and 1990, respectively. He is currently a Professor at the Department of Electrical Engineering, Federal University of Espírito Santo, Brazil, a researcher of the Brazilian National Council for Scientific and Technological Development (CNPq), Senior Editor of the Journal of Intelligent and Robotic Systems, and a senior member of the Brazilian Society of Automatics, an IFAC national member organization. He has authored two books, co-authored more than 70 journal papers, over 370 conference papers, and 17 book chapters. He has also advised 22 Ph.D. and 28 M.Sc. students. His research interests are nonlinear control, mobile robot navigation, coordinated control of mobile robots, unmanned aerial vehicles, multi-articulated robotic vehicles, and coordinated control of ground and aerial robots.

Acknowledgments

The authors thank CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico, an agency of the Brazilian Ministry of Science, Technology, Innovations, and Communications that supports scientific and technological development, and FAPES - Fundação de Amparo à Pesquisa e Inovação do Espírito Santo, an agency of the State of Espírito Santo, Brazil, that supports scientific and technological development, for financing this work. Dr. Araujo-Neto, in particular, thanks FAPES for the post-doc scholarship he granted, allowing full-time dedication to this research.


References

  1. Liu, B. Recent advancements in autonomous robots and their technical analysis. Mathematical Problems in Engineering 2021, 2021, 1–12.
  2. Elmokadem, T.; Savkin, A.V. Towards fully autonomous UAVs: A survey. Sensors 2021, 21, 6223.
  3. Xu, X.; De Soto, B.G. On-site autonomous construction robots: A review of research areas, technologies, and suggestions for advancement. In Proceedings of the International Symposium on Automation and Robotics in Construction (ISARC); IAARC Publications, 2020; Vol. 37, pp. 385–392.
  4. Ohradzansky, M.T.; Rush, E.R.; Riley, D.G.; Mills, A.B.; Ahmad, S.; McGuire, S.; Biggie, H.; Harlow, K.; Miles, M.J.; Frew, E.W. Multi-agent autonomy: Advancements and challenges in subterranean exploration. arXiv preprint 2021, arXiv:2110.04390.
  5. Sarcinelli-Filho, M.; Carelli, R. Control of Ground and Aerial Robots; Springer: Cham, Switzerland, 2023.
  6. Sarcinelli-Filho, M. Controle de sistemas multirrobos; Blucher: São Paulo, Brazil, 2023.
  7. Melenbrink, N.; Werfel, J.; Menges, A. On-site autonomous construction robots: Towards unsupervised building. Automation in Construction 2020, 119, 103312.
  8. Araujo Neto, W.; Pinto, M.F.; Marcato, A.L.M.; da Silva, I.C.; Fernandes, D.A. Mobile Robot Localization Based on the Novel Leader-Based Bat Algorithm. Journal of Control, Automation and Electrical Systems 2019, 30, 337–346.
  9. Nampoothiri, M.H.; Vinayakumar, B.; Sunny, Y.; Antony, R. Recent developments in terrain identification, classification, parameter estimation for the navigation of autonomous robots. SN Applied Sciences 2021, 3, 1–14.
  10. Udupa, S.; Kamat, V.R.; Menassa, C.C. Shared autonomy in assistive mobile robots: a review. Disability and Rehabilitation: Assistive Technology 2023, 18, 827–848.
  11. Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics, 1st ed.; MIT Press: Cambridge, MA, USA, 2005.
  12. Yang, X.S. A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Springer, 2010; pp. 65–74.
  13. Yang, X.S. Chapter 10 - Bat Algorithms. In Nature-Inspired Optimization Algorithms; Yang, X.S., Ed.; Elsevier: Oxford, 2014; pp. 141–154.
  14. Durrant-Whyte, H.; Bailey, T. Simultaneous localization and mapping: part I. IEEE Robotics & Automation Magazine 2006, 13, 99–110.
  15. Lin, B.H.; Shivanna, V.M.; Chen, J.S.; Guo, J.I. 360° Map Establishment and Real-Time Simultaneous Localization and Mapping Based on Equirectangular Projection for Autonomous Driving Vehicles. Sensors 2023, 23.
  16. Doucet, A.; de Freitas, N.; Gordon, N. An Introduction to Sequential Monte Carlo Methods. In Sequential Monte Carlo Methods in Practice; Doucet, A., de Freitas, N., Gordon, N., Eds.; Springer: New York, NY, 2001; pp. 3–14.
  17. Yang, X.S. Nature-Inspired Optimization Algorithms; Academic Press, 2020.
  18. Rodziewicz-Bielewicza, J.; Korzena, M. Sparse Convolutional Neural Network for Localization and Orientation Prediction and Application to Drone Control, 2024.
  19. Sarcinelli-Filho, M.; Carelli, R. Motion Control. In Control of Ground and Aerial Robots; Springer International Publishing: Cham, Switzerland, 2023.
  20. Zhao, W.; Zhang, Z.; Wang, L. Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Engineering Applications of Artificial Intelligence 2020, 87, 103300.
  21. Hayyolalam, V.; Kazem, A.A.P. Black widow optimization algorithm: a novel meta-heuristic approach for solving engineering optimization problems. Engineering Applications of Artificial Intelligence 2020, 87, 103249.
Figure 1. Robotic base used in the experiments run to validate the novel algorithm here proposed.
Figure 2. Exemplifying the operation of the algorithm.
Figure 3. A snapshot exemplifying the robot navigating in a mapped environment.
Figure 4. Particles distributed on the map, oriented according to a compass emulated by a motion capture system. The leader bats are in blue.
Figure 5. Particles distributed on the map, oriented according to a compass emulated by a motion capture system, after a slight motion of the robot. Once more, the leader bats are blue. Notice the grouping of the bats around the leaders.
Figure 6. Flowchart corresponding to the proposed LBBA algorithm.
Figure 7. A sketch of the test arena where the experiments were run.
Figure 8. Lemniscate trajectory tracked and the algorithm efficiency obtained.
Figure 9. Elliptical trajectory tracked and the algorithm efficiency obtained.
Figure 10. Position estimation for the experiment of waypoint navigation.
Figure 11. Flowchart of the Leader-Based Bat Algorithm (LBBA) localization procedure implemented in C++.
Figure 12. Euclidean errors over time for an elliptical trajectory, a waypoints sequence, and a lemniscate trajectory, respectively.
Figure 13. Boxplot of the Euclidean Error Distribution in the Lemniscate Trajectory Experiment.
Figure 14. Histogram of the distribution of Euclidean error along the lemniscate trajectory.
Figure 15. Top view of real tests with LIMO.
Figure 16. Performance of the LBBA algorithm in terms of the number of particles: Convergence Speed (in epochs), Error Standard Deviation (in meters), and Computational Speed to Solution (in seconds).
Figure 17. Performance of the BWO algorithm in terms of the number of particles: Convergence Speed (in epochs), Error Standard Deviation (in meters), and Computational Speed to Solution (in seconds).
Figure 18. Performance of the MRFO algorithm in terms of the number of particles: Convergence Speed (in epochs), Error Standard Deviation (in meters), and Computational Speed to Solution (in seconds).
Table 1. Elliptic Case: Root Mean Square Error (RMSE) comparisons for LBBA and Filtered LBBA.

Algorithm        RMSE X (m)   RMSE Y (m)
LBBA             0.0494       0.0238
LBBA Filtered    0.0789       0.0619
Table 2. Lemniscate Case: Root Mean Square Error (RMSE) comparisons for LBBA and Filtered LBBA.

Algorithm        RMSE X (m)   RMSE Y (m)
LBBA             0.0330       0.0181
LBBA Filtered    0.0585       0.0659
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.