Preprint Article (this version is not peer-reviewed)

Control of Multiple Mobile Robots Based on Data Fusion from Proprioceptive and Actuated Exteroceptive Onboard Sensors


Submitted: 17 December 2024; Posted: 18 December 2024

Abstract
This paper introduces a team of Automated Guided Vehicles (AGVs) equipped with open-source, perception-enhancing rotating devices. Each device carries a set of ArUco markers, employed to compute the relative pose of other AGVs. These markers also serve as landmarks, delineating a path for the robots to follow. The authors combined various control methodologies to track the ArUco markers on the rotating devices mounted on the other AGVs. Behavior trees are implemented to facilitate task switching and to respond to sudden disturbances, such as environmental obstacles. The Robot Operating System (ROS) is installed on the AGVs to manage high-level control. The efficacy of the proposed solution is confirmed through a real experiment. This research contributes to the advancement of AGV technology and its potential applications in various fields.
Keywords: 

1. Introduction

The paper presents a team of Automated Guided Vehicles (AGVs) equipped with open-source, perception-enhancing rotating devices. Each device carries 10 ArUco markers, employed to compute the relative pose of other AGVs. Behavior trees are used to rotate the device on each AGV. ArUco diamond markers are used as landmarks. Each AGV calculates its own global pose after detecting the landmarks and also detects the other AGVs in the environment. The goal of the task is the autonomous localization of multiple robots that avoid collisions with one another while reaching their virtual target positions. The proposed control is verified with an OptiTrack motion capture system, which provides the ground truth in the experimental results.
The paper proposes a novel combination of a nonlinear continuous motion control algorithm for robots or AGVs that uses an onboard actuated camera, data fusion of proprioceptive and exteroceptive sensors, and a decision on the direction of perception realized with behavior trees to cope with limited computing and sensory resources. In [1], the authors proposed actuated camera devices, but the approach required different hardware configurations for the leader and follower robots. This limitation is overcome by the hardware proposed by the authors in [2], which supports any type of robot or AGV.
Fiducial markers, to which ArUco belongs, have become very popular for camera-based positioning. They are often used in comparative studies, e.g., [3,4], and in pose estimation experiments [5]. For example, ArUco markers were experimentally compared with other types of markers in [3]. Their advantages turned out to be good position and orientation accuracy, a high detection rate, and low computational cost for single markers. Their disadvantages were sensitivity to smaller marker sizes and larger distances, and a computational cost that scales with the number of markers.
Figure 1. Robots with rotating platform.
ArUco markers are used in various fields of robotics. Some works on manipulators, such as [6,7], can be mentioned. Examples of papers on mobile robots in which such markers were applied are [8,9,10]. Other systems where such markers are useful are flying robots [11,12,13,14,15,16,17]. Markers are also used in cooperative robotics and nuclear decommissioning [18], or to determine the position of a passively cooperating spacecraft [19].
The Extended Kalman Filter (EKF) is used for the localization of indoor robots [20,21,22,23]. It is also possible to detect cyber attacks using an EKF, as shown in [24]. Paper [25] proposes a positioning method for a Magnetic Encoder Guided Vehicle (MEGV) using an EKF. It can also be noted that a neural network EKF has been proposed to linearize the output signals [26] or to model unknown disturbances and improve the robot state transition model [27].
The Robot Operating System (ROS) is installed on the AGVs to manage high-level controls. The efficacy of the proposed solution is confirmed through a series of experiments. This research contributes to the advancement of AGV technology and its potential applications in various fields.
The considerations presented in Sections 2–4 pertain to algorithms operating within the onboard systems of the robots based on locally available data. For simplicity, the authors have omitted the robot indices; however, it should be noted that these quantities appear N times, corresponding to the number of mobile robots. Only in Sections 5 and 6 do the authors introduce indices in the variables to denote the robot numbers, as the algorithms discussed there account for the interactions between agents.
An important value of this work is that the presented theoretical considerations are supported by a real experiment. Therefore, the proposed approach has undoubted value in laboratory practice involving multi-agent systems. The behavior tree is explained in Section 2. The one-degree-of-freedom rotating platform is described in Section 3. Detection of ArUco markers, pose calculation, averaging of ArUco marker measurements, and the relative pose between the devices are explained in Section 4. The EKF method is presented in Section 5. All control methods for the rotating platforms, the collision avoidance technique, and the kinematic model of the robots are explained in Section 6. The methodology of the experiments is explained in Section 7. The experimental results are given in Section 8. The conclusions are discussed in Section 9.

2. Behavior Tree

Behavior trees (BTs) are used in computer science, control systems, robotics, and especially in video games. A BT is a mathematical model of task execution; its main advantage is that a complex task can be decomposed into simple subtasks. The graphical representation of a BT is a directed tree consisting of a root node, control nodes, and leaf nodes.
Figure 2 represents the BT for the robot's rotating platform, which gives the freedom to observe the environment irrespective of the movement of the robot's mobile platform. Each robot has its own behavior tree. The Python library [28] is used to implement the BT in ROS. The topmost block in Figure 2 represents the root, which generates a "tick" signal that propagates through the tree. Below the root, the "Sequence 1" node first sends the tick to the "Look towards Landmarks" action node, which is responsible for moving the rotating platform to look towards the landmarks on the walls for 5 seconds. Control details of the rotating platform are given in Section 6. When the node has completed its function, it sends "SUCCESS" back to the sequence node; otherwise, it sends "RUNNING". After receiving "SUCCESS", the next block, a "Fallback" node, is executed. A fallback node works as an "OR" function: when one of its children returns "FAILURE", the next child is executed. The "Sequence 2" node and the "Sequence 3" node are children of the "Fallback" node. If another robot is detected within the camera range, the condition node "Robot Detected = 1" under "Sequence 2" returns "SUCCESS", and the action node "Look towards Closest Robot" is then executed, reorienting the rotating platform towards the closest robot around the observing robot. "Sequence 2" sends "FAILURE" to the "Fallback" node when no robot is detected nearby, and "Sequence 3" is then executed: it first checks the condition node "Robot Detected = 0"; when no robot is detected, this node returns "SUCCESS", the "Look straight" node receives the tick, and the robot looks toward the center of the x-axis of the mobile platform for 5 seconds.
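A minimal sketch of how such a tree could be assembled is given below, assuming py_trees 2.x from PyPI [28] (older ROS-bundled versions use slightly different composite constructors). The action nodes are reduced to stubs; the real nodes would command the Dynamixel platform and return RUNNING while it is still moving.

```python
import py_trees

class Stub(py_trees.behaviour.Behaviour):
    """Placeholder action/condition node; the real nodes command the rotating platform."""
    def __init__(self, name, succeed=True):
        super().__init__(name)
        self.succeed = succeed

    def update(self):
        # A real action node would return RUNNING while the platform is moving.
        return (py_trees.common.Status.SUCCESS if self.succeed
                else py_trees.common.Status.FAILURE)

def build_tree(robot_detected):
    look_landmarks = Stub("Look towards Landmarks")
    seq2 = py_trees.composites.Sequence(name="Sequence 2", memory=True)
    seq2.add_children([Stub("Robot Detected = 1", succeed=robot_detected),
                       Stub("Look towards Closest Robot")])
    seq3 = py_trees.composites.Sequence(name="Sequence 3", memory=True)
    seq3.add_children([Stub("Robot Detected = 0", succeed=not robot_detected),
                       Stub("Look straight")])
    fallback = py_trees.composites.Selector(name="Fallback", memory=False)  # works as "OR"
    fallback.add_children([seq2, seq3])
    root = py_trees.composites.Sequence(name="Sequence 1", memory=True)
    root.add_children([look_landmarks, fallback])
    return py_trees.trees.BehaviourTree(root)

if __name__ == "__main__":
    tree = build_tree(robot_detected=False)
    tree.tick()  # one tick propagates from the root, as described above
```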

3. Devices for Multi-Agent Systems

The authors used a one-degree-of-freedom rotating platform designed for multi-agent systems. The device rotates independently of the movement of the robot's mobile platform, which is one of its main advantages. One device was installed on each agent. Figure 3 shows the exploded view of the device. All of its elements were 3D printed; the authors have uploaded the Standard Triangle Language (.stl) files containing the models of the parts [29]. The files were printed on a Fused Deposition Modeling (FDM) 3D printer using PLA (Polylactic Acid) material.
Each device has a total of 10 ArUco markers pasted on it. Part 1, seen in Figure 3, is hexagonal and carries 6 ArUco markers from a 5x5 dictionary, each 5 cm in size; Part 3 is cuboidal and carries 4 ArUco markers from a 5x5 dictionary, each 2.5 cm in size. Table 1 lists the mechanical and electronic parts used in the devices.
The Dynamixel motor operates from 9 V to 12 V. It has a resolution of 0.29 degrees and provides precise position control in 1024 steps over a range of approximately 0 to 300 degrees, with position feedback [30]. A library [31] is used to connect the Arduino Mega to the Dynamixel motor. The IC listed in Table 1 converts the motor's half-duplex communication to full-duplex communication. The whole device runs as a ROS node; the library [32] is responsible for exposing the device as a ROS node.

4. ArUco Markers

ArUco markers are square-shaped fiducial markers, and their main advantage is that a single marker is enough to calculate the pose of the camera. A calibrated camera is required to extract a correct pose from ArUco markers. The distortion matrix D and the camera matrix K of the calibrated camera are as follows:
$$ D = \begin{bmatrix} k_1 & k_2 & c_1 & c_2 & k_3 \end{bmatrix}, \tag{1} $$
$$ K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}. \tag{2} $$
The Intel RealSense cameras are precalibrated, and the distortion matrix (D) and the camera matrix (K) are taken from the ROS topic of the camera information [33].
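As a minimal sketch of how these quantities could be read in practice, assuming the RealSense ROS wrapper [33] publishes a standard sensor_msgs/CameraInfo message (the topic name below is an assumption):

```python
import numpy as np
import rospy
from sensor_msgs.msg import CameraInfo

rospy.init_node("camera_intrinsics_reader")
# Topic name is an assumption; the RealSense wrapper publishes camera_info
# alongside the image stream.
msg = rospy.wait_for_message("/camera/color/camera_info", CameraInfo)
K = np.array(msg.K).reshape(3, 3)   # camera matrix of Eq. (2)
D = np.array(msg.D)                 # distortion coefficients of Eq. (1)
```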

4.1. Pose Calculation of ArUco Marker

ArUco markers on the devices have some translation distances from the central axis. To obtain the center point of one device from another device, a virtual rotation and translation are required. The OpenCV library [34] for ArUco markers generates the translation vector $t_{vec}$ and the rotation vector $r_{vec}$ for each marker when it comes into the camera view frame. Figure 4 shows the representation of the axes between the camera and the ArUco marker. The rotation matrix and the translation vector are calculated from the translation vector $t_{vec}$ and the rotation vector $r_{vec}$.
The rotation angle $\theta_v$ of the rotation vector $r_{vec}^n$ is:
$$ \theta_v = \lVert r_{vec}^n \rVert . \tag{3} $$
The rotation vector $r_{vec}^n$ is normalized as:
$$ u = \frac{r_{vec}^n}{\theta_v} . \tag{4} $$
The skew-symmetric matrix $K_v$ is:
$$ K_v = \begin{bmatrix} 0 & -u_z & u_y \\ u_z & 0 & -u_x \\ -u_y & u_x & 0 \end{bmatrix} . \tag{5} $$
The rotation matrix $R_{CT}^n$ is calculated as:
$$ R_{CT}^n = \cos\theta_v \, I + (1 - \cos\theta_v) \, u u^T + \sin\theta_v \, K_v , \tag{6} $$
where $I$ is the identity matrix. The OpenCV library has a built-in function (Rodrigues) [35] that computes the rotation matrix of the camera frame relative to the marker tag, $R_{CT}^n$, directly from the rotation vector $r_{vec}^n$. The rotation matrix $R_{CT}^n$ is then transposed to change frames:
$$ R_{TC}^n = (R_{CT}^n)^T . \tag{7} $$
To obtain the center of another device from one device, a rotation of the ArUco marker is required before the virtual translation towards the center. The aim of the rotation is to virtually rotate the XY plane of the ArUco marker so that it is parallel to the XY plane of the camera. The rotation matrix $R_{TC}^n$ from Equation (7) is used to calculate the Euler angles ($\phi_R^n$, $\theta_R^n$, $\psi_R^n$), where $\phi_R^n$, $\theta_R^n$ and $\psi_R^n$ represent the roll, pitch, and yaw of the $n$-th marker detected in the camera frame.
The rotation matrix $R_y^n(\theta_R^n)$ for rotation about the y-axis of the ArUco marker is written as:
$$ R_y^n(\theta_R^n) = \begin{bmatrix} \cos\theta_R^n & 0 & \sin\theta_R^n \\ 0 & 1 & 0 \\ -\sin\theta_R^n & 0 & \cos\theta_R^n \end{bmatrix} . \tag{8} $$
The new rotation matrix $\acute{R}_{TC}^n$ is calculated by multiplying $R_y^n(\theta_R^n)$ from Equation (8) by the rotation matrix $R_{TC}^n$ from Equation (7):
$$ \acute{R}_{TC}^n = R_y^n(\theta_R^n) \, R_{TC}^n . \tag{9} $$
Now, the distance vector $t_{dist}$ is added to the translation vector $t_{vec}$ as follows:
$$ \acute{t}_{vec}^{\,n} = t_{vec}^n + t_{dist} , \tag{10} $$
where $t_{dist} = [d_x^n \; d_y^n \; d_z^n]^T$, with $d_x^n = 0$ and $d_y^n = 0$ in every case, and $d_z^n = 4.808$ cm when the marker is detected on the hexagonal part (3D printed Part 1) or $d_z^n = 1.5$ cm when the marker is detected on the cuboidal part (3D printed Part 3).
The translation vector $t_{ct}^n$ is calculated as follows:
$$ t_{ct}^n = \acute{R}_{TC}^n \, \acute{t}_{vec}^{\,n} , \tag{11} $$
where $\acute{R}_{TC}^n$ is taken from Equation (9) and $\acute{t}_{vec}^{\,n}$ from Equation (10). The transpose of $t_{ct}^n$ is as follows:
$$ (t_{ct}^n)^T = \begin{bmatrix} x_m^n & y_m^n & z_m^n \end{bmatrix} , \tag{12} $$
where $x_m^n$, $y_m^n$, and $z_m^n$ represent the relative distances from the center of one device to another.
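The chain of Equations (3)-(12) can be condensed into a short routine. The sketch below uses OpenCV's Rodrigues function and NumPy; the Euler-angle convention used to extract the pitch is an assumption and must match the one used by the authors.

```python
import numpy as np
import cv2

def pitch_from_R(R):
    # Pitch (rotation about the y-axis) extracted with a ZYX Euler convention;
    # this convention is an assumption and must match the paper's.
    return np.arctan2(-R[2, 0], np.sqrt(R[0, 0] ** 2 + R[1, 0] ** 2))

def marker_to_device_center(rvec, tvec, d_z):
    """Relative position of another device's center from one detected marker.

    d_z is 4.808 cm for markers on the hexagonal part and 1.5 cm for the
    cuboidal part, expressed in the same units as tvec.
    """
    R_ct, _ = cv2.Rodrigues(np.asarray(rvec, dtype=float))      # Eqs. (3)-(6)
    R_tc = R_ct.T                                               # Eq. (7)
    theta_R = pitch_from_R(R_tc)
    c, s = np.cos(theta_R), np.sin(theta_R)
    R_y = np.array([[c, 0.0, s],
                    [0.0, 1.0, 0.0],
                    [-s, 0.0, c]])                              # Eq. (8)
    R_tc_prime = R_y @ R_tc                                     # Eq. (9)
    t_prime = np.asarray(tvec, dtype=float).reshape(3) + np.array([0.0, 0.0, d_z])  # Eq. (10)
    return R_tc_prime @ t_prime                                 # Eq. (11): [x_m, y_m, z_m]
```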

4.2. Averaging of ArUco Marker Measurements

When one device detects another device in the environment, in general more than one ArUco marker is detected in an image frame. All 10 markers on a device should theoretically give the exact center point of the device after virtual rotation and translation, but practically the markers give slightly different measurements. The authors have used the averaging method to obtain one measurement from different markers. Upon detection of the ArUco marker, the OpenCV library [34] also gives the corner points of each marker. Figure 5 shows the corner points of the ArUco markers.
The Shoelace formula [36] is used to calculate the pixel area $a^n$ of each marker, which is written as:
$$ a^n = \tfrac{1}{2} \left| c_{x1}^n c_{y2}^n + c_{x2}^n c_{y3}^n + c_{x3}^n c_{y4}^n + c_{x4}^n c_{y1}^n - c_{x1}^n c_{y4}^n - c_{x2}^n c_{y1}^n - c_{x3}^n c_{y2}^n - c_{x4}^n c_{y3}^n \right| , \tag{13} $$
where $c_{x1}, \ldots, c_{x4}$ are the x-coordinates and $c_{y1}, \ldots, c_{y4}$ the y-coordinates of the corner points of the ArUco marker. The total area $A$ is calculated as follows:
$$ A = \sum_{n=1}^{N_m} a^n . \tag{14} $$
The average translation vector $\bar{t}_{ct}$ is calculated as follows:
$$ \bar{t}_{ct} = \frac{1}{A} \sum_{n=1}^{N_m} a^n \, t_{ct}^n , \tag{15} $$
where $A$ is taken from Equation (14), $a^n$ from Equation (13), and $t_{ct}^n$ from Equation (11). Its transpose is
$$ \bar{t}_{ct}^{\,T} = \begin{bmatrix} x_m^{avg} & y_m^{avg} & z_m^{avg} \end{bmatrix} , \tag{16} $$
where $x_m^{avg}$, $y_m^{avg}$, and $z_m^{avg}$ are the average relative distances from the center of one device to another.
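A sketch of the area-weighted averaging of Equations (13)-(16), assuming the corner points of each marker are given as a 4x2 array in the order returned by the detector:

```python
import numpy as np

def shoelace_area(corners):
    """Pixel area of one marker from its four corner points (Eq. 13)."""
    x, y = corners[:, 0], corners[:, 1]
    return 0.5 * abs(x[0] * y[1] + x[1] * y[2] + x[2] * y[3] + x[3] * y[0]
                     - x[0] * y[3] - x[1] * y[0] - x[2] * y[1] - x[3] * y[2])

def averaged_center(t_ct_list, corner_list):
    """Area-weighted average of the per-marker center estimates (Eqs. 14-16)."""
    areas = np.array([shoelace_area(np.asarray(c, dtype=float).reshape(4, 2))
                      for c in corner_list])
    A = areas.sum()                               # Eq. (14)
    t = np.asarray(t_ct_list, dtype=float)        # N_m x 3 array of t_ct^n vectors
    return (areas[:, None] * t).sum(axis=0) / A   # Eq. (15): [x_avg, y_avg, z_avg]
```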

4.3. Relative Pose of One Device to Another

The relative pose of one device with respect to another is written as follows:
$$ D_p^i = \begin{bmatrix} d_x^i & d_y^i & d_\theta^i \end{bmatrix} , \tag{17} $$
where $i$ represents the $i$-th device in the environment. As the coordinate frames of the camera and the ArUco marker differ, the x-coordinate distance between the devices in the global frame is the z-axis distance of the marker ($d_x^i = z_m^{avg}$ from Equation (16)). Similarly, the global y-coordinate distance of the device is the x-coordinate of the ArUco marker, $d_y^i = x_m^{avg}$, from the same Equation (16). The orientation of the device $d_\theta^i$ is calculated as follows:
$$ d_\theta^i = \theta_R^n + \theta_M^i + \theta_O^i , \tag{18} $$
where $\theta_R^n$ is the pitch angle calculated from Equation (7) for the marker with the largest area $a^n$, $\theta_M^i$ is the angle of the Dynamixel servo motor taken from the servo feedback, and the observer angle $\theta_O^i$ depends on the relative positions between the devices. Figure 6 shows the top view of the rotating device. Suppose IDs 1 to 6 are pasted on the hexagonal part of the rotating device as shown in Figure 6. If another (observer) rotating device is directly behind the rotating device, placed in Region 1, looks toward the center of the device, and detects ID 1 on the other device, then $\theta_O^i = 0$ rad, as the pose of the ArUco marker also gives the pose of the device itself. In another case, where the observer rotating device is in Region 2 and detects ID 2 on the other device, $\theta_O^i = 1.0472$ rad is added to correct the pose of the detected device. Similarly, in the other situations $\theta_O^i$ is as follows (a lookup sketch is given after the list):
  • $\theta_O^i = 2.094$ rad in Region 3.
  • $\theta_O^i = 3.141$ rad when the pitch angle ($\theta_R^n$) of the detected marker (ID 4) is positive in Region 4.
  • $\theta_O^i = -3.141$ rad when the pitch angle ($\theta_R^n$) of the detected marker (ID 4) is negative in Region 4.
  • $\theta_O^i = -2.094$ rad in Region 5.
  • $\theta_O^i = -1.047$ rad in Region 6.
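A possible lookup implementing the observer angle of Equation (18) is sketched below; the signs for Regions 4-6 follow the reconstruction in the list above and should be checked against Figure 6.

```python
# Offsets for Eq. (18); signs for Regions 4-6 are a reconstruction (see the list above).
REGION_OFFSET = {1: 0.0, 2: 1.0472, 3: 2.094, 5: -2.094, 6: -1.047}

def observer_angle(region, pitch_theta_R):
    """theta_O^i as a function of the region in Figure 6 and the marker pitch theta_R^n."""
    if region == 4:
        return 3.141 if pitch_theta_R >= 0 else -3.141
    return REGION_OFFSET[region]
```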

4.4. Pose From Landmarks

Figure 7a shows an image of the ArUco diamond marker, which is used as a landmark in the environment. The ArUco diamond structure consists of a 3 x 3 grid of squares with 4 ArUco markers [37,38]. The authors chose ArUco diamond markers as landmarks because the extracted pose is the average over the four ArUco markers of the diamond structure, which is more stable than that of a single ArUco marker. Figure 7b shows the environment with 5 landmarks. Different ArUco marker IDs are selected to differentiate between the landmarks.
Each landmark in the environment gives the robot its global pose after a modification of the pose extracted from the ArUco diamond detected in the frame. Figure 4 also shows the axis representation between the camera and the ArUco diamond markers, as it is the same as for a single ArUco marker. The rotation vector $r_{vec}^{ln}$ and the translation vector $t_{vec}^{ln}$ of the ArUco diamond marker are extracted by the OpenCV library [38] once it is detected in the camera frame, where $ln$ denotes the landmark number ($ln = 1, 2, \ldots, 5$).
The rotation matrix $R_{CL}^{ln}$ is calculated from $r_{vec}^{ln}$ using the built-in function (Rodrigues) [35], and after transposition the rotation matrix $R_{LC}^{ln}$ is written as:
$$ R_{LC}^{ln} = (R_{CL}^{ln})^T . \tag{19} $$
The new translation vector $\acute{t}_{vec}^{\,ln}$ is calculated by adding a translation distance along the markers, which is expressed as follows:
$$ \acute{t}_{vec}^{\,ln} = t_{vec}^{ln} + L_{dist} , \tag{20} $$
where $L_{dist} = [l_{ln}, 0, 0]$; the values of $l_{ln}$ are given in Section 8.
The translation vector $t_{cl}^{ln}$ is calculated as follows:
$$ t_{cl}^{ln} = R_{LC}^{ln} \, \acute{t}_{vec}^{\,ln} . \tag{21} $$
The vector $(t_{cl}^{ln})^T$ represents the distance from the landmark and is written as:
$$ (t_{cl}^{ln})^T = \begin{bmatrix} x_l^{ln} & y_l^{ln} & z_l^{ln} \end{bmatrix} . \tag{22} $$
The rotation matrix $R_{LC}^{ln}$ from Equation (19) is used to calculate the Euler angles ($\phi_L^{ln}$, $\theta_L^{ln}$, $\psi_L^{ln}$), where $\phi_L^{ln}$, $\theta_L^{ln}$ and $\psi_L^{ln}$ represent the roll, pitch, and yaw of the $ln$-th landmark detected in the camera frame. The global pose of the robot in the environment is written as:
$$ P = \begin{bmatrix} z_l^{ln} & x_l^{ln} & \theta_L^{ln} \end{bmatrix} . \tag{23} $$
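The landmark processing of Equations (19)-(23) can be summarized as follows; as before, the Euler-angle convention used to extract $\theta_L^{ln}$ is an assumption.

```python
import numpy as np
import cv2

def pitch_from_R(R):
    # Pitch extracted with a ZYX Euler convention (assumption; must match the paper's).
    return np.arctan2(-R[2, 0], np.sqrt(R[0, 0] ** 2 + R[1, 0] ** 2))

def landmark_to_global_pose(rvec_ln, tvec_ln, l_ln):
    """Global pose P = [z_l, x_l, theta_L] from one detected ArUco diamond (Eqs. 19-23)."""
    R_cl, _ = cv2.Rodrigues(np.asarray(rvec_ln, dtype=float))
    R_lc = R_cl.T                                                   # Eq. (19)
    t_prime = np.asarray(tvec_ln, dtype=float).reshape(3) + np.array([l_ln, 0.0, 0.0])  # Eq. (20)
    t_cl = R_lc @ t_prime                                           # Eqs. (21)-(22)
    theta_L = pitch_from_R(R_lc)
    return np.array([t_cl[2], t_cl[0], theta_L])                    # Eq. (23)
```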

5. Extended Kalman Filter

The aim of the EKF is to localize the robot in the environment using the data coming from the encoders and the global pose from the landmarks. The EKF model is taken from [39]. The state vector of the EKF system is written as:
$$ X_r = \begin{bmatrix} x_r & y_r & \theta_r & v_r & \omega_r \end{bmatrix} , \tag{24} $$
where $r$ denotes the $r$-th robot and ($x_r$, $y_r$, $\theta_r$) represents the pose expressed in the global coordinate frame. The linear velocity and angular velocity are represented by $v_r$ and $\omega_r$, respectively.
The discrete-time model of the system $X_r(k) = [x_r(k) \; y_r(k) \; \theta_r(k) \; v_r(k) \; \omega_r(k)]$ is written as follows:
$$ X_r(k) = \begin{bmatrix} x_r(k-1) + v_r(k)\, t \cos\theta_r(k-1) \\ y_r(k-1) + v_r(k)\, t \sin\theta_r(k-1) \\ \theta_r(k-1) + \omega_r(k)\, t \\ v_r(k-1) \\ \omega_r(k-1) \end{bmatrix} . \tag{25} $$
The state model, Equation (26), and the measurement model, Equation (27), of the filter are as follows:
$$ X_r(k) = f_r(X_r(k-1)) + g_{w_r}(k-1) , \tag{26} $$
$$ z_r(k) = h_r(X_r(k)) + g_{\upsilon_r}(k) , \tag{27} $$
where $f_r$ represents the nonlinear system and $h_r$ the measurement model. $g_{w_r}(k-1)$ represents the Gaussian noise of the dynamic system and $g_{\upsilon_r}(k)$ the Gaussian noise of the measurement. Both noises are zero-mean Gaussian with associated covariance matrices $Q_r(k)$ and $R_r(k)$:
$$ g_{w_r}(k) \sim N(0, Q_r(k)) , \tag{28} $$
$$ g_{\upsilon_r}(k) \sim N(0, R_r(k)) . \tag{29} $$
The nonlinear functions $f_r$ and $h_r$ have to be linearized to obtain the linearized model of the system. The Jacobian matrix of $f_r$ at the operating point is calculated as:
$$ F_r(k) = \left. \frac{\partial f_r(X)}{\partial X_r} \right|_{\hat{X}_r(k)} , \tag{30} $$
$$ F_r(k) = \begin{bmatrix} 1 & 0 & -v_r(k)\, t \sin\theta_r(k) & t \cos\theta_r(k) & 0 \\ 0 & 1 & v_r(k)\, t \cos\theta_r(k) & t \sin\theta_r(k) & 0 \\ 0 & 0 & 1 & 0 & t \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} . \tag{31} $$
The prediction equations are as follows:
$$ \hat{X}_r(k)^- = f_r(\hat{X}_r(k-1)^+) , \tag{32} $$
$$ P_r(k)^- = F_r(k)\, P_r(k-1)^+\, F_r(k)^T + Q_r(k) . \tag{33} $$
The update equations are as follows:
$$ K_r(k) = P_r(k)^- H_r(k)^T \left( H_r(k) P_r(k)^- H_r(k)^T + R_r(k) \right)^{-1} , \tag{34} $$
$$ \hat{X}_r(k)^+ = \hat{X}_r(k)^- + K_r(k) \left( z_r(k) - h_r(\hat{X}_r(k)^-) \right) , \tag{35} $$
$$ P_r(k)^+ = \left( I - K_r(k) H_r(k) \right) P_r(k)^- , \tag{36} $$
where $K_r(k)$ represents the Kalman gain, and $\hat{X}_r(k)^-$, $P_r(k)^-$ and $\hat{X}_r(k)^+$, $P_r(k)^+$ are the a priori and a posteriori estimates of the state and its covariance, respectively.
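A compact NumPy sketch of the filter defined by Equations (25)-(36) is given below; it assumes the linear measurement models of Sections 5.1.1 and 5.1.2, so that $h_r(X) = H X$, and it ignores angle wrapping for brevity.

```python
import numpy as np

def f(x, dt):
    """State transition of Eq. (25); x = [x, y, theta, v, omega]."""
    px, py, th, v, w = x
    return np.array([px + v * dt * np.cos(th),
                     py + v * dt * np.sin(th),
                     th + w * dt,
                     v,
                     w])

def jacobian_F(x, dt):
    """Jacobian of Eq. (31)."""
    _, _, th, v, _ = x
    return np.array([[1, 0, -v * dt * np.sin(th), dt * np.cos(th), 0],
                     [0, 1,  v * dt * np.cos(th), dt * np.sin(th), 0],
                     [0, 0, 1, 0, dt],
                     [0, 0, 0, 1, 0],
                     [0, 0, 0, 0, 1]])

def ekf_predict(x, P, Q, dt):
    F = jacobian_F(x, dt)
    return f(x, dt), F @ P @ F.T + Q           # Eqs. (32)-(33)

def ekf_update(x, P, z, H, R):
    y = z - H @ x                              # linear measurement model, h(x) = Hx
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Eq. (34)
    x_new = x + K @ y                          # Eq. (35)
    P_new = (np.eye(len(x)) - K @ H) @ P       # Eq. (36)
    return x_new, P_new
```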

5.1. Measurement Model

Subsection 5.1.1 explains the calculation of the linear ($v_{wheels}^r$) and angular ($\omega_{wheels}^r$) velocities of the robot from the wheel encoders. Subsection 5.1.2 explains the use of the global pose in the EKF of the robot after the detection of a landmark.

5.1.1. Data from Wheel Encoders

All robots are equipped with encoders on their motors. The angular velocities of the left and right wheels are expressed as:
$$ w_{aL}^r = \frac{\phi_{Lc}^r - \phi_{Lp}^r}{t_{\phi L}} \cdot \frac{\pi}{180} , \tag{37} $$
$$ w_{aR}^r = \frac{\phi_{Rc}^r - \phi_{Rp}^r}{t_{\phi R}} \cdot \frac{\pi}{180} , \tag{38} $$
where $w_{aL}^r$ and $w_{aR}^r$ are the angular velocities (rad/s) of the left and right wheels of the $r$-th robot. The current and previously saved encoder readings for the left and right wheels are $\phi_{Lc}^r$, $\phi_{Lp}^r$ and $\phi_{Rc}^r$, $\phi_{Rp}^r$, respectively. The times between two consecutive readings of the left and right wheels are $t_{\phi L}$ and $t_{\phi R}$, respectively.
The linear velocities of the wheels are calculated by multiplying Equations (37) and (38) by the wheel radius $R_{wheel}^r$:
$$ V_L^r = w_{aL}^r \cdot R_{wheel}^r , \tag{39} $$
$$ V_R^r = w_{aR}^r \cdot R_{wheel}^r . \tag{40} $$
Equations (39) and (40) are used to calculate the linear and angular velocities of the robot as follows:
$$ v_{wheels}^r = (V_R^r + V_L^r)/2 , \tag{41} $$
$$ \omega_{wheels}^r = (V_R^r - V_L^r)/L^r , \tag{42} $$
where $L^r$ is the robot's wheel separation. $v_{wheels}^r$ and $\omega_{wheels}^r$ are the linear and angular velocities of the robot calculated from the readings of the wheel encoders.
The measurement vector $z_{encoders}^r(k)$ is written as:
$$ z_{encoders}^r(k) = \begin{bmatrix} v_{wheels}^r \\ \omega_{wheels}^r \end{bmatrix} . \tag{43} $$
The $H_{encoders}^r(k)$ matrix is as follows:
$$ H_{encoders}^r(k) = \begin{bmatrix} 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} . \tag{44} $$
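A sketch of the encoder measurement path of Equations (37)-(44), assuming the encoder readings are wheel angles expressed in degrees as in Equations (37)-(38):

```python
import numpy as np

def encoder_measurement(phi_Lc, phi_Lp, phi_Rc, phi_Rp, t_L, t_R, R_wheel, L):
    """Velocity measurement and H matrix for the EKF update from wheel encoders."""
    w_aL = (phi_Lc - phi_Lp) / t_L * np.pi / 180.0   # Eq. (37)
    w_aR = (phi_Rc - phi_Rp) / t_R * np.pi / 180.0   # Eq. (38)
    V_L, V_R = w_aL * R_wheel, w_aR * R_wheel        # Eqs. (39)-(40)
    v = (V_R + V_L) / 2.0                            # Eq. (41)
    w = (V_R - V_L) / L                              # Eq. (42)
    z = np.array([v, w])                             # Eq. (43)
    H = np.array([[0, 0, 0, 1, 0],
                  [0, 0, 0, 0, 1]], dtype=float)     # Eq. (44)
    return z, H
```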

5.1.2. Landmark Data

After the detection of a landmark by the robot's camera, the vector $P$ from Equation (23) gives the global pose of the robot. The angle from the landmark is not used directly, as the robots are equipped with the rotating platform. The angle from the landmark $L_\theta^r$ is calculated as:
$$ L_\theta^r = \theta_r + \theta_L^{ln} + \theta_M^i + \theta_{rc} , \tag{45} $$
where $\theta_r$ is the robot orientation, $\theta_L^{ln}$ is the angle generated from the landmark detection, i.e., the third component of Equation (23), and $\theta_M^i$ is the angle of the rotating platform of the robot. $\theta_{rc}$ depends on the specific position of the robot; its values are given in Section 8.
The measurement vector $z_{landmark}^r(k)$ of the landmark is as follows:
$$ z_{landmark}^r(k) = \begin{bmatrix} z_l^{ln} \\ x_l^{ln} \\ L_\theta^r \end{bmatrix} , \tag{46} $$
where $z_l^{ln}$ and $x_l^{ln}$ are taken from Equation (23). The $H_{landmark}^r(k)$ matrix is as follows:
$$ H_{landmark}^r(k) = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \end{bmatrix} . \tag{47} $$
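The landmark measurement can be assembled analogously; a sketch follows, where the value of $\theta_{rc}$ is case-dependent (see Section 8):

```python
import numpy as np

def landmark_measurement(t_cl, theta_r, theta_L, theta_M, theta_rc):
    """Pose measurement injected into the EKF after a landmark detection (Eqs. 45-47)."""
    L_theta = theta_r + theta_L + theta_M + theta_rc          # Eq. (45)
    z = np.array([t_cl[2], t_cl[0], L_theta])                 # Eq. (46): [z_l, x_l, L_theta]
    H = np.array([[1, 0, 0, 0, 0],
                  [0, 1, 0, 0, 0],
                  [0, 0, 1, 0, 0]], dtype=float)              # Eq. (47)
    return z, H
```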

6. Controls

6.1. Controls of Rotating Device

The rotating device tracks the ArUco markers on another rotating device using a PID controller. The main aim is to make the distance between the center of the camera and the ArUco marker as small as possible. The distance between the center of the ArUco marker and the camera center ($x_m^n$) is taken from Equation (12) or (16), depending on whether a single marker or the averaged measurement is used.
$$ u_{pid}^r(t) = K_P^r e_r(t) + K_i^r \int e_r(t)\, dt_r + K_d^r \frac{de_r}{dt_r} , \tag{48} $$
where $u_{pid}^r(t)$ is the control signal generated at time $t$, $K_P^r$, $K_i^r$ and $K_d^r$ are the proportional, integral, and derivative gains, $de_r$ is the change in error, $dt_r$ is the change in time, and $e_r(t)$ is expressed as follows:
$$ e_r(t) = SP_r - \theta_M^i , \tag{49} $$
where $SP_r$ is the set point, which is set to zero as the camera should point at the middle of the ArUco marker, and $\theta_M^i$ is the current angle of the rotating platform of the robot.
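A minimal discrete-time form of the controller in Equations (48)-(49) could look as follows; the gains and the time step are illustrative, not the values used on the real platform:

```python
class PanPID:
    """Discrete PID for the rotating platform (Eqs. 48-49); gains are illustrative."""

    def __init__(self, kp, ki, kd, set_point=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.set_point = set_point
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, theta_M, dt):
        error = self.set_point - theta_M              # Eq. (49)
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```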

6.2. Collision Avoidance

The collision avoidance used in this paper is based on the idea presented in [40] and later widely used in multi-agent systems [41,42,43,44,45].
Robot $r$ avoids collisions with all other robots $j$, $j \in \{1, \ldots, N\} \setminus \{r\}$, using an Artificial Potential Function (APF). All robots are surrounded by APFs that rise to infinity near the robot's/obstacle's boundary of radius $r_j$ ($j$ is the number of the robot/obstacle) and monotonically decrease to zero at some radius $R_j$, $R_j > r_j$.
The APF [44] is written as:
$$ B_a^{rj}(l_{rj}) = \begin{cases} 0 & \text{for } l_{rj} < r_j \\ e^{\frac{l_{rj} - r_j}{l_{rj} - R_j}} & \text{for } r_j \le l_{rj} < R_j \\ 0 & \text{for } l_{rj} \ge R_j \end{cases} , \tag{50} $$
and the above function gives the output $B_a^{rj}(l_{rj}) \in [0, 1]$.
The Euclidean length $l_{rj} = \lVert [x_j \; y_j]^T - [x_r \; y_r]^T \rVert$ is used to calculate the distance between the $r$-th robot and the $j$-th obstacle (or robot).
To scale the function (50) to the range $[0, \infty)$, the following equation is used:
$$ V_a^{rj}(l_{rj}) = \frac{B_a^{rj}(l_{rj})}{1 - B_a^{rj}(l_{rj})} . \tag{51} $$
The scalar function $V_a^{rj}(l_{rj})$ is used in the control to generate collision avoidance behaviors through its spatial partial derivatives. The approach outlined above is also used to avoid collisions with a static obstacle with its center located at point $(x_o, y_o)$, radius $r_o$, and APF vanishing at distance $R_o$ from the center. In this case, all the above equations remain valid if $o$ is substituted in place of $j$.
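A sketch of the APF of Equations (50)-(51) follows; the gradient is computed here by finite differences, whereas the original work presumably uses the analytic partial derivatives.

```python
import numpy as np

def B_a(l, r_j, R_j):
    """Repulsive APF of Eq. (50), non-zero only for r_j <= l < R_j."""
    if l < r_j or l >= R_j:
        return 0.0
    return np.exp((l - r_j) / (l - R_j))

def V_a(l, r_j, R_j):
    """Scaled APF of Eq. (51); grows without bound as l approaches r_j."""
    b = B_a(l, r_j, R_j)
    return b / (1.0 - b) if b < 1.0 else np.inf

def V_a_gradient(p_r, p_j, r_j, R_j, eps=1e-6):
    """Numerical spatial gradient of V_a with respect to the robot position p_r (2D)."""
    grad = np.zeros(2)
    for k in range(2):
        dp = np.zeros(2)
        dp[k] = eps
        l_plus = np.linalg.norm(p_j - (p_r + dp))
        l_minus = np.linalg.norm(p_j - (p_r - dp))
        grad[k] = (V_a(l_plus, r_j, R_j) - V_a(l_minus, r_j, R_j)) / (2 * eps)
    return grad
```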

6.3. Control of Mobile Platform

Each moving robot has its own virtual robot, which acts as a leader. The kinematic model of the $r$-th mobile robot $R_r$ is written as:
$$ \begin{bmatrix} \dot{x}_r \\ \dot{y}_r \\ \dot{\theta}_r \end{bmatrix} = \begin{bmatrix} \cos\theta_r & 0 \\ \sin\theta_r & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} v_r \\ \omega_r \end{bmatrix} , \tag{52} $$
where $x_r$, $y_r$, $\theta_r$ represent the robot's pose in the global reference frame, and $[v_r \; \omega_r]$ is the control vector, where $v_r$ is the linear velocity and $\omega_r$ the angular velocity of the robot.
The control task is to bring the following global quantities to zero:
$$ \begin{aligned} p_{rx} &= x_{rd} - x_r \\ p_{ry} &= y_{rd} - y_r \\ p_{r\theta} &= \theta_{rd} - \theta_r , \end{aligned} \tag{53} $$
where $x_{rd}$, $y_{rd}$ and $\theta_{rd}$ represent the components of the pose of the virtual robot ($R_{or}$), and $x_r$, $y_r$ and $\theta_r$ are taken from the robot's EKF system, Equation (35). The system errors with respect to the robot's fixed frame are as follows:
$$ \begin{bmatrix} e_{rx} \\ e_{ry} \\ e_{r\theta} \end{bmatrix} = \begin{bmatrix} \cos\theta_r & \sin\theta_r & 0 \\ -\sin\theta_r & \cos\theta_r & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} p_{rx} \\ p_{ry} \\ p_{r\theta} \end{bmatrix} . \tag{54} $$
The collision avoidance components are added using the chain rule:
$$ \frac{\partial V_a^{rj}}{\partial p_{rx}} = \frac{\partial V_a^{rj}}{\partial x_r} \frac{\partial x_r}{\partial p_{rx}} = -\frac{\partial V_a^{rj}}{\partial x_r} , \tag{55} $$
$$ \frac{\partial V_a^{rj}}{\partial p_{ry}} = \frac{\partial V_a^{rj}}{\partial y_r} \frac{\partial y_r}{\partial p_{ry}} = -\frac{\partial V_a^{rj}}{\partial y_r} . \tag{56} $$
The gradient of the APF can be expressed with respect to the local coordinate frame fixed to the $r$-th robot:
$$ \begin{bmatrix} \dfrac{\partial V_a^{rj}}{\partial e_{rx}} \\ \dfrac{\partial V_a^{rj}}{\partial e_{ry}} \end{bmatrix} = \begin{bmatrix} \cos\theta_r & \sin\theta_r \\ -\sin\theta_r & \cos\theta_r \end{bmatrix} \begin{bmatrix} \dfrac{\partial V_a^{rj}}{\partial p_{rx}} \\ \dfrac{\partial V_a^{rj}}{\partial p_{ry}} \end{bmatrix} . \tag{57} $$
Using Equations (54) and (57), the system errors with the collision avoidance components are written as:
$$ E_{rx} = p_{rx} \cos\theta_r + p_{ry} \sin\theta_r + \sum_{j=1, j \ne r}^{N} \frac{\partial V_a^{rj}}{\partial e_{rx}} , \tag{58} $$
$$ E_{ry} = -p_{rx} \sin\theta_r + p_{ry} \cos\theta_r + \sum_{j=1, j \ne r}^{N} \frac{\partial V_a^{rj}}{\partial e_{ry}} , \tag{59} $$
$$ e_{r\theta} = p_{r\theta} . \tag{60} $$
The above equations are rewritten as follows:
$$ E_{rx} = e_{rx} + \sum_{j=1, j \ne r}^{N} \frac{\partial V_a^{rj}}{\partial e_{rx}} , \tag{61} $$
$$ E_{ry} = e_{ry} + \sum_{j=1, j \ne r}^{N} \frac{\partial V_a^{rj}}{\partial e_{ry}} . \tag{62} $$
Due to its simplicity and effectiveness, the trajectory tracking algorithm is taken from [46]. The control for the $r$-th robot is written as:
$$ \begin{aligned} v_r &= v_{or} \cos e_{r\theta} + k_1 E_{rx} \\ \omega_r &= \omega_{or} + k_2\, \mathrm{sgn}(v_{or})\, E_{ry} + k_3 e_{r\theta} , \end{aligned} \tag{63} $$
where $[v_{or} \; \omega_{or}]$ is the desired velocity vector of the virtual robot, with $v_{or}$ the linear velocity and $\omega_{or}$ the angular velocity. $k_1$, $k_2$ and $k_3$ are constant parameters greater than zero, and the function $\mathrm{sgn}(\cdot)$ is defined as follows:
$$ \mathrm{sgn}(\xi) = \begin{cases} -1 & \text{for } \xi < 0 \\ 0 & \text{for } \xi = 0 \\ 1 & \text{for } \xi > 0 \end{cases} . \tag{64} $$
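The resulting control law, Equations (63)-(64), reduces to a few lines; the gains below default to the values reported in Section 8, and the argument of sgn() follows the reconstruction of Equation (63) (the virtual robot's linear velocity):

```python
import numpy as np

def sgn(x):
    """Signum function of Eq. (64)."""
    return (x > 0) - (x < 0)

def tracking_control(E_rx, E_ry, e_rtheta, v_or, w_or, k1=0.25, k2=2.0, k3=2.0):
    """Tracking control with collision avoidance terms (Eq. 63)."""
    v_r = v_or * np.cos(e_rtheta) + k1 * E_rx
    w_r = w_or + k2 * sgn(v_or) * E_ry + k3 * e_rtheta
    return v_r, w_r
```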
The detailed analysis of the properties of the control for a single platform without collision avoidance is presented in paper [47].

7. Methodology

The experiment was carried out with four real robots, shown in Figure 1. All robots are equipped with an Intel NUC on which ROS Noetic is deployed. Each robot runs its own controller on its own NUC, completely independently of the others. Each robot also has a low-level controller running as a ROS node; it subscribes to the robot's linear and angular velocities and publishes the ticks from the encoders. All robots are connected to one WiFi router to have access to roscore. One PC is also connected to the same WiFi router to launch roscore, and the robots in the environment work as slaves of the ROS master. In the environment, landmarks are pasted on a vertical surface, as seen in Figure 7b.
Two robots acted as static obstacles in the experiment. The other two robots move in the environment, avoid the static robots, and try to reach the positions of their virtual robots. Using the behavior trees, the moving robots look towards the landmarks for 5 seconds and then look towards the closest robot or straight in the direction of motion.

8. Experimental Results

The experiment video is available online [https://youtu.be/rDvpCEM7DmE]. Robots number 1 and 2 move in the environment; they have to follow virtual robot 1 and virtual robot 2, respectively. The experiment lasts 60 seconds. The starting and end poses of virtual robot 1 are $(2.75, 2.0, 0)$ and $(3.35, 2.0, 0)$, respectively, and for virtual robot 2 they are $(1.24, 1.8, 3.14)$ and $(0.65, 1.8, 3.14)$ (positions in m, orientations in rad). The virtual robots generate straight paths. The desired velocity for both virtual robots is 0.01 m/s. The values of $r_j$ and $R_j$ of the APF in the collision avoidance Section 6.2 are 0.2 m and 0.6 m, respectively. The values of $l_{ln}$ in Equation (20) are as follows: when landmark number 1 in Figure 7b is detected by the robot, $l_1 = 0$; for the other landmarks, $l_2 = 0.786$, $l_3 = 1.916$, $l_4 = 3.046$ and $l_5 = 3.996$ m. The values $l_1, \ldots, l_5$ in Equation (20) are the distances between landmark 1 and the other landmarks in the environment. $\theta_{rc}$ of Equation (45) is considered in two cases. Case 1 is when the robot moves toward the positive x-axis of the environment and the rotating platform looks toward the landmarks on the wall on the positive y-axis of the environment; then $\theta_{rc} = -1.5708$ rad. Case 2 is when the robot moves toward the negative x-axis and the rotating platform observes the landmarks on the wall on the positive y-axis of the environment; then $\theta_{rc} = 1.5708$ rad. The constant parameters of the controller in Equation (63) are $k_1 = 0.25$, $k_2 = 2$ and $k_3 = 2$.
Figure 8a shows the paths of virtual robots 1 and 2, the EKF-estimated paths of robots 1 and 2, and the OptiTrack paths of all four robots. Figure 8b shows the xy-plot of robot 1 with static robot 4 and its activation region. Figure 8c shows the xy-plot of robot 2 with virtual robot 2. Figure 8d shows the xy-plot of all robots with the activation regions. The collision avoidance components $D_v$ and $D_y$ of robot 1 are shown in Figure 8e and Figure 8f, respectively. The global errors of robot 1 in the x-axis, the y-axis, and $\theta$ with respect to time are shown in Figure 8g, Figure 8h and Figure 8i, respectively. The linear and angular velocities of robot 1 with respect to time are shown in Figure 8j and Figure 8k.
The collision avoidance components $D_v$ and $D_y$ of robot 2 are shown in Figure 8l and Figure 8m, respectively. The global errors of robot 2 in the x-axis, the y-axis, and $\theta$ with respect to time are shown in Figure 8n, Figure 8o and Figure 8p, respectively. The linear and angular velocities of robot 2 with respect to time are shown in Figure 8q and Figure 8r.
Figure 8t and Figure 8v show the detections of other robots in the environment by robot 1 and robot 2, respectively. The detections of landmarks by robot 1 and robot 2 are shown in Figure 8s and Figure 8u.
The ground truth from the OptiTrack system is used to evaluate the results of the onboard measurements. The OptiTrack paths for robots 1 and 2 stay within approximately ±0.2 m of the EKF paths generated for the robots, as seen in Figure 8a. This accuracy should be seen as a goal that was reached in this work.

9. Conclusions

The paper presents an algorithm that uses open-source rotating devices installed on all robots. The movement of the devices is independent of the movement of the robots' mobile platforms. The rotating devices are capable of detecting other rotating devices installed on robots in the environment, as well as landmarks. Behavior trees are used to change the orientation of the camera towards the location of the landmarks, towards the closest robot, or straight along the axis of motion. An experiment was performed with four robots: two robots were stationary and acted as static obstacles in the environment, while the other two robots followed the trajectories of their virtual robots and avoided collisions with the other robots present in the environment. An Extended Kalman Filter is used to localize the robots from the data coming from the encoders and from the landmarks. The global pose of each robot is calculated and updated after the detection of the landmarks. The proposed research method was verified by an actual experiment, which confirms its usefulness in laboratory practice related to testing multi-agent systems.

Author Contributions

Conceptualization, A.J. and P.H.; Methodology, A.J. and W.K.; Software, A.J. and W.K.; Investigation, P.H.; Resources, W.K.; Writing—original draft, A.J. and W.K.; Writing—review & editing, W.K. and P.H.; Supervision, W.K. and P.H. All authors have read and agreed to the published version of the manuscript.

Funding

This paper was supported by statutory grant 0211/SBAD/0123 and faculty grant 0211/SBAD/0424.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Joon, A.; Kowalczyk, W. Leader–Follower Approach for Non-Holonomic Mobile Robots Based on Extended Kalman Filter Sensor Data Fusion and Extended On-Board Camera Perception Controlled with Behavior Tree. Sensors 2023, 23, 8886. [Google Scholar] [CrossRef]
  2. Joon, A.; Kowalczyk, W. An Open-Source Rotating Device for Relative Localization in Multi-Agent Systems. In Proceedings of the 2024 13th International Workshop on Robot Motion and Control (RoMoCo), Poznań, Poland, 2024; pp. 69–75. [CrossRef]
  3. Kalaitzakis, M.; Cain, B.; Carroll, S.; Ambrosi, A.; Whitehead, C.; Vitzilaios, N. Fiducial Markers for Pose Estimation Overview, Applications and Experimental Comparison of the ARTag, AprilTag, ArUco and STag Markers. Journal of Intelligent & Robotic Systems 2021, 101, 71. [Google Scholar]
  4. Jurado-Rodriguez, D.; Munoz-Salinas, R.; Garrido-Jurado, S.; Medina-Carnicer, R. Design, Detection, and Tracking of Customized Fiducial Markers. IEEE Access 2021, 9, 140066–140078. [Google Scholar] [CrossRef]
  5. Cepon, G.; Ocepek, D.; Kodric, M.; Demsar, M.; Bregar, T.; Boltezar, M. Impact-Pose Estimation Using ArUco Markers in Structural Dynamics. Experimental Techniques 2024, 48, 369–380. [Google Scholar] [CrossRef]
  6. Hayat, A.A.; Boby, R.A.; Saha, S.K. A geometric approach for kinematic identification of an industrial robot using a monocular camera. Robotics and Computer Integrated Manufacturing 2019, 57, 329–346. [Google Scholar] [CrossRef]
  7. Balanji, H.M.; Turgut, A.E.; Tunc, L.T. A novel vision-based calibration framework for industrial robotic manipulators. Robotics and Computer-Integrated Manufacturing 2022, 73. [Google Scholar] [CrossRef]
  8. Botta, A.; Quaglia, G. Performance Analysis of Low-Cost Tracking System for Mobile Robots. Machines 2020, 8, 29. [Google Scholar] [CrossRef]
  9. Roos-Hoefgeest, S.; Garcia, I.A.; Gonzalez, R.C. Mobile robot localization in industrial environments using a ring of cameras and ArUco markers. In Proceedings of IECON 2021 - 47th Annual Conference of the IEEE Industrial Electronics Society, Toronto, ON, Canada, 2021, pp. 1-6.
  10. Babinec, A.; Jurisica, L.; Hubinsky, P.; Duchon, F. Visual localization of mobile robot using artificial markers. Procedia Engineering 2014, 96, 1–9. [Google Scholar] [CrossRef]
  11. Miranda, V.R.F.; Rezende, A.M.C.; Rocha, T.L.; Azpurua, H.; Pimenta, L.C.A.; Freitas, G.M. Autonomous Navigation System for a Delivery Drone. Journal of Control, Automation and Electrical Systems 2022, 33, 141–155. [Google Scholar] [CrossRef]
  12. Xing, B.; Zhu, Q.; Pan, F.; Feng, X. Marker-Based Multi-Sensor Fusion Indoor Localization System for Micro Air Vehicles. Sensors 2018, 18, 1706. [Google Scholar] [CrossRef] [PubMed]
  13. Morales, J.; Castelo, I.; Serra, R.; Lima, P.U.; Basiri, M. Vision-Based Autonomous Following of a Moving Platform and Landing for an Unmanned Aerial Vehicle. Sensors 2023, 23, 829. [Google Scholar] [CrossRef]
  14. Wubben, J.; Fabra, F.; Calafate, C.T.; Krzeszowski, T.; Marquez-Barja, J.M.; Cano, J.-C.; Manzoni, P. Accurate Landing of Unmanned Aerial Vehicles Using Ground Pattern Recognition. Electronics 2019, 8, 1532. [Google Scholar] [CrossRef]
  15. Liu, S.; Fantoni, I.; Chriette, A. Decentralized control and state estimation of a flying parallel robot interacting with the environment. Control Engineering Practice 2024, 144, 105817. [Google Scholar] [CrossRef]
  16. Zhong, Y.; Wang, Z.; Yalamanchili, A.V.; Yadav, A.; Ravi Srivatsa, B.N.; Saripalli, S.; Bukkapatna, S.T.S. Image-based flight control of unmanned aerial vehicles (UAVs) for material handling in custom manufacturing. Journal of Manufacturing Systems 2020, 56, 615–621. [Google Scholar] [CrossRef]
  17. Bacik, J.; Durovsky, F.; Fedor, P.; Perdukova, D. Autonomous flying with quadrocopter using fuzzy control and ArUco markers. Intelligent Service Robotics 2017, 10, 185–194. [Google Scholar] [CrossRef]
  18. Burrell, T.; West, C.; Monk, S.D.; Montezeri, A.; Taylor, C.J. Towards a Cooperative Robotic System for Autonomous Pipe Cutting in Nuclear Decommissioning. In Proceedings of 2018 UKACC 12th International Conference on Control (CONTROL) Sheffield, UK, 5-7 Sept 2018, pp.283–288.
  19. Vela, C.; Fasano, G.; Opromolla, R. Pose determination of passively cooperative spacecraft in close proximity using a monocular camera and AruCo markers. Acta Astronautica 2022, 201, 22–38. [Google Scholar] [CrossRef]
  20. Qian, J.; Zi, B.; Wang, D.; Ma, Y.; Zhang, D. The Design and Development of an Omni-Directional Mobile Robot Oriented to an Intelligent Manufacturing System. Sensors 2017, 17, 2073. [Google Scholar] [CrossRef]
  21. Zhang, L.; Wu, X.; Gao, R.; Pan, L.; Zhang, Q. A multi-sensor fusion positioning approach for indoor mobile robot using factor graph. Measurement 2023, 216, 112926. [Google Scholar] [CrossRef]
  22. Rekleitis, I.; Meger, D.; Dudek, G. Simultaneous planning, localization, and mapping in a camera sensor network. Robotics and Autonomous Systems 2006, 54, 921–932. [Google Scholar] [CrossRef]
  23. Zhou, G.; Luo, J.; Meng, S. An EKF-based multiple data fusion for mobile robot indoor localization. Assembly Automation 2021, 41(3), 274–282. [Google Scholar] [CrossRef]
  24. Guo, J.; Li, L.; Wang, J.; Li, K. Cyber-Physical System-Based Path Tracking Control of Autonomous Vehicles Under Cyber-Attacks. IEEE Transactions on Industrial Informatics 2023, 19(5), 6624–6635. [Google Scholar] [CrossRef]
  25. Cho, H.; Kim, E.K.; Jang, E.; Kim, S. Improved Positioning Method for Magnetic Encoder Type AGV Using Extended Kalman Filter and Encoder Compensation Method. International Journal of Control, Automation and Systems 2017, 15, 1844–1856. [Google Scholar] [CrossRef]
  26. Mekonnen, G.; Kumar, S.; Pathak, P.M. Wireless hybrid visual servoing of omnidirectional wheeled mobile robots. Robotics and Autonomous Systems 2016, 75, 450–462. [Google Scholar] [CrossRef]
  27. Miljkovic, Z.; Vukovic, N.; Mitic, M.; Babic, B. New hybrid vision-based control approach for automated guided vehicles. The International Journal of Advanced Manufacturing Technology 2013, 66, 231–249. [Google Scholar] [CrossRef]
  28. Py-trees. Available online: https://pypi.org/project/py-trees/ (accessed on 30 Nov 2024).
  29. Open Source Rotating Platform for Multi-agent Mobile Robots. Available online: https://www.thingiverse.com/thing:6550991 (accessed on 30 May 2024).
  30. Dynamixel e-Manual. Available online: https://emanual.robotis.com/docs/en/dxl/ax/ax-12a/ (accessed on 22 February 2024).
  31. NeuroRoboticTech/DynamixShield. Available online: https://github.com/NeuroRoboticTech/DynamixShield/tree/master/libraries/DynamixelSerial (accessed on 22 February 2024).
  32. rosserial arduino/Tutorials - ROS Wiki. Available online: https://wiki.ros.org/rosserial_arduino/Tutorials (accessed on 22 February 2024).
  33. ROS Wrapper for Intel. Available online: https://github.com/IntelRealSense/realsense-ros (accessed on 2 June 2024).
  34. Detection of ArUco Markers. Available online: https://docs.opencv.org/4.x/d5/dae/tutorial_aruco_detection.html (accessed on 2 June 2024).
  35. Camera Calibration and 3D Reconstruction - OpenCV. Available online: https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html?highlight=camera%20calibration#rodrigues (accessed on 2 June 2024).
  36. Braden, B. The Surveyor’s Area Formula. The College Mathematics Journal 1986, 17, 326–337. [Google Scholar] [CrossRef]
  37. Camera posture estimation using an ARUCO diamond marker. Available online: https://longervision.github.io/2017/03/11/ComputerVision/OpenCV/opencv-external-posture-estimation-ArUco-diamond/ (accessed on 30 November 2024).
  38. Charuco Diamond Creation. OpenCV. Available online: https://docs.opencv.org/4.x/d5/d07/tutorial_charuco_diamond_detection.html (accessed on 1 December 2024).
  39. Al Khatib, E.I.; Jaradat, M.A.K.; Abdel-Hafez, M.F. Low-Cost Reduced Navigation System for Mobile Robot in Indoor/Outdoor Environments. IEEE Access 2020, 8, 25014–25026. [Google Scholar] [CrossRef]
  40. Khatib, O. Real-time Obstacle Avoidance for Manipulators and Mobile Robots. Int. J. Robot. Res. 1986, 5, 90–98. [Google Scholar] [CrossRef]
  41. Mastellone, S.; Stipanovic, D.; Spong, M. Formation control and collision avoidance for multi-agent non-holonomic systems: Theory and experiments. Int. J. Robot. Res. 2008, 107–126. [Google Scholar] [CrossRef]
  42. Do, K.D. Formation Tracking Control of Unicycle-Type Mobile Robots With Limited Sensing Ranges, IEEE Trans. Control Syst. Technol. 2008, 16, 527–538. [Google Scholar] [CrossRef]
  43. Kowalczyk, W.; Kozłowski, K.; Tar, J.K. Trajectory Tracking for Multiple Unicycles in the Environment with Obstacles. In Proceedings of the 19th International Workshop on Robotics in Alpe-Adria-Danube Region (RAAD 2010), Budapest, Hungary, 23–27 June 2010; pp. 451–456. [Google Scholar] [CrossRef]
  44. Kowalczyk, W.; Michałek, M.; Kozłowski, K. Trajectory Tracking Control with Obstacle Avoidance Capability for Unicycle-like Mobile Robot. Bull. Pol. Acad. Sci. Tech. Sci. 2012, 60, 537–546. [Google Scholar] [CrossRef]
  45. Kowalczyk, W.; Kozłowski, K. Leader-Follower Control and Collision Avoidance for the Formation of Differentially-Driven Mobile Robots. In Proceedings of the MMAR 2018—23rd International Conference on Methods and Models in Automation and Robotics, Miedzyzdroje, Poland, 27–30 August 2018. [Google Scholar]
  46. Tsoukalas, A.; Tzes, A.; Khorrami, F. Relative Pose Estimation of Unmanned Aerial Systems. In Proceedings of the 2018 26th Mediterranean Conference on Control and Automation (MED), Zadar, Croatia, 2018; pp. 155–160. [CrossRef]
  47. Canudas de Wit, C.; Khennouf, H.; Samson, C.; Sordalen, O.J. Nonlinear Control Design for Mobile Robots. Recent Trends in Mobile Robots 1994, 121–156. [CrossRef]
Figure 2. Behavior Tree for rotating platform movement.
Figure 3. Exploded view of assembly.
Figure 4. Axis representations between the camera and the ArUco marker.
Figure 5. Corner points of ArUco marker.
Figure 6. Top view of the rotating device.
Figure 7. Side-by-side comparison of (a) ArUco Diamond Marker and (b) Environment with landmarks.
Figure 8. Experiment result 1.
Table 1. Details of parts used in rotating platform.

S.No. | Part name | Part details
1 | Camera | Intel RealSense sensor D435
2 | Servo motor | DYNAMIXEL AX-12A
3 | Ball bearing | 6010-2Z
4 | Integrated Circuit (IC) | 74LS241N
5 | Microcontroller | Arduino Mega
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.