Preprint

Development and Experimental Validation of a Sense and Avoid System for a Mini-UAV


Submitted: 29 November 2024
Posted: 02 December 2024


Abstract
This paper provides an integrated overview of the three-year effort to design and implement a prototypical Sense and Avoid (SAA) system based on a multisensor architecture leveraging data fusion between optical and radar sensors. The work was carried out within the context of the Italian research project TERSA (Electrical and radar technologies for remotely piloted aircraft systems), undertaken by the University of Pisa in collaboration with its industrial partners and aimed at the design and development of a series of innovative technologies for small-scale remotely piloted aircraft systems (MTOW < 25 kgf). The approach leverages advanced computer vision algorithms and an Extended Kalman Filter to enhance obstacle detection and tracking capabilities. The “Sense” module processes environmental data through a radar and an electro-optical sensor, while the “Avoid” module utilizes efficient geometric algorithms for collision prediction and evasive manoeuvre computation. Initially, the integrated system was rigorously tested by means of a sophisticated simulation environment that replicates real-world conditions, including the generation of realistic artificial scenery, to evaluate the closed-loop interaction between the sense and avoid modules in a Hardware-in-the-Loop (HIL) setup. Preliminary results demonstrate the system’s effectiveness in real-time detection and avoidance of non-cooperative obstacles, providing high computational efficiency and responsiveness while ensuring compliance with UAV aeromechanical constraints and safety constraints in terms of minimum distance requirements. At a later stage of the project, the effectiveness of the integrated system was evaluated by means of a preliminary flight test campaign conducted at the Lucca-Tassignano airport in Italy. The findings underscore the potential for deploying such advanced SAA systems in tactical UAV operations, contributing significantly to the safety of flight in non-segregated airspaces.

1. Introduction

In recent years, Unmanned Aircraft Systems (UAS) and their operations have registered a significant increase in number, sophistication and technical complexity. UAS are no longer an exclusively military reality, and several studies are currently evaluating their possible employment in a broad variety of civil applications. These range from monitoring and surveillance tasks, such as border and maritime patrol missions, search and rescue operations, forest fire detection, road traffic surveillance and critical infrastructure protection, to commercial purposes such as aerial imagery, crop assessment in agriculture and communication relays [1,2,3]. This rapid expansion is motivated by the unmatched versatility, in terms of application scenarios, that these systems offer, both military and civil, and by the numerous advantages connected with the employment of a vehicle that does not require a pilot onboard. However, the absence of a human pilot onboard also poses a series of safety-related concerns and technical challenges, mainly connected with the risk of a mid-air collision with another aircraft, that may hinder the successful integration of unmanned aircraft in civil non-segregated airspace [4]. For instance, in Italy, UAV operation is restricted within Visual Line of Sight (VLOS) by the Italian regulatory authority (ENAC), and a specific request must be filed and approved every time a Beyond Visual Line of Sight (BVLOS) operation is to be conducted [5]. Moreover, in the past years, aeronautical agencies and authorities have been involved in the definition of proper regulations to allow UAVs safe access to flight, culminating with the FAA mandating the installation, onboard UAVs, of dedicated systems for autonomous obstacle detection and avoidance as a requirement for non-segregated flight operation [6]. Such systems are required to meet or exceed human capabilities in terms of avoiding potential collision threats, a concept that several authors [7,8] refer to as the “equivalent level of safety” that the UAS should possess for safe integration in the global airspace. Consequently, a huge effort by worldwide research institutions has been put into place to devise proper Sense and Avoid (SAA) systems with the overall goal of mitigating the risk of mid-air collision between UAVs and other aircraft. The limitations connected with UAS own size and weight, the so-called SWaP (Size, Weight and Power) constraints, severely impact the implementation of autonomous collision avoidance solutions, often preventing the adoption of more consolidated technologies such as TCAS [9] or ADS-B. Such constraints often impose solving complex, real-time detection and tracking problems utilizing the limited computational resources available onboard. In addition, the fundamental requirement of being able to autonomously detect and avoid non-cooperative intruders, which are generally not equipped with transponders and do not take an active part in the conflict resolution, forces the designer to develop innovative solutions leveraging a multitude of sensors with the aim of rendering the UAS capable of solving the impending conflict on its own. From a taxonomical standpoint, SAA systems are usually categorized according to the sensor type they rely on, which can be active or passive. For the former case, radars exploiting Frequency Modulated Continuous Wave (FMCW) technology are becoming more and more popular for SAA purposes because of the economical, low size, weight and power solution they offer.
Besides, radar sensors appear interesting because they provide positioning information (in terms of range, range rate and angular measurement) under practically any weather or illumination condition. Several works have investigated the use of radar sensors for SAA applications: in [10] a SAA system comprising the combination of cooperative sensors (TCAS and ADS-B) and a radar is tested in an extensive flight trials campaign; the authors in [11] describe an algorithm for safe navigation of small-sized UAVs within an airspace leveraging Secondary Surveillance Radar (SSR); other examples of UAV detection methods based on radar are described in [12,13]. Radar is not the only active sensor being investigated for SAA applications: an example of another active sensor, the LIDAR, currently studied for SAA applications is provided in [14]. Passive sensors, and specifically optical sensors, on the other hand, generally offer a higher accuracy in terms of target angular position reconstruction, a higher refresh rate and a lower power absorption and weight; their main drawbacks lie in the fact that their operation is severely limited in low lighting conditions (i.e. night) and that direct target position reconstruction is not possible, because a single camera sensor cannot measure distance directly. In [15] the problem of estimating target distance with only a camera sensor is overcome by means of a depth estimation model based on a lightweight encoder-decoder architecture trained on a large database of airborne objects. The deep learning model is trained to translate the input image from the monocular camera into a depth mask, classifying every pixel into depth values. In [16] the authors leverage an implementation of the You Only Look Once (YOLO) object detector to detect cooperative micro UAVs within the FOV of a camera. Sensor data coming from the cooperative navigation system are exploited to reduce the search window to a strip of the image which is computed starting from the approximate relative position of the two UAVs and attitude information of the UAV where the camera is mounted. Other examples of different strategies employed to tackle the task of intruder detection relying solely on a camera sensor can be found in [17,18]. Other authors have instead favoured an approach exploiting the fusion of passive and active sensors. This choice is motivated by the fact that sensor fusion strategies can leverage the complementary advantages of different sensor sources. For example, in [19] a radar/visual fusion approach designed to fuse the radar and visual tracks is discussed for a non-cooperative Sense and Avoid system for low-altitude flying UAVs. In [20] an integrated Sense and Avoid system based on data fusion between EO and radar sensors, developed by the Italian Aerospace Research Centre (CIRA) and the University of Naples and deployed onboard a customized TECNAM P-92, is thoroughly discussed. In that work, particular focus is placed on integration details, fusion logics, algorithms and flight results relevant to the achievable situational awareness. A radar/visual multisensor architecture was chosen also for the implementation of one of the activities of the recently concluded Italian project named TERSA. The project aimed at the design and development of several innovative technologies for a lightweight UAV, including the development of a collision avoidance system for unmanned aerial vehicles leveraging data fusion between optical and radar sensors.
The project involved the University of Pisa (UNIPI) and a consortium of industrial partners led by the Italian UAV manufacturing company Sky Eye Systems (SES), and it led to the development of a fully autonomous SAA system to be installed onboard a new generation of lightweight UAVs manufactured by SES in Italy. This work presents an overview of the system that was developed during the project, from the identification and definition of the technical requirements to the actual flight tests conducted to validate the system. Specifically, the present article is organized as follows: the remainder of this first chapter describes the high-level requirements of the system; Chapter II briefly recalls the sense and avoid system architecture developed during the project; Chapter III details the principal algorithms designed for the implementation of the system, with particular focus on the computer vision object detection and tracking algorithms and the data fusion strategy adopted; Chapter IV describes the numerical simulations conducted to validate the system, providing an overview of the integrated HIL simulation environment that was developed at the Fly-By-Wire laboratories of the Civil and Industrial Engineering Department of the University of Pisa; Chapter V describes the flight test campaign conducted in cooperation with Sky Eye Systems (SES) and ECHOES radar technologies, the partner who manufactured the radar, aimed at the validation of the integrated system. Finally, Chapter VI presents the conclusions of the research work and provides an outlook on future developments.

1.1. Reference Aircraft

The Sense and Avoid system, object of the present work, is designed to be embarked and operated onboard the reference aircraft manufactured by SES (Figure 1), whose main geometrical and inertial characteristics are reported in Table 1.

1.2. System High Level Requirements

UAV autonomous anti-collision systems are still at a developing stage and research studies are still being carried out to identify system requirements and sensing solutions. As a consequence, at present, specific requirements to be met by the system designer do not exist; however, a collection of universally accepted specifications, guidelines and recommendations formulated by ICAO [21,22], FAA [8], ASTM [23], EUROCONTROL [24], RTCA [25] and other researchers [26,27] has been used to define the system requirements for the present application. A SAA system is usually composed of two main modules. The sense module, from a functional standpoint, must acquire information from the environment to contribute to building awareness of what is happening in the vicinity of the UAV and to detect potential hazards. The task of the avoid module, on the other hand, is to guarantee both a separation-provision and a collision-avoidance function. Separation provision is intended as the capability to separate the UAV from other aircraft whenever Air Traffic Control (ATC) is not providing separation. The Pilot in Command (PIC) should be warned by the system following a loss of separation so that the PIC may manoeuvre as required. The system should immediately prompt the PIC as soon as separation is restored and should seek approval to return to the initial trajectory. This function should separate the UAV from other aircraft by a minimum distance of 0.5 NM in the horizontal plane and 500 ft in the vertical plane. The collision-avoidance function instead should operate independently from ATC and should either provide the PIC with valuable cues about an impending collision, waiting for the PIC to take corrective action, or should autonomously resolve imminent collisions by executing avoidance manoeuvres. Several requirements can be pointed out for both the sense and avoid modules.

1.2.1. Sense Requirements

Principal requirements are summarized below:
  • Detection range, $d_{min}$. It is connected to the time-to-collision by means of the approaching speed. It must be sufficiently large to allow for an avoidance manoeuvre complying with aircraft dynamic, structural and aerodynamic constraints that guarantees a minimum separation of 500 ft at the closest point of approach to the obstacle (“miss distance”) [23]. For the characterization of this requirement, the procedure described in [27] has been adopted by considering the most critical condition of a frontal encounter. Since the evasive manoeuvre will be conducted by means of a coordinated turn on the horizontal plane, $d_{min}$ essentially depends on four factors: UAV speed, intruder aircraft speed, latency between detection and evasive command execution and, especially, the minimum turn radius. As is well known, the minimum turn radius (in stationary manoeuvre at constant altitude and speed) for a conventional fixed-wing aircraft depends on three constraints: aerodynamic, structural and propulsive. As an example, Figure 2 (a) shows the minimum attainable turn radius at sea level for the TERSA aircraft according to each constraint, where realistic bounds have been assumed for the maximum lift coefficient $C_{L,max} = 1.45$, the maximum vertical load factor $n_{z,max} = 1.15$, and the available thrust $T_{d,max} = 20\ \mathrm{N}$ (the actual exact values provided by the manufacturer have been omitted due to NDA reasons). A lower threshold of $R_{min} = 100\ \mathrm{m}$ is enforced by the FCS as a stall protection mechanism. As can be noted, in the range of operative flight speeds ($20 < V < 28\ \mathrm{m/s}$), the structural constraint is the only effective bound, but it is in turn capped by the stall protection limit. Figure 2 (b) shows the required minimum detection range for various intruder speeds and UAV minimum turn radii, according to the procedure described in [27] (conservative estimate with a delay factor of $\Delta t = 5\ \mathrm{s}$ between detection and evasive command execution). A detection range of $d_{min} = 700\ \mathrm{m}$ appears satisfactory for the effectiveness of a collision avoidance manoeuvre with an intruder flying at a cruise speed comparable to that of the UAV.
  • Field of View: a widely accepted guideline for manned aircraft manufacturers is to build cockpits so as to ensure a visibility of ±110° (azimuthal) and ±15° (in elevation) [23]. Usually, the same requirement is extended to unmanned aircraft. However, due to the prototypical nature of this system, a FOV of ±60° has been identified as the design requirement. As a matter of fact, preliminary sensitivity analyses conducted on a wide range of collision encounter scenarios, considering an intruder cruise speed comparable with the UAV cruise speed, have shown that, for an assigned detection range $d_{min}$, FOV angles greater than 55° do not substantially affect the detectability of the intruder aircraft.
  • Refresh rates: for precise tracking of the intruder aircraft, computer vision algorithms should run at a sufficiently high frequency (> 10 FPS). Given the limited computational power available onboard, it is thus essential to optimize the object detection and tracking algorithms and to select the best compromise between image resolution and performance. Data fusion algorithms are based on a standard implementation of an extended Kalman filter and, thanks to optimized linear algebra libraries, do not pose a significant computational burden on the VPU processor. Likewise, collision avoidance routines, which exploit simple geometric laws, do not constitute a bottleneck for performance; the computation of the setpoint on the desired ground track angle to carry out the manoeuvre essentially boils down to the extraction of the roots of a 4th-order polynomial, which again is performed effectively on the VPU processor by means of optimized linear algebra libraries (OpenBLAS/cuBLAS). The bottleneck is thus represented by the object detector pipeline.
  • Optical sensor resolution: the choice of the optimal resolution is the result of a trade-off between computational efficiency and the capability of detecting small targets at a great distance. The performance of the visual tracker chosen for the implementation of the system is essentially independent of the resolution because it acts on a finite set of discrete anchor points. On the contrary, the object detection pipeline is strongly affected by the size of the image, as detailed in Chapter 3. At the same time, a higher resolution is beneficial for two major reasons:
  • The attainable maximum angular resolution can be expressed as:
    $$\Delta\psi = \tan^{-1}\left(\frac{S_w\,\Delta h}{H\,f}\right) = \tan^{-1}\left(\frac{1}{\rho_H\,f}\right)$$
    where $\rho_H$ is the horizontal pixel density of the sensor and $f$ the focal length.
  • The Johnson criterion states that the maximum distance $R_{det}$ at which an object of characteristic physical size $l$ can be detected by an optical sensor, denoting with $q$ the minimum number of pixels the object needs to occupy for a successful detection (Johnson's factor, usually assumed equal to 2) and with $m$ the physical size of the pixel, is given by [27]:
    $$R_{det} = \frac{l\,f}{q\,m}$$
For a given sensor size, the higher the resolution, the smaller the physical size of a single pixel. For these two reasons, it is thus clear that a higher resolution would be beneficial. For the development of the system, considering the specifics of the Vision Processing Unit chosen (see Chapter 2) and the typical dimensions of the UAVs belonging to the same class as the TERSA aircraft, a resolution of 640 × 360 was chosen.
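The two relations above can be checked numerically. The following Python sketch, provided for illustration only, evaluates the attainable angular resolution and the Johnson detection range; the sensor width, focal length and target size used below are assumed values, not the actual TERSA payload specifications.

```python
import math

# Illustrative check of the two relations above. The sensor width, focal length
# and target size below are assumed values, not the actual TERSA payload specs.
SENSOR_WIDTH_MM = 5.0   # S_w (assumed)
FOCAL_LENGTH_MM = 4.0   # f (assumed)
H_RES_PX = 640          # horizontal resolution used by the pipeline

def angular_resolution_deg(sensor_width_mm, focal_mm, h_res_px):
    """Smallest azimuth step resolved by a 1-pixel displacement."""
    rho_h = h_res_px / sensor_width_mm              # horizontal pixel density [px/mm]
    return math.degrees(math.atan(1.0 / (rho_h * focal_mm)))

def johnson_detection_range_m(target_size_m, focal_mm, q_pixels, pixel_size_mm):
    """Johnson criterion: R_det = l*f / (q*m)."""
    return target_size_m * focal_mm / (q_pixels * pixel_size_mm)

pixel_size_mm = SENSOR_WIDTH_MM / H_RES_PX          # m, physical pixel size
print(f"Angular resolution: "
      f"{angular_resolution_deg(SENSOR_WIDTH_MM, FOCAL_LENGTH_MM, H_RES_PX):.3f} deg/px")
# Assumed ~3 m wingspan for an intruder of the same class as the reference UAV
print(f"Johnson detection range: "
      f"{johnson_detection_range_m(3.0, FOCAL_LENGTH_MM, 2, pixel_size_mm):.0f} m")
```

With these assumed numbers, the Johnson detection range comes out in the same order of magnitude as the 700 m detection range requirement discussed above.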

1.2.2. Avoid Requirements

Given the prototypical nature of the system, some simplified assumptions have been made during the definition of the requirements for the collision avoidance module that can be summarized as follows:
  • A single intruder aircraft is present at a given time in the conflict scenario, belonging to the same UAV class, with comparable cruise speed and altitude.
  • Such aircraft maintains its original rectilinear, stationary flight course, wings level, at constant speed. Several combinations of relative initial positions and relative speeds are evaluated.
  • The evasive manoeuvre is conducted by means of a coordinated turn on the horizontal plane, at constant speed and altitude.
The reason why a climbing evasive manoeuvre has not been considered is briefly justified in the following. A study has been conducted evaluating the maximum flight path angle attainable by the reference aircraft in a steep (stationary) climb manoeuvre, utilizing the aeromechanical dataset provided by courtesy of SES. Specifically, as is well known, the maximum attainable flight path angle $\gamma_{max}$ is given by:
$$\gamma_{max} = \max\left(\frac{T_d - D}{W}\right)$$
where $T_d = f(V, RPM, h)$ is the available thrust, $D = f(\alpha, \dot\alpha, \beta, V, h, \dots)$ is the aerodynamic drag and $W$ the aircraft weight. Considering the equilibrium in the vertical plane:
$$L - W = 0 \qquad M_y = 0$$
and applying the principle of superposition of effects (PSE), the former equations can be expressed by means of Eq. (5), where the $\Delta$ terms represent variations w.r.t. the reference condition.
$$\left(C_{L,ref} + \frac{\Delta C_L}{\Delta\delta_e}\Delta\delta_e + \frac{\Delta C_L}{\Delta V}\Delta V + \frac{\Delta C_L}{\Delta\alpha}\Delta\alpha\right)\frac{1}{2}\rho V^2 S - W = 0 \qquad C_{m,ref} + \frac{\Delta C_m}{\Delta\delta_e}\Delta\delta_e + \frac{\Delta C_m}{\Delta V}\Delta V + \frac{\Delta C_m}{\Delta\alpha}\Delta\alpha = 0$$
Once the trim problem is solved for the AOA and the elevator deflection ensuring equilibrium ($\alpha$ and $\delta_e$ respectively), the drag force is computed by means of Eq. (6):
$$D = \left(C_{D,ref} + \frac{\Delta C_D}{\Delta\delta_e}\Delta\delta_e + \frac{\Delta C_D}{\Delta V}\Delta V + \frac{\Delta C_D}{\Delta\alpha}\Delta\alpha\right)\frac{1}{2}\rho V^2 S$$
The available thrust as a function of flight speed, engine RPM and altitude is obtained by means of engine maps provided by courtesy of SES. Figure 3 shows the attainable $\gamma_{max}$ as a function of RPM and flight speed at a given altitude.
The exact value of $\gamma_{max}$ for a given altitude is found by solving the maximization of expression (3) while enforcing the trim equilibrium described by (5). For instance, at an altitude of $h = 1000\ \mathrm{m}$, $\gamma_{max} = 9.57°$, corresponding to a true airspeed of TAS = 18.8 m/s and an engine setting of 7950 RPM. Considering a detection range of $d_{min} = 700\ \mathrm{m}$, this would result in $\Delta h = \tan(\gamma_{max})\,d_{min} \approx 118\ \mathrm{m}$, insufficient to ensure a minimum separation provision of $R_{sb} = 500\ \mathrm{ft}$ as recommended in the guidelines. It should be noted that the actual minimum separation attainable with a climbing manoeuvre alone would be even smaller than the aforementioned estimate, due to engine and aircraft dynamics, which prevent the onset of an immediate maximum flight path angle, and to unavoidable latencies between actual detection and evasive command implementation.
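As a quick numerical check of the argument above, the following sketch uses the quoted values of $\gamma_{max}$ and detection range to estimate the altitude gain achievable before reaching the intruder and compares it with the 500 ft guideline (aircraft and engine dynamics, as noted, would further reduce this figure):

```python
import math

# Quick numerical check of the argument above, using the values quoted in the text.
GAMMA_MAX_DEG = 9.57        # maximum steady flight path angle at h = 1000 m
DETECTION_RANGE_M = 700.0   # assumed detection range d_min
MIN_SEPARATION_FT = 500.0   # recommended vertical separation

delta_h = math.tan(math.radians(GAMMA_MAX_DEG)) * DETECTION_RANGE_M
required_m = MIN_SEPARATION_FT * 0.3048
print(f"Achievable climb over the detection range: {delta_h:.0f} m "
      f"(required: {required_m:.0f} m)")
# Prints ~118 m vs ~152 m: a climbing manoeuvre alone cannot guarantee separation.
```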
Granted that a climbing manoeuvre alone would not provide the necessary separation to avoid the obstacle, and given the prototypical nature of the application, it was decided to carry out the evasive action by means of a coordinated turn on the horizontal plane.
The requirements of the avoid modules are thus hereby summarized:
  • Once a threat has been positively identified by the sense module, the avoid function has to perform an avoidance manoeuvre in a completely autonomous fashion which guarantees minimum separation provision at all times between the UAV and the intruder aircraft. The Autonomous Collision Avoidance (ACA) strategy employed must be effective for non-cooperative intruders.
  • Such manoeuvre must be conducted on the horizontal plane by means of a coordinated turn that takes into account the aerodynamic and structural limits of the UAV.
  • After the conflict has been deemed resolved by the Conflict Detection and Resolution (CDR) algorithms, the avoid module must bring the UAV back onto its original trajectory.
  • It is desirable that the avoidance manoeuvre produces the minimum deviation from the original planned trajectory of the UAV.

2. System Architecture

The typical integration of a SAA system within the closed-loop control system of a UAV is shown in Figure 4. As can be noted, the underlying algorithms take as input the UAV position and velocity states provided by the Inertial Navigation System (INS) and the velocity and position states of the intruder aircraft, and yield as output, in the most general case, the demanded speed, flight path angle and ground track angle, which in turn are fed as input to the autopilot system to avoid the collision threat.
For this specific application, the only output of the SAA system is the demanded heading angle used as the setpoint for the coordinated turn on the horizontal plane. For the actual implementation of the system, a multisensor architecture leveraging data fusion between radar and EO sensors has been chosen. As shown in several previous works [20,27,28], this makes it possible to compensate for the shortcomings of each sensor type, reduce noise and increase the overall solution accuracy. In particular, data fusion between radar and EO sensors leads to an increase in system robustness with respect to false alarms and missed detections during the object detection stage, coupled with a higher accuracy in the tracking stage. A diagram portraying the high-level schematics of the developed architecture is shown in Figure 5. The working principle of the system can be summarized as follows: both the EO (camera) sensor and the radar scan the environment ahead of the UAV in search of a potential threat. Each new frame captured by the camera is compared against a background model of the scene leveraging an object detection strategy based on a Single Gaussian Model (SGM), where suitable motion compensation algorithms are employed to compensate for the ego-motion of the camera.
A RANSAC approach is used to derive the homography matrix describing the transformation between the current frame and the background model, starting from point correspondences between the two scenes. The background model is then warped according to this transformation so as to co-register it to the current frame and minimize the noise due to camera ego-motion. Connected Component Analysis (CCA) is then used to group various blobs in the binary mask into a single object of interest. Statistical criteria are then applied to validate only those tracks that exhibit a certain regularity in terms of position within the field of view and dimensions across the various frames; more details on the specific algorithms involved are provided in the subsequent chapters. A validated track is then followed across the various frames leveraging an implementation of the Kanade Lucas Tomasi (KLT) feature tracker [29]. From the knowledge of the camera parameters (focal length, resolution and sensor size), the track centroid position in terms of pixels within the scene is then transformed into relative elevation and azimuth angles, as shown in Figure 6, by means of Eq. (7). Here, $\psi$, $\lambda$ are the azimuth and elevation angles respectively in a reference system fixed with the camera, $S_w$, $S_h$ are the sensor width and height respectively, $V$, $H$ the vertical and horizontal resolution of the scene, $f$ the focal length and $\Delta v$, $\Delta h$ the vertical and horizontal position of the intruder aircraft within the scene (expressed in pixels).
$$\psi = \tan^{-1}\left(\frac{\frac{S_w}{2}\,\frac{\Delta h}{H/2}}{f}\right) = \tan^{-1}\left(\frac{S_w\,\Delta h}{H\,f}\right) \qquad \lambda = \tan^{-1}\left(\frac{\frac{S_h}{2}\,\frac{\Delta v}{V/2}}{\sqrt{f^2 + \left(\frac{S_w\,\Delta h}{H}\right)^2}}\right) = \tan^{-1}\left(\frac{S_h\,\Delta v}{V\,\sqrt{f^2 + \left(\frac{S_w\,\Delta h}{H}\right)^2}}\right)$$
The knowledge of the target distance provided by the radar allows the tracking problem to be closed. Specifically, the distance measure, elevation and azimuth angles, together with the UAV position, velocity and attitude states, are fed as input to an Extended Kalman Filter (EKF) which carries out the data fusion and outputs the intruder aircraft velocity and position states expressed in an inertial reference system. These states are then fed as input to the collision avoidance routines, which compute an evasive manoeuvre when an impending collision is predicted. A new setpoint on the heading angle is computed at every iteration step in such a way that the trajectory of the UAV will be tangent to a sphere centred around the intruder aircraft at the point of closest approach. More details about the designed algorithms are provided in Chapter 3.
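For illustration, the pixel-to-angle conversion of Eq. (7) can be sketched in a few lines; the camera parameters below are placeholders, not those of the actual payload:

```python
import math

# Minimal sketch of the pixel-to-angle conversion of Eq. (7); the camera
# parameters below are placeholders, not those of the actual TERSA payload.
def pixel_to_angles(dh_px, dv_px, sensor_w_mm, sensor_h_mm, h_res, v_res, focal_mm):
    """Centroid offsets from the image centre (pixels) -> camera-frame azimuth
    (psi) and elevation (lambda) angles, in radians."""
    x = sensor_w_mm * dh_px / h_res                  # horizontal offset on the sensor plane
    y = sensor_h_mm * dv_px / v_res                  # vertical offset on the sensor plane
    psi = math.atan2(x, focal_mm)                    # azimuth
    lam = math.atan2(y, math.hypot(focal_mm, x))     # elevation
    return psi, lam

# Example: centroid 50 px right of and 10 px above the image centre
psi, lam = pixel_to_angles(50, 10, 5.0, 2.8, 640, 360, 4.0)
print(math.degrees(psi), math.degrees(lam))
```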
2.1. System Components
The main components identified for the realization of the system are:
  • A range-Doppler radar in Ku-band (13.325 GHz) with a forward-looking configuration. The radar emits a linear Frequency Modulated Continuous Wave (FMCW) with a maximum transmitted bandwidth of 150 MHz. Azimuth estimation is obtained by means of a monopulse amplitude technique.
  • An electro-optical sensor with at least 30 FPS and HD-ready resolution. The focal length must be chosen to ensure a sufficiently wide FOV.
  • A Vision Processing Unit (VPU). An embedded platform with low power absorption, easily interfaceable with the camera and radar (through Ethernet), must be chosen, ideally optimized for computer vision tasks. Both sense and avoid algorithms run on this embedded platform.
  • An Inertial Measurement Unit (IMU) must be employed to retrieve attitude angles which are necessary to transform intruder aircraft position and velocity states into the inertial frame where the conflict is resolved.

3. Algorithms Design

The present section details the implementation of the principal algorithms underlying the sense and avoid system.

3.1. Sense Module

As explained above, for the implementation of the system, a multisensor architecture based on data fusion between electro-optical (EO) and radar sensors was chosen. Digital image processing algorithms thus play a crucial role in the correct functioning of the SAA system. The three main tasks such algorithms are required to perform can be summarized as:
1. Segmentation between background and objects of interest within the FOV, with the aim of identifying the presence of an intruder aircraft: Object Detection (OD).
2. Noise suppression of the disturbances due to camera ego-motion: Ego Motion Compensation (EMO).
3. Once the presence of an object has been confirmed, efficient tracking of its motion across frames.
The requirement of achieving a sufficiently high frame rate on the chosen VPU, to allow for effective target tracking while at the same time maintaining a sufficiently high precision to distinguish small targets at a great distance from the optical sensor, guided the entire design process.
Figure 7 shows a high-level schematic of the adopted computer vision pipeline underlying the sense module. Some of the key algorithms constituting the building blocks of the pipeline (such as the Harris corner detector and the Kanade Lucas Tomasi feature tracker) appear at several stages of the processing chain; they are briefly explained in the following.

3.1.1. Harris Corner Detector

The Harris corner detector [30] is one of the most well-known, robust and efficient algorithms for extracting singular points within a digital image. Singular points are points of considerable importance as they tend to be invariant to translation, rotation and changes in illumination; their stability makes them ideal for, among others, image tracking or stabilization applications. The name corner derives from the fact that they are usually located at the intersection of at least two edges within the image, where an edge is defined as an area characterized by a sudden, sharp change in light intensity. Consider a digital image of size $s = W \times H$ and a small window of size $s_w = w \times w$: a shift of the window in a certain direction will produce a negligible change in mean intensity if the window lies over a flat region; a substantial change in mean intensity when shifting in two orthogonal directions if the window lies over a corner; and, if the window lies over an edge, the shift will produce a substantial change along one direction but a negligible one along the orthogonal direction. Mathematically, this can be expressed in terms of the Sum of Squared Differences (SSD) of the intensity values of all the pixels within the window before and after the translation as:
$$E(u,v) = \sum_{(x,y)\in W} w(x,y)\left[I(x+u,\,y+v) - I(x,\,y)\right]^2$$
where $w(x,y)$ is a weight matrix, $I(x,y)$ denotes the intensity value of the pixel located at position $(x,y)$ and $I(x+u, y+v)$ denotes the intensity value of the same pixel after a small shift $(u,v)$ of the window.
By means of Taylor expansion, Eq. (8) can be rearranged as:
$$E(u,v) \approx \left[u\ \ v\right]\left(\sum_{(x,y)\in W} w(x,y)\begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}\right)\begin{bmatrix} u \\ v \end{bmatrix} = \left[u\ \ v\right]\,M\,\begin{bmatrix} u \\ v \end{bmatrix}$$
where the matrix $M = \sum_{(x,y)\in W} w(x,y)\begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}$ is generally referred to as the structural tensor, with $I_x$, $I_y$ representing the (spatial) horizontal and vertical image gradients respectively. The eigenvectors of $M$ denote the directions along which the mean square deviation takes on its maximum and minimum values. When both associated eigenvalues are sufficiently high (in a relative sense with respect to the others), $\lambda_1, \lambda_2 \gg 0$, the patch lies over a singular point (corner point). A suitable Harris response, defined as:
$$R = \lambda_1\lambda_2 - k\left(\lambda_1 + \lambda_2\right)^2 = \det(M) - k\,\mathrm{tr}(M)^2$$
is used in place of the actual eigenvalue computation, where $k \in [0.004, 0.006]$. The chosen implementation is articulated in the following steps (a minimal sketch is provided after the list):
  • Gray scale conversion of original image
  • Spatial gradient calculation by means of linear spatial convolution with the kernel $k = [-1\ \ 0\ \ 1]$ and its transpose $k^T = [-1\ \ 0\ \ 1]^T$
  • Structural tensor calculation for each pixel p ( x , y ) in a window W x , y centered around the pixel.
  • Harris response calculation, by computing trace and determinant of M matrix.
  • Non-maxima suppression (to eliminate duplicate points) and thresholding: in a nutshell, only those points whose response is higher than 1% of the maximum $R$ are selected as valid points.
  • Biquadratic fitting to refine corner position and obtain a subpixel location.
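A minimal NumPy/SciPy sketch of the steps above is reported below; grayscale conversion and the final biquadratic sub-pixel refinement are omitted, and the window size and response threshold are assumed values:

```python
import numpy as np
from scipy.ndimage import convolve, maximum_filter

# Minimal sketch of the Harris pipeline described above (grayscale conversion and
# sub-pixel refinement omitted); window size and threshold are assumed values.
def harris_corners(gray, k=0.005, window=5, rel_thresh=0.01):
    gray = gray.astype(np.float64)
    # Spatial gradients via convolution with [-1 0 1] and its transpose
    kx = np.array([[-1.0, 0.0, 1.0]])
    Ix = convolve(gray, kx)
    Iy = convolve(gray, kx.T)
    # Structural tensor entries summed over a window centred on each pixel
    w = np.ones((window, window))
    Ixx = convolve(Ix * Ix, w)
    Iyy = convolve(Iy * Iy, w)
    Ixy = convolve(Ix * Iy, w)
    # Harris response: det(M) - k * trace(M)^2
    R = (Ixx * Iyy - Ixy ** 2) - k * (Ixx + Iyy) ** 2
    # Non-maxima suppression and thresholding at 1% of the maximum response
    local_max = (R == maximum_filter(R, size=window))
    corners = np.argwhere(local_max & (R > rel_thresh * R.max()))
    return corners  # (row, col) coordinates of the detected corner points
```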

3.1.2. Kanade Lucas Tomasi Feature Tracker

The KLT feature tracker is a well-known, efficient feature tracking algorithm which exhibits some robustness w.r.t. changes in illumination conditions, camera motion and changes in the appearance of the tracked object. In its pyramidal implementation [31], it is capable of tracking large motions while preserving a small integration window. The algorithm finds correspondences of points between a primary image $I(x,y)$ and a secondary image $J(x,y)$ by solving a non-linear optimization problem. It attempts to minimize the residual function $\epsilon$ defined as:
$$\epsilon(\bar d) = \epsilon(d_x, d_y) = \sum_{x=p_x-w_x}^{p_x+w_x}\ \sum_{y=p_y-w_y}^{p_y+w_y}\left[I(x,y) - J(x+d_x,\,y+d_y)\right]^2$$
where $\bar d = (d_x, d_y)$ is the displacement vector to be determined, $p = [p_x, p_y]^T$ is the current corner point under evaluation and $w = (w_x, w_y)$ is an integration window centered on it. The objective of the algorithm is to find the position of a corresponding window centered on the pixel $v = [p_x + d_x,\ p_y + d_y]$ in the secondary image, such that the sum of squared differences among the intensity values is minimized. The vector $\bar d = [d_x, d_y]^T$ is called the optical flow and is the unknown to be determined. Applying a Taylor expansion to Eq. (11) and setting its derivative with respect to the displacement to zero (at the optimum there should be no change in intensity values) yields:
$$\frac{\partial\epsilon(\bar d)}{\partial\bar d} \approx -2\sum_{x=p_x-w_x}^{p_x+w_x}\ \sum_{y=p_y-w_y}^{p_y+w_y}\left[I(x,y) - J(x,y) - \left[J_x\ \ J_y\right]\bar d\,\right]\left[J_x\ \ J_y\right] = 0$$
where the quantity $\delta I(x,y) \equiv I(x,y) - J(x,y)$ can be interpreted as the time derivative of the intensity values of the image at the point $[x, y]^T$, whereas the matrix $[J_x\ \ J_y]$ is simply the spatial gradient of the intensity values of the secondary image (current frame). For a purely translational displacement, it can be proved that the intensity gradients of the primary and secondary images are interchangeable, i.e.
$$\nabla J(x,y) = \begin{bmatrix} J_x \\ J_y \end{bmatrix} \equiv \begin{bmatrix} \partial J/\partial x \\ \partial J/\partial y \end{bmatrix} = \begin{bmatrix} \partial I/\partial x \\ \partial I/\partial y \end{bmatrix} = \begin{bmatrix} I_x \\ I_y \end{bmatrix} = \nabla I(x,y)$$
This fact is exploited in the implementation, leading to a great computational advantage because, for each image and for each pyramidal level, the gradients are computed only once. The gradients are computed by means of the central difference operator, Eqs. (14)-(15):
$$I_x(x,y) = \frac{\partial I(x,y)}{\partial x} = \frac{I(x+1,\,y) - I(x-1,\,y)}{2}$$
$$I_y(x,y) = \frac{\partial I(x,y)}{\partial y} = \frac{I(x,\,y+1) - I(x,\,y-1)}{2}$$
Plugging Eqs. (14)-(15) into (12) yields Eq. (16):
$$\frac{1}{2}\left[\frac{\partial\epsilon(\bar d)}{\partial\bar d}\right]^T \approx \sum_{x=p_x-w_x}^{p_x+w_x}\ \sum_{y=p_y-w_y}^{p_y+w_y}\left(\begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}\bar d - \begin{bmatrix} \delta I\, I_x \\ \delta I\, I_y \end{bmatrix}\right) = 0$$
Now, defining the so-called image mismatch vector  b ¯ as:
$$\bar b \equiv \sum_{x=p_x-w_x}^{p_x+w_x}\ \sum_{y=p_y-w_y}^{p_y+w_y}\begin{bmatrix} \delta I\, I_x \\ \delta I\, I_y \end{bmatrix}$$
And the structural tensor G as:
$$G \equiv \sum_{x=p_x-w_x}^{p_x+w_x}\ \sum_{y=p_y-w_y}^{p_y+w_y}\begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}$$
Eq. (16) can be rewritten as:
$$\frac{1}{2}\left[\frac{\partial\epsilon(\bar d)}{\partial\bar d}\right]^T \approx G\,\bar d - \bar b = 0$$
which yields the optical flow solution:
$$\bar d_{opt} = G^{-1}\,\bar b$$
Notice that in the chosen implementation the matrix $G$ is constant for each pyramidal level; thus the computation of the image gradients (an expensive operation that involves spatial convolution) is performed only once per pyramidal level. The only quantity updated at each iteration is the image mismatch vector $\bar b$, which is a function of the difference between the reference window centered around the current pixel in the primary image (which is constant) and the window in the secondary image, which is a warping of the original window. Notice however that the interpolation required to compute the warped window is not performed on the entire image but only on the window; thus the smaller the window, the faster the algorithm. Since the maximum displacement that can be tracked is equal to half the window size, the pyramidal implementation is a clever approach to track larger displacements while preserving a small window and thus computational efficiency. Essentially, each pyramid level is a downsampled version of the original image with a subsampling factor of 2. When the algorithm converges at the $l$-th level, the displacement is propagated to the $(l-1)$-th level and used as a new starting point. A visual representation of the pyramidal expansion is shown in Figure 8. An example of matched point correspondences for an aeronautical sequence is shown in Figure 9. Finally, a high-level scheme of the implemented algorithm is provided in Figure 10.
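As a usage example, an equivalent processing step can be obtained with OpenCV's off-the-shelf implementations of Harris corner extraction and the pyramidal KLT tracker; the file names and parameter values below are placeholders and do not reflect the project settings:

```python
import cv2
import numpy as np

# Illustrative use of OpenCV's Harris corner extraction and pyramidal KLT tracker,
# analogous to the processing chain described above; file names and parameters
# are placeholders, not the project's actual settings.
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Harris-based corner points on the primary image
pts = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01,
                              minDistance=7, useHarrisDetector=True, k=0.005)

# Pyramidal Lucas-Kanade: small integration window, 3 pyramid levels
next_pts, status, err = cv2.calcOpticalFlowPyrLK(
    prev, curr, pts, None, winSize=(15, 15), maxLevel=3)

matched = np.hstack([pts[status == 1], next_pts[status == 1]])
print(f"{len(matched)} point correspondences tracked between the two frames")
```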

3.1.3. Object Detection

Several approaches exist in the literature to address the problem of detecting an object of interest within the FOV. These algorithms range from simple colour thresholding, to edge detection coupled with proper thresholding and morphological operations [32], segmentation based on motion estimation through optical flow [6], background subtraction methods, of which a comprehensive review is presented in [33], or template/block matching. Recent approaches based on machine learning/deep learning, such as region proposal convolutional neural networks (R-CNN), have proved to be extremely powerful for the purposes of object detection, but they come at the expense of a higher computational cost. Unified real-time object detection, of which YOLO [34] constitutes the most well-known example, aims at reducing the computational cost while retaining the flexibility of R-CNN based methods. For the present application however, also in light of the expected limited object size within the FOV, due to the distance and small size of the intruder, a more traditional computer vision object detection technique has been employed. A background subtraction scheme based on a single Gaussian model has been chosen. In addition, in order to reduce the noise arising from the ego-motion of the camera, a proper motion compensation strategy, inspired by the works in [35,36,37], is employed; more details about the ego-motion compensation are provided in the next section. To address the problem of the dynamic background (due to camera motion, new portions of the background enter the FOV while others are no longer visible), two main approaches are commonly found in the literature:
  • The construction of a panoramic background model by co-registering and stitching consecutive frames of the scene, the so-called panorama stitching.
  • The update of a background model which preserves its size in terms of pixels. For this approach it is necessary to warp the background model on the current frame according to a matrix describing the transformation of the scene. The overlapping regions can then be updated while the new regions must be considered as background at the first iteration.
The first approach is very expensive in terms of memory requirements, mainly because it requires constant storage of the new images captured by the camera as it moves; it also requires precise 'stitching' between the images, resulting in a high computational cost. For the system implementation, the second approach, inspired by the works [35,36,37], was thus chosen, as it allows for an efficient and effective object detection system, both in terms of memory required and computational cost. The main steps of the developed object detection algorithm are summarized below (a minimal sketch follows the list):
  • Grayscale conversion of each new frame and preprocessing by spatial convolution with a Gaussian filter (low pass) to reduce aliasing effects and noise.
  • A number of initial frames are stored in a buffer $S(x,y,i)$ and used to build the single Gaussian background model. Here $x$, $y$ denote the pixel position while $i$ is the index of the single image in the buffer. In particular, the mean intensity value $\mu$ and the standard deviation $\sigma$ are computed as:
    $$\mu(x,y) = \frac{1}{n}\sum_{i=1}^{n} S(x,y,i) \qquad \sigma(x,y) = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left[S(x,y,i) - \mu(x,y)\right]^2}$$
  • Once the background model is ready, at each iteration, when a new frame becomes available, the model is co-registered on the current frame following a procedure similar to the one detailed in [35]. The portion of the background which lies outside the new frame is discarded, while the new portion of the scene which is not present in the model is flagged as background with maximum learning rate $\beta = 1$. This is done by computing the homography matrix $H$ which describes the perspective transformation between the model and the current frame, using corner points as anchor points, and applying this matrix to the background model (warping).
  • For each pixel $X_c(x,y)$ in the current frame, a corresponding pixel in the background model $X_b(x',y')$ is computed as:
    $$X_b = H\,X_c$$
  • The comparison of the intensity value of a certain pixel in the current frame with the neighborhood of the corresponding pixel in the background model allows determining whether or not said pixel belongs to the foreground. In analytical form this is expressed by Eqs. (23), (24) and (25):
    $$D(X_c, X_i) = \frac{1}{\sigma^2(X_i)}\left[I(X_c) - \mu(X_i)\right]^2$$
    $$\hat X_b = \arg\min_{X_i} D(X_c, X_i) \quad \text{where } X_i \in N(X_b)$$
    $$\mathrm{label}(X_c) = \begin{cases} \mathrm{Background} & \text{if } D(X_c, \hat X_b) < \theta \\ \mathrm{Foreground} & \text{otherwise} \end{cases}$$
    where $\theta$ is a threshold chosen arbitrarily (design parameter).
  • Once the background and foreground sections of the frame have been identified (segmentation process), the model is updated according to Eqs. (26), (27):
    $$\mu_b = (1-\alpha)\,\mu_b + \alpha I_b \qquad \sigma_b = (1-\alpha)\,\sigma_b + \alpha\left|I_b - \mu_b\right| \qquad 0 \le \alpha \le 1$$
    $$\mu_f = (1-\beta)\,\mu_f + \beta I_f \qquad \sigma_f = (1-\beta)\,\sigma_f + \beta\left|I_f - \mu_f\right| \qquad \text{with } 0 \le \beta \le 1$$
In the previous equations, the subscripts $b$, $f$ indicate that the corresponding mean value $\mu$ or standard deviation $\sigma$ must be considered on the portion of the scene labelled as background or foreground respectively. The terms $\alpha$, $\beta$ are the learning rates of the model. To better understand the effect of these parameters, let us consider some limit cases. The case $\alpha = \beta = 0$ represents a static background model which is never updated. When a new object of interest appears in the scene it will be permanently detected in the associated binary mask. This is desirable, but the associated problem lies in the fact that if there is an actual change in the background scene, due to camera motion, dynamic lighting conditions or other causes, such differences in the scene would be flagged as false positives (ghosting) and the model would be unable to adapt to the dynamic change in the scene. The limit case $\alpha = \beta = 1$ is equivalent to having no background pattern at all. With these settings, in fact, each frame is instantaneously compared only with the previous one, resulting in what in technical terms is defined as inter-frame background subtraction. In this case, there are no ghosting problems, but there are at least two important limitations. Since there is no background pattern, the object is only highlighted if it has a relative motion w.r.t. the previous frame and, for objects with a uniform pattern, only the edges are highlighted, drastically reducing the effectiveness of detection. The foreground learning rate parameter $\alpha$ is related to the speed at which regions of the model labelled as foreground are updated with new information coming from the current frame. A low value means that an object of interest will exhibit its presence in the binary mask for longer times (which is desirable) but, at the same time, excessively low values would determine unwanted streaks with longer decay times associated with the motion of the object of interest within the FOV. Conversely, a higher value is beneficial for reducing streaks in the wake of object motions but could entail shorter times of effective detection in terms of the binary mask for an object of interest. Similar considerations can be made for the effects of the background learning rate parameter $\beta$.
  • Connected Component Analysis (CCA) is finally applied to condense several close blobs into the same track reading. In the chosen implementation, a connectivity analysis is essentially performed to filter out the various clusters of scattered pixels with little detection significance, essentially due to noise, in order to select only a track with dimensions compatible with the intended purposes.
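The following sketch condenses the single Gaussian background model logic described above (Eqs. (21), (25)-(27)); the homography-based ego-motion compensation, the neighbourhood search of Eqs. (23)-(24) and the connected component analysis are omitted, and the learning rates and threshold are assumed values:

```python
import numpy as np

# Minimal single-Gaussian background-subtraction sketch following the update rules
# above; ego-motion compensation, neighbourhood search and CCA are omitted, and
# the learning rates and threshold are assumed values.
class SingleGaussianModel:
    def __init__(self, init_frames, alpha=0.05, beta=0.5, theta=9.0):
        stack = np.stack(init_frames).astype(np.float64)     # buffer S(x, y, i)
        self.mu = stack.mean(axis=0)                          # Eq. (21), mean
        self.sigma = stack.std(axis=0, ddof=1) + 1e-6         # Eq. (21), std dev
        self.alpha, self.beta, self.theta = alpha, beta, theta

    def apply(self, frame):
        frame = frame.astype(np.float64)
        # Normalized squared distance from the model, thresholded (Eq. (25))
        dist = ((frame - self.mu) / self.sigma) ** 2
        foreground = dist >= self.theta
        # Selective update with separate learning rates (Eqs. (26)-(27))
        for mask, rate in ((~foreground, self.alpha), (foreground, self.beta)):
            self.mu[mask] = (1 - rate) * self.mu[mask] + rate * frame[mask]
            self.sigma[mask] = (1 - rate) * self.sigma[mask] + \
                rate * np.abs(frame[mask] - self.mu[mask])
        return foreground  # binary mask of candidate object pixels
```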
Referring to Figure 7, the output of the object detection algorithm is fed as input to the track validation stage. Specifically, the bounding box size and its centroid coordinates are evaluated by the track validation algorithm for statistical coherence before being confirmed as a valid track and passed on to the Extended Kalman Filter for data fusion with the radar readings.

3.2. Avoid Module

The decision-making algorithm for pairwise non-cooperative aircraft collision avoidance chosen for the developed system is adopted from the results of the work in [38] and is based on the concept of minimum separation distance:
$$\vec d_{AB} = \vec r - \frac{\vec V_{AB}\left(\vec V_{AB}\cdot\vec r\right)}{\left\|\vec V_{AB}\right\|^2}$$
where $\vec d_{AB}$ is the minimum separation that will be registered between the two aircraft at a future time, $\vec r$ is the relative distance vector and $\vec V_{AB}$ is the relative speed vector; the expression is valid only for linear trajectories at constant speed. In this framework, the intruder aircraft is modelled as a safety bubble of radius $R$ moving with constant speed $V_B$, whereas the UAV is a point mass with 3DOF dynamics and a constant speed $V_A$. With these assumptions, the collision avoidance problem can be formulated as a two-stage problem:
  • “Conflict detection problem”: at every processing step, the feasibility of a potential conflict in the future is investigated by determining whether or not a loss of minimum separation will occur at a future time. In other words, it is determined whether the UAV will cross the safety bubble centred around the intruder aircraft. If the response is affirmative, the next problem is solved.
  • “Conflict resolution problem”: based on the concept of collision cone, during conflict resolution stage, the system computes a heading setpoint that will bring the trajectory of the UAV to be tangent to the safety bubble around the intruder aircraft.
It can be proved [39] that two objects are bound for a collision if Eq. (29) holds:
$$\left\|\vec d_{AB}\right\| < R \quad \text{and} \quad \dot r < 0$$
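A minimal sketch of this conflict-detection test, based directly on Eqs. (28)-(29), is the following (the vector conventions adopted here are assumptions stated in the comments):

```python
import numpy as np

# Minimal sketch of the conflict-detection test of Eqs. (28)-(29). Conventions
# assumed here: r is the intruder position relative to the UAV, v_ab = V_A - V_B
# is the UAV velocity relative to the intruder, both in the same inertial frame.
def conflict_predicted(r, v_ab, bubble_radius):
    r = np.asarray(r, dtype=float)
    v_ab = np.asarray(v_ab, dtype=float)
    r_dot = -np.dot(r, v_ab) / np.linalg.norm(r)              # range rate (negative = closing)
    d_ab = r - v_ab * np.dot(v_ab, r) / np.dot(v_ab, v_ab)    # Eq. (28), miss-distance vector
    return bool(np.linalg.norm(d_ab) < bubble_radius and r_dot < 0)  # Eq. (29)

# Head-on example: intruder 1 km ahead, 40 m/s closing speed, 150 m safety bubble
print(conflict_predicted([1000.0, 0.0, 0.0], [40.0, 0.0, 0.0], 150.0))   # True
```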
At each time step, the algorithm thus computes, starting from the position and velocity states of the intruder aircraft given by the EKF and from the UAV states provided by the INS, the relative distance and speed needed to update the predicted minimum separation distance vector $\vec d_{AB}$. If the norm of the latter is lower than the safety bubble radius, then a collision is predicted to occur, and an evasive manoeuvre is computed to avoid the obstacle. Conflict resolution is based on the concept of the collision cone [38]:
$$\mathcal{C}_{AB}^{R} = \left\{\vec V_A\left(V_A, \gamma_A, \chi_A\right) \in \mathbb{R}^3 \;\middle|\; V_A > 0,\ \gamma_A \in \left]-\tfrac{\pi}{2}, \tfrac{\pi}{2}\right],\ \chi_A \in \left[-\pi, \pi\right],\ \dot r < 0,\ \left\|\vec d_{AB}\right\|^2 < R^2\right\}$$
i.e. all the triplets of airspeed $V_A^d$, flight path angle $\gamma_A^d$ and ground track angle $\chi_A^d$ resulting in a speed vector inside the collision cone defined by Eq. (30) will result in a collision. The set of solutions is composed of all the combinations of speed vectors that lie outside such collision cone and, at the same time, comply with the UAV aeromechanical constraints:
$$\Sigma = \left\{\left(V_A^d, \gamma_A^d, \chi_A^d\right) \in \mathbb{R}^3 \;\middle|\; V_A^d \in \left[V_A^m, V_A^M\right],\ \gamma_A^d \in \left[\gamma_A^m, \gamma_A^M\right],\ \chi_A^d \in \left[-\pi, \pi\right],\ \dot r \ge 0\ \text{or}\ \left\|\vec d_{AB}\left(V_A^d, \gamma_A^d, \chi_A^d\right)\right\|^2 \ge R^2\right\}$$
Each solution provided by Eq. (31) is clearly a viable one. However, the tangential solution is chosen, as suggested in the original paper, because it is the one that minimizes the deviation from the nominal trajectory. This can be obtained from Eq. (32):
$$\left\|\vec d_{AB}\right\|^2 = \left\|\vec r\right\|^2 - \frac{\left(\vec r\cdot\vec V_{AB}\right)^2}{\left\|\vec V_{AB}\right\|^2} \equiv R^2$$
After some manipulation, Eq. (32) yields:
$$\left(r_x V_A\cos\gamma_A\cos\chi_A + r_y V_A\cos\gamma_A\sin\chi_A - r_z V_A\sin\gamma_A\right)^2 - 2\left(\vec r\cdot\vec V_B\right)\left(r_x\cos\gamma_A\cos\chi_A + r_y\cos\gamma_A\sin\chi_A - r_z\sin\gamma_A\right)V_A + \left(\vec r\cdot\vec V_B\right)^2 = \left(\left\|\vec r\right\|^2 - R^2\right)\left[V_A^2 + V_B^2 - 2V_A\left(V_{Bx}\cos\gamma_A\cos\chi_A + V_{By}\cos\gamma_A\sin\chi_A - V_{Bz}\sin\gamma_A\right)\right]$$
This can be interpreted as searching for all the triplets of UAV airspeed $V_A$, flight path angle $\gamma_A$ and ground track angle $\chi_A$ that correspond to a tangential solution for the particular conflict scenario defined by $R$, $\vec r$ and $\vec V_B$. Evidently there are two such solutions, corresponding to the tangent vectors to the safety bubble in 3D space. As specified in chapter 1.2.2, for the present application an evasive manoeuvre based on a coordinated turn on the horizontal plane is considered. Thus, the only unknown of Eq. (33) is the UAV ground track angle $\chi_A$, while $V_A$ and $\gamma_A$ can be considered constant. The previous equation is thus rearranged, yielding:
$$a_1\cos^2\chi_A + a_2\sin^2\chi_A + a_3\cos\chi_A\sin\chi_A + a_4\cos\chi_A + a_5\sin\chi_A = b$$
with:
$$a_1 = r_x^2\cos^2\gamma_A, \qquad a_2 = r_y^2\cos^2\gamma_A, \qquad a_3 = 2\,r_x r_y\cos^2\gamma_A$$
$$a_4 = 2\,\frac{\left\|\vec r\right\|^2 - R^2}{V_A^2}\,V_A V_{Bx}\cos\gamma_A - 2\,r_x r_z\cos\gamma_A\sin\gamma_A - 2\,\frac{\vec r\cdot\vec V_B}{V_A}\,r_x\cos\gamma_A$$
$$a_5 = 2\,\frac{\left\|\vec r\right\|^2 - R^2}{V_A^2}\,V_A V_{By}\cos\gamma_A - 2\,r_y r_z\cos\gamma_A\sin\gamma_A - 2\,\frac{\vec r\cdot\vec V_B}{V_A}\,r_y\cos\gamma_A$$
$$b = \frac{\left\|\vec r\right\|^2 - R^2}{V_A^2}\left(V_A^2 + V_B^2\right) + 2\,\frac{\left\|\vec r\right\|^2 - R^2}{V_A^2}\,V_A V_{Bz}\sin\gamma_A - r_z^2\sin^2\gamma_A - 2\,\frac{\vec r\cdot\vec V_B}{V_A}\,r_z\sin\gamma_A - \left(\frac{\vec r\cdot\vec V_B}{V_A}\right)^2$$
This 4th-order polynomial yields four solutions, corresponding to two tangent solutions in the forward collision cone and two tangent solutions in the backward collision cone, as explained in [38]. By filtering out the solutions pertaining to the backward collision cone, two viable ground track angles are left and can be chosen to perform the manoeuvre. The decision-making algorithm developed for this work always selects the angle which allows the UAV to pass behind the approaching intruder aircraft rather than in front of it (i.e. turning right for a starboard approaching encounter and turning left for a port approaching encounter); for a frontal collision encounter a right turn is always chosen. A Finite State Machine (FSM), schematized in Figure 11, handles the transition between the navigation and avoidance states. A further differentiation is made between a state where the target is within the FOV, where both its velocity and position states are constantly corrected with measurements coming from the sensors, and a state where the target is outside the FOV, where its states are updated solely by means of the dynamic model. If the intruder aircraft exits the FOV (e.g. due to the avoidance manoeuvre which turns the nose of the UAV), the manoeuvre is conducted on the basis of the projected states until the rate of change of the relative distance between the UAV and the projected target position ($\dot r$) becomes positive again. If, on the other hand, the intruder aircraft comes back within the FOV, the ongoing manoeuvre is updated and corrected on the basis of the new measurements. Once the conflict scenario is solved, the FSM brings the control back to the navigation module, which sets the original trajectory waypoints as setpoints for the autopilot.
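For illustration, Eq. (34) can be reduced to a 4th-order polynomial by means of the half-angle substitution $t = \tan(\chi_A/2)$ and solved numerically; this is one standard way of extracting the candidate ground track angles and is not necessarily the exact formulation implemented onboard:

```python
import numpy as np

# Illustrative reduction of Eq. (34) to a 4th-order polynomial via the half-angle
# substitution t = tan(chi_A / 2); one standard way of extracting the candidate
# ground track angles, not necessarily the exact onboard formulation. The
# forward/backward cone filtering described in the text is not included.
def tangential_track_angles(a1, a2, a3, a4, a5, b):
    # cos(chi) = (1 - t^2)/(1 + t^2), sin(chi) = 2t/(1 + t^2); multiplying
    # Eq. (34) by (1 + t^2)^2 and collecting powers of t gives the quartic below.
    coeffs = [a1 - a4 - b,
              2.0 * (a5 - a3),
              4.0 * a2 - 2.0 * a1 - 2.0 * b,
              2.0 * (a3 + a5),
              a1 + a4 - b]
    roots = np.roots(coeffs)
    real_t = roots[np.abs(roots.imag) < 1e-9].real
    return np.sort(2.0 * np.arctan(real_t))   # candidate ground track angles [rad]
```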

3.3. Data Fusion

Fusion between optical and radar data is accomplished by means of an implementation of the Extended Kalman Filter (EKF). The use of Kalman filters (and their variations) for the purposes of collision avoidance is well established in the literature: some examples are [19], where the filter is used to fuse radar and visual tracks with a Track-to-Track Fusion (T2TF) approach, taking into account the uncertainties associated with each sensor tracking process; [40], where the performance of an Unscented Kalman Filter (UKF) based tracking system is investigated for different SAA flight scenarios; and [41], where the problem of cooperative collision avoidance in a formation flight of multiple UAVs is tackled by means of an EKF together with a model predictive control (MPC) strategy. The mathematical formulation of the data fusion strategy adopted for the system implementation is briefly summarized below; further details can be found in [42].
The state vector is composed of the position and velocity states of the intruder aircraft expressed in a fixed North East Down (NED) cartesian reference frame, which is assumed inertial by neglecting Earth's rotation and revolution motions, Eq. (35).
$$x_{NED} = \left[x_N,\ \dot x_N,\ y_N,\ \dot y_N,\ z_N,\ \dot z_N\right]^T$$
Measurements are obtained in a reference system fixed with the camera by combining the azimuth and elevation readings given by Eq. (7) with the distance measure provided by the radar, as expressed by Eq. (36):
$$x_{cam} = R\cos\lambda\cos\psi \qquad y_{cam} = R\cos\lambda\sin\psi \qquad z_{cam} = R\sin\lambda$$
It is useful to express the measurement vector $m = [R, \lambda, \psi]$ in terms of the position states expressed in the camera reference frame, as in Eq. (37):
$$R = \sqrt{x_{cam}^2 + y_{cam}^2 + z_{cam}^2} \qquad \lambda = \sin^{-1}\left(\frac{z_{cam}}{R}\right) \qquad \psi = \tan^{-1}\left(\frac{y_{cam}}{x_{cam}}\right)$$
The intruder centre position can be converted from camera axes to UAV body axes by means of a suitable transformation matrix (describing the camera pose):
$$\left[x_b,\ y_b,\ z_b\right]^T = T_{BC}\left[x_{cam},\ y_{cam},\ z_{cam}\right]^T$$
Then, the relationship between the state vector and the measurements is obtained by combining Eqs. (37) and (38) with the transformation matrix between body and NED axes expressed in Eq. (39):
$$\begin{bmatrix} x_B \\ y_B \\ z_B \end{bmatrix} = \begin{bmatrix} \cos\theta\cos\psi & \cos\theta\sin\psi & -\sin\theta \\ \sin\phi\sin\theta\cos\psi - \cos\phi\sin\psi & \sin\phi\sin\theta\sin\psi + \cos\phi\cos\psi & \sin\phi\cos\theta \\ \cos\phi\sin\theta\cos\psi + \sin\phi\sin\psi & \cos\phi\sin\theta\sin\psi - \sin\phi\cos\psi & \cos\phi\cos\theta \end{bmatrix}\begin{bmatrix} x_N - x_{UAV} \\ y_N - y_{UAV} \\ z_N - z_{UAV} \end{bmatrix}$$
where $\psi$, $\theta$, $\phi$ are the UAV's yaw, pitch and roll angles respectively. The relationship between the measurement model and the state vector components is clearly nonlinear, hence the need to employ an Extended Kalman Filter. Within the collision avoidance literature, it is common practice to assume stationary motion for the intruder aircraft (in an inertial reference frame); therefore, it is assumed that the speed of the intruder is constant within the small integration step, such that the position state update can be described by:
$$x_{i+1} = x_i + v\,\Delta t$$
where $v$ is the speed vector and $\Delta t$ the integration step. The state transition matrix can thus be expressed as:
$$\hat x_k = \begin{bmatrix} \hat x_k \\ \hat{\dot x}_k \\ \hat y_k \\ \hat{\dot y}_k \\ \hat z_k \\ \hat{\dot z}_k \end{bmatrix} = \begin{bmatrix} 1 & \Delta t & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & \Delta t & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & \Delta t \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} \hat x_{k-1} \\ \hat{\dot x}_{k-1} \\ \hat y_{k-1} \\ \hat{\dot y}_{k-1} \\ \hat z_{k-1} \\ \hat{\dot z}_{k-1} \end{bmatrix} = F_k\,\hat x_{k-1}$$
The Jacobian of the measurement model is computed as follows:
$$H_k = \left.\frac{\partial h}{\partial x_{NED}}\right|_{\hat x_{k|k-1}} = \begin{bmatrix} K_R\,r_X & 0 & K_R\,r_Y & 0 & K_R\,r_Z & 0 \\ K_\lambda\,\lambda_x & 0 & K_\lambda\,\lambda_y & 0 & K_\lambda\,\lambda_z & 0 \\ K_\psi\,\psi_x & 0 & K_\psi\,\psi_y & 0 & K_\psi\,\psi_z & 0 \end{bmatrix}$$
where:
$$\begin{aligned}
K_R &= \frac{1}{2\sqrt{x_B^2 + y_B^2 + z_B^2}} \\
r_X &= 2x_B R_{NB}(1,1) + 2y_B R_{NB}(2,1) + 2z_B R_{NB}(3,1) \\
r_Y &= 2x_B R_{NB}(1,2) + 2y_B R_{NB}(2,2) + 2z_B R_{NB}(3,2) \\
r_Z &= 2x_B R_{NB}(1,3) + 2y_B R_{NB}(2,3) + 2z_B R_{NB}(3,3) \\
K_\lambda &= \frac{1}{\sqrt{1 - \left(z_B/R\right)^2}} \\
\lambda_x &= \frac{R_{NB}(3,1)\,R - \frac{z_B}{2R}\left[2x_B R_{NB}(1,1) + 2y_B R_{NB}(2,1) + 2z_B R_{NB}(3,1)\right]}{R^2} \\
\lambda_y &= \frac{R_{NB}(3,2)\,R - \frac{z_B}{2R}\left[2x_B R_{NB}(1,2) + 2y_B R_{NB}(2,2) + 2z_B R_{NB}(3,2)\right]}{R^2} \\
\lambda_z &= \frac{R_{NB}(3,3)\,R - \frac{z_B}{2R}\left[2x_B R_{NB}(1,3) + 2y_B R_{NB}(2,3) + 2z_B R_{NB}(3,3)\right]}{R^2} \\
K_\psi &= \frac{1}{1 + \left(y_B/x_B\right)^2} \\
\psi_x &= \frac{R_{NB}(2,1)\,x_B - R_{NB}(1,1)\,y_B}{x_B^2} \\
\psi_y &= \frac{R_{NB}(2,2)\,x_B - R_{NB}(1,2)\,y_B}{x_B^2} \\
\psi_z &= \frac{R_{NB}(2,3)\,x_B - R_{NB}(1,3)\,y_B}{x_B^2}
\end{aligned}$$
The terms $R_{NB}(m,n)$ are the corresponding elements of the rotation matrix (39), which maps NED coordinates into body-fixed coordinates. The standard Kalman formulation follows. When measurements are not available (due to the lower refresh rate of the radar sensor w.r.t. the optical data) or when, following the evasive manoeuvre, the intruder aircraft exits the FOV, the states are clearly updated only by means of the model update; in other words, no correction is carried out when measurements are not available. Both the initial intruder position and velocity states are initialized following a batch linear least squares regression method for higher robustness against the presence of noise. In a nutshell, once a track is validated and confirmed, a number $n$ of measurements is stored before initializing the filter. These are converted into position states expressed in the NED reference frame by means of the inverse of Eq. (39). Then, the matrices $A \in \mathbb{R}^{n\times 2}$, $B \in \mathbb{R}^{n\times 1}$ and $x \in \mathbb{R}^{2\times 1}$ are defined as:
$$A = \begin{bmatrix} 1 & t_1 - t_0 \\ 1 & t_2 - t_0 \\ \vdots & \vdots \\ 1 & t_n - t_0 \end{bmatrix} \qquad B = \begin{bmatrix} p_1 \\ p_2 \\ \vdots \\ p_n \end{bmatrix} \qquad x = \begin{bmatrix} x_0 \\ v_0 \end{bmatrix}$$
where $p_1, \dots, p_n$ are the position states and $x$ is the vector of the unknown initial position and velocity states such that $A\,x = B$. The model clearly assumes constant speed for the intruder aircraft. The overdetermined system is solved by means of the well-known closed-form linear least squares solution:
$$x = \left(x_0, v_0\right)^T = \left(A^T A\right)^{-1} A^T B$$
The procedure is repeated for each component of the 3D motion. The EKF is then initialized with $v_0^{EKF} = [v_{0x}, v_{0y}, v_{0z}]$ and $x_0^{EKF} = x_0 + v_0^{EKF}\,t_n$. Once the filter is initialized, the norm of the innovation vector is used to determine the convergence of the states. The $i$-th component of the innovation vector at time $t$ is defined as:
\[
err_i = \frac{1}{w}\sum_{k=t-w}^{t} y_k^i \;-\; \frac{1}{w}\sum_{k=t-w}^{t} m_k^i
\]
where m_k^i is the measurement acquired at time t = k, y_k^i is the measurement reconstructed from the predicted states, and w is the length of the moving average used to populate each component of the innovation vector. At each simulation step, the norm of the innovation vector |err| is compared against a threshold; when it falls below it, the filter is considered converged.
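A minimal sketch of this convergence test is given below; the window length is an illustrative value, while the threshold of 10^-2 is the one quoted later in chapter 4, and the class name is hypothetical.

```python
import numpy as np
from collections import deque

class InnovationMonitor:
    """Moving-average innovation check used to declare EKF convergence."""

    def __init__(self, window: int = 20, threshold: float = 1e-2):
        self.buf = deque(maxlen=window)      # stores the last w residual vectors
        self.threshold = threshold

    def update(self, y_reconstructed, measurement) -> bool:
        """Store the per-component residual and report whether the filter has converged."""
        self.buf.append(np.asarray(y_reconstructed) - np.asarray(measurement))
        err = np.mean(np.stack(self.buf), axis=0)   # component-wise moving average of (y - m)
        return bool(np.linalg.norm(err) < self.threshold)
```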
In chapter 4, results of the EKF in terms of reconstruction of position and velocity states from noisy measurements are shown for a starboard encounter collision scenario.

3.4. Radar Algorithms

The Doppler radar used in the SAA system and its processing algorithms were designed and manufactured by ECHOES radar technologies, one of the industrial partners of the project. The two main algorithms developed are briefly summarized below.

3.4.1. Detection

Figure 12 shows a high-level scheme of the signal processing architecture for the radar sensor.
The architecture has been implemented in multithreaded C code on the ARM processor of the radar motherboard, and it leverages optimized mathematical libraries such as FFTW for Fourier transform computation and BLAS for linear algebra calculations. For each receiving channel, range profiles are fed as input to a Doppler FFT block. This block implements a floating-point Fast Fourier Transform (FFT) with a user-programmable number of points. The output of this block is represented by the range-Doppler maps, which describe the scene captured by the radar in a bidimensional range-Doppler-speed domain (Figure 13 (a)). The detection module is tasked with populating a list of detected tracks for each frame. It employs an implementation of a Constant False Alarm Rate (CFAR) detector. This adaptive algorithm is aimed at detecting targets surrounded by background noise (clutter) with a tolerable false alarm probability. As shown in Figure 13 (b), this technique compares the Cell Under Test (CUT) with an appropriate threshold, which is a function of the false alarm probability and of the leading and lagging cells. The cells immediately adjacent to the CUT (guard cells) are ignored to avoid corrupting this estimate with power from the CUT itself. In particular, the developed system implements the cell-averaging CFAR (CA-CFAR) shown in Figure 13 (c). The CUT is compared with a threshold given by:
\[
\alpha = P_{FA}^{-\frac{1}{N_{ref}}} - 1
\]
where P_FA is the desired false alarm probability and N_ref is the number of reference cells. If the CUT is above the threshold, a track is identified; otherwise, no target is detected. Starting from the two-dimensional range-Doppler map, CFAR is applied along the two directions and the two resulting masks are combined with a logical AND operation, as shown in Figure 14 (a). The result is a binary mask similar to the output of the computer vision object detector, showing the detected tracks (Figure 14 (b)).
Morphological operations (dilations) are then applied to the binary mask in order to fill possible gaps between pixels of the same object occupying several adjacent cells. Finally, a unique label is assigned to each region of connected pixels in the current frame and a series of properties is computed for each track, such as range, Doppler speed, power, signal-to-noise ratio, detection area in pixels and bounding box.
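The detection chain can be sketched as follows with numpy and scipy; this is a simplified illustration in which the numbers of training and guard cells, the false alarm probability and the helper names are assumptions, not the values used in the radar firmware.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d, binary_dilation, label

def ca_cfar_1d(power, n_train=16, n_guard=4, pfa=1e-3, axis=0):
    """Cell-averaging CFAR along one axis of a range-Doppler power map.

    n_train is the total number of training (reference) cells around the CUT;
    the threshold factor matches the alpha formula above when the noise level
    is taken as the sum of the training cells.
    """
    alpha = pfa ** (-1.0 / n_train) - 1.0
    win = n_train + 2 * n_guard + 1
    total = uniform_filter1d(power, size=win, axis=axis, mode="nearest") * win
    inner = uniform_filter1d(power, size=2 * n_guard + 1, axis=axis, mode="nearest") * (2 * n_guard + 1)
    noise = total - inner            # sum over training cells only (CUT and guard cells excluded)
    return power > alpha * noise

def detect_tracks(rd_power):
    """CFAR along range and Doppler, logical AND, gap filling and connected-region labelling."""
    mask = ca_cfar_1d(rd_power, axis=0) & ca_cfar_1d(rd_power, axis=1)
    mask = binary_dilation(mask, iterations=1)     # morphological dilation to fill gaps
    labels, n_tracks = label(mask)                 # one label per detected region
    return labels, n_tracks
```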

3.4.2. Monopulse

Monopulse is a technique for the measurement of the Direction of Arrival (DOA) of a signal by comparing the contributions received simultaneously by two or more antennas. The developed SAA radar implements a phase monopulse technique. The working principle is briefly summarized below. The idea is to exploit the difference in phase of the same signal received by different antennas as shown in Figure 15.
By considering a radar consisting of two receiving antennas with a spatial separation d < λ / 2 having their beams pointing in the same direction and applying the parallel beam approximation, the distance between each antenna and the target can be expressed as:
\[
R_1 \simeq R + \frac{d}{2}\sin(\theta) \qquad\qquad R_2 \simeq R - \frac{d}{2}\sin(\theta)
\]
Thus, the difference between the two distances is ΔR = R_1 − R_2 = d sin(θ), where d is the physical spatial separation between the two antennas and θ is the DOA to be estimated. The distance ΔR is related to the wavelength λ and to the phase difference ϕ by the well-known Eq. (49).
\[
\phi = \frac{2\pi}{\lambda}\,\Delta R = \frac{2\pi d}{\lambda}\sin(\theta)
\tag{49}
\]
By inverting the previous equation, the formula for the DOA estimation is obtained:
\[
\theta = \sin^{-1}\!\left(\frac{\phi}{2\pi}\,\frac{\lambda}{d}\right)
\]
Since the phase is periodic, there is a maximum value of θ (in modulus) that can be estimated without ambiguity, given by:
\[
|\theta| \le \sin^{-1}\!\left(\frac{\lambda}{2d}\right)
\]
In the present application, ϕ is obtained by computing the interferogram between the range-Doppler maps of the two receiving channels, as described by Eq. (52):
\[
\gamma = \frac{1}{N}\sum_{n=1}^{N} \frac{X_n Y_n^{*}}{\sqrt{|X_n|^2\,|Y_n|^2}}
\tag{52}
\]
where N is the number of pixels within each bounding box, whereas X_n and Y_n are the complex values populating the two range-Doppler maps. Each region of pixels that has been detected and labelled is thus also accompanied by the value of the estimated θ. At this stage of the development of the system, however, this estimation has not yet been incorporated into the EKF for the refinement of track estimates, nor used as a hint for delimiting the search region of the computer vision-based object detector.
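A compact sketch of the monopulse estimation for a detected track is given below; the function name, the bounding-box convention and the small regularization term are illustrative assumptions.

```python
import numpy as np

def monopulse_doa(rd_map_ch1, rd_map_ch2, bbox, d, wavelength):
    """Estimate the direction of arrival of a detected track via phase monopulse.

    rd_map_ch1, rd_map_ch2: complex range-Doppler maps of the two receiving channels
    bbox: (row_min, row_max, col_min, col_max) of the detected region
    d: antenna separation (m); wavelength: radar wavelength (m)
    """
    r0, r1, c0, c1 = bbox
    X = rd_map_ch1[r0:r1, c0:c1].ravel()
    Y = rd_map_ch2[r0:r1, c0:c1].ravel()
    # Normalized interferogram averaged over the pixels of the bounding box (Eq. 52)
    gamma = np.mean(X * np.conj(Y) / (np.sqrt(np.abs(X) ** 2 * np.abs(Y) ** 2) + 1e-12))
    phi = np.angle(gamma)                          # phase difference between the two channels
    # DOA from the phase difference; the argument is clipped to stay within the
    # unambiguous interval |theta| <= asin(lambda / (2 d))
    return np.arcsin(np.clip(phi * wavelength / (2.0 * np.pi * d), -1.0, 1.0))
```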

4. Simulation Campaign

The effectiveness of the system and the real-time interaction between the sense and avoid modules have been tested by means of a complex hardware-in-the-loop simulation environment implemented at the Flight-By-Wire laboratories of the aerospace section of the Civil and Industrial Engineering Department of the University of Pisa. A realistic, completely non-linear 6-DOF flight dynamic model of the aircraft was developed in Simulink, leveraging aero-mechanical data and Flight Control Laws provided courtesy of SES. The model features an automatic trim-finding routine, which exploits a gradient descent strategy and automatically computes the engine setting and control surface deflections that ensure equilibrium for each flight condition. Initially, for the purposes of testing the avoidance algorithms alone, the position of the intruder aircraft was considered known in advance. The effectiveness of the avoidance manoeuvre was evaluated according to several metrics, summarized in Figure 16 (a). Among these are the minimum registered distance between UAV and intruder, the duration of the anti-collision evasive manoeuvre, the normal deviation w.r.t. the original trajectory at the end of the manoeuvre and the maximum normal deviation registered.
A campaign of simulations was conducted for a combination of different speeds and initial starting positions of the intruder aircraft; the latter were taken on a circumference centred on the initial position of the UAV, as shown in Figure 16 (b). The heading of the intruder aircraft was pre-computed for each combination of starting position and speed module so as to guarantee a perfect collision, assuming linear, constant-speed trajectories for both aircraft and no evasive action carried out by the UAV. Figure 17 shows some results of the simulations: plot (a) shows the 2D trajectory of the UAV, plot (b) shows the aileron deflection command, which is always well below the saturation limit of δ_a,max = 10° prescribed by the manufacturer. Figure 18 (a) shows that the minimum registered distance is always above the threshold of the safety bubble; clearly, the value of the minimum distance decreases as the intruder speed increases; plot (b) shows the maximum normal distance to the original planned trajectory. It is interesting to note that, as the intruder speed increases, the coordinated turn required for a successful evasive manoeuvre becomes steeper. Subsequently, the simulations were repeated with a higher fidelity model including the Extended Kalman Filter, where the camera intrinsic and extrinsic matrices were used, together with the precomputed intruder trajectory, to reconstruct, for each simulation step, the apparent position of the intruder within the FOV in terms of relative elevation and azimuth angles. Zero-mean Gaussian noise was then artificially injected both in the relative distance and in the angle readings to simulate real sensor noise. Figure 19 shows the reconstruction of the position and velocity states by the EKF, plotted against ground truth and noisy measurements. The lag between the moment the intruder becomes visible and the moment where the states begin to be reconstructed (blue line in Figure 19) is due to the fact that a certain number of initial readings is used to estimate the initial velocity states by means of a linear regression, as explained in chapter 3.3. Even when the initial reconstruction of the velocity states is not accurate due to biased initial measurements, the filter converges rapidly to the true velocity values. In the simulations, a threshold of th_err = 10^-2 on the norm of the innovation vector was chosen.
The first dashed vertical line indicates the moment the intruder is detectable (within radar range), the second dashed line indicates the moment where the EKF has reached convergence and its output is fed into avoidance algorithms.
After the validation of the avoidance algorithms in the presence of sensor noise, the complete HIL simulation system, whose high-level scheme is shown in Figure 20, was used to test the simultaneous interaction between sense and avoid algorithms. Essentially, the 6-DOF flight mechanics model drives a realistic real-time rendering of the scene, leveraging the open-source flight simulator Flight Gear v2020.3 (FG). A Python program was developed to efficiently acquire screenshots of the monitor where the FG instance was running, and a GStreamer pipeline was built to send the video stream from the host PC to the VPU via the UDP protocol in real time. In order not to saturate the network adapter of the Nvidia Jetson Nano, the video stream was compressed according to the H.264 (MPEG-4 AVC) standard. The two pipelines used for sending and receiving video data are schematized in Figure 21. The intruder aircraft was inserted within the FG visual rendering by creating a tailored XML file defining the scenario and its waypoints according to its precomputed trajectory. The intruder aircraft was rendered with a mockup of the Cessna 172p SkyHawk single-engine aircraft. Some glue code in the scripting language Nasal, interfacing the FG property tree with the underlying C++ simulation engine, was required to ensure proper synchronization between the various components of the simulation framework. Additional code was required to create a camera view in front of the aircraft so as to simulate the SAA camera sensor positioned on the nose of the UAV. An example of such a view is shown in Figure 22. The sense algorithms were re-implemented on the Jetson Nano in Python, leveraging the optimized OpenCV library when possible to speed up the execution rate. Likewise, the avoidance decision-making algorithms, including the Extended Kalman Filter and the finite state machine, were deployed on the Jetson Nano as well. The intruder range was sent to the Jetson Nano alongside the video stream on the same UDP interface. The VPU was thus processing the video stream in real time, performing both object detection and tracking, and merging the distance information to compute the evasive manoeuvre. The setpoints on the ground track angle output by the avoidance algorithms on the VPU were then fed back to the Simulink model via another UDP interface and used to update the simulation in real time, so that at each simulation step the position of the UAV could reflect the results of the SAA system calculations.
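As an illustration of this kind of streaming setup, the following sketch shows a generic pair of GStreamer pipelines driven from Python through OpenCV; the element names, address and port are generic examples and do not reproduce the exact pipelines of Figure 21, and OpenCV must be built with GStreamer support for this to run.

```python
import cv2

# Sender (host PC): push captured frames into an H.264/RTP stream over UDP.
# Address and port are placeholders.
send_pipeline = (
    "appsrc ! videoconvert ! x264enc tune=zerolatency bitrate=4000 speed-preset=ultrafast "
    "! rtph264pay config-interval=1 pt=96 ! udpsink host=192.168.1.10 port=5000"
)
writer = cv2.VideoWriter(send_pipeline, cv2.CAP_GSTREAMER, 0, 30.0, (1280, 720))

# Receiver (Jetson Nano): depacketize, decode and hand frames to the sense algorithms.
recv_pipeline = (
    'udpsrc port=5000 caps="application/x-rtp, media=video, encoding-name=H264, payload=96" '
    "! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! appsink"
)
capture = cv2.VideoCapture(recv_pipeline, cv2.CAP_GSTREAMER)
ok, frame = capture.read()        # the decoded frame is then passed to detection and tracking
```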
All the simulations conducted have shown full compliance with the aero-mechanical constraints of the aircraft while at the same time respecting the safety boundaries; the minimum separation distance is never violated for any initial position of the intruder aircraft according to the scheme of Figure 16 (b) and for a range of intruder flight speeds V_intr ∈ [22, 52] m/s, greatly exceeding the cruise speed of other UAVs belonging to the same class, as imposed by system requirements. Figure 23 illustrates some results in terms of UAV flight parameters throughout the evasive manoeuvre, obtained with the complete HIL simulation framework, for a conflict scenario with the intruder aircraft approaching from the starboard side and flying at a constant speed of V_intr = 50 m/s. For the same simulation, Figure 24 shows the comparison between the actual elevation and azimuth angles, as they would appear on the sensors based on the ground-truth position of the intruder aircraft, and the ones reconstructed by the SAA system, compensated for the transmission and processing delay of the signal between VPU and host PC. Figure 25 shows some screenshots from the Flight Gear rendering relative to the same collision scenario, with the intruder aircraft approaching from the starboard side highlighted with a red bounding box. Both the intruder aircraft and the UAV were rendered with a Cessna 172p mockup. Notice, however, that the simulations were conducted with the viewpoint shown in Figure 22, with the camera pointing forward, so that the actual rendering of the UAV was irrelevant; in Figure 25 the UAV rendering has been included for the sake of clarity in showing the evolution of the evasive manoeuvre on the horizontal plane.

5. Flight Tests

A preliminary flight test campaign, aimed at validating the effectiveness of the system in detecting an intruder aircraft, was conducted at the Lucca-Tassignano airport in conjunction with SES and Echoes. Due to flight safety restrictions, it was impractical to operate two distinct UAVs at the same time in the same airspace, given that the sense and avoid system has not yet received airworthiness certification; it was therefore decided to keep the SAA system fixed on the ground and compare the track readings of the detected aircraft with the actual telemetry data recorded onboard during the flight. The prototype of the SAA system used during the test is shown in Figure 26.
As highlighted by the picture, the system is composed of several components, briefly listed below:
  • Color camera Arducam 1/4” 8 MP IMX219, 30 FPS. The recording resolution was set to 1280 x 720. Due to the small footprint of the target in terms of pixels, a lens with a horizontal FOV of approximately FOV_h = 50° was chosen.
  • Sense and Avoid doppler radar, developed by Echoes.
  • Vision Processing Unit. The Nvidia Jetson Nano was chosen thanks to its small size and weight and its low power absorption.
  • Receiving and transmitting antennas, housed inside a fiberglass mockup of the nose of the aircraft to better simulate the operating conditions of the system. Figure 27 (a) shows the position of the antennas within the nose. The camera sensor was mounted on the nose mockup in such a way as to ensure the alignment of the camera Line of Sight with the radar beam.
  • LiPo batteries provide power to the radar at a voltage of 22 V, while a voltage regulator is employed to supply the VPU and the camera at a lower voltage.
  • Two different workstations were used for real time monitoring of both radar and camera sensors.
Figure 27. (a) Antennas placement within the nose mockup; (b) camera sensor chosen for the system implementation; (c) Jetson Nano Vision Processing Unit.
During the flight test, each sensor recorded the result of its own processing on local storage; moreover, a synchronization of the radar processor and of the VPU with the system time of the telemetry onboard the aircraft was implemented to guarantee the temporal alignment of data acquisition. The data acquired by each sensor during the test are summarized in Table 2.
Figure 28 shows the position of the Sense and Avoid system during the flight test with respect to the trajectory of the UAV. The blue line indicates the direction of the geographic north, the green line is parallel to the SAA LOS and the launch site of the drone is marked with a red rectangle. The drone was launched leveraging its pneumatic catapult platform. It was decided to position the radar with the highway behind it (visible in the lower right corner of the image) in order to reduce the ground clutter as much as possible. The radar sensor was designed for operation in the air, where ground targets are not so close; thus, during the flight test, some adjustments to the gains were made in order to compensate for the presence of cars and other highly reflective objects nearby. As noticeable in Figure 28, after take-off the UAV entered a helicoidal climbing trajectory and then remained in a loiter circuit (approximately rectangular in shape) at a constant altitude of h ≈ 50 m.
The telemetry data were post-processed and transformed into a Cartesian fixed Baseline Reference Frame (BRF), with its origin centred on the SAA system position, the x axis pointing north, the y axis towards the right and the z axis chosen so as to comply with the right-hand rule. The position of the UAV was reconstructed by the system in a Cartesian reference frame fixed with the SAA system, with the x axis oriented along the LOS. The attitude angles of the system, as recorded by the IMU installed on the radar, were used to transform the position states recorded by the system into the BRF in order to compare them with telemetry. Figure 29 shows one example of such a reconstruction of the target position states, expressed in the BRF. Due to the presence of ground clutter, in certain sections of the flight the radar distance was not available, but the optical algorithms still correctly performed detection and tracking. In the absence of distance readings, it was still possible to reconstruct the elevation and azimuth angles in the camera reference frame. Figure 30 shows the comparison of said angles with ground truth (obtained by projecting the real UAV position logged by telemetry onto the focal plane by means of the camera intrinsic and extrinsic matrices) for two distinct segments of flight.
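A simplified sketch of this projection from the BRF into relative camera angles is shown below; the axis conventions, the sign of the elevation and the function names are illustrative assumptions rather than the exact conventions used in the post-processing scripts.

```python
import numpy as np

def camera_angles_from_brf(p_brf, R_cam_from_brf, t_cam_brf=np.zeros(3)):
    """Project a target position expressed in the BRF into relative azimuth/elevation.

    p_brf:          target position in the Baseline Reference Frame (m)
    R_cam_from_brf: 3x3 rotation from BRF axes to camera axes (x forward along the LOS,
                    y to the right, z down), built from the attitude angles of the system
    t_cam_brf:      camera position in the BRF (m); the SAA origin by default
    """
    x, y, z = R_cam_from_brf @ (np.asarray(p_brf) - t_cam_brf)
    azimuth = np.arctan2(y, x)                                   # positive to the right of the LOS
    elevation = np.arcsin(-z / np.linalg.norm((x, y, z)))        # positive above the LOS (z down)
    return azimuth, elevation
```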

6. Conclusions

This paper has detailed the design and development of a sense and avoid system for a mini-UAV. The system was designed to comply with the strict SWaP constraints of the aircraft platform manufactured by SES. Particular focus was placed on the development of efficient processing algorithms with a sufficiently high refresh rate and on ego-motion compensation for both the optical and the radar sensor, in order to ensure robust detection and tracking capabilities. The hardware-in-the-loop integrated simulation environment developed within the project has allowed the validation of the sense and avoid algorithms for a multitude of different collision scenarios, leveraging a realistic, real-time, dynamic representation of the scene. Finally, the preliminary flight test campaign has produced promising results for the reconstruction of the position states of a representative target aircraft despite its small size and the ground clutter noise. Several aspects still need to be investigated and will constitute the core of an important future research effort. First of all, the interaction between the FCS and the avoidance algorithms has not been tested, because the system was static during the flight tests, while during the simulations the attitude and position states of the UAV were considered known in advance. Attitude angles are essential for the correct reconstruction of the target position in an inertial reference frame. Therefore, complete flight tests with two flying aircraft will be conducted in the future, with the SAA system deployed onboard one of the UAVs. The interaction between FCS and SAA system will be validated during said flight campaign, and the tests will provide precious insights into the effectiveness of the motion compensation capabilities in rejecting the noise caused by the ego-motion of the platform and by engine vibration. Future research will also investigate more advanced collision avoidance strategies, leveraging 3D manoeuvres with optimality constraints (fuel/time minimization). Another essential aspect to be investigated is the simultaneous presence of multiple intruder aircraft within the FOV, where possibly each of them poses a collision threat. Finally, the benefits of employing an array of properly oriented camera sensors to enlarge the overall available FOV will be investigated, together with the utilisation of the coarse azimuth reading provided by the radar, with the aim of increasing the system capabilities and robustness w.r.t. noise.

Author Contributions

Conceptualization, R.G. and M.F.; methodology, R.G. and M.F.; software, M.F.; validation, R.G., G.D.R. and M.F.; formal analysis, R.G., G.D.R. and M.F.; investigation, M.F.; resources, R.G. and G.D.R.; data curation, M.F.; writing, original draft preparation, M.F.; writing, review and editing, R.G. and M.F.; visualization, R.G., G.D.R. and M.F.; supervision, R.G. and M.F.; project administration, R.G.; funding acquisition, R.G. and G.D.R. All authors have read and agreed to the published version of the manuscript.
Funding: This research was co-funded by the Italian Government (Ministero delle Imprese e del Made in Italy, MIMIT) and by the Tuscany Regional Government, in the context of the R&D project “Tecnologie Elettriche e Radar per SAPR Autonomi (TERSA)”, Grant number: F/130088/01-05/X38.

Data Availability Statement

Restrictions apply to the availability of these data.

Acknowledgments

The authors wish to thank the project leader company Sky Eye Systems S.r.l. (Via Grecia 52, 56021 Cascina, Italy) for the support to the research activity and for providing the UAV aero-mechanical data and FCS laws that allowed the creation of an accurate dynamic simulation model. Special thanks to the partner company Echoes S.r.l. (Viale Giovanni Pisano 55, 56123 Pisa, Italy), responsible for the design and development of the SAA radar, and to the partner company Free Space S.r.l. (via Antonio Cocchi 7, 56121 Pisa, Italy) for the design and development of the radar antennas.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. FAA, “Integration of Civil Unmanned Aircraft Systems (UAS) in the National Airspace System (NAS) Roadmap, 2nd Ed.,” Federal Aviation Administration, 2018.
  2. R. Merkert and J. Bushell, “Managing the drone revolution: A systematic literature review into the current use of airborne drones and future strategic directions for their effective control,” Journal of Air Transport Management, 2020. [CrossRef]
  3. P. Angelov, Sense and Avoid in UAS (Research and Applications), John Wiley & Sons, Ltd ISBN: 978-0-470-97975-4, 2012.
  4. K. Dalamagkidis, K. P. Valavanis and L. A. Piegl, “On unmanned aircraft systems issues, challenges and operational restrictions preventing integration into the National Airspace System,” Progress in Aerospace Sciences, vol. 44, no. 7, pp. 503-519, 2008. [CrossRef]
  5. ENAC, “Regolamento mezzi aerei a pilotaggio remoto,” 2016.
  6. G. Recchia, G. Fasano, D. Accardo and A. Moccia, “An Optical Flow Based electro-Optical See-and-Avoid System for UAVs,” IEEE, 2007. [CrossRef]
  7. J. Martin, J. Riseley and J. J. Ford, “A Dataset of Stationary, Fixed-wing Aircraft on a Collision Course for Vision-Based Sense and Avoid,” in International Conference on Unmanned Aircraft Systems (ICUAS), Dubrovnik, Croatia, 2022. [CrossRef]
  8. FAA, “Sense and Avoid (SAA) for unmanned aircraft systems (UAS). Prepared by FAA sponsored Sense and Avoid Workshop,” Oct. 9, 2009.
  9. Radio Technical Commision for Aeronautics, “RTCA-DO-185B, Minimum operational performance standards for traffic alert and collision avoidance system II (TCAS II),” July 2009.
  10. D. Klarer, A. Domann, M. Zekel, T. Schuster, H. Mayer and J. Henning, “Sense and Avoid Radar Flight Tests,” in International Radar Symposium, Warsaw, Poland, 2020. [CrossRef]
  11. M. Mohammadkarimi and R. T. Rajan, “Cooperative Sense and Avoid for UAVs using Secondary Radar,” IEEE Transactions on Aerospace and Electronic Systems, 2024. [CrossRef]
  12. A. Huizing, M. Heiligers, B. Dekker, J. d. Wit and L. Cifola, “Deep Learning for Classification of Mini-UAVs Using Micro-Doppler Spectrograms in Cognitive Radar,” IEEE Aerospace and Electronic Systems Magazine, vol. 34, no. 11, pp. 46-56, 2019. [CrossRef]
  13. Y. L. a. Y. S. ". a. t. o. a. U. v. C. Zhou, “Detection and tracking of a UAV via Hough transform,” in CIE International Conference on Radar, 2016. [CrossRef]
  14. S. Ramasamy, R. Sabatini, A. Gardi and J. Liu, “LIDAR Obstacle Warning and Avoidance System for Unmanned Aerial Vehicle Sense-and-Avoid,” Aerospace Science and Technology, 2016. [CrossRef]
  15. V. Karampinis, A. Arsenos, O. Filippopoulos, E. Petrongonas, C. Skliros, D. Kollias, S. Kollias and A. Voulodimos, “Ensuring UAV Safety: A Vision-only and Real-time Framework for Collision Avoidance Through Detection, Tracking and Distance Estimation,” in International Conference on Unmanned Aircraft Systems (ICUAS), Chania, Crete, Greece, 2024. [CrossRef]
  16. R. Opromolla, G. Inchingolo and G. Fasano, “Airborne Visual Detection and Tracking of Cooperative UAVs Exploiting Deep Learning,” MDPI Sensors, vol. 19, 2019. [CrossRef]
  17. J. Zhao, J. Zhang, D. Li and D. Wang, “Vision-based Anti-UAV Detection and Tracking,” IEEE Transactions on Intelligent Transportation Systems, 2022. [CrossRef]
  18. P. Bauer, A. Hiba and J. Bokor, “Monocular Image-based Intruder Direction Estimation at Closest Point of Approach,” in International Conference on Unmanned Aircraft Systems (ICUAS), Miami, FL, USA, 2017. [CrossRef]
  19. F. Vitiello, F. Causa, R. Opromolla and G. Fasano, “Experimental Analysis of Radar/Optical Track-to-track Fusion for Non-Cooperative Sense and Avoid,” in International Conference on Unmanned Aircraft Systems (ICUAS), Warsaw, Poland, 2023. [CrossRef]
  20. G. Fasano, D. Accardo, A. E. Tirri and A. Moccia, “Radar/electro-optical data fusion for non-cooperative UAS sense and avoid,” Aerospace Science and Technology, vol. 46, pp. 436-450, 2015. [CrossRef]
  21. ICAO, “Doc 10019 - Manual on Remotely Piloted Aircraft Systems,” International Civil Aviation Organization, 2015.
  22. ICAO, “Cir. 328 - Unmanned Aircraft Systems (UAS),” International Civil Aviation Organization, 2011, 2011.
  23. ASTM Designation: F 2411-04, “Standard Specification for Design and Performance of an Airborne Sense-and-Avoid system,” ASTM International, West Conshohocken, PA, 2004.
  24. EUROCONTROL, “Specification for the use of Military Unmanned Aerial Vehicle as Operational Traffic Outside Segregated Airspaces, SPEC-0102,” July 2007.
  25. A. D. Zeitlin, “Progress on Requirements and Standards for Sense & Avoid,” The MITRE Corporation, 2010.
  26. C. Geyer, S. Singh and L. Chamberlain, “Avoiding Collisions Between Aircraft: State of the Art and Requirements for UAVs operating in Civilian Airspace.,” 2008.
  27. G. Fasano, D. Accado, A. Moccia and D. Moroney, “Sense and Avoid for Unmanned Aircraft Systems,” IEEE A&E SYSTEMS MAGAZINE, pp. 82-110, November 2016. [CrossRef]
  28. S. Ramasamy and R. Sabatini, “A Unified Approach to Cooperative and Non-Cooperative Sense-and-Avoid,” in International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 2017. [CrossRef]
  29. B. D. Lucas and T. Kanade, “An Iterative Image Registration Technique with an Application to Stereo Vision,” in Proceedings of the 7th International Joint Conference on Artificial Intelligence, Vancouver, Canada, 1981.
  30. C. Harris and M. Stephens, “A Combined Corner And Edge Detection,” in Proceedings of the 4th Alvey Vision Conference, 1988. [CrossRef]
  31. J. Y. Bouguet, “Pyramidal Implementation of the Lucas Kanade Feature tracker Description of the algorithm,” Intel Corporation, Microprocessor Research Labs, 2000.
  32. L. Forlenza, G. Fasano, D. Accardo and A. Moccia, “Flight Performance Analysis of an Image Processing Algorithm for Integrated Sense-and-Avoid Systems,” International Journal of Aerospace Engineering, p. 8, 2012. [CrossRef]
  33. Y. Benezeth, P. M. Jodoin, B. Emile, H. Laurent and C. Rosenberger, “Comparative Study of Background Subtraction Algorithm,” Journal of Electronic Imaging, July 2010. [CrossRef]
  34. J. Redmon, S. Divvala, R. Girshick and A. Farhadi, “You Only Look Once: Unified Real-Time Object Detection,” 9 May 2016. [Online]. Available: https://arxiv.org/abs/1506.02640v5.
  35. S. W. Kim, K. Yun, K. M. Yi, S. J. Kim and J. Y. Choi, “Detection of moving objects with moving camera using non-panoramic background model,” Machine Vision and Applications, pp. 1016-1028, 2013. [CrossRef]
  36. W. Y. D. H.,. J. K. L. Kurnianggoro, “Online Background-Subtraction with Motion Compensation for Freely Moving Camera,” in Intelligent Computing Theories and Application, 2016. [CrossRef]
  37. K. M. Yi, K. Yun, S. W. Kim, H. J. Chang, H. Jeong and J. Y. Choi, “Detection of Moving Objects with Non-stationary Cameras in 5.8ms: Bringing Motion Detection to Your Mobile Device,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2013. [CrossRef]
  38. C. Carbone, U. Ciniglio, F. Corraro and S. Luongo, “A Novel 3D Geometric Algorithm for Aircraft Autonomous Collision Avoidance,” in Proceedings of the 45th IEEE Conference on Decision and Control, San Diego, CA, USA, 2006. [CrossRef]
  39. A. Chakravarthy and D. Ghose, “Obstacle avoidance in a Dynamic Environment: A Collision Cone Approach,” IEEE Transactions on Systems, Man and Cybernetics - Part A: Systems and Humans, vol. 28, no. 5, 1998. [CrossRef]
  40. S. Wang, A. Then and R. Herschel, “UAV Tracking based on Unscented Kalman Filter for Sense and Avoid Applications,” in International Radar Symposium, Warsaw, Poland, 2020. [CrossRef]
  41. H. Huang, H. Zhou, M. Zheng, C. Xu, X. Zhang and W. Xiong, “Cooperative Collision Avoidance Method for Multi-UAV Based on Kalman Filter and Model Predictive Control,” in International Conference on Unmanned Systems and Artificial Intelligence (ICUSAI), Xi'an, China, 2019. [CrossRef]
  42. M. Fiorio, R. Galatolo and G. D. Rito, “Extended Kalman Filter Based Data Fusion Algorithm Implemented in a Sense and Avoid System for a Mini-UAV application,” in IEEE Metrology for Aerospace, Lublin, Poland, 2024. [CrossRef]
Figure 1. Reference TERSA aircraft.
Figure 2. (a) Minimum turn radius as a function of aerodynamic, structural and propulsive constraints for the TERSA aircraft; (b) minimum required detection range as a function of intruder aircraft airspeed.
Figure 3. Maximum attainable flight path angle at h = 1000 m.
Figure 4. SAA typical system integration high-level scheme.
Figure 5. High-level schematics of the proposed SAA system.
Figure 6. Intruder aircraft azimuth and elevation angles reconstruction.
Figure 7. Sense module computer vision pipeline.
Figure 8. Visual representation of the pyramidal expansion of the algorithm.
Figure 9. Example of KLT feature matcher output.
Figure 10. Flow chart of the KLT algorithm.
Figure 11. High-level scheme of the Finite State Machine handling the conflict detection and resolution problem.
Figure 12. Radar signal processing architecture.
Figure 13. (a) Example of a range-Doppler map; (b) CFAR technique; (c) CA-CFAR technique.
Figure 14. (a) Bi-dimensional CFAR; (b) binary mask showing detected tracks.
Figure 15. Phase monopulse configuration.
Figure 16. Metrics used for the evaluation of the avoidance manoeuvre (a); initial positions of the intruder aircraft (b).
Figure 17. Results of the evasive manoeuvre for different initial positions and velocity states of the intruder aircraft; (a) 2D trajectory, (b) aileron deflection.
Figure 18. Simulation results; (a) minimum distance; (b) maximum normal deviation.
Figure 19. Intruder position states (a,b,c) and velocity states (d,e,f) reconstructed by the EKF for a starboard encounter; V_UAV = V_intr = 22 m/s.
Figure 20. High-level scheme of the complete simulation framework.
Figure 21. Video stream send (a) and receive (b) pipeline schematics.
Figure 22. SAA camera viewpoint as rendered in Flight Gear (flying over the city of Pisa).
Figure 23. Results of a HIL collision scenario in the complete simulation environment, with the intruder approaching from the starboard side; V_UAV = 22 m/s, V_intr = 50 m/s. 3D trajectory (a), Euler angles (b), speed components in the NED reference frame (c), load factors (d).
Figure 24. Comparison between elevation (a) and azimuth (b) measurements and ground truth for the complete simulation framework.
Figure 25. Flight Gear rendering of a starboard collision scenario over a coastal landscape, with the intruder position detected by the Sense algorithms highlighted, within the complete simulation framework.
Figure 26. SAA system prototype.
Figure 28. SAA system position and UAV trajectory during the flight test at the Lucca-Tassignano airport.
Figure 29. Reconstructed UAV position states vs telemetry in the Baseline Reference Frame: (a) x position state, (b) y position state, (c) z position state.
Figure 30. Relative azimuth (a)(c) and elevation (b)(d) angles between the target UAV and the SAA reference frame for two different flight phases.
Table 1. Reference aircraft characteristics.
MTOW: 25 Kgf
Reference surface S: 1.058 m²
Mean aerodynamic chord c: 0.303 m
Wingspan b: 3.6 m
Stability margin: 0.1880
Table 2. Sensors data acquired during the flight test.
UAV telemetry: altitude, latitude, longitude, timestamp of readings.
Radar: target radial distance, target radial velocity, system attitude (roll, pitch, heading), system position (altitude, latitude, longitude), timestamp of readings.
Camera: target position within FOV, timestamp of readings.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.