Preprint
Review

This version is not peer-reviewed.

Conflict Detection, Resolution, and Collision Avoidance for Decentralized UAV Autonomy: Classical Methods and AI Integration

Submitted: 15 December 2025

Posted: 16 December 2025


Abstract
Unmanned Aerial Vehicles (UAVs) are increasingly deployed across diverse domains and many applications demand a high degree of automation, supported by reliable Conflict Detection and Resolution (CD&R) and Collision Avoidance (CA) systems. At the same time, public mistrust, safety and privacy concerns, the presence of uncooperative airspace users, and rising traffic density are driving a shift toward decentralized concepts such as free flight, in which each actor is responsible for its own safe trajectory. This survey reviews CD&R and CA methods with a particular focus on decentralized automation and encounters with noncooperative intruders. It analyzes classical rule-based approaches and their limitations, then examines Machine Learning (ML)–based techniques that aim to improve adaptability in complex environments. Building on recent regulatory discussions, it further considers how requirements for trust, transparency, explainability, and interpretability evolve with the degree of human oversight and autonomy, addressing gaps left by prior surveys.
Keywords: 

1. Introduction

Unmanned Aerial Vehicles (UAVs) have become a pivotal technology across diverse domains. Shakhatreh et al. [1] curated a survey categorizing UAV applications into distinct classes, including precision agriculture, search and rescue, infrastructure monitoring, and delivery of goods. To maximize their effectiveness, most of these applications require a high degree of automation—whether in fully automated missions, remotely piloted operations with decision-support systems, or hybrid approaches that combine human and machine control.
However, the adoption of automation in UAV operations has raised significant public mistrust. Safety concerns [2] and privacy issues [3] are frequently cited. Tam [4] conducted a survey on public trust in autonomous UAVs for transporting people and goods, revealing that the vast majority of respondents would only accept autonomous aerial transport of people if a pilot were onboard to override the autopilot when necessary. This finding underscores that public acceptance of UAV operations is closely tied to perceptions of human oversight and safety assurance.
At the same time, the potential for noncooperative or unpredictable behavior from other airspace users, combined with increasing traffic density, complicates the deployment of large-scale automated UAV systems. These challenges push researchers toward decentralized solutions such as free flight [5,6], in which each actor is responsible for planning its trajectory, maintaining safe separation from other traffic, and avoiding collisions, rather than relying solely on centralized control. Enabling such a paradigm requires robust Conflict Detection and Resolution (CD&R) to remain conflict-free, and Collision Avoidance (CA) as a last-resort safety net. Throughout this paper, a conflict denotes a predicted breach of the adopted separation minimum within a finite look-ahead horizon, while conflict-free indicates the absence of such a prediction.

1.1. New Machine Learning Approaches and Their Challenges

Recent research suggests that traditional rule-based approaches may be insufficient for managing the complexity of future UAV operations. The combination of uncooperative intruders, traffic growth, and the shift toward decentralized management highlights the need for more adaptable methods for CD&R and CA. Because uncooperative intruders do not share intent information, their behavior is inherently unpredictable. This makes CD&R and CA in high traffic especially challenging, and motivates the exploration of Machine Learning (ML) approaches that can adapt to diverse encounter scenarios. As stated in [7], ML algorithms offer superior adaptability to complex and novel situations. Unlike hand-crafted systems, they can leverage past experience with the environment rather than relying exclusively on manually coded features, thereby increasing their potential efficacy.
Nevertheless, the use of ML introduces new challenges. By their nature, ML algorithms often operate as black boxes, making it difficult to ensure predictable and certifiable behavior. This opacity complicates safety assurance, as it becomes harder to anticipate how the system will react under all possible operating conditions. Furthermore, increasing reliance on opaque algorithms runs counter to public expectations of keeping a human in the loop. As UAV systems become more intelligent yet less transparent, sustaining human oversight and fostering public trust become even more difficult.
To reconcile the benefits of ML with the safety and trustworthiness required in aviation, the concepts of transparency, explainability, and interpretability have become paramount. Although sometimes used interchangeably, these terms describe distinct properties:
  • Transparency in ML-based systems can be achieved by providing open and accessible information about the model—its architecture, training data, and assumptions. Alternatively, ML can be used as an optimization layer atop transparent rule-based algorithms. An example of this hybrid strategy is presented in [7], where a reinforcement learning agent is combined with a rule-based controller.
  • Explainability focuses on understanding how trained models, often neural networks, reach their decisions. Post-hoc explanation frameworks such as SHAP [8], LIME [9], and Deep SHAP [10] are commonly applied to provide interpretable insights into complex models.
  • Interpretability refers to the degree to which an Artificial Intelligence (AI) system’s outputs can be directly comprehended and logically assessed by a human observer. While not clearly defined, it generally emphasizes simplicity and clarity. For instance, Q-learning [11] can be considered interpretable due to its straightforward policy representation.
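To illustrate this last point, the sketch below shows the tabular Q-learning update: the learned policy is an explicit table whose entries a human can inspect row by row. The toy state and action counts and the learning parameters are assumptions of this illustration, not values from any cited system.

```python
import numpy as np

n_states, n_actions = 16, 4          # assumed toy discretization of an encounter
alpha, gamma = 0.1, 0.95             # learning rate and discount factor
Q = np.zeros((n_states, n_actions))  # the whole policy is this readable table

def q_update(s, a, r, s_next):
    """One Bellman update: move Q(s, a) toward the temporal-difference target."""
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

# After training, the greedy policy can be read directly from the table:
# policy = Q.argmax(axis=1)
```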
Discussions are still ongoing on how these requirements should be formalized for aviation. The European Union Aviation Safety Agency (EASA), in its recent report [12], defines a roadmap for AI integration in aviation. The corresponding concept paper [13] outlines preliminary requirements for the integration of AI-based systems into the airspace. Concepts such as human-in-the-loop operation and explainability are presented as foundational principles for certification and acceptance.

1.2. Related Work

Table 1 provides a comparative overview of related work found in the literature. As can be seen, prior works have focused primarily on cooperative and centralized traffic management [14], or have examined explainability without considering it through the lens of airborne autonomy [15,16]. Rahman et al. [17] focus on ML techniques for UAV detection and classification, while Lu et al. [18] give insights into noncooperative CA techniques for micro aerial vehicles. Bello et al. [19] address the topic of certification of AI-based autonomous systems in aviation; however, CD&R and CA systems are not their focus, and CD is referenced only as an example. This paper presents a survey which addresses the limitations of prior surveys and places attention on CD&R and CA methods for both cooperative and noncooperative traffic, with a particular focus on detection and avoidance under a decentralized automation regime. It analyzes both classical and ML-based approaches developed for this purpose. Finally, for ML-based approaches, it discusses how requirements for trust, transparency, and assurance evolve with the degree of human oversight and autonomous decision-making.
The remainder of this survey focuses on the sensor and algorithmic components for CD&R and CA. First, classical algorithmic methods are presented, highlighting their strengths and limitations. Next, ML–based techniques are examined, with particular emphasis on the challenges of ensuring safe and trustworthy integration in this safety-critical context.

2. Free Flight and Autonomy

Free Flight is a concept originally introduced in Air Traffic Management (ATM) as a paradigm shift from centralized control to decentralized decision-making. Its goal is to provide aircraft with greater flexibility in route selection and altitude changes while maintaining safety and efficiency. Unlike traditional systems that rely heavily on ground-based controllers, Free Flight empowers pilots—or onboard automation—to optimize trajectories based on real-time conditions such as weather, traffic, and fuel efficiency. Advocates of this approach highlight its potential to reduce delays as well as emissions, and improve overall system performance in response to anticipated growth in air traffic demand [20].
Similarly, in the Advanced Air Mobility (AAM) scenarios on which this paper focuses — where manned and unmanned aerial vehicles share low-altitude airspace — the number of operators is expected to increase dramatically, far exceeding that of conventional aviation [21]. This trend has sparked interest in decentralized strategies that shift trajectory optimization and separation assurance to pilots or autonomous systems. As noted in [22], proposed structured solutions such as air corridors can reduce traffic complexity, allowing centralized strategies, but may also introduce bottlenecks, limiting scalability.
The assignment of flight optimization, separation assurance, and CA tasks to pilots and autonomous systems makes CD&R and CA key enablers for safety in environments where centralized control is minimized.
Notable examples of this technology include DAIDALUS [23] and ACAS Xu [24]. DAIDALUS, developed by NASA, serves as a reference implementation for RTCA DO-365 [25] compliance, providing alerting logic and maneuver guidance to remain well clear. ACAS Xu, part of the ACAS X family, integrates both CD&R and CA functions through probabilistic decision-making models, offering a certifiable solution for UAS in shared airspace.
The following chapter breaks down CD, CR, and CA into their core sub-functions and analyzes algorithmic solutions for each, including an overview of the latest ML-based approaches.

3. Conflict Detection, Resolution, and Collision Avoidance: Classical and AI-Based Approaches

This chapter examines the algorithmic pipeline that supports decentralized, free flight autonomy for UAVs, encompassing processes from initial sensing to last-resort avoidance maneuvers. Within this framework, the system integrates sensing, reasoning, and avoidance functions to enable autonomous detection, assessment, and mitigation of collision risks. The process begins with cooperative and noncooperative sensing to detect various types of hazards, such as traffic, terrain, or weather. It then proceeds through reasoning and alerting for intruder identification and threat assessment, and culminates in avoidance maneuvers that guide the UAV to execute a computed evasive action.
Building on established taxonomies [26], and as illustrated in Figure 1, these processes are grouped into three main functional blocks: detection, reasoning and alerting, and avoidance. The detection block corresponds to the sensing or detect function, while the reasoning and alerting block encompasses the sub-functions Track, Evaluate, Prioritize, and Declare. Finally, the avoidance block incorporates the remaining sub-functions responsible for determining and executing the appropriate evasive maneuver.

3.1. Sensing

Within the following subsections, various sensors and techniques are analyzed and evaluated with respect to their suitability for detecting cooperative and noncooperative intruders. The discussion begins with a description of the sensor types considered in this work, followed by a presentation of classical and ML methods for object detection.

3.1.1. Sensor Types

The suitability of specific sensors varies according to whether the detection task involves a cooperative or noncooperative intruder. In contrast to noncooperative intruders, cooperative intruders are aerial objects that actively participate in their own detection in accordance with current aviation standards. This information is supplied through dedicated transponder signals, such as Automatic Dependent Surveillance–Broadcast (ADS-B). This technology relies on GNSS-derived data to broadcast information such as position, ground speed, track angle, vertical rate, and timestamp over an RF channel. Such systems allow air vehicles to receive continuous and precise traffic information about surrounding aircraft. However, current ADS-B implementations typically lack encryption and authentication, making them vulnerable to intentional interference and intrusion, such as spoofing, jamming, or message injection. Furthermore, in scenarios where manned and unmanned aircraft share low-altitude airspace, potential intruders may not be equipped with transponders. To address these limitations and extend detection capabilities to noncooperative intruders, alternative sensing technologies must be integrated to provide robust detection of uncooperative conflicts.
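To make the cooperative case concrete, the sketch below models the ADS-B state-vector fields listed above and applies a crude kinematic plausibility gate of the kind used to flag implausible (potentially spoofed) reports. Field names, units, and the speed bound are assumptions of this illustration, not part of the ADS-B standard.

```python
from dataclasses import dataclass

@dataclass
class AdsbStateVector:
    # Fields mirror the broadcast quantities named above; names/units are assumed.
    icao_address: str        # 24-bit airframe identifier (hex string)
    lat_deg: float
    lon_deg: float
    ground_speed_mps: float
    track_angle_deg: float
    vertical_rate_mps: float
    timestamp_s: float

def plausible(msg: AdsbStateVector, max_speed_mps: float = 350.0) -> bool:
    """Crude kinematic sanity check; real validators also model the flight path."""
    return 0.0 <= msg.ground_speed_mps <= max_speed_mps
```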
Radar is a widely used active sensing technology for intruder detection due to its capability to operate in various weather and lighting conditions. Radars work by transmitting electromagnetic pulses and analyzing the time delay, Doppler shift, and amplitude of the reflected signals. Range resolution is determined by the transmitted pulse bandwidth, while angular resolution depends on the antenna aperture and beamforming method. While radar offers detection ranges of up to several kilometers and simultaneous multi-target tracking, its performance can be degraded by clutter, precipitation, or limited scanning coverage, which imposes trade-offs between Field of View (FoV) and update rate.
Thermal sensors detect emitted radiation in the long-wave infrared (IR) spectrum, providing low-light and night-time detection capability. Compared to visual cameras, thermal sensors have a lower angular resolution, and their effectiveness depends strongly on temperature contrast and atmospheric absorption. On the other hand, their comparatively low data rates reduce onboard computational load.
Vision sensors are passive systems that extract information from visible light to perform object detection, classification, and tracking. They offer high spatial and angular resolution, which depends on sensor size and lens optics; wider FoVs reduce pixel density and effective operating range. High-resolution images produce substantial data rates that require efficient onboard processing. Vision systems are highly sensitive to environmental factors, such as illumination variations, glare, shadows, rain, or fog, which can reduce detection accuracy or cause false alarms.
LiDAR sensors emit laser light to illuminate their surroundings and analyze the reflected pulses to measure distances. This enables 3D localization of objects and operation in low-light conditions. Moreover, LiDAR offers a wide FoV with up to 360° coverage. Nevertheless, this sensor type produces large amounts of data and remains expensive, despite the growing availability of more affordable models. In addition, smaller sensors are constrained by limited range, and their performance can degrade under adverse weather conditions.
Each sensor type exhibits the inherent limitations described in the previous paragraphs and summarized in Figure 2, which has resulted in relatively limited research on single-sensor approaches in this field. For instance, Aldao et al. [27] investigate a LiDAR-based system, while Corucci et al. [28] present a radar-based solution. However, no single technology can currently provide accurate, continuous, and robust information on all airspace participants. This motivates the use of multi-sensor systems, in which complementary sensing technologies are combined to enable more reliable detection of uncooperative intruders. Even cooperative surveillance technologies such as ADS-B can be improved through integration with complementary sensors to achieve enhanced intrusion detection and increased robustness.
Much research can be found on intruder detection systems composed of multiple sensors. The most common combination of sensor technologies is optical, IR, and radar, as in the system proposed by Fasano et al. [29,30], a Detect and Avoid (DAA) system composed of pulsed Ka-band radars, optical cameras, and IR cameras. Similarly, in [31], Salazar et al. proposed a DAA system for a fixed-wing UAV composed of a Laser Radar (LADAR), a Millimeter Wave (MMW) radar, optical cameras, and IR cameras. Another sensor technology gaining interest in the scientific community for the development of DAA systems is LiDAR, whose integration in a DAA system together with radar sensors is discussed by de Haag et al. in [32]. Similarly, in [33] a DAA system is developed using a LiDAR sensor in combination with a stereo camera.
For the detection of intruders in the airspace, either a collaborative approach between airborne and ground-based sensors can be implemented, or the system can rely solely on ground-based sensors. Coraluppi et al. [34,35] described a detection system composed of diverse airborne and ground-based sensors. In [36], the development of a DAA system composed of diverse ground-based sensors is described. Table 2 shows a summary of the sensor technologies surveyed.

3.1.2. Classical Approaches for Detection

Based on these sensor characteristics and operating principles, numerous algorithms have been proposed in the literature for detecting intruders in the airspace using sensor data. Each algorithm is tailored to a specific sensing modality and exploits the particular characteristics and functioning of the sensor for which it is designed.
For ADS-B, research focuses on data validation—flagging intrusions and anomalies without modifying the ADS-B protocol itself to avoid costly changes. Early work by Kacem et al. [37] combined lightweight cryptography with flight-path modeling to verify message authenticity and plausibility with negligible overhead. Leonardi et al. [38] instead used RF fingerprinting to extract transmitter-specific features from ADS-B signals, distinguishing legitimate from spoofed messages (reporting detection rates up to 85% with low-cost receivers). Ray et al. [39] propose a cosine-similarity method to detect replay attacks in large SDR datasets, successfully identifying single, swarm, and staggered scenarios.
Radar target detection typically relies on Constant False Alarm Rate (CFAR) processing [40]. CFAR adaptively sets thresholds—often via a sliding-window estimate—to maintain a specified false-alarm probability; many practical systems use CA-, OS-, GO-CFAR and related variants. Recent work refines CFAR for real-time, cluttered settings. Sim et al. [41], for instance, present an FPGA-optimized CFAR for airborne radars, sustaining high detection performance under load. Complementarily, Safa et al. [42] introduce a low-complexity nonlinear detector (kernel-inspired, correlation-based) that replaces the statistical modeling step in classical CFAR, outperforming OS-CFAR for indoor drone obstacle avoidance where dense multipath and clutter degrade CFAR. Beyond CFAR, Doppler and micro-Doppler methods exploit target motion for detection [43]. Regardless of the specific detector, low–slow–small (LSS) UAVs remain challenging to detect with a radar sensor: classical CFAR schemes struggle to reliably detect targets with low Radar Cross Section (RCS), while Doppler-based methods have difficulties with slow-moving objects. To address these algorithmic limitations, Shao et al. [44] reformulate and retune a classical CFAR-based processing chain to improve LSS detection in complex outdoor environments.
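As an illustration of the sliding-window thresholding described above, the following is a minimal one-dimensional cell-averaging CFAR sketch; the window sizes and design false-alarm probability are assumed toy values, and the threshold multiplier uses the textbook square-law (exponential noise) result.

```python
import numpy as np

def ca_cfar(power, n_train=16, n_guard=2, pfa=1e-4):
    """1-D cell-averaging CFAR over a range profile of linear power values."""
    n_cells = 2 * n_train                       # total training cells per test cell
    # Threshold multiplier keeping the false-alarm rate near pfa (square-law detector).
    alpha = n_cells * (pfa ** (-1.0 / n_cells) - 1.0)
    k = n_train + n_guard
    detections = np.zeros(len(power), dtype=bool)
    for i in range(k, len(power) - k):
        lead = power[i - k : i - n_guard]       # training cells before the guard cells
        trail = power[i + n_guard + 1 : i + k + 1]
        noise = (lead.sum() + trail.sum()) / n_cells
        detections[i] = power[i] > alpha * noise
    return detections
```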
For thermal sensors, small-target detection is often based on simple intensity thresholding. However, this becomes challenging in low-resolution imagery, where targets occupy only a few pixels and their contrast against clutter is low. To address this, Jakubowicz et al. [45] propose a statistical framework for detecting aircraft in very low-resolution IR images (32 × 32) that combines sensitivity analysis of simulated IR signatures, quasi–Monte Carlo sampling of uncertain conditions, and detection tests based on level sets and total variation. Experiments on 90,000 simulated images show that these level-set–based statistics significantly outperform classical mean- and max-intensity detectors, particularly under realistic cloudy-sky backgrounds modeled as fractional Brownian noise. Complementary to this, Qi et al. [46] formulate IR small-target detection as a saliency problem and exploit the fact that point-like targets appear as isotropic Gaussian-like spots whereas background clutter is locally oriented. They use a second-order directional derivative filter to build directional channels, apply phase-spectrum–based saliency detection, and fuse the resulting maps into a high–signal-to-clutter “target-saliency” map from which targets are extracted by a simple threshold. This approach achieves higher SCR gain and better ROC performance than several classical filters on real IR imagery with complex backgrounds.
For visual sensors, target detection is usually performed on image sequences from which appearance cues (e.g., shape, texture, apparent size) are extracted to localize obstacles and support trajectory prediction, thereby extending situational awareness. High target speed, agile maneuvers, cluttered backgrounds, and changing illumination remain key challenges, especially for reliably distinguishing cooperative from noncooperative aircraft at useful ranges. Optical flow is a classical approach for vision-based CA; Chao et al. [47] compared motion models that use flow for UAV navigation. However, standard optical-flow methods are insensitive to objects approaching head-on, as such motion induces little lateral displacement in the image. Mori et al. [48] mitigate this by combining SURF feature matching and template matching across frames to track relative size changes, enabling distance estimation to frontal obstacles. Mejías et al. [49] proposed a classical vision-based sense-and-avoid pipeline that combines morphological spatial filtering with a Hidden Markov Model (HMM) temporal filter to detect and track small, low-contrast aircraft above the horizon, estimating the target's bearing and elevation as inputs to a CA control strategy. This work was extended by Molloy et al. [50] to the more challenging below-horizon case by adding image registration and gradient subtraction, while retaining HMM-based temporal filtering to robustly detect intruding aircraft amid structured ground clutter. Another noteworthy study is presented by Dolph et al. [51], where several classical computer-vision pipelines for intruder detection—including SURF feature matching, optical-flow tracking, FAST-based frame differencing, and Gaussian-mixture background modeling—are systematically evaluated, providing insight into their practical performance and limitations for long-range visual DAA.
LiDAR sensors deliver accurate distance measurements and 3D data that help differentiate small, fast-moving objects such as drones from other aerial targets through their motion and size patterns, obtained by analyzing and interpreting the point cloud data. Classical approaches therefore focus on point cloud clustering techniques to detect objects of interest. Aldao et al. [52] used a Second Order Cone Program (SOCP) to detect intruders and estimate their motion; based on this information, avoidance trajectories are computed in real time. Dewan et al. [53] used RANSAC [54] to estimate motion cues, combined with a Bayesian approach to detect dynamic objects. Their approach effectively addresses the challenges posed by partial observations and occlusions. Lu et al. [55] used density-based spatial clustering of applications with noise (DBSCAN) as a first clustering step, followed by an additional geometric segmentation method for dynamic objects using an adaptive covariance Kalman filter. Their learning-free technique enables real-time tracking and CA onboard. While DBSCAN works well for uniform point cloud density, it shows weaknesses when segmenting obstacles with low point density. Zheng et al. [56] try to overcome this limitation with a new clustering algorithm based on relative distance and density (CBRDD).
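A minimal sketch of the clustering step described above, using scikit-learn's DBSCAN to group raw LiDAR returns into candidate objects; the eps and min_samples values are assumptions that would have to be tuned to the sensor's point density.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_point_cloud(points_xyz: np.ndarray, eps: float = 0.8, min_samples: int = 5):
    """Group an (N, 3) LiDAR point cloud into candidate objects; label -1 marks noise."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xyz)
    return [points_xyz[labels == k] for k in set(labels) if k != -1]
```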
Classical techniques, in contrast to ML approaches, require no training and thus do not depend on the extensive datasets needed for ML models. Nonetheless, across all the sensor types discussed, purely classical (non-AI) detection pipelines are increasingly being replaced by learning-based methods in modern systems, because ML approaches overcome many limitations that classical techniques cannot, as examined in detail in the subsequent section.

3.1.3. Machine Learning Approaches for Detection

AI-based methods in this field have advanced rapidly in recent years. Deep learning approaches, particularly Convolutional Neural Networks (CNNs), consistently outperform traditional techniques by handling complex scenarios, diverse object appearances, and dynamic environments where classical algorithms often struggle.
Real-time performance is equally critical, and many approaches rely on YOLO-family detectors [57] for their strong accuracy–speed balance. Beyond CNN-based architectures, transformer-based models have also gained prominence. DETR [58] introduced an anchor-free, end-to-end paradigm that models images as sequences of patches, integrates global context, and directly predicts bounding boxes and categories — eliminating anchor boxes and non-maximum suppression (NMS). Although later variants improve efficiency, they still fall short of real-time requirements for UAV operations. RF-DETR [59] addresses this limitation through a hybrid encoder and IoU-aware queries, establishing the first real-time end-to-end detector. CNNs offer efficiency and maturity, transformers provide enhanced global context, and emerging foundation models leverage the strengths of both: they learn generalized, transferable representations, support multiple tasks, and can leverage massive unlabeled or weakly labeled datasets.
The number of available datasets for both single and multiple sensor configurations in the context of aerial object detection and tracking has grown in parallel with these methodological advancements, reflecting the increasing interest in AI-driven detection of aerial objects. In this regard, not only the volume of data but also its quality, representativeness, and fidelity to real-world conditions are critical, as they directly influence model performance, generalization, and robustness. To enable reliable pattern recognition and decision-making, datasets must therefore provide both high-quality samples and sufficient variability while minimizing inherent biases. Table 3 shows a collection of available open source datasets recorded from different sensors to support ML-based aerial object detection from moving as well as stationary sensor setups.
The Airborne Object Tracking Dataset [60], published in 2021, contains nearly 5k high-resolution grayscale flight sequences, resulting in over 5.9M images with more than 3.3M annotated airborne objects, and is one of the largest publicly available datasets in this area. Vrba et al. [61] created a dataset for UAV detection and segmentation in point clouds. It consists of 5,455 scans from two LiDAR types and contains three UAV types. To compensate for the weaknesses of specific sensor types, multi-modal approaches are increasingly being developed, along with their corresponding datasets. Yuan et al. [62] published a multi-modal dataset containing 3D LiDAR, mmWave radar, and audio data. Patrikar et al. [63] published a dataset combining visual data with speech and ADS-B trajectory data, while Svanström et al. [64] collected 90 audio clips, 365 IR videos, and 285 RGB videos. However, acquiring real-world recordings to generate datasets presents significant challenges due to the high time and cost requirements, as well as the difficulty of covering certain scenarios (e.g., collisions with specific objects like birds). To address these limitations, AI-supported data annotation and synthetically generated datasets serve as a valuable alternative. While the first technique decreases time and manual annotation effort, the latter enables the generation and validation of initial hypotheses using the developed methodologies in the absence of real data. UAVDB [65] combined bounding box annotations with predictions of the foundation model SAM2 to generate high-quality masks for instance segmentation. Lenhard et al. [66] published SynDroneVision, an RGB-based drone detection dataset for surveillance applications containing diverse backgrounds, lighting conditions, and drone models. In contrast to the previous approaches, Aldao et al. [67] developed a LiDAR simulator to generate realistic point clouds of UAVs. After reviewing the current data landscape and briefly examining techniques to address missing data, the discussion now turns to machine learning applications that require such datasets to train and fine-tune AI models. In the following paragraphs, AI approaches are discussed for each sensor.
For ADS-B, research focuses on different AI-supported prediction approaches, which are useful for identifying abnormal flight behavior or other safety-critical anomalies. Shafienya et al. [68] developed a CNN model with a Gated Recurrent Unit (GRU) deep model structure to predict 4D flight trajectories for long-term flight trajectory planning: the CNN part is used for spatial feature extraction, while the GRU extracts temporal features. The TTSAD model [69] focuses on detecting anomalies in ADS-B data by first predicting temporal correlations in ADS-B signals and then applying a reconstruction module to capture contextual dependencies; the reconstruction differences are finally evaluated to determine anomalies. Ahmed et al. [70] introduced a deep learning architecture using TabNet, NODE, and DeepGBM models to classify ADS-B messages and detect attack types, achieving up to 98% accuracy in identifying anomalies. Similarly, Ngamboé et al. [71] developed an xLSTM-based intrusion detection system that outperforms transformer-based models for detecting subtle attacks, with an F1-score of 98.9%.
With radar sensors, the velocity and range of airborne targets can be derived. Zhao et al. [72] focused on target and clutter classification by applying a new GAN-CNN-based detection method. By combining GAN and CNN architectures, they are able to locate the target in the multidimensional space of range, velocity, and angle-of-arrival. Wang et al. [73] presented a CNN-based method for the detection of UAVs in the airspace with a Pulse-Doppler radar sensor. The method consists of a CNN with two heads: a classifier to identify targets and a regressor that estimates the offset from the patch center. Their outputs are then processed by an NMS module that combines probability, density, and voting cues to suppress and control false alarms. Tests with simulated and real data showed that the proposed method outperformed the classical CFAR algorithm. Tm et al. [74] propose a CNN architecture to perform single-shot target detection from range-Doppler data of an airborne radar.
Thermal sensors pose challenges for AI detection methods because thermal noise, temperature fluctuations, and cluttered environments degrade signal clarity and consistency. To overcome these challenges, GM-DETR [75] provides a fine-grained context-aware fusion module to enhance semantic and texture features for IR detection of small UAV swarms, combined with a long-term memory mechanism to further improve robustness. Gutierrez et al. [76] compared popular detector architectures (YOLOv9, GELAN, DETR, and ViTDet) and showed that CNN-based detectors stand out for real-time detection speed, while transformer-based models provide higher accuracy in varying and complex conditions.
For visual sensors, AI models are used for detection, supported by additional filtering and refinement processes. For instance, [77,78] used AI-based visual object detection. Arsenos et al. [79] used an adapted YOLOv5 model; their approach allows UAVs to be detected at distances of up to 145 m. Yu et al. [80] used YOLOv8 combined with an additional slicing approach to further increase the detection accuracy for tiny objects. Approaches like [81] employed CenterTrack, a tracking-by-detection method that yields a joint detector-tracker by representing objects as center points and modeling only the inter-frame offsets. Karampinis et al. [82] took the detection pipeline from [79], formulated the task as an image-to-image translation problem, and employed a lightweight encoder–decoder network for depth estimation. Despite the current limitations of foundation models regarding processing speed, such models facilitate multi-task annotation generation while requiring minimal supervision, as described in [83].
For LiDAR sensors, ML models learn geometric properties of point clouds to identify and detect objects of interest. Key challenges include large differences in point cloud density and accuracy among LiDAR sensors, as well as motion distortion and real-time processing requirements. Xiao et al. [84] developed an approach based on two LiDAR sensors: the LiDAR 360 provides 360° coverage, while the Livox Avia provides focused 3D point cloud data for each timestamp. Objects of interest are identified using a clustering-based learning detection approach (CL-Det); afterwards, DBSCAN is used to further cluster the detected objects. By combining both sensors, they demonstrate the potential for real-time, precise UAV tracking in sparse data conditions. Zhang et al. [85] presented DeFlow, which employs GRU refinement to transition from voxel-based to point-based features, together with a novel loss function that compensates for the imbalance between static and dynamic points.
The field of AI-based object detection and tracking has made continuous progress in tackling challenges such as reliable tracking of very small aerial objects with unpredictable flight patterns. Key research directions include achieving real-time performance, uncertainty modeling and prediction, and effectively leveraging appearance information for robust tracking. Table 4 provides a summary of the classical and ML approaches for aerial object detection examined in this survey.

3.2. Reasoning and Alerting

The information provided by these different sensor systems must be processed to acquire a unified description of the environment around the own vehicle. The sensor detections must be analyzed to filter out possible false positives, and the tracks must be constantly and correctly updated to produce accurate and robust estimates, which are necessary to identify collision risks and to determine their level of threat.

3.2.1. Classical Approaches for Intruder Tracking

Tracking intruders in airspace using multiple sensor measurements can be formulated as a classical multi-target tracking problem. This problem is typically decomposed into two main components: observation-to-track association, and state filtering and prediction.
Observation-to-track association is a non-trivial problem in multi-target tracking, especially in the presence of false detections within the gates of the tracks, which can lead to misassociations. This phenomenon, as has been known for many years [88], can lead to degradation of the tracks and, ultimately, to loss of tracking.
Different data association algorithms can be used for airborne sensor detections. The simplest is the Nearest Neighbour (NN) method, in which the detection closest to the track is selected for association. This method may not be optimal and may lead to the association of a single detection with multiple tracks. For this reason, the Global Nearest Neighbour (GNN) method is more commonly used: all the detections within the gate and all the tracks are considered jointly to determine the association. In these two methods, the likelihood is based only on the distance between the tracks and the detections. In the presence of false positives, a more detailed probabilistic relationship between the tracks and the detections must be defined, which can consider several factors, among which the likelihood of the detection being a false positive. These methods fall under the definition of Probabilistic Data Association (PDA). Among this family of methods, an effective algorithm is Joint Probabilistic Data Association (JPDA), where not only the probability of the association hypothesis is considered, but also the probabilities of the other measurements belonging to other tracks. One of the most robust methods is Multiple Hypothesis Tracking (MHT), first introduced by Reid in [89]. Its strength resides in its deferred logic: newly received measurements are not immediately associated with each other or with the tracks; instead, several association hypotheses are formed, and only after a predetermined number of measurements have been collected are these hypotheses evaluated. Blackman in [90] discusses the application of MHT to the data-to-track association problem.
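For illustration, GNN association can be posed as an assignment problem over a gated cost matrix, as in the minimal sketch below; the Euclidean cost and the fixed gate are simplifying assumptions (practical trackers use statistically normalized distances and chi-square gates).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def gnn_associate(track_positions, detections, gate=50.0):
    """Jointly optimal track-to-detection pairing (Global Nearest Neighbour)."""
    cost = cdist(track_positions, detections)   # pairwise distances
    cost[cost > gate] = 1e6                     # forbid out-of-gate associations
    rows, cols = linear_sum_assignment(cost)    # Hungarian-style assignment
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < 1e6]
```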
Concerning filtering and prediction, one of the best-known and most widely used algorithms is the Extended Kalman Filter (EKF). Another method is the Interacting Multiple Model (IMM) which, as described in [86], consists of several Kalman filters working in parallel, each matched to a different target model. Another effective and robust algorithm is particle filtering, also called the sequential importance sampling algorithm. It is a Monte Carlo method that sequentially uses incoming measurements to maintain a set of particles distributed across the surveyed state space [91]. Each particle consists of a state and an associated weight, and is interpreted as a state hypothesis. With a high number of particles and sum-normalized weights, the ensemble can be interpreted as a state-discrete approximation of the posterior probability density function of the true origin of the target causing the received detections.
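The particle set described above can be illustrated with a minimal bootstrap particle-filter step for a 2D constant-velocity target; the motion and measurement noise levels and the resampling trigger are assumed toy values.

```python
import numpy as np

def pf_step(particles, weights, z, dt=0.1, q=0.5, r=2.0, rng=np.random):
    """One predict-update-resample cycle; particles is (N, 4) of [x, y, vx, vy]."""
    particles[:, :2] += particles[:, 2:] * dt                  # constant-velocity motion
    particles += rng.normal(0.0, q, particles.shape)           # process noise
    d2 = ((particles[:, :2] - z) ** 2).sum(axis=1)             # distance to measurement z
    weights *= np.exp(-0.5 * d2 / r**2)                        # Gaussian likelihood
    weights /= weights.sum()                                   # sum-normalized weights
    if 1.0 / (weights**2).sum() < 0.5 * len(weights):          # effective sample size low
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```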
Fasano et al. [29,30] use ellipsoidal gating for track association, followed by an EKF. The system developed by Salazar et al. [31] uses the track-to-track fusion algorithm [92,93] to fuse the incoming data; as explained in the paper, this method combines estimates rather than measurements and requires a Kalman filter for every sensor source. Coraluppi et al. [34,35] make use of the MHT algorithm for data association, followed by an EKF. In [86], Torelli et al. propose a joint approach combining the MHT and IMM algorithms, which they call IM3HT. Cornic et al. [87] propose the use of the Mahalanobis distance [94] for evaluating the association of detections to tracks, in conjunction with the IMM algorithm for the fusion of radar and visual data.
Table 5 shows a summary of the surveyed intruder tracking methods.

3.2.2. Classical Approaches for Alerting

The process of track update and filtering of false positives is followed by threat identification and alerting. This process is based on the concepts of “Well Clear” (WC), the “Remain Well Clear” (RWC) volume, and the RWC function. As explained in [95], “Well Clear” can be defined as a state of the aircraft whose loss may trigger the application of the right-of-way rules, while the RWC volume is defined as a separation minimum whose violation determines a conflict. Finally, the purpose of an RWC function is to ensure that the RWC volume is never violated. A distinction must be made at this point between the RWC function and CA: the former requires smoother maneuvers to avoid the loss of Well Clear, while the latter is a last-resort maneuver to avoid collision.
ICAO defined these volumes by taking inspiration from the separation volumes of the Traffic CA System (TCAS) equipped on manned aircraft and described in [96]. The dimensions of these volumes, and the metric by which a loss of Well Clear can be foreseen, were however not specified and have been an active field of research in recent years. As explained in [97], one of the concerns was interoperability with TCAS: the RWC volumes need to be large enough to avoid triggering a Resolution Advisory (RA) from the TCAS.
The metric for the detection of loss of separation used in TCAS is based on the concept of tau and on its variant, the modified tau. As explained in [98], tau is defined as follows:
τ = −r / ṙ        (1)
In (1), r is the range between the two aircraft and ṙ is the range rate, i.e., the variation of the distance between the two aircraft over time. For low range rates this parameter becomes inadequate to describe the conflict and ensure safe separation. For this reason, a modified version of this parameter, the modified tau, has been defined as follows:
τ_mod = −(r² − DMOD²) / (r · ṙ)        (2)
In (2), DMOD is a range threshold which depends on the altitude. Vaidya et al. [98] analyze, by means of simulations based on particle kinematics, τ and τ_mod for different encounters, showing that the parameter in (2) provides larger separation at the RA. During flight, the TCAS system considers the distance between the vehicles and compares the τ_mod parameter with a threshold value that depends on the altitude. Citing the necessity of ensuring interoperability between UAV CD&R and CA systems and TCAS, the study in [97] also highlighted the need for a complementary algorithm for the detection of conflict geometries which can cause an RA from TCAS II. This complementary algorithm was implemented based on the τ_mod parameter and is described in [99]. The presented method was then generalized in [100] by providing an algorithm for the identification of loss of Well Clear which can be used with different horizontal time variables. Besides the aforementioned τ and τ_mod parameters, the authors also considered the time to closest point of approach, t_cpa, i.e., the time in seconds to the minimum distance between the two aerial vehicles, and the time to entry point, t_ep, i.e., the time at which the two vehicles will no longer be separated, assuming the DMOD threshold and straight-line vehicle trajectories. The authors in [100] also analyzed and compared the performance of this algorithm with these four parameters and showed that the algorithm is more conservative when t_ep is used as the time variable. These works were the mathematical foundation of the DAIDALUS system, which was first introduced in [23]. The DAIDALUS system was then further developed in other works, for instance in [101], which describes the implementation of two new features, namely dynamic well-clear volumes and sensor uncertainty mitigation. In another noteworthy work concerning the interoperability between UAV CD&R and CA systems and TCAS, Thipphavong et al. [102] described the tests made to define a collision volume in which the vertical guidance of the UAV is restricted in order to avoid the issuance of an RA by the TCAS.
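For concreteness, the sketch below evaluates (1) and (2) under the sign convention above (range rate negative while closing); the numerical values and the alert threshold are illustrative assumptions, not actual TCAS or DAIDALUS parameters.

```python
def tau(r, r_dot):
    """Eq. (1): classical tau, meaningful while the aircraft are closing (r_dot < 0)."""
    return -r / r_dot

def tau_mod(r, r_dot, dmod):
    """Eq. (2): modified tau with altitude-dependent range threshold DMOD."""
    return -(r**2 - dmod**2) / (r * r_dot)

# Assumed encounter: range 2.0 NM, closing at 150 kt, DMOD = 0.55 NM.
t = tau_mod(r=2.0, r_dot=-150.0 / 3600.0, dmod=0.55)   # about 44 s
alert = t < 35.0                                        # assumed alerting threshold
```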
Another common method for assessing whether the ownship remains well clear is the Solution Space Diagram (SSD). The method was first applied in Air Traffic Management and was developed to decrease the workload of air traffic controllers, as it is more readable and thus more easily interpretable than other methods. The first implementation of the method can be found in [103]; it was then further developed in other works such as [104,105]. The determination of conflict with the SSD is based on geometric considerations. Considering the own aircraft and the intruder, the method considers the Protected Zone (PZ) of the intruder, represented as a circle of radius 5 NM centered on the intruder's position. From the own vehicle's position, the two tangents to the intruder's PZ are drawn to identify the Forbidden Zone (FBZ), i.e., the space delimited by the tangents to the PZ. Based on this construction, it is possible to determine whether there is a conflict between the two aircraft from the position of the relative velocity with respect to the FBZ. More specifically, a violation of the intruder's PZ, and consequently a conflict between the two aircraft, can be foreseen as long as the relative velocity between the aircraft lies in the FBZ.
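The geometric test underlying the SSD can be sketched as follows: the ownship is predicted to violate the intruder's PZ whenever the relative velocity lies inside the cone of tangents (the FBZ). This minimal 2D implementation, with positions and velocities as horizontal-plane vectors, is an illustration rather than a full SSD construction.

```python
import numpy as np

def in_forbidden_zone(p_own, v_own, p_int, v_int, pz_radius):
    """True if the relative velocity lies inside the collision cone of the intruder's PZ."""
    d = np.asarray(p_int, float) - np.asarray(p_own, float)   # ownship -> intruder
    v_rel = np.asarray(v_own, float) - np.asarray(v_int, float)
    dist = np.linalg.norm(d)
    if dist <= pz_radius:                                     # already inside the PZ
        return True
    cone_half_angle = np.arcsin(pz_radius / dist)             # half-angle of the FBZ cone
    cos_angle = np.dot(d, v_rel) / (dist * np.linalg.norm(v_rel) + 1e-12)
    angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))          # bearing of v_rel from d
    closing = np.dot(d, v_rel) > 0.0                          # moving toward the intruder
    return closing and angle < cone_half_angle
```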
Other alerting logics can also be defined for sensors which do not deliver 3D information, such as visual cameras. Despite this lack of information, these kinds of sensors can be useful for the identification of noncooperative intruders. As described in [78], it is possible to estimate whether a detected object is a threat by analyzing the Line-of-Sight (LOS) rate. In particular, following the principle of proportional navigation, and provided that the range rate is negative, i.e., that the two objects are getting closer, they can be considered on a collision course if the LOS rate approaches 0. This method, though, presents inherent difficulties. The first is the determination of an appropriate threshold for declaring the conflict. Secondly, given the impossibility of determining the range rate with visual cameras alone, the range rate must always be assumed to be negative. These algorithms therefore always consider the worst case, and thus alert more often than they should.
Figure 3 shows an explanatory diagram of the classification of alerting methods.

3.2.3. Machine Learning Approaches for Reasoning and Alerting

In recent years there has been much research on the use of ML for data fusion; [106,107] present surveys of ML methods which can be used for this purpose. CNNs, Support Vector Machines (SVMs), and k-means clustering have been identified as possible algorithms for this kind of problem. In [108], CNNs with self-attention are used to fuse data from binocular vision measurement, laser tracking, and depth camera systems. To the best of our knowledge, though, while these approaches exhibit substantial potential for addressing the problems of intruder tracking from multi-sensor data and threat identification, they have not yet been explicitly considered in the literature for this purpose. The most closely related work is by Skinner et al. [109], who studied the use of Bayesian networks for the intent classification of aerial objects based on raw data coming from radar, visual, and IR sensors. Another notable approach is attention networks: Long Short-Term Memory (LSTM) networks which take as input data about the intruders and output a fixed-length vector with information about the intruder to consider for avoidance. They were introduced in [110], and the CA agents described in [111,112] use them to decide which intruder must be considered for the avoidance maneuver.

3.3. Collision Avoidance

3.3.1. Classical Approaches

The step subsequent to threat identification is the decision on the evasive maneuver to perform to avoid a mid-air collision. As for alerting and threat identification, the CA algorithm must consider the response of other CA systems equipped on the intruder: the avoidance maneuver must be compatible or coordinated with the maneuvers that other systems, such as the TCAS, may suggest. Furthermore, the avoidance maneuver must account for possible errors in the estimation of the intruder's position. Finally, as noted in [126], the CA algorithm must be robust to modifications of the intruder's trajectory, which is the most challenging aspect in the design of a CA system. Different categories of CA algorithms are used for this purpose; the most common are rule-based, geometric, game-theoretic, probabilistic, and potential-field-based methods. This classification is shown in Figure 4.
Rule-based methods extrapolate sets of rules for deconfliction from the general flight rules, as in [113], or from the visual flight rules, as in [114]. Alharbi et al. [115] describe a rule-based deconfliction method organized in three stages, with rules for every stage. It must be noted, though, that this work tackles the deconfliction problem more from the point of view of Air Traffic Management than from the point of view of airborne CA systems. Rule-based CA is also one of the preferred methods for swarms, as for instance in [116], where CA between UAVs in a swarm is achieved using the Reynolds rules [127].
Through game theory, the conflict between two or more UAVs is modeled in most of the literature as a differential game [128]. In this formulation, a cost function models the state and the evolution of the game through time, and a subsequent optimization process solves the game and provides the avoidance maneuver. The modeling of the conflict is usually done through a pursuit-evasion game [129], a game between two players in which one (the pursuer) tries to catch the other (the evader), which in turn tries to evade it. Modifications or restrictions in the movement of one of the two players can be considered by using one of the many variations of this game. The approaches described in [117,118] use differential pursuit-evasion games to model the conflict; in [118], a variation of the game named the “Suicidal Pedestrian” is used in order to consider limitations in the movement of the evader while giving full freedom of movement to the pursuer. Contrary to the two approaches presented before, in [119] the conflict is modeled using a simultaneous pursuit-evasion game: the conflict is discretized into a series of simultaneous games which end after the two players decide which actions to perform.
Geometric methods are characterized by two stages: the analysis of the conflict in geometric terms, for instance with the so-called collision cone [130], and the subsequent computation of a resolution maneuver from that geometry. As in the previously described SSD method, the protected zone of the aircraft and the tangents to it define a cone in which the relative velocity vector must not lie in order to avoid the conflict. In [120], the collision cone is used to determine an “aiming point”, which is then used as a target for a guidance algorithm based on differential geometry [126]. In [121], the collision cone is used to compute a velocity change which is then achieved through proportional navigation. A cooperative geometric approach is described in [122]: the authors compute the miss distance between two UAVs from their positions and velocities, and subsequently elongate the miss distance vector to obtain two velocity commands, one for each UAV. Another well-known geometric conflict resolution method is the “Modified Voltage Potential”. This method was originally developed at the MIT Lincoln Laboratories [123], and it is based on the computation of the displacement of the ownship trajectory along the avoidance vector, i.e., the vector between the predicted ownship position and the edge of the intruder's protected zone. Hoekstra et al. [5] then used this algorithm to develop a concept logic for separation assurance in free flight.
Other noteworthy CA methods are probabilistic approaches, such as the one implemented in ACAS Xu. As described in [24], the encounter is modeled as a Markov Decision Process (MDP). An MDP models the interaction between a system and its environment in terms of state–action pairs, where the next state depends only on the current state and action (the Markov property). The MDP is solved offline using dynamic programming, and the resulting values (Q-values) are stored in a lookup table. During an encounter, the system uses the current state estimate to query this table and selects the action with the best entry. Human-in-the-loop evaluations reported in [131] demonstrated the effectiveness of the approach in reducing losses of well clear for both cooperative and noncooperative intruders, although the number of losses of well clear was higher in the noncooperative case. This was primarily attributed to the limited range of the radar sensor.
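A minimal sketch of this table-lookup policy is shown below. The real ACAS Xu table is produced offline by dynamic programming over millions of discretized states; here a tiny random table and a hypothetical state-indexing function stand in for it, and only a horizontal advisory set is shown.

```python
import numpy as np

ACTIONS = ["clear_of_conflict", "weak_left", "weak_right", "strong_left", "strong_right"]
rng = np.random.default_rng(0)
q_table = rng.normal(size=(1000, len(ACTIONS)))   # stand-in for the offline-computed table

def discretize(rel_range, rel_bearing, closing_speed):
    """Hypothetical state indexing; the real system bins several kinematic variables."""
    key = (round(rel_range, -2), round(rel_bearing, 1), round(closing_speed, -1))
    return hash(key) % len(q_table)

def advise(rel_range, rel_bearing, closing_speed):
    s = discretize(rel_range, rel_bearing, closing_speed)
    return ACTIONS[int(q_table[s].argmax())]       # pick the action with the best entry
```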
Finally, potential field methods model the CA problem by treating vehicles and obstacles as charged particles in an electrostatic field. By associating an attractive potential with the target waypoint and a repulsive potential with obstacles and intruders, it is possible to obtain an obstacle-free trajectory. These methods are capable of providing optimal trajectories but, as mentioned in [132], their practical implementation often suffers from the local-minima problem. Furthermore, problems may arise if the target waypoint is close to an obstacle, and the trajectory can be highly irregular at some points. The authors of [132] therefore propose an implementation of the Artificial Potential Field algorithm which addresses these issues while producing optimal trajectories. In [124], a CA system based on the Artificial Potential Field algorithm is presented: the authors integrate components of different Artificial Potential Field algorithms to achieve CA between two UAVs. In particular, the definition of the potential functions is based on the work presented in [125], while the prioritization logic is based on the work presented in [133]. Table 6 summarizes the surveyed literature.
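As a concluding illustration of the potential-field formulation, the sketch below combines a Khatib-style attractive pull toward the goal waypoint with a repulsive push from intruders inside an influence radius; all gains and radii are assumed toy values rather than parameters from the cited systems.

```python
import numpy as np

def apf_velocity(pos, goal, intruders, k_att=1.0, k_rep=5.0, d0=20.0):
    """Velocity command from attractive (goal) and repulsive (intruder) gradients."""
    force = k_att * (np.asarray(goal, float) - np.asarray(pos, float))  # attractive term
    for q in intruders:
        diff = np.asarray(pos, float) - np.asarray(q, float)
        d = np.linalg.norm(diff)
        if 0.0 < d < d0:                      # intruder inside the influence radius
            # Classical repulsive gradient, growing steeply as d shrinks.
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    return force                              # followed as a velocity command
```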

3.3.2. Machine Learning Approaches

In recent years, AI has gained increasing attention in the field of CA because of its potential robustness, particularly in complex, high-traffic airspace and even in the presence of unpredictable behavior by other airspace users.
In the context of CA, a large body of recent work relies on reinforcement learning to compute avoidance maneuvers. Keong et al. [134] trained an agent with the Deep Q-Network (DQN) [142] algorithm to resolve conflicts in two different scenarios. Zhao et al. [135] summarized physical information about the conflict, such as the aircraft positions, speeds, and heading angles, in an image describing the traffic around the own vehicle with the previously described SSD method. These images are fed to the agent, which includes CNN layers and is trained with the Proximal Policy Optimization (PPO) algorithm [143] to resolve the conflicts. The use of images representing the traffic with the SSD method also makes the approach human-understandable. Ribeiro et al. [136] trained an agent with Deep Deterministic Policy Gradient (DDPG) [144] to resolve conflicts in the airspace by outputting a heading deviation and a change of speed after receiving as input the relative bearing and distance of a fixed number of intruders. They pre-trained the critic network of the DDPG with values extrapolated using the Modified Voltage Potential (MVP) method, and then used these results to improve the training procedure and train an agent that optimizes the MVP method for a variable number of intruders; these results were published in [137]. Brittain et al. [111] trained an agent with the Discrete Soft Actor Critic (SACD) [145] and attention networks to achieve decentralized conflict resolution in urban air corridors in an AAM scenario, and in [112] presented a multi-agent reinforcement learning framework which employs PPO agents for distributed conflict resolution. Wang et al. [6] trained an agent to ensure self-separation in free flight with an algorithm called Safe-DQN. They developed this algorithm by modifying the formulation of the Bellman equation, the foundation of the classical DQN algorithm [142], expressing it as the sum of a goal Q-value and a safety Q-value. This algorithm ensures that the actions of the agent have an appropriate level of safety. Furthermore, they tested the performance of the agent under adversarial attack, i.e., an attack aimed at misleading the agent into making the wrong decision using faulty input. Finally, by visualizing the safety values and the Q-values, they argued that the actions of the agent can be understood by humans. Pham et al. [138] used reinforcement learning to find the best conflict resolution maneuver, defined by the time at which to change heading and the point of return to the original trajectory. Their strategy is composed of two stages: first, a vector of possible times to heading change is generated; then, for each of these times, a DDPG agent chooses the best point of return. This results in a set of candidate avoidance maneuvers, from which the best one is selected with a DQN-like strategy. Tran et al. [139] trained an agent with the DDPG method to resolve conflicts in the airspace by imitating the conflict resolution advisories of an Air Traffic Control Operator (ATCO). This was made possible by defining a reward which depends on the similarity between the agent's resolution and the operator's.
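Most of the value-based agents above build on the same core DQN update, sketched generically below (this is the textbook loss, not any specific paper's implementation): the online network is regressed toward a bootstrapped Bellman target computed with a frozen target network.

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    """One DQN regression loss over a replay-buffer batch (s, a, r, s', done)."""
    s, a, r, s_next, done = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a) of taken actions
    with torch.no_grad():                                  # frozen bootstrap target
        target = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values
    return F.mse_loss(q_sa, target)
```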
A noteworthy example of AI-based CA that is not based on reinforcement learning is the family of neural networks originally designed as a compact representation of the ACAS X CA policy. These neural networks were first introduced in [140,141] as a function approximation of the large ACAS Xu score table obtained via dynamic programming. The authors show that the neural-network-based compression of the lookup table reduces the required storage by roughly a factor of 1000, while largely preserving (and in many cases even improving) the preferred advisory in large-scale Monte Carlo simulations. To provide formal safety guarantees for neural-network CA controllers of this type, the same research group in [146] later combined reachability analysis with neural-network verification tools such as Reluplex [147] and ReluVal [148] in order to prove closed-loop safety properties for notional vertical and horizontal CA systems inspired by ACAS X. Subsequent work by Bak and Tran [149] performed closed-loop verification of the ACAS Xu early-prototype neural-network compression and showed that, even under highly favorable assumptions (perfect sensing, idealized dynamics, instantaneous pilot response, and a straight-flying intruder), the compressed neural-network controller admits encounter scenarios that lead to near mid-air collisions. Table 7 summarizes the surveyed literature.
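The compression idea itself is straightforward to illustrate: a regression network is fitted to the state-to-score mapping stored in the table, and advisories are then recovered by an argmax over the predicted scores. The sketch below uses a toy randomly generated table and scikit-learn's MLPRegressor for brevity; the cited works train custom deep networks with an asymmetric loss on the real ACAS Xu table.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hedged sketch of the policy-compression idea in [140,141]: regress a network
# onto a (here toy, randomly generated) dynamic-programming score table. The
# real ACAS Xu table has a 7-dimensional state space and 5 advisories.
rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, size=(5000, 7))   # toy encounter states
scores = rng.normal(size=(5000, 5))               # toy per-advisory scores

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
net.fit(states, scores)                           # compress table -> network

def advisory(state):
    # The stored table is replaced by a forward pass; memory drops from the
    # full lookup table to the network weights.
    return int(np.argmax(net.predict(state.reshape(1, -1))[0]))

print(advisory(states[0]))
```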

4. Discussion and Outlook

The survey presented in this paper has examined a wide range of technologies for CD&R and CA, covering both classical algorithmic approaches and recent machine-learning-based methods. Across this range, a central theme is the tension between exploiting the benefits of AI, namely greater adaptability, improved performance in complex scenarios, and scalability to dense traffic, and satisfying the stringent safety and certification requirements of the aviation domain. This tension is particularly pronounced in CD&R and CA, where system behavior must remain predictable and acceptable to regulators, operators, and the public.
As of the time of writing, no dedicated, domain-specific certification specifications, Acceptable Means of Compliance, or Guidance Material (AMC/GM) exist for the integration of AI-based methods in airborne CD&R and CA. EASA’s AI Concept Paper Issue 02 [13] provides practical guidance for the certification of ML-based systems, but its current scope is limited to Level 1 and Level 2 AI (“assistance to human” and “human–AI teaming”) and to offline supervised and unsupervised learning. Within this scope, the worked examples address only a narrow subset of separation-assurance and surveillance problems, such as runway foreign-object-debris and intruder detection. For supervised computer-vision detection functions of the type used in these examples, EASA and Daedalean have also introduced a W-shaped learning-assurance life cycle, which adapts the classical V-model by explicitly covering dataset management, the learning process, and model verification; this life cycle now serves as a main reference for assuring Level 1 and 2 ML detectors in safety-critical avionics [150]. In contrast, decentralized air-to-air CD&R and CA are not treated explicitly, even in a purely decision-support role [13].
In parallel, ongoing rule-making activities on AI trustworthiness aim to translate the concept paper into a sector-wide regulatory framework and generic AI-related AMC/GM [151], but concrete certification pathways for highly autonomous applications remain largely unresolved. The regulation of highly autonomous AI-based applications (Level 3, “advanced autonomy”) is expected to be addressed in future EASA rule-making, at which point AI-based CD&R and CA is also likely to come into scope.
Against this background, the research literature on AI-based CD&R and CA tends to adopt design patterns that minimize perceived regulatory risk, as discussed below.

4.1. Certification-Driven Patterns in AI-Based CA

A first pattern emerging in the literature is the use of AI not as a replacement for classical logic but as a means to optimize, approximate, or compress algorithms while preserving a transparent core structure. Examples include the agent that optimizes the MVP algorithm in [137] as well as the policy-compression work for ACAS X in [140,141], in which neural networks approximate a dynamic-programming policy rather than replacing it with a fully opaque controller. In these designs, the safety-critical decision logic remains grounded in classical models, while AI components serve as an efficient surrogate or optimization layer, reducing computational and memory footprints.
A second pattern involves retaining human-interpretable intermediate representations, even when the final policy is implemented via neural networks. Some approaches encode geometrical or graphical constructs, such as the SSD-like state-space diagrams or conflict images of [135], as input to a learning algorithm. The intermediate representations, including positions, velocities, headings, and conflict zones, remain interpretable for human experts, while the AI maps them to CA advisories. This shared representation facilitates human situational understanding, supports debugging and verification, and eases the demonstration of regulatory compliance. Similarly, the visualizations of the Q-values and safety boundaries in [6] provide interpretable insight into the agent’s reasoning, reducing the complexity of demonstrating compliance.
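As a minimal illustration of such a shared representation, the sketch below rasterizes intruder positions relative to the ownship into a small occupancy-style image that a human can inspect and a CNN policy can consume; the grid size and sensing range are illustrative assumptions, not the encoding of [135].

```python
import numpy as np

# Hedged sketch of an interpretable intermediate representation in the spirit
# of the conflict images of [135]; grid size and range are assumptions.
def traffic_image(rel_positions, grid=32, range_m=2000.0):
    img = np.zeros((grid, grid), dtype=np.float32)
    for dx, dy in rel_positions:               # intruder offsets in meters
        ix = int((dx + range_m) / (2 * range_m) * (grid - 1))
        iy = int((dy + range_m) / (2 * range_m) * (grid - 1))
        if 0 <= ix < grid and 0 <= iy < grid:
            img[iy, ix] = 1.0                  # mark occupied cell
    img[grid // 2, grid // 2] = 0.5            # ownship at the center
    return img

img = traffic_image([(500.0, -300.0), (-1200.0, 800.0)])
```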
A third pattern is the combination of reinforcement learning with learning from demonstration, as in [139]. Agents are first trained to mimic controller or pilot behavior, and subsequently refined with RL. Rewards encourage alignment with operational human procedures, anchoring AI behavior in real-world practices. Although formal safety assurance remains necessary, this approach can improve operator trust and facilitate regulatory compliance by ensuring that learned policies adhere closely to accepted procedures.
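A simple way to picture this anchoring is a reward term that decays with the distance between the agent's maneuver and the demonstrated one. The sketch below is an illustrative stand-in, not the exact reward of [139]; the action encoding (heading change, speed change) and the scale factor are assumptions.

```python
import numpy as np

# Hedged sketch of a demonstration-anchored reward in the spirit of [139]:
# the agent is rewarded for staying close to the recorded ATCO resolution.
def imitation_reward(agent_action, atco_action, scale=0.05):
    # Larger deviation from the demonstrated resolution -> lower reward.
    deviation = np.linalg.norm(np.asarray(agent_action) - np.asarray(atco_action))
    return float(np.exp(-scale * deviation))

# Example: agent commands (heading change deg, speed change m/s) vs. the ATCO's.
r = imitation_reward(agent_action=(15.0, -2.0), atco_action=(10.0, 0.0))
```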
These three patterns are shown in Figure 5.

4.2. Explainability, Transparency, and Levels of Autonomy

Across the surveyed work, a relationship emerges between the required degree of explainability and the intended level of autonomy. The following remarks are intended as a conceptual framework rather than prescriptive guidance.
For human-supervised or remotely piloted operations, interpretability generally takes precedence over deep, introspective explainability. In these contexts, CD&R/CA systems should make decisions that map onto familiar operational constructs — such as SSD-like geometries, standard maneuver templates, separation minima, or rule-based advisories — so that human operators can understand, monitor, and, if necessary, override the system.
For high-autonomy or fully autonomous deployments, stronger forms of explainability become increasingly important. When a system assumes responsibility for safety-critical decisions without continuous human oversight, stakeholders may require not only interpretable outputs but also detailed justifications or structured reasoning traces. In these contexts, black-box policies — such as end-to-end deep reinforcement learning for multi-UAV CA — may need to be augmented with post-hoc explanation methods, human-interpretable intermediate representations, formal safety-verification and runtime-assurance frameworks (e.g., [152]), and evaluation within standardized benchmarks and testbeds that support reproducible safety assessment.
In decentralized settings, where multiple autonomous CD&R and CA agents interact, these requirements extend from single-vehicle reasoning to emergent multi-agent behavior, further underscoring the need for standardized benchmarks and testbeds.

4.3. Towards Standardized Benchmarks and Testbeds

A longstanding challenge in evaluating AI-based aviation systems is the difficulty of comparing methods across heterogeneous simulators, traffic models, and scenario sets. Without shared benchmarks and scenario libraries, it is unclear whether improvements reported in one study generalize to other environments or operational conditions. In the broader advanced-air-mobility community, efforts such as AAM-Gym [153] address this gap by providing standardized test environments with well-defined observation and action spaces, scenario suites, and performance metrics — including use cases directly relevant to separation assurance and CA. Such initiatives show how common interfaces and reproducible testbeds can accelerate research and support assurance activities by enabling community-wide comparisons.
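The practical value of such common interfaces is that one evaluation harness serves many methods. The sketch below shows a generic evaluation loop following the Gymnasium reset/step convention; the environment id "CAEnv-v0" and the loss-of-separation info key are hypothetical placeholders, not AAM-Gym's actual API.

```python
import gymnasium as gym

# Hedged sketch of a shared evaluation harness: any environment exposing the
# Gymnasium reset/step convention can be scored with the same loop and metric.
# "CAEnv-v0" and the "loss_of_separation" info key are hypothetical; for a
# smoke test one could substitute a registered id such as "CartPole-v1".
def evaluate(env_id="CAEnv-v0", episodes=100, policy=None):
    env = gym.make(env_id)
    separation_losses = 0
    for _ in range(episodes):
        obs, info = env.reset()
        done = False
        while not done:
            action = policy(obs) if policy else env.action_space.sample()
            obs, reward, terminated, truncated, info = env.step(action)
            done = terminated or truncated
            separation_losses += int(info.get("loss_of_separation", False))
    env.close()
    return separation_losses / episodes   # comparable across methods
```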
For CD&R and CA in encounters with uncooperative intruders, similar standardized testbeds would be highly valuable. Shared encounter geometries, intruder behavior models, sensor limitations, and traffic densities — combined with agreed-upon safety and efficiency metrics — could facilitate consistent evaluation across reinforcement learning, supervised learning, and classical algorithmic methods.
Standardized benchmarks could also offer natural anchor points for embedding validation frameworks, should these be required by certification agencies. Black-box safety-validation approaches, such as those surveyed in [154,155], search for failures or estimate failure probabilities using optimization, importance sampling, or reinforcement learning over disturbance spaces, without directly attempting to interpret policies. Such frameworks have already been applied to AI-based perception systems, including vision-based landing and runway detection [156] and vision-based aircraft CA [157], allowing researchers to report both nominal performance and statistically grounded estimates or bounds on failure risk under clearly specified scenario assumptions.
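In their simplest form, such black-box validation frameworks reduce to sampling disturbances from an assumed distribution, running the closed-loop system, and reporting a statistically grounded failure estimate. The sketch below shows plain Monte Carlo estimation with a normal-approximation confidence interval; simulate_encounter and its failure criterion are hypothetical stand-ins for the system under test, and importance sampling or optimization would replace naive sampling when failures are rare.

```python
import numpy as np

# Hedged sketch of black-box safety validation in the spirit of [154,155]:
# sample disturbances, run the system as a black box, estimate failure risk.
# simulate_encounter is a hypothetical stand-in for the system under test.
def simulate_encounter(disturbance):
    return np.linalg.norm(disturbance) > 2.5   # toy failure criterion

rng = np.random.default_rng(0)
n = 100_000
failures = sum(simulate_encounter(rng.normal(size=3)) for _ in range(n))
p_hat = failures / n
half_width = 1.96 * np.sqrt(p_hat * (1 - p_hat) / n)  # normal approximation
print(f"failure probability ~ {p_hat:.4f} +/- {half_width:.4f}")
```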
Recent work further explores adversarial stress-testing frameworks, such as [158], where multi-agent adversaries generate hazardous traffic scenarios that expose failure modes of AI-based separation assurance systems.
Taken together, the approaches reviewed in this survey illustrate both the promise and the difficulty of deploying AI-based CA in safety-critical aviation environments. Algorithmic innovations continue to expand technical capabilities and improve the robustness of autonomous systems, yet certification, transparency, and assurance remain central challenges. Future progress will likely depend on integrating interpretable representations, learning from operational data, and formal or statistical safety guarantees within standardized testbeds that support reproducible evaluation and verification. Bridging methodological innovation with regulatory and operational needs is key to converting experimental AI approaches into certifiable and dependable aviation solutions.

Author Contributions

Conceptualization, F.D.; resources, F.D., V.W.; writing—original draft, F.D., V.W., C.S.; writing—review and editing, F.D., P.F.J., V.W., C.S.; supervision, F.D. All authors have read and agreed to the published version of the manuscript.

Funding

The research leading to these results has received funding from the Take Off programme (grant number FO999913991). Take Off is a Research, Technology and Innovation Funding Programme of the Republic of Austria, Ministry of Climate Action. The Austrian Research Promotion Agency (FFG) has been authorised to manage the programme.

Acknowledgments

The authors used ChatGPT (OpenAI) to assist with text editing and formatting. The authors reviewed and edited all content and take full responsibility for the final version of the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Shakhatreh, H.; Sawalmeh, A.H.; Al-Fuqaha, A.; Dou, Z.; Almaita, E.; Khalil, I.; Othman, N.S.; Khreishah, A.; Guizani, M. Unmanned Aerial Vehicles (UAVs): A Survey on Civil Applications and Key Research Challenges. IEEE Access 2019, 7, 48572–48634. [Google Scholar] [CrossRef]
  2. Mu, D.; Yue, C.; Chen, A. Are we working on the safety of UAVs? An LDA-based study of UAV safety technology trends. Safety Science 2022, 152, 105767. [Google Scholar] [CrossRef]
  3. Nelson, J.R.; Grubesic, T.H.; Wallace, D.; Chamberlain, A.W. The View from Above: A Survey of the Public’s Perception of Unmanned Aerial Vehicles and Privacy. Journal of Urban Technology 2019, 26, 83–105. [Google Scholar] [CrossRef]
  4. Tam, A. Public Perception of Unmanned Aerial Vehicles. Aviation Technology Graduate Student Publications 2011. [Google Scholar]
  5. Hoekstra, J.M.; van Gent, R.N.H.W.; Ruigrok, R.C.J. Designing for safety: the ‘free flight’ air traffic management concept. Reliability Engineering & System Safety 2002, 75, 215–232. [Google Scholar] [CrossRef]
  6. Wang, L.; Yang, H.; Lin, Y.; Yin, S.; Wu, Y. Explainable and safe reinforcement learning for autonomous air mobility. arXiv 2022, arXiv:2211.13474. [Google Scholar] [CrossRef]
  7. Likmeta, A.; Metelli, A.M.; Tirinzoni, A.; Giol, R.; Restelli, M.; Romano, D. Combining reinforcement learning with rule-based controllers for transparent and general decision-making in autonomous driving. Robotics and Autonomous Systems 2020, 131, 103568. [Google Scholar] [CrossRef]
  8. Lipovetsky, S.; Conklin, M. Analysis of regression in game theory approach. 2001. [Google Scholar] [CrossRef]
  9. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’16), New York, NY, USA; pp. 1135–1144. [CrossRef]
  10. Lundberg, S.M.; Lee, S.I. A Unified Approach to Interpreting Model Predictions. In Proceedings of the Advances in Neural Information Processing Systems. Curran Associates, Inc., 2017, Vol. 30.
  11. Watkins, C.J.; Dayan, P. Q-learning. Machine learning 1992, 8, 279–292. [Google Scholar] [CrossRef]
  12. European Union Aviation Safety Agency. Artificial Intelligence Roadmap: A human-centric approach to AI in aviation, 2024. Accessed: 2025-02-20.
  13. European Union Aviation Safety Agency. EASA Artificial Intelligence (AI) Concept Paper Issue 2: Guidance for Level 1 & 2 Machine-Learning Applications. Technical report, European Union Aviation Safety Agency, March 2024.
  14. Hamissi, A.; Dhraief, A.; Sliman, L. A Comprehensive Survey on Conflict Detection and Resolution in Unmanned Aircraft System Traffic Management. IEEE Transactions on Intelligent Transportation Systems 2025, 26, 1395–1418. [Google Scholar] [CrossRef]
  15. Fennedy, K.; Hilburn, B.; Nadirsha, T.N.; Alam, S.; Le, K.D.; Li, H. Do ATCOs Need Explanations, and Why? Towards ATCO-Centered Explainable AI for Conflict Resolution Advisories. arXiv 2025, arXiv:2505.03117. [Google Scholar] [CrossRef]
  16. Degas, A.; Islam, M.R.; Hurter, C.; Barua, S.; Rahman, H.; Poudel, M.; Ruscio, D.; Ahmed, M.U.; Begum, S.; Rahman, M.A.; et al. A survey on artificial intelligence (ai) and explainable ai in air traffic management: Current trends and development with future research trajectory. Applied Sciences 2022, 12, 1295. [Google Scholar] [CrossRef]
  17. Rahman, M.H.; Sejan, M.A.S.; Aziz, M.A.; Tabassum, R.; Baik, J.I.; Song, H.K. A Comprehensive Survey of Unmanned Aerial Vehicles Detection and Classification Using Machine Learning Approach: Challenges, Solutions, and Future Directions. Remote Sensing 2024, 16. [Google Scholar] [CrossRef]
  18. Lu, L.; Fasano, G.; Carrio, A.; Lei, M.; Bavle, H.; Campoy, P. A comprehensive survey on non-cooperative collision avoidance for micro aerial vehicles: Sensing and obstacle detection. Journal of Field Robotics 2023, 40, 1697–1720. [Google Scholar] [CrossRef]
  19. Bello, H.; Geißler, D.; Ray, L.; Müller-Divéky, S.; Müller, P.; Kittrell, S.; Liu, M.; Zhou, B.; Lukowicz, P. Towards certifiable AI in aviation: landscape, challenges, and opportunities. arXiv 2024, arXiv:2409.08666. [Google Scholar] [CrossRef]
  20. Coletsos, J.; Ntakolia, C. Air traffic management and energy efficiency: the free flight concept. Energy Systems 2017, 8, 709–726. [Google Scholar] [CrossRef]
  21. Chin, C.; Qin, V.; Gopalakrishnan, K.; Balakrishnan, H. Traffic management protocols for advanced air mobility. Frontiers in Aerospace Engineering 2023, 2, 1176969. [Google Scholar] [CrossRef]
  22. de Oliveira, Í.R.; Neto, E.C.P.; Matsumoto, T.T.; Yu, H. Decentralized air traffic management for advanced air mobility. arXiv 2021, arXiv:2108.11329. [Google Scholar] [CrossRef]
  23. Muñoz, C.; Narkawicz, A.; Hagen, G.; Upchurch, J.; Dutle, A.; Consiglio, M.; Chamberlain, J. DAIDALUS: detect and avoid alerting logic for unmanned systems. In Proceedings of the 2015 IEEE/AIAA 34th Digital Avionics Systems Conference (DASC); IEEE, 2015; pp. 5A1–1. [Google Scholar]
  24. Owen, M.P.; Panken, A.; Moss, R.; Alvarez, L.; Leeper, C. ACAS Xu: Integrated collision avoidance and detect and avoid capability for UAS. In Proceedings of the 2019 IEEE/AIAA 38th Digital Avionics Systems Conference (DASC); IEEE, 2019; pp. 1–10. [Google Scholar]
  25. RTCA, Inc. Minimum Operational Performance Standards (MOPS) for Detect and Avoid (DAA) Systems. DO-365C, RTCA, Washington, D.C., 2022.
  26. Zeitlin, A.D. Progress on Requirements and Standards for Sense & Avoid. 2010. [Google Scholar]
  27. Aldao, E.; González-de Santos, L.M.; González-Jorge, H. LiDAR Based Detect and Avoid System for UAV Navigation in UAM Corridors. Drones 2022, 6, 185. [CrossRef]
  28. Corucci, L.; Meta, A.; Coccia, A. An X-band radar-based airborne collision avoidance system proof of concept. In Proceedings of the 2014 15th International Radar Symposium (IRS); IEEE, 2014; pp. 1–3. [Google Scholar]
  29. Fasano, G.; Accardo, D.; Moccia, A.; Carbone, C.; Ciniglio, U.; Corraro, F.; Luongo, S. Multi-sensor-based fully autonomous non-cooperative collision avoidance system for unmanned air vehicles. 2008. [Google Scholar] [CrossRef]
  30. Fasano, G.; Accardo, D.; Tirri, A.E.; Moccia, A.; De Lellis, E. Radar/electro-optical data fusion for non-cooperative UAS sense and avoid. Aerospace Science and Technology 2015, 46, 436–450. [Google Scholar] [CrossRef]
  31. Salazar, L.R.; Sabatini, R.; Ramasamy, S.; Gardi, A. A novel system for non-cooperative UAV sense-and-avoid. In Proceedings of the European Navigation Conference; 2013. [Google Scholar]
  32. de Haag, M.U.; Bartone, C.G.; Braasch, M.S. Flight-test evaluation of small form-factor LiDAR and radar sensors for sUAS detect-and-avoid applications. In Proceedings of the 2016 IEEE/AIAA 35th Digital Avionics Systems Conference (DASC); 2016; pp. 1–11, ISSN 2155-7209. [Google Scholar] [CrossRef]
  33. Lyu, H.; et al. Detect and avoid system based on multi sensor fusion for UAV. In Proceedings of the 2018 International Conference on Information and Communication Technology Convergence (ICTC); IEEE, 2018; pp. 1107–1109. [Google Scholar]
  34. Coraluppi, S.; Carthel, C.; Wu, C.; Stevens, M.; Douglas, J.; Titi, G.; Luettgen, M. Distributed MHT with active and passive sensors. In Proceedings of the 2015 18th International Conference on Information Fusion (Fusion); IEEE, 2015; pp. 2065–2072. [Google Scholar]
  35. Coraluppi, S.; Carthel, C.; Zimmerman, B.; Allen, T.; Douglas, J.; Muka, J. Multi-stage MHT with airborne and ground sensors. In Proceedings of the 2018 IEEE Aerospace Conference; 2018; pp. 1–13. [Google Scholar] [CrossRef]
  36. Stamm, R.J.; Glaneuski, J.; Kennett, P.R.; Belanger, J.M. Advances in the Use of NAS Infrastructure and GBDAA for UAS Operations. In Proceedings of the 2018 IEEE/AIAA 37th Digital Avionics Systems Conference (DASC); 2018; pp. 1–9, ISSN 2155-7209. [Google Scholar] [CrossRef]
  37. Kacem, T.; Wijesekera, D.; Costa, P.; Barreto, A. An ADS-B intrusion detection system. In Proceedings of the 2016 IEEE Trustcom/BigDataSE/ISPA. IEEE; 2016; pp. 544–551. [Google Scholar]
  38. Leonardi, M.; Di Fausto, D. ADS-B signal signature extraction for intrusion detection in the air traffic surveillance system. In Proceedings of the 2018 26th European Signal Processing Conference (EUSIPCO); IEEE, 2018; pp. 2564–2568. [Google Scholar]
  39. Ray, G.; Ray, J. Detecting ADS-B replay cyberattacks in the national airspace system. Issues in Information Systems 2023, 24. [Google Scholar]
  40. Farina, A.; Studer, F.A. A review of CFAR detection techniques in radar systems. 1986. [Google Scholar]
  41. Sim, Y.; Heo, J.; Jung, Y.; Lee, S.; Jung, Y. FPGA implementation of efficient CFAR algorithm for radar systems. Sensors 2023, 23, 954. [Google Scholar] [CrossRef] [PubMed]
  42. Safa, A.; Verbelen, T.; Keuninckx, L.; Ocket, I.; Hartmann, M.; Bourdoux, A.; Catthoor, F.; Gielen, G.G. A low-complexity radar detector outperforming OS-CFAR for indoor drone obstacle avoidance. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 2021, 14, 9162–9175. [Google Scholar] [CrossRef]
  43. Hoffmann, F.; Ritchie, M.; Fioranelli, F.; Charlish, A.; Griffiths, H. Micro-Doppler based detection and tracking of UAVs with multistatic radar. In Proceedings of the 2016 IEEE radar conference (RadarConf). IEEE; 2016; pp. 1–6. [Google Scholar]
  44. Shao, S.; Zhu, W.; Li, Y. Radar detection of low-slow-small UAVs in complex environments. In Proceedings of the 2022 IEEE 10th joint international information technology and artificial intelligence conference (ITAIC); IEEE, 2022; Vol. 10, pp. 1153–1157. [Google Scholar]
  45. Jakubowicz, J.; Lefebvre, S.; Maire, F.; Moulines, E. Detecting aircraft with a low-resolution infrared sensor. IEEE transactions on image processing 2012, 21, 3034–3041. [Google Scholar] [CrossRef]
  46. Qi, S.; Ma, J.; Tao, C.; Yang, C.; Tian, J. A robust directional saliency-based method for infrared small-target detection under various complex backgrounds. IEEE Geoscience and Remote Sensing Letters 2012, 10, 495–499. [Google Scholar]
  47. Chao, H.; Gu, Y.; Napolitano, M. A survey of optical flow techniques for UAV navigation applications. In Proceedings of the 2013 International Conference on Unmanned Aircraft Systems (ICUAS); IEEE, 2013; pp. 710–716. [Google Scholar]
  48. Mori, T.; Scherer, S. First results in detecting and avoiding frontal obstacles from a monocular camera for micro unmanned aerial vehicles. In Proceedings of the 2013 IEEE international conference on robotics and automation. IEEE; 2013; pp. 1750–1757. [Google Scholar]
  49. Mejias Alvarez, L.; Ford, J.; Lai, J. Towards the implementation of vision-based UAS sense-and-avoidance system. In Proceedings of the 27th Congress of the International Council of the Aeronautical Sciences; Optimage Ltd. on behalf of the International Council of the Aeronautical …, 2010; pp. 1–10. [Google Scholar]
  50. Molloy, T.L.; Ford, J.J.; Mejias, L. Detection of aircraft below the horizon for vision-based detect and avoid in unmanned aircraft systems. Journal of Field Robotics 2017, 34, 1378–1391. [Google Scholar] [CrossRef]
  51. Dolph, C.; Logan, M.J.; Glaab, L.J.; Vranas, T.L.; McSwain, R.G.; Johns, Z. Sense and avoid for small unmanned aircraft systems. AIAA information systems-AIAA Infotech@ Aerospace 2017, 1151. [Google Scholar]
  52. Aldao, E.; González-de Santos, L.M.; González-Jorge, H. LiDAR Based Detect and Avoid System for UAV Navigation in UAM Corridors. Drones 2022, 6. [Google Scholar] [CrossRef]
  53. Dewan, A.; Caselitz, T.; Tipaldi, G.D.; Burgard, W. Motion-based detection and tracking in 3d lidar scans. In Proceedings of the 2016 IEEE international conference on robotics and automation (ICRA); IEEE, 2016; pp. 4508–4513. [Google Scholar]
  54. Fischler, M.A.; Bolles, R.C. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  55. Lu, M.; Fan, X.; Chen, H.; Lu, P. Fapp: Fast and adaptive perception and planning for uavs in dynamic cluttered environments. IEEE Transactions on Robotics; 2024. [Google Scholar]
  56. Zheng, L.; Zhang, P.; Tan, J.; Li, F. The obstacle detection method of uav based on 2d lidar. IEEE Access 2019, 7, 163437–163448. [Google Scholar] [CrossRef]
  57. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016; pp. 779–788. [Google Scholar]
  58. Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-end object detection with transformers. In Proceedings of the European conference on computer vision; Springer, 2020; pp. 213–229. [Google Scholar]
  59. Zhao, Y.; Lv, W.; Xu, S.; Wei, J.; Wang, G.; Dang, Q.; Liu, Y.; Chen, J. DETRs beat YOLOs on real-time object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024; pp. 16965–16974. [Google Scholar]
  60. Airborne Object Tracking Dataset, 2021. Available online: https://registry.opendata.aws/airborne-object-tracking/ (accessed on 28 March 2025).
  61. Vrba, M.; Walter, V.; Pritzl, V.; Pliska, M.; Báča, T.; Spurný, V.; Heřt, D.; Saska, M. On Onboard LiDAR-Based Flying Object Detection. IEEE Transactions on Robotics 2025, 41, 593–611. [Google Scholar] [CrossRef]
  62. Yuan, S.; Yang, Y.; Nguyen, T.H.; Nguyen, T.M.; Yang, J.; Liu, F.; Li, J.; Wang, H.; Xie, L. MMAUD: A Comprehensive Multi-Modal Anti-UAV Dataset for Modern Miniature Drone Threats. In Proceedings of the 2024 IEEE International Conference on Robotics and Automation (ICRA); 2024; pp. 2745–2751. [Google Scholar] [CrossRef]
  63. Patrikar, J.; Dantas, J.; Moon, B.; Hamidi, M.; Ghosh, S.; Keetha, N.; Higgins, I.; Chandak, A.; Yoneyama, T.; Scherer, S. Image, speech, and ADS-B trajectory datasets for terminal airspace operations. Scientific Data 2025, 12, 468. [Google Scholar] [CrossRef] [PubMed]
  64. Svanström, F.; Englund, C.; Alonso-Fernandez, F. Real-time drone detection and tracking with visible, thermal and acoustic sensors. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR); IEEE, 2021; pp. 7265–7272. [Google Scholar]
  65. Chen, Y.H. UAVDB: Trajectory-Guided Adaptable Bounding Boxes for UAV Detection. arXiv 2024, arXiv:2409.06490. [Google Scholar] [CrossRef]
  66. Lenhard, T.R.; Weinmann, A.; Franke, K.; Koch, T. SynDroneVision: A Synthetic Dataset for Image-Based Drone Detection. arXiv 2024, arXiv:2411.05633. [Google Scholar]
  67. Aldao, E.; Veiga-López, F.; Miguel González-deSantos, L.; González-Jorge, H. Enhancing UAV Classification With Synthetic Data: GMM LiDAR Simulator for Aerial Surveillance Applications. IEEE Sensors Journal 2024, 24, 26960–26970. [Google Scholar] [CrossRef]
  68. Shafienya, H.; Regan, A. 4D flight trajectory prediction based on ADS-B data: A comparison of CNN-GRU models. In Proceedings of the 2022 IEEE Aerospace Conference (AERO). IEEE, 2022; pp. 01–12. [Google Scholar]
  69. Luo, P.; Wang, B.; Tian, J. TTSAD: TCN-Transformer-SVDD Model for Anomaly Detection in air traffic ADS-B data. Computers & Security 2024, 141, 103840. [Google Scholar]
  70. Ahmed, W.; Masood, A.; Manzoor, J.; Akleylek, S. Automatic dependent surveillance-broadcast (ADS-B) anomalous messages and attack type detection: deep learning-based architecture. PeerJ Computer Science 2025, 11, e2886. [Google Scholar] [CrossRef] [PubMed]
  71. Ngamboé, M.; Marrocco, J.S.; Ouattara, J.Y.; Fernandez, J.M.; Nicolescu, G. New Machine Learning Approaches for Intrusion Detection in ADS-B. arXiv 2025, arXiv:2510.08333. [Google Scholar] [CrossRef]
  72. Zhao, Y.; Sun, T.; Zhang, J.; Gao, M. GAN–CNN-based moving target detector for airborne radar systems. IEEE Sensors Journal 2024, 24, 21614–21627. [Google Scholar] [CrossRef]
  73. Wang, C.; Tian, J.; Cao, J.; Wang, X. Deep learning-based UAV detection in pulse-Doppler radar. IEEE Transactions on Geoscience and Remote Sensing 2021, 60, 1–12. [Google Scholar] [CrossRef]
  74. Tm, D.; Verma, R.; Rajesh, R.; Varughese, S. Single shot radar target detection and localization using deep neural network. In Proceedings of the 2022 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT); IEEE, 2022; pp. 1–9. [Google Scholar]
  75. Zhu, C.; Xie, X.; Xi, J.; Yang, X. GM-DETR: Infrared Detection of Small UAV Swarm Targets Based on Detection Transformer. Remote Sensing 2025, 17, 3379. [Google Scholar] [CrossRef]
  76. Gutierrez, G.; Llerena, J.P.; Usero, L.; Patricio, M.A. A comparative study of convolutional neural network and transformer architectures for drone detection in thermal images. Applied Sciences 2024, 15, 109. [Google Scholar] [CrossRef]
  77. Lee, Z.W.; Chin, W.H.; Ho, H.W. Air-to-air Micro Air Vehicle interceptor with an embedded mechanism and deep learning. Aerospace Science and Technology 2023, 135, 108192. [Google Scholar] [CrossRef]
  78. Opromolla, R.; Fasano, G. Visual-based obstacle detection and tracking, and conflict detection for small UAS sense and avoid. Aerospace Science and Technology 2021, 119, 107167. [Google Scholar] [CrossRef]
  79. Arsenos, A.; Petrongonas, E.; Filippopoulos, O.; Skliros, C.; Kollias, D.; Kollias, S. NEFELI: A deep-learning detection and tracking pipeline for enhancing autonomy in advanced air mobility. Aerospace Science and Technology 2024, 155, 109613. [Google Scholar] [CrossRef]
  80. Yu, X.; Liu, X.; Liang, G. YOLOv8-SMOT: An Efficient and Robust Framework for Real-Time Small Object Tracking via Slice-Assisted Training and Adaptive Association. arXiv 2025, arXiv:2507.12087.
  81. Ghosh, S.; Patrikar, J.; Moon, B.; Hamidi, M.M.; Scherer, S. AirTrack: Onboard deep learning framework for long-range aircraft detection and tracking. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA); IEEE, 2023; pp. 1277–1283. [Google Scholar]
  82. Karampinis, V.; Arsenos, A.; Filippopoulos, O.; Petrongonas, E.; Skliros, C.; Kollias, D.; Kollias, S.; Voulodimos, A. Ensuring UAV Safety: A Vision-Only and Real-Time Framework for Collision Avoidance Through Object Detection, Tracking, and Distance Estimation. In Proceedings of the 2024 International Conference on Unmanned Aircraft Systems (ICUAS); 2024; pp. 1072–1079. [Google Scholar] [CrossRef]
  83. Chen, Y.H. UAVDB: Point-Guided Masks for UAV Detection and Segmentation. 2025. [Google Scholar]
  84. Xiao, J.; Pisutsin, P.; Tsao, C.W.; Feroskhan, M. Clustering-based Learning for UAV Tracking and Pose Estimation. arXiv 2024, arXiv:2405.16867. [Google Scholar] [CrossRef]
  85. Zhang, Q.; Yang, Y.; Fang, H.; Geng, R.; Jensfelt, P. DeFlow: Decoder of Scene Flow Network in Autonomous Driving. In Proceedings of the 2024 IEEE International Conference on Robotics and Automation (ICRA); 2024; pp. 2105–2111. [Google Scholar] [CrossRef]
  86. Torelli, R.; Graziano, A.; Farina, A. IM3HT Algorithm: A Joint Formulation of IMM and MHT for Multi-target Tracking. European Journal of Control 1999, 5, 46–53. [Google Scholar] [CrossRef]
  87. Cornic, P.; Garrec, P.; Kemkemian, S.; Ratton, L. Sense and avoid radar using data fusion with other sensors. In Proceedings of the 2011 Aerospace Conference. IEEE, 2011; pp. 1–14. [Google Scholar]
  88. Bar-Shalom, Y.; Tse, E. Tracking in a cluttered environment with probabilistic data association. Automatica 1975, 11, 451–460. [Google Scholar] [CrossRef]
  89. Reid, D. An algorithm for tracking multiple targets. IEEE Transactions on Automatic Control 1979, 24, 843–854. [Google Scholar] [CrossRef]
  90. Blackman, S. Multiple hypothesis tracking for multiple target tracking. IEEE Aerospace and Electronic Systems Magazine 2004, 19, 5–18. [Google Scholar] [CrossRef]
  91. Arulampalam, M.; Maskell, S.; Gordon, N.; Clapp, T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing 2002, 50, 174–188. [Google Scholar] [CrossRef]
  92. Bar-Shalom, Y. On the track-to-track correlation problem. IEEE Transactions on Automatic Control 1981, 26, 571–572. [Google Scholar] [CrossRef]
  93. Durrant-Whyte, H.; Henderson, T.C. Multisensor Data Fusion. In Springer Handbook of Robotics; Siciliano, B., Khatib, O., Eds.; Springer International Publishing: Cham, 2016; pp. 867–896. [Google Scholar] [CrossRef]
  94. Mahalanobis, P.C. On the generalized distance in statistics. Sankhyā: The Indian Journal of Statistics, Series A (2008-) 2018, 80, S1–S7. [Google Scholar]
  95. Manfredi, G.; Jestin, Y. Are You Clear About “Well Clear”? In Proceedings of the 2018 International Conference on Unmanned Aircraft Systems (ICUAS), 2018; pp. 599–605, ISSN 2575-7296. [Google Scholar] [CrossRef]
  96. Williamson, T.; Spencer, N. Development and operation of the Traffic Alert and Collision Avoidance System (TCAS). Proceedings of the IEEE 1989, 77, 1735–1744. [Google Scholar] [CrossRef]
  97. Consiglio, M.C.; Chamberlain, J.P.; Munoz, C.A.; Hoffler, K.D. Concepts of Integration for UAS Operations in the NAS. NASA Langley Research Center, 2012; Report No. NF1676L-13199.
  98. Vaidya, S.; Khot, T. Analysis of the Tau concept used in aircraft collision avoidance through kinematic simulations. In Proceedings of the 2017 9th International Conference on Communication Systems and Networks (COMSNETS), 2017; pp. 431–436, ISSN 2155-2509. [Google Scholar] [CrossRef]
  99. Munoz, C.; Narkawicz, A.; Chamberlain, J. A TCAS-II resolution advisory detection algorithm. In Proceedings of the AIAA Guidance, Navigation, and Control (GNC) Conference, 2013; p. 4622. [Google Scholar]
  100. Munoz, C.; Narkawicz, A.; Chamberlain, J.; Consiglio, M.C.; Upchurch, J.M. A family of well-clear boundary models for the integration of UAS in the NAS. In Proceedings of the 14th AIAA Aviation Technology, Integration, and Operations Conference, 2014; p. 2412. [Google Scholar]
  101. Narkawicz, A.; Muñoz, C.; Dutle, A. Sensor uncertainty mitigation and dynamic well clear volumes in DAIDALUS. In Proceedings of the 2018 IEEE/AIAA 37th Digital Avionics Systems Conference (DASC); IEEE, 2018; pp. 1–8. [Google Scholar]
  102. Thipphavong, D.; Cone, A.; Lee, S.M.; Santiago, C. Ensuring Interoperability between UAS Detect-and-Avoid and Manned Aircraft Collision Avoidance. NASA Ames Research Center, Seattle, WA, 2017; Report No. ARC-E-DAA-TN38495.
  103. Van Dam, S.B.; Mulder, M.; Van Paassen, M. Ecological interface design of a tactical airborne separation assistance tool. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans 2008, 38, 1221–1233. [Google Scholar] [CrossRef]
  104. Hermes, P.; Mulder, M.; van Paassen, M.M.; Boering, J.H.L.; Huisman, H. Solution-Space-Based Complexity Analysis of the Difficulty of Aircraft Merging Tasks. Journal of Aircraft 2009, 46, 1995–2015. [Google Scholar] [CrossRef]
  105. Abdul Rahman, S.M.; Mulder, M.; van Paassen, R. Using the solution space diagram in measuring the effect of sector complexity during merging scenarios. In Proceedings of the AIAA Guidance, Navigation, and Control Conference, 2011; p. 6693. [Google Scholar]
  106. Meng, T.; Jing, X.; Yan, Z.; Pedrycz, W. A survey on machine learning for data fusion. Information Fusion 2020, 57, 115–129. [Google Scholar] [CrossRef]
  107. Tang, Q.; Liang, J.; Zhu, F. A comparative review on multi-modal sensors fusion based on deep learning. Signal Processing 2023, 213, 109165. [Google Scholar] [CrossRef]
  108. Lin, X.; Chao, S.; Yan, D.; Guo, L.; Liu, Y.; Li, L. Multi-Sensor Data Fusion Method Based on Self-Attention Mechanism. Applied Sciences 2023, 13, 11992. [Google Scholar] [CrossRef]
  109. Skinner, L.T.; Johnson, M.A. Bayesian networks for interpretable and extensible multisensor fusion. In Proceedings of the Artificial Intelligence for Security and Defence Applications II. SPIE; 2024; Vol. 13206, pp. 11–23. [Google Scholar]
  110. Luong, M.T.; Pham, H.; Manning, C.D. Effective approaches to attention-based neural machine translation. arXiv 2015, arXiv:1508.04025. [Google Scholar] [CrossRef]
  111. Brittain, M.W.; Alvarez, L.E.; Breeden, K. Improving autonomous separation assurance through distributed reinforcement learning with attention networks. In Proceedings of the AAAI Conference on Artificial Intelligence; 2024; Vol. 38, pp. 22857–22863. [Google Scholar]
  112. Brittain, M.W.; Wei, P. One to any: Distributed conflict resolution with deep multi-agent reinforcement learning and long short-term memory. In Proceedings of the AIAA Scitech 2021 Forum, 2021; p. 1952. [Google Scholar]
  113. Wang, G.; Ge, S.S. General fight rule-based trajectory planning for pairwise collision avoidance in a known environment. International Journal of Control, Automation and Systems 2014, 12, 813–822. [Google Scholar] [CrossRef]
  114. Sanches, M.P.; Faria, R.A.P.; Cunha, S.R. Visual Flight Rules-based Collision Avoidance System for VTOL UAV. In Proceedings of the 2020 5th International Conference on Robotics and Automation Engineering (ICRAE), 2020; pp. 169–174. [Google Scholar] [CrossRef]
  115. Alharbi, A.; Poujade, A.; Malandrakis, K.; Petrunin, I.; Panagiotakopoulos, D.; Tsourdos, A. Rule-Based Conflict Management for Unmanned Traffic Management Scenarios. In Proceedings of the 2020 AIAA/IEEE 39th Digital Avionics Systems Conference (DASC), 2020; pp. 1–10, ISSN 2155-7209. [Google Scholar] [CrossRef]
  116. Braga, R.G.; Da Silva, R.C.; Ramos, A.C.; Mora-Camino, F. Collision avoidance based on Reynolds rules: A case study using quadrotors. In Proceedings of the Information Technology-New Generations: 14th International Conference on Information Technology; Springer, 2018; pp. 773–780. [Google Scholar]
  117. Haberkorn, T. Aircraft separation in uncontrolled airspace including human factors. Phd thesis, TU Graz, 2016. [Google Scholar]
  118. Exarchos, I.; Tsiotras, P.; Pachter, M. UAV collision avoidance based on the solution of the suicidal pedestrian differential game. In Proceedings of the AIAA Guidance, Navigation, and Control Conference, 2016; p. 2100. [Google Scholar]
  119. D’apolito, F.; Sulzbachner, C. Collision Avoidance for Unmanned Aerial Vehicles using Simultaneous Game Theory. In Proceedings of the 2018 IEEE/AIAA 37th Digital Avionics Systems Conference (DASC), 2018; pp. 1–5, ISSN 2155-7209. [Google Scholar] [CrossRef]
  120. Mujumdar, A.; Padhi, R. Reactive Collision Avoidance Using Nonlinear Geometric and Differential Geometric Guidance. Journal of Guidance, Control, and Dynamics 2011, 34, 303–311. [Google Scholar] [CrossRef]
  121. Han, S.C.; Bang, H.; Yoo, C.S. Proportional navigation-based collision avoidance for UAVs. International Journal of Control, Automation and Systems 2009, 7, 553–565. [Google Scholar] [CrossRef]
  122. Park, J.W.; Oh, H.D.; Tahk, M.J. UAV collision avoidance based on geometric approach. In Proceedings of the 2008 SICE Annual Conference, 2008; pp. 2122–2126. [Google Scholar] [CrossRef]
  123. Eby, M.S. A self-organizational approach for resolving air traffic conflicts. Lincoln Lab. J. 1995, 7, 239–254. [Google Scholar]
  124. Ruchti, J.; Senkbeil, R.; Carroll, J.; Dickinson, J.; Holt, J.; Biaz, S. Unmanned Aerial System Collision Avoidance Using Artificial Potential Fields. Journal of Aerospace Information Systems 2014, 11, 140–144. [Google Scholar] [CrossRef]
  125. Liu, D.; Wang, D.; Dissanayake, G. A Force Field Method Based Multi-Robot Collaboration. In Proceedings of the 2006 IEEE Conference on Robotics, Automation and Mechatronics, 2006; pp. 1–6, ISSN 2158-219X. [Google Scholar] [CrossRef]
  126. Angelov, P. Sense and Avoid in UAS: Research and Applications; John Wiley & Sons, 2012. [Google Scholar]
  127. Reynolds, C.W. Flocks, herds and schools: A distributed behavioral model. In Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’87), New York, NY, USA, 1987; pp. 25–34. [Google Scholar] [CrossRef]
  128. Friedman, A. Differential Games; Courier Corporation, 2013. [Google Scholar]
  129. Weintraub, I.E.; Pachter, M.; Garcia, E. An Introduction to Pursuit-evasion Differential Games. In Proceedings of the 2020 American Control Conference (ACC), 2020; pp. 1049–1066, ISSN 2378-5861. [Google Scholar] [CrossRef]
  130. Chakravarthy, A.; Ghose, D. Obstacle avoidance in a dynamic environment: a collision cone approach. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans 1998, 28, 562–574. [Google Scholar] [CrossRef]
  131. Rorie, R.C.; Smith, C.; Sadler, G.; Monk, K.J.; Tyson, T.L.; Keeler, J. A Human-in-the-Loop evaluation of ACAS Xu. In Proceedings of the 2020 AIAA/IEEE 39th Digital Avionics Systems Conference (DASC), 2020; IEEE; pp. 1–10. [Google Scholar]
  132. Sun, J.; Tang, J.; Lao, S. Collision avoidance for cooperative UAVs with optimized artificial potential field algorithm. IEEE Access 2017, 5, 18382–18390. [Google Scholar] [CrossRef]
  133. Azarm, K.; Schmidt, G. Conflict-free motion of multiple mobile robots based on decentralized motion planning and negotiation. In Proceedings of the International Conference on Robotics and Automation; 1997; Vol. 4, pp. 3526–3533. [Google Scholar] [CrossRef]
  134. Keong, C.W.; Shin, H.S.; Tsourdos, A. Reinforcement learning for autonomous aircraft avoidance. In Proceedings of the 2019 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED UAS); IEEE, 2019; pp. 126–131. [Google Scholar]
  135. Zhao, P.; Liu, Y. Physics informed deep reinforcement learning for aircraft conflict resolution. IEEE Transactions on Intelligent Transportation Systems 2021, 23, 8288–8301. [Google Scholar] [CrossRef]
  136. Ribeiro, M.; Ellerbroek, J.; Hoekstra, J. Improvement of conflict detection and resolution at high densities through reinforcement learning. In Proceedings of the ICRAT 2020: International conference on research in air transportation, 2020. [Google Scholar]
  137. Ribeiro, M.; Ellerbroek, J.; Hoekstra, J. Determining optimal conflict avoidance manoeuvres at high densities with reinforcement learning. In Proceedings of the Tenth SESAR Innovation Days, Virtual Conference, 2020; pp. 7–10. [Google Scholar]
  138. Pham, D.T.; Tran, N.P.; Alam, S.; Duong, V.; Delahaye, D. A machine learning approach for conflict resolution in dense traffic scenarios with uncertainties. 2019. [Google Scholar]
  139. Tran, P.N.; Pham, D.T.; Goh, S.K.; Alam, S.; Duong, V. An interactive conflict solver for learning air traffic conflict resolutions. Journal of Aerospace Information Systems 2020, 17, 271–277. [Google Scholar] [CrossRef]
  140. Julian, K.D.; Lopez, J.; Brush, J.S.; Owen, M.P.; Kochenderfer, M.J. Policy compression for aircraft collision avoidance systems. In Proceedings of the 2016 IEEE/AIAA 35th Digital Avionics Systems Conference (DASC); IEEE, 2016; pp. 1–10. [Google Scholar]
  141. Julian, K.D.; Kochenderfer, M.J.; Owen, M.P. Deep neural network compression for aircraft collision avoidance systems. Journal of Guidance, Control, and Dynamics 2019, 42, 598–608. [Google Scholar] [CrossRef]
  142. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Graves, A.; Antonoglou, I.; Wierstra, D.; Riedmiller, M. Playing Atari with Deep Reinforcement Learning. arXiv 2013, arXiv:1312.5602. [Google Scholar] [CrossRef]
  143. Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; Klimov, O. Proximal policy optimization algorithms. arXiv 2017, arXiv:1707.06347. [Google Scholar] [CrossRef]
  144. Lillicrap, T. Continuous control with deep reinforcement learning. arXiv 2015, arXiv:1509.02971. [Google Scholar]
  145. Christodoulou, P. Soft actor-critic for discrete action settings. arXiv 2019, arXiv:1910.07207. [Google Scholar] [CrossRef]
  146. Julian, K.D.; Kochenderfer, M.J. Guaranteeing safety for neural network-based aircraft collision avoidance systems. In Proceedings of the 2019 IEEE/AIAA 38th Digital Avionics Systems Conference (DASC); IEEE, 2019; pp. 1–10. [Google Scholar]
  147. Katz, G.; Barrett, C.; Dill, D.L.; Julian, K.; Kochenderfer, M.J. Reluplex: An efficient SMT solver for verifying deep neural networks. In Proceedings of the International conference on computer aided verification; Springer, 2017; pp. 97–117. [Google Scholar]
  148. Wang, S.; Pei, K.; Whitehouse, J.; Yang, J.; Jana, S. Formal security analysis of neural networks using symbolic intervals. In Proceedings of the 27th USENIX Security Symposium (USENIX Security 18), 2018; pp. 1599–1614. [Google Scholar]
  149. Bak, S.; Tran, H.D. Neural network compression of ACAS Xu early prototype is unsafe: Closed-loop verification through quantized state backreachability. In Proceedings of the NASA Formal Methods Symposium; Springer, 2022; pp. 280–298. [Google Scholar]
  150. European Union Aviation Safety Agency. Daedalean AG. Concepts of Design Assurance for Neural Networks (CoDANN) II with Appendix B. Technical report, European Union Aviation Safety Agency and Daedalean AG, 2024. Version 1.1.
  151. European Union Aviation Safety Agency. ToR RMT.0742: Artificial Intelligence Trustworthiness. Terms of Reference RMT.0742 Issue 1, European Union Aviation Safety Agency, 2024. ToR Series RMT.
  152. Baheri, A.; Ren, H.; Johnson, B.; Razzaghi, P.; Wei, P. A Verification Framework for Certifying Learning-Based Safety-Critical Aviation Systems. arXiv [eess]. 2022, arXiv:2205.04590. [Google Scholar] [CrossRef]
  153. Brittain, M.; Alvarez, L.E.; Breeden, K.; Jessen, I. AAM-Gym: Artificial Intelligence Testbed for Advanced Air Mobility. arXiv [cs]. 2022, arXiv:2206.04513. [Google Scholar] [CrossRef]
  154. Corso, A.; Moss, R.; Koren, M.; Lee, R.; Kochenderfer, M. A survey of algorithms for black-box safety validation of cyber-physical systems. Journal of Artificial Intelligence Research 2021, 72, 377–428. [Google Scholar] [CrossRef]
  155. Moss, R.J.; Kochenderfer, M.J.; Gariel, M.; Dubois, A. Bayesian Safety Validation for Failure Probability Estimation of Black-Box Systems. Journal of Aerospace Information Systems 2024, 21, 533–546. [Google Scholar] [CrossRef]
  156. Durand, J.G.; Dubois, A.; Moss, R.J. Formal and practical elements for the certification of machine learning systems. In Proceedings of the 2023 IEEE/AIAA 42nd Digital Avionics Systems Conference (DASC); IEEE, 2023; pp. 1–10. [Google Scholar]
  157. Katz, S.M.; Corso, A.L.; Yel, E.; Kochenderfer, M.J. Efficient determination of safety requirements for perception systems. In Proceedings of the 2023 IEEE/AIAA 42nd Digital Avionics Systems Conference (DASC); IEEE, 2023; pp. 1–10. [Google Scholar]
  158. Guo, W.; Brittain, M.; Wei, P. Safety Validation for Deep Reinforcement Learning Based Aircraft Separation Assurance with Adaptive Stress Testing. In Proceedings of the 2023 IEEE/AIAA 42nd Digital Avionics Systems Conference (DASC); IEEE, 2023; pp. 1–10. [Google Scholar]
Figure 1. Sensors and algorithms over the whole pipeline for CD&R and CA.
Figure 2. Overview of five sensor types for aerial object detection and their corresponding advantages and disadvantages.
Figure 3. Classification of alerting methods.
Figure 4. Classification of classical methods for CA.
Figure 5. Certification-driven patterns in AI-based CA.
Table 1. Summary of related work compared with our paper. CD: Conflict Detection, CR: Conflict Resolution, CA: Collision Avoidance.
Ref. CD CR CA Coop. Noncoop. Explainability/Certification Decentralized Autonomy
[14] X X X
[15] X X X
[16] X X X X
[17] X X
[18] X X X X
[19] X X X
this survey X X X X X X X
Table 2. Sensor technologies classified by sensor types and sensor position.
Sensor types Airborne or Ground-Based Reference Sensor Technology
Multiple sensors Airborne sensors [29,30] Optical and IR cameras, Radar
[31] LADAR, MMW Radar, optical and IR cameras
[32] LiDAR, Radar
[33] LiDAR, Stereo-cameras
Combination of airborne and ground-based sensors [34,35] Electro-Optical, Airborne Radar, Ground-Based Radar
Ground sensors [36] Distributed Radars
Single Sensor Airborne sensors [27] LiDAR
[28] X-Band Radar
Table 3. Overview of examined datasets needed for ML approaches for detection.
Dataset Annotations Stationary Moving ADS-B Radar Thermal Visual LiDAR
Airborne Object Tracking (AOT) Dataset [60] 3.3M+ X X
UAV point cloud segmentation dataset [61] 5.5k X X
MMAUD [62] 6 drone types X X X X
TartanAviation [63] 661 days, 3.1M X X X
Drone-detection-dataset [64] 200k+ X X X
UAVDB [65] 18k X X
SynDroneVision [66] 140k X
Table 4. Overview of examined approaches for aerial object detection divided into classical and ML-based techniques.
Sensor Approach Reference Technology
ADS-B Classical [37,38,39] flight-path modeling, RF fingerprinting, cosine-similarity
ML [68,69,70,71] CNN-based flight-trajectory prediction, anomaly detection, and intrusion detection
Radar Classical [41,42,43,44] CFAR probability estimation, Doppler methods
ML [72,73,74] CNN-based object detection
Thermal Classical [45,46] statistical sensitivity analysis, background extraction
ML [75,76] CNN- and transformer-based feature extraction and object detection
Visual Classical [47,48,49,50,51] optical flow, SURF feature matching, HMM filter
ML [77,78,79,80,81,82,83] YOLO, CNN- and transformer-based, foundation model
LiDAR Classical [52,53,54,55,56] clustering, SOCP, RANSAC, DBSCAN, CBRDD
ML [84,85] CL-Det, DeFlow
Table 5. Data Association, Filtering and Tracking algorithms for Detect and Avoid Systems.
Reference Data Association algorithm Filtering and Tracking
[29,30] Ellipsoidal Gating EKF
[31] Track-to-Track
[34,35] MHT EKF
[86] MHT IMM
[87] Mahalanobis Distance IMM
Table 6. Surveyed non-learning-based CA methods.
Category References Brief description
Rule-based [113] Based on the General Flight Rule.
[114] Based on the Visual Flight Rule.
[115] Rule-based Deconfliction method based on three stages.
[116] Swarm CA based on Reynolds rules.
Game-theoretic methods [117] Pursuit-Evasion Differential Game.
[118] Suicidal Pedestrian (Pursuit-Evasion) Differential Game.
[119] Pursuit-Evasion Simultaneous Game.
Geometric [120] Collision Cone followed by Differential Geometry.
[121] Collision Cone followed by Proportional Guidance.
[122] Cooperative Geometrical approach based on Miss Distance.
[5,123] Modified Voltage Potential
Probabilistic approaches [24] MDP and dynamic programming.
Potential Field-based methods [124,125] Artificial Potential Field.
Table 7. Surveyed AI-based CA methods.
Category References Brief description
Reinforcement Learning [134] DQN.
[135] PPO from SSD-like graphical conflict representation.
[136] DDPG with pre-training of the critic network using the MVP method.
[137] DDPG to optimize MVP parameters.
[111] Attention networks followed by SACD.
[112] Multi-agent PPO for distributed conflict resolution.
[6] Safe-DQN.
[138] DDPG for optimal maneuver parameters and DQN for selecting the time of heading change.
[139] DDPG from ATCO demonstrations.
Value-function approximation [140,141] Function approximation of the large ACAS Xu score table obtained via dynamic programming.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.