Preprint
Review

This version is not peer-reviewed.

Event-Based Vision Application on Autonomous Unmanned Aerial Vehicle: A Review of Prospects and Challenges

Submitted: 17 November 2025
Posted: 19 November 2025


Abstract
Event camera vision systems are gaining traction as swift and agile sensing devices in the field of unmanned aerial vehicles (UAVs). Despite their inherent capabilities, including high dynamic range, microsecond-level temporal resolution, and robustness to motion distortion, which allow them to capture fast and subtle scene changes that conventional frame-based cameras often miss, their utilization is yet to become widespread. This is due to challenges such as insufficient real-world validation, unstandardized simulation platforms, limited hardware integration, and a lack of ground-truth datasets. This systematic review investigates the dynamic vision sensor, commonly called the event camera, and its integration into UAVs. The review synthesizes peer-reviewed articles published between 2015 and 2025 across five thematic domains: datasets, simulation tools, algorithmic paradigms, application areas, and future directions. It reveals that event cameras outperform traditional frame-based systems in terms of latency and robustness to motion blur and lighting conditions, enabling reactive and precise UAV control. However, challenges remain in standardizing evaluation metrics, improving hardware integration, and expanding annotated datasets, all of which are vital for adopting event cameras as reliable components in autonomous UAV systems.

1. Introduction

This section and its sub-sections (1.1 to 1.3) present introductory technical information covering the study's background, which introduces the concept of event camera vision systems, their operating principles, and their advantages over conventional frame-based cameras for fast autonomous UAV operations. The discussion covers how event cameras asynchronously capture brightness changes at individual pixels, enabling high temporal resolution and low latency. Additionally, the sections highlight key features such as high dynamic range and reduced data redundancy, which make event cameras well-suited to fast and challenging visual environments. The background also touches on the challenges of processing event-based data and the need for specialized algorithms to fully leverage their unique output characteristics.

1.1. Background to Study

Event cameras, sometimes referred to as dynamic vision sensors (DVS), represent a paradigm shift in visual sensing: rather than taking full-frame pictures, they capture scene changes [23]. They are asynchronous sensors that sample light based on the dynamics of the scene, rather than on a clock that is unrelated to the image being observed [23]. In contrast to conventional cameras, whose pixels share a common exposure time, event-based cameras operate asynchronously at the pixel level with microsecond resolution [29]. This is especially helpful in situations with fast action, where typical cameras might blur motion or need unreasonably high frame rates to capture details [31]. This speed is essential for real-time UAV operations such as visual SLAM, obstacle avoidance, odometry, and collision prevention under low-light conditions.
UAVs, commonly known as drones or unmanned aircraft systems (UAS), have seen widespread adoption across sectors including, but not limited to, entertainment, the military, precision agriculture, smart city systems, wildlife conservation and monitoring, and logistics and delivery services, highlighting the critical need for sophisticated aerobotic navigation systems with swift dynamic capabilities, covering state-space tracking of fast-moving objects, collision prevention, and obstacle avoidance. These intelligent agents have proven significant in the advancement of smart city transit systems [1], precision agriculture [2], the support of search and rescue operations [3], shipping and delivery such as Amazon Air [4], aerial photography [5], wildlife monitoring and conservation [6], and entertainment [7], among several others. Originally, UAVs were mostly used for military purposes, where they were very important for reconnaissance, surveillance, and targeted operations [8]. In the last few years, however, commercial, scientific, and recreational uses have become more important. Advances in microelectromechanical systems (MEMS), sensors, and battery technology have also contributed to making UAVs smaller and more affordable for a wider range of users [9]. Due to these technological advancements, businesses in fields like agriculture, real estate, and filmmaking have started adopting UAVs in their work, helping them operate more efficiently, gather more data, and save money [10,11]. The usefulness of UAVs in humanitarian and disaster relief work is especially interesting. Drones have been used for tasks such as detecting forest fires, mapping floods, and assessing damage after earthquakes; because they are unmanned, they provide valuable information while keeping people safe [12,13,14]. UAVs are also expected to play a large role in making cities more connected and automated as part of Industry 4.0 and the push for smart cities. Despite this widespread adoption, there have been several reports of crashes due to collisions with static and dynamic obstacles, as highlighted by [15], whose analysis of 60 UAV accident reports identified design flaws and pilot response issues as the key causative factors. Urban environments, characterized by high-rise buildings, utility poles, and other obstacles, underscore the necessity of sophisticated avoidance techniques to mitigate potential collision risks. UAVs can perform repetitive tasks more efficiently, but only if they can navigate accurately. This requires them to process information and make decisions quickly, as well as to perceive their environment with high speed and precision. Achieving this level of autonomous navigation is crucial for UAVs to operate effectively, especially in dynamic environments where rapid response and adaptability are essential [16].
In the last few years, researchers have done tremendous work on using event camera vision systems for fast autonomous navigation of UAVs in dynamic environments. This vision system offers a paradigm shift by capturing changes in brightness asynchronously, providing high temporal resolution, low latency, low power consumption, and high dynamic range, and it has been widely adopted not just for autonomous navigation in UAVs but across the entire field of computer vision [16,17,18,19,20]. Leveraging event camera vision systems for dynamic obstacle avoidance in UAVs opens up numerous practical applications, including aerial imaging, last-mile delivery, and the urban air mobility market, which is experiencing rapid growth and is forecast to reach USD 132.36 billion by 2035 [21]. This capability is especially significant given the safety concerns associated with operating aerial vehicles above crowds, as recent incidents have highlighted the risks posed by drones colliding with birds or objects thrown at quadrotors during public events. By reducing the temporal latency between perception and action, this technology helps prevent collisions, a non-negligible risk factor in urban environments, as well as severe hardware failures that could lead to loss [22]. These characteristics make event cameras well suited to robotics and computer vision applications where conventional cameras are ineffective, such as situations requiring high dynamic range or speed [23].
Event cameras respond to scene changes rather than capturing full images at a fixed rate: instead of recording everything continuously, they report only what changes, avoiding redundant data and capturing very fast motion without missing it. This type of sensor has high temporal resolution, low latency, and high sensitivity to light, which allows it to estimate motion effectively and consistently even in the most complex situations [24]. Autonomous drones without event cameras react within tens of milliseconds, which falls short for swift navigation in complex, dynamic environments. To safely avoid collisions with fast-moving objects, drones need sensors and algorithms with minimal latency [25]. Similarly, [26] highlight the necessity of low latency for navigating unmanned aerial vehicles around dynamic obstacles. Event cameras stand out in these contexts due to their high dynamic range. For instance, [27] proposed an entirely asynchronous method for monitoring intruders using unmanned aircraft systems (UAS), leveraging event cameras' unique properties. Compared to conventional cameras, event cameras offer significant advantages such as high temporal resolution (on the order of microseconds), an exceptionally high dynamic range (140 dB versus the typical 60 dB), low power consumption, and high pixel bandwidth (in kHz), which minimizes motion blur. Consequently, event cameras show strong potential for robotics and computer vision in scenarios where traditional cameras may fall short. They also produce a sparser and lighter data output, making processing more efficient [23,25].
The integration of event-based vision in UAV systems represents a critical juncture in the evolution of aerial autonomy. While numerous individual studies have explored aspects of this integration, the existing body of knowledge on UAVs remains fragmented. Researchers face a lack of consolidated information regarding the current state of event camera usage in UAVs, especially in areas such as publicly available datasets with ground truth, simulation environments, algorithmic developments, and real-world applications. Several notable reviews exist, such as [23] and [28], but none of them focuses on UAVs, a technology that is receiving global attention and requires a rigorous approach to automation. This fragmentation poses a barrier for newcomers to the UAV field who could leverage the advantages of this camera over standard cameras.
This systematic literature review (SLR) seeks to bridge this gap by synthesizing recent advancements, identifying core limitations, and uncovering future possibilities for event cameras in UAV applications. It highlights the critical need for event cameras for fast autonomous sensing in UAVs, enabling rapid response to dynamic and complex environments. As shown in Figure 2, this review is divided into eight sections. Section 1 discusses the background of UAVs and the need to leverage event cameras for fast autonomous navigation. Section 2 describes how the standard PRISMA approach was used to organise articles for the systematic literature review. Section 3 discusses the models and algorithms researchers have applied to UAV applications using this camera, categorized into geometric, learning-based, neuromorphic, and hybrid approaches. Section 4 covers various real-life applications of UAVs. Section 5 points researchers to the available datasets and open-source simulation tools. Section 6 provides a descriptive analysis of results that shows the relevance of this camera for UAV applications. Section 7 presents the challenges and proposes directions for future work, aiming to accelerate innovation and practical adoption of event-based vision in autonomous aerial systems, and finally, Section 8 concludes the review.
Figure 2. Research Structure.
The review is guided by five interrelated objectives:
i. To examine existing algorithms and techniques - spanning geometric methods, deep learning-based approaches, neuromorphic computing, and hybrid strategies - for processing event data in UAV settings. Understanding how these algorithms outperform or fall short compared to traditional vision pipelines is central to validating the potential of event cameras.
ii. To explore the diverse real-world applications of event cameras in UAVs, such as obstacle avoidance, SLAM, object tracking, infrastructure inspection, and GPS-denied navigation. This review highlights both the demonstrated benefits and the operational challenges faced in field deployment.
iii. To catalog and critically assess the publicly available event camera datasets relevant to UAVs, including their quality, scope, and existing limitations. A well-curated dataset is foundational for algorithm development and benchmarking.
iv. To identify and evaluate open-source simulation tools that support event camera modeling and their integration into UAV environments. Simulators play a vital role in reducing experimental costs and enabling reproducible research.
v. To project the future potential of event cameras in UAV systems, including the feasibility of replacing standard cameras entirely, emerging research trends, hardware innovations, and prospective areas for interdisciplinary collaboration.
By organizing the literature according to these five thematic pillars, this review offers a structured resource for scholars, engineers, and practitioners in robotics, computer vision and autonomous systems working on UAV navigation and perception. Furthermore, it identifies unresolved challenges, benchmarks current progress, and proposes directions for future work, aiming to accelerate innovation and practical adoption of event-based vision in autonomous aerial systems.

1.2. Basic Principles of an Event Camera

Event cameras are bio-inspired sensors that differ from conventional cameras in that they measure per-pixel brightness changes asynchronously, whereas conventional cameras capture images at a fixed rate [23]. Every pixel in an event camera continually and independently tracks changes in intensity. A pixel that registers a notable shift in light intensity, either increasing or decreasing, creates an "event" that contains its location, the polarity of the change (brightening or darkening), and an exact timestamp [30]. The fundamental idea underlying event cameras is the asynchronous recognition of scene changes, enabling them to operate with very high temporal resolution, frequently in the microsecond range.
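To make this operating principle concrete, the following minimal Python sketch (illustrative only, not tied to any particular sensor's API) emits an event whenever the log intensity at a single pixel has changed by more than a contrast threshold C since the last event:

from dataclasses import dataclass
import math

@dataclass
class Event:
    x: int          # pixel column
    y: int          # pixel row
    t: float        # timestamp in seconds (microsecond resolution in real sensors)
    polarity: int   # +1 for brightening, -1 for darkening

def events_for_pixel(x, y, samples, C=0.2):
    """Emit events for one pixel given (timestamp, intensity) samples.

    An event is fired whenever the log intensity has changed by more than
    the contrast threshold C since the last event at this pixel.
    """
    events = []
    t0, i0 = samples[0]
    ref_log = math.log(i0)
    for t, intensity in samples[1:]:
        delta = math.log(intensity) - ref_log
        if abs(delta) >= C:
            events.append(Event(x, y, t, +1 if delta > 0 else -1))
            ref_log = math.log(intensity)  # reset the reference after each event
    return events

# Example: a pixel whose brightness ramps up quickly, then dims again.
samples = [(0.000, 100), (0.001, 130), (0.002, 170), (0.003, 120)]
print(events_for_pixel(10, 20, samples))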
The capacity of event cameras to function in difficult lighting settings is another important benefit. Conventional cameras find it difficult to simultaneously capture bright and dark areas in scenes with a large dynamic range, whereas event cameras are sensitive only to changes in intensity. Because only pixels that undergo a change are captured, the data produced by event cameras is also sparse and compact, requiring less data bandwidth and enabling more efficient processing [32]. These characteristics make event cameras well suited to robotics, autonomous driving, and surveillance applications where quick, low-latency vision is essential. Nevertheless, the asynchronous character of the data presents difficulties for conventional computer vision algorithms, requiring the creation of new processing methods tailored to the particular data format of event cameras [23,32].
Conventional image sensors and event-based cameras function very differently. Traditional cameras collect images at fixed frame rates, whereas event-based cameras detect changes in pixel intensity asynchronously, offering high temporal resolution and little motion blur. This makes event cameras well suited to situations where standard sensors frequently falter, such as high-speed or low-light conditions. Furthermore, event cameras process sparse data and use less power, which improves efficiency in real-time applications like UAV navigation [23]. Traditional cameras, on the other hand, work better in static contexts (such as object recognition tasks) where full-frame information is essential. Although integrating event cameras with traditional computer vision algorithms remains a challenge, they perform best in dynamic environments where only small parts of the scene change [33].

1.3. Types of Event Cameras

This subsection presents the different categories of event cameras. The itemization spans five different types, as highlighted in Table 1. Their detailed modes of operation and identified gaps are also presented.

2. Materials and Methods

This review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) approach [37], as shown in Figure 3. Five main research questions are addressed in this study: which open-source simulation tools facilitate the integration of event cameras into UAV systems; which event camera datasets are publicly available for UAV applications and what their limitations are; which major models or algorithms have been developed for event-based UAV perception and how well they perform in comparison to standard vision systems; which UAV applications have successfully deployed event cameras and what challenges have arisen; and which emerging future directions and innovations for event cameras in UAV applications are being explored.

2.1. Search Terms

Using extensive database coverage across seven major academic repositories, the search strategy included: IEEE Xplore for robotics and sensor technologies; SpringerLink for algorithmic and control system papers; Web of Science for highly cited scientific publications; ACM Digital Library for computer vision and real-time systems; Scopus for interdisciplinary engineering studies; arXiv for state-of-the-art preprints; and MDPI for open-access UAV and vision system studies. Boolean operators and wildcards were used strategically with phrases like "Event camera" OR "dynamic vision sensor" OR "neuro-morphic camera" along with specific modifiers like "Event camera" AND ("UAV" OR "drone" OR "unmanned aerial vehicle"), "Dynamic vision sensor" AND ("navigation" OR "SLAM" OR "visual odometry"), and "Event-based" AND ("optical flow" OR "object tracking" OR "collision avoidance").

2.2. Search Procedure

Peer-reviewed articles, conference proceedings, and high-quality preprints that directly applied event cameras in UAV contexts with empirical evaluation of systems, algorithms, or datasets related to UAV navigation, perception, tracking, SLAM, or object recognition were the only publications from 2015–2025 that met the inclusion criteria. Pre-2015 publications, non-English studies, event camera research unrelated to UAV applications, work without methodological rigor or empirical validation, duplicate publications or secondary summaries, and studies focusing solely on hardware or biological vision systems without any application to UAV robotics were all excluded based on the exclusion criteria.
Data extraction was done using a detailed structured matrix that recorded bibliographic data (authors, publication year, source, and title); study specifications (UAV platform type, experimental environment, and primary research objectives); technical details (event camera specifications (DVS, DAVIS, and Prophesee Gen4), sensor resolution, temporal resolution, and power consumption metrics; algorithmic classification into geometric approaches, learning-based methods, neuro-morphic computing approaches, and hybrid sensor fusion prototypes); and performance metrics (precision measures, response time and latency, robustness under different conditions, power efficiency, and cross-platform transferability).
The supporting tools include Python with Pandas for quantitative summaries and exploratory data visualization, VOSviewer for clustering, Microsoft Excel for data analysis and cross-tabulation, and Mendeley for source organization and citation management. Five standardized criteria were used to evaluate each study for quality assessment: reproducibility through the availability of source code, datasets, or clear implementation details; methodological rigor through valid experimental design and evaluation procedures; innovation and contribution through novel techniques or applications for event-based UAV perception; empirical validation with quantitative results and benchmark comparisons; and clarity of objectives regarding event camera-UAV research goals. A binary system was used to score the studies; papers that satisfied at least four of the five criteria were prioritized as high-quality contributions. The lead reviewer carried out the quality assessment independently, with secondary verification to ensure consistency.
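As an illustration of how such an extraction matrix can be summarized with Pandas, the following sketch uses hypothetical rows (not the actual review data) to produce a per-year count of studies by algorithmic category:

import pandas as pd

# Hypothetical rows of the extraction matrix (illustrative only).
records = [
    {"year": 2020, "approach": "Geometric", "camera": "SEES1"},
    {"year": 2022, "approach": "Learning",  "camera": "DVS"},
    {"year": 2023, "approach": "Hybrid",    "camera": "DAVIS346"},
    {"year": 2024, "approach": "Hybrid",    "camera": "Prophesee EVK4-HD"},
]
df = pd.DataFrame(records)

# Count of studies per algorithmic category per year (cf. the method-trend figure).
summary = df.groupby(["year", "approach"]).size().unstack(fill_value=0)
print(summary)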
Figure 3. PRISMA Flowchart.
Keywords: event based camera, event-based camera, dynamic vision sensor, DVS, unmanned aerial vehicle, UAVs, drone
Databases: Google Scholar, Scopus, ProQuest and Web of Science
Boolean operators: OR, AND
Language: English
Year of publication: 2015 to 2025
Inclusion criteria: Event-based camera in UAVs
Exclusion criteria: Not English
Document type: Published scientific paper in academic journals
Figure 4. Bibliometric Mapping of Event Camera Applications in UAVs.
Using the VOSviewer software [38], Figure 4 demonstrates the interdisciplinary nature of event-based UAV research, highlighting strong connections between neuromorphic sensing, autonomous navigation, and real-time visual processing, which together form the foundation of current event-driven aerial navigation development.

3. Models and Algorithms

Processing event data on UAVs is a critical task, because the algorithms used to interpret event streams largely determine which applications can be built on them. Owing to the asynchronous and distinctive nature of event streams, algorithmic approaches differ to a large extent from those adopted in frame-based vision. These algorithms are therefore grouped into four distinct categories: the geometric approach, learning-based methods, the neuromorphic computing approach, and hybrid sensor integration methods.

3.1. Geometric Approach

The geometric approach comprises the foundational category of methods used for ego-motion compensation and optical flow in UAVs equipped with event cameras, building on the principles of projective geometry and rigid-body transformations. [45] presented a method for real-time optical flow using a DVS sensor on a miniature embedded system suitable for autonomous flying robots. The local flow $\mathbf{v}(x, y)$ at each pixel is modeled as a linear combination of three global motion components (pan, tilt, and yaw rotations), represented by the flow fields $\mathbf{v}_x$, $\mathbf{v}_y$, and $\mathbf{v}_z$:
$$\mathbf{v}(x, y) = \alpha\, \mathbf{v}_x + \beta\, \mathbf{v}_y + \gamma\, \mathbf{v}_z,$$
where $\alpha$, $\beta$, and $\gamma$ are the coefficients representing pan, tilt, and yaw, respectively.
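Assuming the basis flow fields are precomputed from the camera geometry (an assumption of this sketch, not a detail given in [45]), the coefficients can be recovered from measured per-pixel flow by ordinary least squares:

import numpy as np

def fit_global_motion(v_meas, v_x, v_y, v_z):
    """Fit (alpha, beta, gamma) so that v_meas ~ alpha*v_x + beta*v_y + gamma*v_z.

    Each argument is an (N, 2) array of per-pixel flow vectors; the basis fields
    v_x, v_y, v_z are assumed to be precomputed from the camera geometry.
    """
    A = np.stack([v_x.ravel(), v_y.ravel(), v_z.ravel()], axis=1)  # (2N, 3)
    b = v_meas.ravel()                                             # (2N,)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs  # [alpha, beta, gamma]

# Toy example: synthesize flow from known coefficients and recover them.
rng = np.random.default_rng(0)
v_x, v_y, v_z = (rng.normal(size=(100, 2)) for _ in range(3))
v_meas = 0.5 * v_x - 0.2 * v_y + 1.0 * v_z
print(fit_global_motion(v_meas, v_x, v_y, v_z))  # approximately [0.5, -0.2, 1.0]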
Event-based visual odometry is one of the dominant and earliest recognized techniques in this category, introduced by [39]; it tracks and estimates camera motion without frames, attaining a rotation error of about 0.8° and a translation error close to 2%, with a computational efficiency appropriate for onboard UAV processing. Moreover, [40] built on this work with a low-latency visual odometry technique that minimizes delays.
Contrast maximization, introduced by [41], takes an entirely different route by optimizing alignment through motion-compensated contrast in the event stream. While it is powerful in static scenes, its rigid-scene assumption leaves it vulnerable to interference from independently moving objects.
Dynamic vision sensors are bio-inspired sensors that record per-pixel intensity changes rather than intensity images.
An event $e_k \doteq (\mathbf{x}_k, t_k, p_k)$ is triggered when the logarithmic intensity $L(\mathbf{x}_k, t_k)$ at a pixel changes by more than a contrast threshold $C > 0$. The logarithmic intensity change is modelled as
$$L(\mathbf{x}_k, t_k) - L(\mathbf{x}_k, t_k - \Delta t_k) = p_k\, C,$$
where $\mathbf{x}_k = (x_k, y_k)^{\top}$ are the spatial coordinates of the event, $t_k$ is its timestamp (with microsecond resolution), the polarity $p_k \in \{+1, -1\}$ is the sign of the intensity change, and $\Delta t_k$ is the time elapsed since the previous event at the same pixel.
Gallego et al. (2018) modelled contrast maximization with a mathematical framework for the dynamic vision sensor. Given a set of events $\mathcal{E} = \{e_k\}_{k=1}^{N_e}$, each event is warped to a reference time $t_{\mathrm{ref}}$,
$$e_k \doteq (\mathbf{x}_k, t_k, p_k) \;\longmapsto\; e_k' \doteq (\mathbf{x}_k', t_{\mathrm{ref}}, p_k).$$
According to their work, a motion model $\mathbf{W}$ produces the set of warped events $\mathcal{E}' = \{e_k'\}_{k=1}^{N_e}$ through the warp
$$\mathbf{x}_k' = \mathbf{W}(\mathbf{x}_k, t_k; \boldsymbol{\theta}),$$
which transports each event along the point trajectory defined by the motion parameters $\boldsymbol{\theta}$ until the reference time $t_{\mathrm{ref}}$ is reached. An objective image, called the image of warped events (IWE), is built to measure the alignment of the warped events:
$$I(\mathbf{x}; \boldsymbol{\theta}) = \sum_{k=1}^{N_e} b_k\, \delta\big(\mathbf{x} - \mathbf{x}_k'(\boldsymbol{\theta})\big),$$
where the pixel $\mathbf{x}$ sums the contributions $b_k$ of the warped events that fall within it, with $b_k = p_k$ if polarity is used and $b_k = 1$ otherwise.
Continuous-time trajectory estimation was proposed by [42]; this motion model is tied to a continuous function rather than discrete poses, which aligns better with the asynchronous nature of event data. In addition, [43] introduced EVO, a 6-DOF parallel tracking and mapping system that processes events in a timely manner, although its performance degrades in low-texture settings. [39], in their work, explored how standard cameras send full frames at a fixed frame rate, whereas event cameras use independent pixels that continuously report intensity changes on the image plane. For a given intensity $I$, the sensor generates an event at the point $\mathbf{u} = (x, y)^{\top}$ when the change in logarithmic intensity exceeds the contrast threshold,
$$|\Delta \log I| \approx \big| \langle \nabla \log I,\ \dot{\mathbf{u}}\, \Delta t \rangle \big| > C,$$
where $\nabla \log I$ is the spatial gradient of the log intensity and $\dot{\mathbf{u}}$ is the motion field over the time interval $\Delta t$. These events are recorded with a timestamp and transmitted asynchronously by the sensor's readout electronics. The camera's events form tuples $e_k = (x_k, y_k, t_k, p_k)$, where $(x_k, y_k)$ are the pixel coordinates of the event, $p_k$ is the polarity, and $t_k$ is the timestamp.
Returning to the contrast-maximization framework, in practice the delta function in the IWE is replaced with a smooth approximation such as a Gaussian,
$$\delta(\mathbf{x} - \mathbf{x}_k') \approx \mathcal{N}(\mathbf{x}; \mathbf{x}_k', \epsilon^2), \quad \text{with } \epsilon = 1 \text{ pixel}.$$
The objective function of the IWE model is its variance,
$$G(\boldsymbol{\theta}) = \operatorname{Var}\big(I(\mathbf{x}; \boldsymbol{\theta})\big) \doteq \frac{1}{|\Omega|} \int_{\Omega} \big(I(\mathbf{x}; \boldsymbol{\theta}) - \mu_I\big)^2\, d\mathbf{x},$$
where $\Omega$ is the image domain and $\mu_I$ is the mean of the IWE,
$$\mu_I \doteq \frac{1}{|\Omega|} \int_{\Omega} I(\mathbf{x}; \boldsymbol{\theta})\, d\mathbf{x}.$$
Hence, the motion parameters are estimated by maximizing the contrast:
$$\boldsymbol{\theta}^{*} = \arg\max_{\boldsymbol{\theta}} G(\boldsymbol{\theta}).$$
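To make the contrast-maximization recipe concrete, the following NumPy sketch warps events with a candidate image-plane velocity, accumulates them into an IWE with nearest-neighbour accumulation (instead of the Gaussian-smoothed delta above), and grid-searches the velocity that maximizes the variance. It is a simplified illustration of the framework, not the reference implementation:

import numpy as np

def iwe_variance(events, theta, t_ref, shape=(180, 240)):
    """Warp events with image-plane velocity theta=(vx, vy) and return the IWE variance.

    events: array of shape (N, 4) with columns (x, y, t, polarity).
    A higher variance indicates sharper (better motion-compensated) event alignment.
    """
    x, y, t, p = events.T
    # Warp each event back to the reference time along a constant-velocity trajectory.
    xw = np.round(x - (t - t_ref) * theta[0]).astype(int)
    yw = np.round(y - (t - t_ref) * theta[1]).astype(int)
    inside = (xw >= 0) & (xw < shape[1]) & (yw >= 0) & (yw < shape[0])
    iwe = np.zeros(shape)
    np.add.at(iwe, (yw[inside], xw[inside]), p[inside])  # b_k = p_k (polarity used)
    return iwe.var()

def contrast_maximization(events, t_ref, v_range=np.linspace(-200, 200, 41)):
    """Grid-search the velocity that maximizes the IWE variance."""
    candidates = [(vx, vy) for vx in v_range for vy in v_range]
    scores = [iwe_variance(events, th, t_ref) for th in candidates]
    return candidates[int(np.argmax(scores))]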
Overall, the geometric approach offers attractive computational simplicity and suitability for resource-constrained UAVs; its limitations stem from scene sparsity, sensitivity to noise, and dynamic scene elements.
Table 2. Summary of Relevant contributions using Geometric approaches.
Author Year DVS Type Evaluation Application/Domain Future Direction
[44] 2015 DVS 128 Real indoor test flight with a miniature quadcopter Navigation The research targeted indoor environments; dynamic scenes with more complex environments are required
[45] 2019 DVS 128 Vision Aid Landing Further work should focus on the robustness and the accuracy of the landmark detection especially in a complex scene.
[25] 2020 SEES1 Real-world experiment with a quadrotor Dynamic obstacle avoidance This approach models obstacles as ellipsoids and relies on a sparse representation; extending it to more complex environments with non-ellipsoidal obstacles and cluttered urban scenes remains a challenge
[46] 2023 Nil Real-life with UAV Navigation and control The algorithm is limited to 3-DoF displacement (translation) and does not incorporate changes in orientation, limiting its capability to fully determine the 6-DoF pose.
[47] 2023 DAVIS 240C MVSEC Dataset Navigation The authors recommended a complete SLAM framework for high-speed UAVs based on event cameras
[48] 2024 CeleX-5 Real world with UAV and simulation with Unreal Engine and AirSim Powerline inspection and tracking Lack of datasets in this domain and the inability of the model to accurately distinguish between powerlines and non-linear objects in a complex scene
[49] 2024 DAVIS 326 Real-world with Octorotor UAV in indoor and outdoor Load transport; cable swing minimization Future work could focus on enhancing event detection robustness during larger cable swings, developing more sophisticated fusion techniques, and extending the method's applicability to dynamic, highly noisy environments.

3.2. Learning-Based Methods

Deep learning techniques have been developed to overcome many of the drawbacks of geometric models, especially when it comes to managing dynamic, complicated scenes.
E2VID [50] was one of the earliest models to translate event streams into standard frames for CNN processing; although this allowed high-quality image reconstruction, the main temporal benefits of event data were sacrificed. When applied to autonomous driving, [51] demonstrated that this technique worked well for simple navigation.
EV-FlowNet [52], a self-supervised optical flow estimator, preserved the event structure, attaining strong accuracy (0.32 average endpoint error) and resilience in demanding settings.
Event-based dynamic tracking has also advanced: Mitrokhin et al. ([53,54]) created robust object detection techniques for harsh illumination conditions and extended them using the EV-IMO dataset [53].
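Many learning-based pipelines first convert the asynchronous event stream into a dense tensor, for example a voxel grid of temporal bins, that a CNN or transformer can consume. The sketch below shows one generic way to do this (illustrative only; the cited works use their own, often more elaborate, representations):

import numpy as np

def events_to_voxel_grid(events, num_bins=5, height=260, width=346):
    """Accumulate events (x, y, t, polarity) into a (num_bins, H, W) voxel grid.

    events: (N, 4) array with columns (x, y, t, polarity). Events are split into
    equal temporal bins over their time span, and the signed polarity of each
    event is added to its bin/pixel cell.
    """
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    x, y, t, p = events.T
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9)       # normalize to [0, 1]
    b = np.clip((t_norm * num_bins).astype(int), 0, num_bins - 1)
    np.add.at(grid, (b, y.astype(int), x.astype(int)), p)
    return grid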
Table 3. Summary of Relevant contributions using Learning based approaches.
Author Year Event Camera Type Method of Evaluation Application/Domain Learning Method Future Direction
[55] 2022 DVS The system was evaluated via simulation trials in Microsoft AirSim, Event-based object detection, obstacle avoidance Deep reinforcement learning The study highlights the need to optimize network size for better perception range, design new reward functions for dynamic obstacles, and incorporate LSTM for improved dynamic obstacle sensing and avoidance in UAVs
[56] 2023 DAVIS346 Real-world testing with a hexarotor UAV equipped with both event and frame-based cameras, and simulation in MATLAB Simulink Visual servoing robustness Deep reinforcement learning The proposed DNN with noise-protected MRFT lacks robust high-speed target tracking under noisy visual sensor data and slow update-rate sensors; future directions include developing adaptive system identification for high-velocity targets and optimizing neural network-based tuning to improve real-time accuracy under varying sensor delays and noise conditions
[57] 2024 Prophesee camera EVK4–HD To bridge the data gap the first large scale high resolution event-based tracking dataset called EventVot was produced through UAVs and used for real world evaluation Obstacle localization; navigation Transformer-based neural networks
The high-resolution capability of the Prophesee EVK4–HD camera (1280 × 720) opens new avenues for improving event-based tracking, but it also introduces additional challenges, such as increased computational complexity and data processing requirements.
[58] 2024 DAVIS 346c Real-world testing in a controlled environment with a hexacopter Obstacle avoidance Graph Transformer Neural Network (GTNN) Real-world experiments in complex environments are limited in this research

3.3. Neuromorphic Computing Approach

Neuromorphic approaches seek to preserve the biological analogy of event data, exploiting its spike-like characteristics, and are particularly applicable to UAVs with limited power budgets. For UAV obstacle avoidance, [59] employed spiking neural networks to present a behavioural simulation of the locust Lobula Giant Movement Detector (LGMD) neurons, which sense looming objects and thereby help UAVs avoid obstacles. The mixed-signal SLAM system known as NeuroSLAM [60] was developed specifically for neuromorphic devices. [61] examined neuromorphic robotics platforms, and [62] demonstrated Intel's Loihi chip in UAV perception. Li et al. (2023) put into practice a bionics-based recovery mechanism for micro air vehicles, whereas [63] created a fused vision system for quick target localization. Despite their enormous potential for ultra-efficient UAV vision, neuromorphic approaches are currently hindered by the scarcity of the required hardware.
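Spiking neural networks such as the LGMD model are built from neurons that integrate incoming event spikes over time. The following leaky integrate-and-fire (LIF) update is a generic, illustrative building block, not the specific neuron model used in the cited works:

import numpy as np

def lif_neuron(input_spikes, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0, weight=0.5):
    """Simulate one leaky integrate-and-fire neuron driven by a binary spike train.

    The membrane potential decays with time constant tau, jumps by `weight` for
    each input spike, and emits an output spike (then resets) on crossing v_thresh.
    """
    v = v_reset
    output = []
    for s in input_spikes:
        v += (-v / tau) * dt + weight * s   # leak + synaptic input
        if v >= v_thresh:
            output.append(1)
            v = v_reset
        else:
            output.append(0)
    return output

# A dense burst of input spikes (e.g., from a looming object) drives the neuron to fire.
spikes_in = np.concatenate([np.zeros(20), np.ones(10), np.zeros(20)]).astype(int)
print(sum(lif_neuron(spikes_in)))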
Table 4. Summary of Relevant contributions using Neuromorphic computing approaches.
Author Year Event Camera Type Method of Evaluation Application/Domain Model Future Direction
[59] 2017 DVS real-world recorded data from a DVS mounted on a QUAV Obstacle avoidance Spiking Neural network model of LGMD Integrate motion direction detection (EMD) and enhance sensitivity for diverse stimuli
[64] 2019 DVS240 Real-world testing in indoor environment using the actual data from the DVS sensor and simulation testing using data that was processed through an event simulator (PIX2NVS) Drone detection
SNNs trained using spike-timing-dependent plasticity (STDP). The model was tested in an indoor environment. Exploring the system in a resource-constrained environment is critical
[65] 2020 DAVIS240C Real-world experiment on a two-motor 1-DOF UAV SLAM PID+SNN The authors suggested the potential for integrating adaptation mechanisms and online learning into the SNN-based controllers by utilizing the chip's on-chip plasticity
[66] 2023 Simulated DVS implemented through the v2e tool within the AirSim environment Obstacle avoidance
Deep Double Q-Network (D2QN) integrated with SNN and CNN Improve network architecture for better performance in real world
[67] 2025 - Real-world experiment on drone Obstacle avoidance Chiasm-inspired Event Filtering (CEF) and LGN-inspired Event Matching (LEM), Extending the design principle beyond obstacle avoidance to navigation

3.4. Hybrid Sensor Integration Methods

The drawbacks of single-sensor systems are addressed by hybrid techniques, which combine event data with other sensor modalities. Ultimate SLAM by [68] integrates IMU, frame, and event data to provide reliable SLAM under high-speed, HDR circumstances. Stereo event processing and sensor fusion are supported by the MVSEC dataset [39]. [69] improved on this model by adding a range sensor, calling the result REVIO; it outperforms existing methods on the event camera dataset, reducing position error by up to 80% in high-speed scenes and achieving better accuracy and efficiency than [68] and VINS-Mono in dynamic environments.
Table 5. Summary of Relevant contributions using Hybrid approaches.
Author Year Event Camera Type Method of Evaluation Application/Domain Model Future Direction
[68] 2018 DVS The result was evaluated with [39] SLAM Hybrid state estimation combining data from an event camera, a standard camera, and an IMU. Future work should extend this multimodal sensor setup to more complex real-world applications
[69] 2022 DAVIS 346 6-DOF quadrotor and also using the dataset from [40] VIO (visual-inertial odometry) VIO model combining an event camera, IMU, and depth camera for range observations. According to the authors, the effect of noise and illumination on the algorithm is worth studying as a next step.
[70] 2023 DAVIS346 Real world in a static and dynamic environment using AMOV-P450 drone. Motion tracking and obstacle detection It fuses asynchronous event streams and standard image utilizing nonlinear optimization through Photometric Bundle Adjustment with sliding windows of keyframes, refining pose estimates. Future work aims to incorporate edge computing to accelerate processing
[71] 2024 Prophesee EVK4-HD sensor Two insulator defect datasets, CPLID and SFID Power line inspection YOLOv8 While the experiment used reproduced event data derived from RGB images, the authors note that real-time captured event data could better exploit the advantages of neuromorphic vision sensors
[72] 2024 - Simulated data and real-world nighttime traffic scenes captured by a paired RGB and event camera setup on drones Object Tracking Dual-input 3D CNN with self-attention Integration of complementary sensors such as LIDAR and IMUs for depth-aware 3D representations and more robust object tracking
[73] 2024 Real-world testing on a quadrotor in both indoor and outdoor environments VIO PL-EVIO, a tightly-coupled optimization-based monocular event and inertial fusion. Extending the work to event-based multi-sensor fusion beyond visual-inertial, such as integrating LiDAR for local perception and visible light positioning or GPS for global perception, to further exploit complementary sensor advantages

4. Application Benefits of Event Cameras Vision System on UAVs

The use of event camera vision systems in unmanned aerial vehicles (UAVs) has made it possible to deploy them in a variety of fields where traditional frame-based vision systems are often ineffective. The high temporal resolution, low latency, and robustness of event-based cameras in dynamic and low-light environments, conditions frequently encountered in airborne operations, motivate their use in UAV systems. Consequently, a number of creative use cases have surfaced in both experimental and research deployments; this section summarizes important application areas found in the literature, demonstrates how event-based vision improves performance in each situation, and considers lingering restrictions and integration difficulties.

4.1. Visual SLAM and Odometry

Visual odometry and event-based SLAM are among the most studied topics for UAV navigation with event camera vision systems, and their use for visual odometry (VO) and simultaneous localization and mapping (SLAM) is among the oldest and most actively researched applications of these cameras. EVO [43] and Ultimate SLAM [68] are two examples of systems that show how event streams may be used for precise 6-DOF pose tracking in high-speed motion and in settings where motion blur or changing lighting would cause classic frame-based SLAM to fail. In fast-moving aerial situations, motion blur and latency are the limitations of traditional frame-based SLAM systems; event cameras, in contrast, allow continuous-time pose estimation through accurate temporal sampling.
[39] showed that event-based visual odometry could estimate UAV motion accurately and with low latency. [68] expanded on this work with the Ultimate SLAM framework, which achieves robust SLAM in high-dynamic-range (HDR) situations by combining event data, frames, and inertial measurements.
These systems' performance can degrade in low-texture settings or during aggressive maneuvers, despite their potential in organized environments. This suggests that more algorithmic robustness and sensor fusion techniques are required.

4.2. Obstacle Avoidance and Collision Detection

Event cameras' low latency and resistance to motion blur have allowed them to perform exceptionally well in reactive obstacle avoidance and high-speed navigation. For high-speed obstacle avoidance in UAVs, event cameras are ideal because of their extremely fast response and lack of motion blur. While event cameras can detect changes in the visual field in microseconds, traditional vision-based systems may not be able to detect fast-moving obstacles in dynamic situations. [55] and [24] have tested and proposed reactive systems that can identify and avoid moving objects in milliseconds. Event-based moving object detection frameworks were created by [54], and they showed dependable segmentation in challenging motion and lighting scenarios. For ornithopter UAVs, [27] implemented a biologically inspired sense-and-avoid system that uses asynchronous event data to enable evasive maneuvers with response times of less than a millisecond. These investigations highlight the effectiveness of event-based systems in situations that call for prompt decision-making, such as autonomous defense applications, drone racing, and surveillance.
However, there are still unresolved issues with filtering noisy activations and adjusting thresholds for event triggering, especially in cluttered, multi-object environments.

4.3. GPS-Denied Navigation and Terrain Relative Flight

In environments where GPS is not available, including tunnels, urban canyons, woodlands, or indoor spaces, UAVs must rely on vision-based navigation. An appealing substitute for conventional visual-inertial systems, event cameras allow for terrain-relative navigation that swiftly adjusts to changing conditions. For localization and map-less landing, some solutions have integrated downward-facing sensors with event cameras.
To accomplish low-power, precise localization in restricted regions, [60] introduced NeuroSLAM, a mixed-signal neuro-morphic SLAM system that took advantage of event camera data.
Although promising, these techniques still rely heavily on fusion with depth sensors and inertial data to guarantee stability over extended missions.

4.4. Infrastructure Inspection and Anomaly Detection

Event cameras have been mounted to unmanned aerial vehicles (UAVs) in the fields of civil engineering and smart infrastructure to perform fine-grained inspection jobs including detecting building flaws or bridge cracks. Visual systems that can function in challenging or fluctuating lighting circumstances are necessary for UAV-based inspection of vital infrastructure, such as buildings, bridges, and power lines. Event cameras are ideal for capturing fine details in areas that are overexposed or shaded because of their HDR capabilities.
The ev-CIVIL dataset was created by [74] especially for infrastructure assessment with event cameras installed on unmanned aerial vehicles. Their work showed how to successfully identify civil structural flaws in highly contrasted illumination. These applications are especially pertinent to automated maintenance workflows and smart city monitoring.
Notwithstanding these benefits, the area does not yet have large-scale annotated datasets or established criteria for comparing event-based flaw detection.

4.5. Object and Human Tracking in Dynamic Scenes

In situations where there are numerous moving agents, such as search and rescue operations or disaster response areas, event cameras have demonstrated the ability to detect and track humans or vehicles [72]. UAVs have employed event camera vision systems to perform aggressive flight maneuvers such as sharp turns, avoidance of swift objects, and trajectory prediction; [75] and [76] demonstrate dynamic tracking with their event camera vision systems.
In order to enhance tracking performance in extremely dynamic or dimly lit conditions, [77] suggested a hybrid human identification framework for UAVs that combines traditional vision with event streams. They demonstrated enhanced resilience to background motion and occlusion with their multi-modal curriculum learning strategy.

4.6. High-Speed and Aggressive Maneuvering

High-speed and aggressive flight, where quick reaction times are essential, may be the most notable use of event cameras in UAVs. To perform aggressive flight maneuvers, such as making sharp turns, avoiding swift objects, and navigating through crowded areas, UAVs have been equipped with event cameras. A bio-mimetic fused vision system for microsecond-level target localization was created by [78], allowing UAVs to chase nimble targets and execute evasive maneuvers. Their edge-optimized solution supported high-speed control with low power consumption by combining event data and spiking neural models.
These systems could be used for drone racing, military evasion, or agile urban delivery, but their transfer from the lab to the field still requires further work on generalizability.
Table 6. Summary of review on the Applications of Event Camera vision system in UAVs.
Cited Works Application Area Challenges / Future Directions
[30,39,42,60,68,79,80] Visual SLAM and Odometry Performance degrades in low-texture or highly dynamic scenes; need for stronger sensor fusion (e.g., with IMU, depth); robustness under aggressive manoeuvres.
[24,25,53,55,58,66,70,81,82,83,84] Obstacle Avoidance and Collision Detection Filtering noisy activations; setting adaptive thresholds in cluttered, multi-object environments; scaling to dense urban or swarming scenarios.
[85] GPS-Denied Navigation and Terrain Relative Flight Requires fusion with depth and inertial data for stability; limited long-term robustness; neuromorphic SLAM hardware still in early stages.
[74] Infrastructure Inspection and Anomaly Detection Lack of large, annotated datasets; absence of benchmarking standards; need for generalization across varied materials and lighting.
[86] Object and Human Tracking in Dynamic Scenes Sparse, non-textured data limits fine-grained classification; re-identification with event-only streams remains difficult; improved multimodal fusion needed.
[78] High-Speed and Aggressive Maneuvering Algorithms need to generalize from lab to real-world; neuromorphic hardware maturity; power-efficiency vs. control accuracy trade-offs.

5. Datasets and Open-Source Tools

In this section, we delve into the different datasets available for UAV applications using event cameras. The objective is to expose researchers to the wide array of event datasets available specifically for UAV applications, along with their challenges. Furthermore, we discuss various open-source tools and simulators, including their variations and challenges.

5.1. Available Datasets for Event Cameras in UAVs Applications

Event camera vision systems are being used more often in UAVs for tasks requiring high temporal resolution, low latency, and effective data processing. These cameras function by detecting changes in the visual scene instead of capturing entire image frames. Specialised datasets are required to maximise the use of event cameras in UAV applications due to their distinct capabilities.
A. Event-Camera Dataset for High-Speed Robotic Tasks
This dataset includes high-speed dynamic scenes that are relevant to UAV manoeuvres, like fast-paced tracking and navigation tasks. It provides ground truth measurements from motion capture systems along with event data, which makes it useful for benchmarking high-speed perception algorithms in UAVs [39]. They indicated that there are two recent datasets that also utilise DAVIS: [87] and . The first study is designed for comparing algorithms that estimate optical flow based on events [87]. This dataset includes both synthetic and real examples featuring pure rotational motion (3 degrees of freedom) within simple scenes that have strong visual contrasts, and the ground truth information was obtained using an inertial measurement unit. However, the duration of the recording of this dataset is not sufficient for a reliable assessment of SLAM algorithm performance [88].
B. DAVIS Drone Racing Dataset
This is the first drone racing dataset; it contains synchronised inertial measurement unit data, standard camera images, event camera data, and precise ground truth poses recorded in indoor and outdoor environments [89]. The event camera used for this dataset is the miniDAVIS346, with a spatial resolution of 346 x 260 pixels, higher than that of the DAVIS240C (240 x 180 pixels) used by [30].
C. Extreme Event Dataset (EED)
This dataset was collected using the DAVIS246B bio-inspired sensor across two scenarios: mounted on a quadrotor and on handheld devices for non-rigid camera movement [54]. It is the first event camera dataset specifically designed for moving object detection and was used as a benchmark by [90] in their segmentation method to split a scene into independently moving objects.
D. Multi-Vehicle Stereo Event Camera Dataset (MVSEC)
MVSEC provides event data captured in a diverse set of environments, including indoor and outdoor scenes. It includes stereo event cameras mounted on a UAV, synchronized with other sensors like IMUs and standard cameras. The dataset is crucial for stereo depth estimation, visual odometry, and SLAM (Simultaneous Localization and Mapping) in UAVs [91]. This dataset was combined with the accuracy of the frame-based camera for high-speed optical flow estimation for UAV navigation with a validation of 19% error degradation at 4x speed up [63].
E. RPG Zurich Event Camera Dataset
The research team at the University of Zurich is the leading force in advancing research on event-based cameras. These datasets were generated using their iniLabs DAVIS240C sensor. They cover different motions and scenes and contain events, images, IMU measurements, and camera calibration. The output is available as text files and ROSbag binary files, which are compatible with the Robot Operating System (ROS). This dataset is a standard for the development and assessment of algorithms for pose estimation, visual odometry [92], and SLAM [68], especially within UAV applications, but its scenarios may not cover real-world UAV environments, potentially constraining generalizability [39].
F. EVDodgeNet Dataset
This dataset, called the Moving Object Dataset (MOD), was created using synthetic scenes to generate an "unlimited" amount of training data with one or more dynamic objects in the scene [93]. It is the first dataset focused on event-based obstacle avoidance and was specifically generated for neural network training.
G. Event-Based Vision Dataset (EV-IMO)
The most well-known dataset created especially for event cameras integrated into UAV systems is the Event-Based Vision Dataset (EV-IMO). It has dynamic sceneries with a variety of moving objects that mimic UAV flight situations. According to [53], this dataset is especially helpful for problems involving object tracking, motion prediction, and feature extraction from event-based data.
H. DSEC
This dataset is similar to MVSEC, since it is also obtained from a monochrome camera and a LiDAR sensor for ground truth. However, the data from its two Prophesee Gen 3.1 event camera sensors has three times higher resolution than MVSEC [94].
I. EVIMO2
This dataset expanded on EV-IMO with improved temporal synchronization between sensors and enhanced depth ground truth accuracy. Using Prophesee Gen3 cameras (640×480 pixels), it supported more complex perception tasks including optical flow and structure from motion [95].

5.1.1. Summary of Available Datasets

The evolution of event camera datasets for UAV applications has progressed through three distinct generations since 2017, showing remarkable technical advancement. Starting with foundational collections using low-resolution DAVIS sensors (240×180 pixels), these datasets have evolved to incorporate high-resolution Prophesee cameras (up to 1280×720 pixels), sophisticated ground truth methodologies, and diverse environmental settings. The MVSEC dataset [91] has emerged as the most widely adopted benchmark due to its comprehensive multi-vehicle scenarios and stereo vision capabilities, with over 500 citations in the literature. For researchers focused on high-speed drone applications, the UZH-FPV Drone Racing dataset [96] offers superior sub-millisecond precision essential for racing applications, while those requiring detailed motion segmentation should utilize EV-IMO [53] with its pixel-wise ground truth. The DSEC dataset [94] provides the best option for high-resolution perception tasks, whereas EVIMO2 [95] represents the current state-of-the-art for researchers requiring advanced sensor fusion and depth estimation capabilities.
Despite significant progress, current event camera datasets for UAVs face substantial limitations that impede broader adoption and real-world application. These challenges include restricted operational scenarios predominantly in controlled environments rather than authentic UAV missions; application bias toward racing and obstacle avoidance with insufficient representation of inspection, mapping, or multi-UAV operations; and persistent technical issues including inconsistent calibration approaches, non-standardized data formats, and varying annotation quality. For specific applications, researchers should select datasets strategically: obstacle avoidance systems should build upon EVDodgeNet; autonomous racing should leverage UZH-FPV; SLAM applications are best served by MVSEC's diverse environments, while low-light operations benefit most from the EED dataset's unique strobe light scenarios. Future datasets must address the critical gap in long-duration autonomous flights, adverse weather conditions, and multi-UAV interaction scenarios to facilitate the transition from laboratory research to commercial applications in inspection, delivery, and surveillance domains.

5.2. Simulator and Emulators

The development and testing of UAVs with event cameras prior to their deployment in real-world situations require the use of simulators and emulators. Developers can test algorithms in a controlled setting by using tools such as the Event Camera Simulator (ESIM), which offers an online simulation environment. According to [97], these tools use simulated scenarios to mimic the output of event cameras, which enables software developers to improve their products without requiring on-site testing.
A. Robotic Operating System (ROS)
When creating UAVs equipped with event cameras, the Robotic Operating System (ROS) is frequently utilized. ROS offers an adaptable structure for combining sensors, handling information, and managing unmanned aerial vehicles. Event cameras require an event-driven architecture, which is supported with packages that make real-time processing and data fusion easier. Because ROS provides a wide range of libraries and tools for managing sensor data, path planning, and control algorithms, it is very beneficial. Rapid prototyping and testing are made possible by the collaborative development environment that ROS's open-source nature supports [98].
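As an illustration, a minimal ROS node that subscribes to a DVS event stream could look like the sketch below, assuming the dvs_msgs/EventArray message type provided by the rpg_dvs_ros package and a /dvs/events topic (both are assumptions about the specific setup):

#!/usr/bin/env python
import rospy
from dvs_msgs.msg import EventArray  # provided by the rpg_dvs_ros package

def on_events(msg):
    # Each EventArray carries a batch of events with x, y, ts and polarity fields.
    positive = sum(1 for e in msg.events if e.polarity)
    rospy.loginfo("received %d events (%d positive)", len(msg.events), positive)

if __name__ == "__main__":
    rospy.init_node("event_listener")
    rospy.Subscriber("/dvs/events", EventArray, on_events, queue_size=10)
    rospy.spin()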
B. Gazebo and RViz
Gazebo and RViz are popular simulation and visualization tools used with ROS for UAV development. Gazebo is a 3D simulation environment in which UAVs may be tested in virtual worlds with dynamic objects and changing lighting, an essential capability for event cameras. RViz, on the other hand, makes it simpler to debug and improve algorithms by providing real-time visualization of sensor data and the UAV's state, as it was used by [85].

5.2.1. Challenges in Software Development for Event Cameras Vision System in UAVs

The asynchronous and sparse nature of the data creates special issues for developing software for event cameras in UAVs. Traditional vision algorithms, which are frequently frame-based, need to be modified or completely redesigned to handle event-based data. Development may also be hampered by the lack of uniformity in event camera data formats and processing tools [25]. The requirement for specialized knowledge in both robotics and computer vision further complicates software development. Computational load is another issue that developers must address, because real-time processing of high-frequency event streams demands substantial processing power and effective code optimization [23]. To fully utilize event cameras in UAVs and enable improved capabilities in dynamic and unpredictable environments, dedicated hardware and software components are necessary [23].
Table 7. Open-source event camera simulators and source codes.
S/N Name Inventor Year Source
1 ESIM (Event Camera Simulator) [97] 2018 https://github.com/uzh-rpg/rpg_esim
2 ESVO (Event-based Stereo Visual Odometry) [99] 2022 https://github.com/HKUST-Aerial-Robotics/ESVO
3 UltimateSLAM [68] 2018 https://github.com/uzh-rpg/rpg_ultimate_slam_open
4 DVS ROS (Dynamic Vision Sensor ROS Package) [100] 2015 https://github.com/uzh-rpg/rpg_dvs_ros
5 rpg_evo (Event-based Visual Odometry) [101] 2020 https://github.com/uzh-rpg/rpg_evo

6. Discussion

Figure 5 highlights the top 10 institutions in the research dataset. ETH Zürich leads the group, closely followed by Universität Zürich and Universidad de Sevilla. The Institute of Neuroinformatics also makes a significant contribution, alongside the CNRS (Centre National de la Recherche Scientifique) and the National University of Defense Technology. Additional key contributors include Beihang University, Tsinghua University, Delft University of Technology, and the Air Force Research Laboratory. The distribution shows a strong concentration of research activity among prominent European and Asian technical universities, with Swiss institutions especially prominent. The involvement of defense-related organizations indicates that the research area likely has military or security applications.
Figure 5. Summary of the top research institutions in this domain.
Table 8. Summary of the type of event camera used over the years.
Year Event Camera Type(s)
2015 DVS 128
2017 DVS
2018 DVS
2019 DVS 128, DVS 240
2020 SEES1, DAVIS 240C
2022 Celex4 Dynamic Vision Sensor, DAVIS 346
2023 DAVIS, DAVIS 240C, DAVIS 346, DAVIS 346c
2024 CeleX-5, Prophesee EVK4-HD, DAVIS 326
2025 DVS346
In Table 8, from 2015 to 2017, UAV obstacle avoidance relied mainly on low-resolution DVS128/DVS sensors for indoor navigation. By 2019–2020, more diverse sensors such as the SEES1 and DAVIS240C were introduced for real-world tests. In 2022–2023, the DAVIS family and the Celex4 gained popularity because their higher resolution supports hybrid frame-event sensing. From 2024 onward, higher-resolution, domain-specific sensors such as the CeleX-5 and Prophesee EVK4-HD have emerged for specialized tasks.
Figure 6. Summary of the methods used over the years.
Figure 6 shows that geometry-based methods have been used consistently since 2015, peaking around 2023–2024. Hybrid approaches began attracting attention in 2018 and surged in 2024, highlighting growing interest in sensor fusion. Learning-based methods emerged from 2022 onward, signaling a shift toward deep reinforcement learning and transformer models. Neuromorphic techniques appeared as early as 2017, driven by efforts to equip UAVs with spiking neural networks and bio-inspired designs. Overall, the research trend is moving from traditional geometry-based methods toward hybrid approaches.
Figure 7 illustrates the leading institutions/bodies contributing to research on event-based vision in UAVs. ETH Zurich ranks first with the highest number of publications, followed closely by Universität Zurich. Other notable contributors include the Institute of Neuroinformatics, CNRS (France), and the National University of Defense Technology (China). Universities such as Beihang, Tsinghua, Delft University of Technology, and the Air Force Research Laboratory also demonstrate active participation in this research area.
Overall, the chart highlights that European and Asian institutions are leading in research output, with Switzerland emerging as a significant hub for innovation in event-based UAV vision systems.

7. Conclusions

This comprehensive literature review has analyzed research on the integration of event camera vision systems into unmanned aerial vehicles (UAVs) from 2015 to 2025. It emphasizes how event-based vision can revolutionize UAV performance, especially in dynamic obstacle avoidance, high-speed navigation, HDR environments, and GPS-denied localization, where traditional frame-based cameras have significant limitations. By thematically organizing the literature into datasets, simulation tools, algorithmic approaches (neuromorphic, learning-based, geometric, and hybrid fusion), and application domains, the review highlights the increasing depth and breadth of work in this interdisciplinary subject. Despite this progress, event camera vision systems are still not widely used in real-life UAV applications. The main obstacles are the absence of established evaluation methodologies, inadequate real-world validation, immature simulation platforms, hardware integration limitations, and a shortage of datasets with ground truth. These restrictions reveal a gap between promising academic research and the practical requirements of reliable, real-time UAV operation.
Despite these challenges, this review has shown that event camera vision systems hold immense potential for advancing UAV autonomy, particularly in complex and dynamic real-world environments where conventional frame-based cameras fall short.

Abbreviations

The following abbreviations are used in this paper:
APS Active Pixel Sensor
ATIS Asynchronous Time-based Image Sensor
AirSim Aerial Informatics and Robotics Simulation
CEF Chiasm-inspired Event Filtering
CNN Convolutional Neural Network
D2QN Deep Double Q-Network
DAVIS Dynamic and Active-pixel Vision Sensor
DNN Deep Neural Network
DOF Degrees of Freedom
DVS Dynamic Vision Sensor
EED Extreme Event Dataset
ESIM Event Camera Simulator
EVO Event-based Visual Odometry
GPS Global Positioning System
GTNN Graph Transformer Neural Network
HDR High Dynamic Range
IMU Inertial Measurement Unit
LGMD Lobula Giant Movement Detector
LIDAR Light Detection and Ranging
MEMS Microelectromechanical System
MOD Moving Object Detection
MVSEC Multi-Vehicle Stereo Event Camera
PID Proportional Integral Derivative
PRISMA Preferred Reporting Items for Systematic Reviews and Meta-Analyses
RGB Red, Green, Blue
ROS Robot Operating System
SLR Systematic Literature Review
SLAM Simultaneous Localization and Mapping
SNN Spiking Neural Network
UAV Unmanned Aerial Vehicle
UAS Unmanned Aircraft System
VIO Visual Inertial Odometry
YOLO You Only Look Once

References

  1. F. Outay, H. A. Mengash, and M. Adnan. Applications of unmanned aerial vehicle (UAV) in road safety, traffic and highway infrastructure management: Recent advances and challenges. Transp Res Part A Policy Pract 2020, 141, 116–129. [Google Scholar] [CrossRef] [PubMed]
  2. S. Ahirwar, R. Swarnkar, S. Bhukya, and G. Namwade. Application of drone in agriculture. Int J Curr Microbiol Appl Sci 2019, 8, 2500–2505. [Google Scholar] [CrossRef]
  3. S. Waharte and N. Trigoni. Supporting search and rescue operations with UAVs. in 2010 international conference on emerging security technologies, IEEE, 2010, pp. 142–147.
  4. S. Jung and H. Kim. Analysis of amazon prime air uav delivery service. Journal of Knowledge Information Technology and Systems 2017, 12, 253–266. [Google Scholar] [CrossRef]
  5. J. Guo, X. Liu, L. Bi, H. Liu, and H. Lou. Un-yolov5s: A uav-based aerial photography detection algorithm. Sensors 2023, 23, 5907. [Google Scholar] [CrossRef]
  6. L. F. Gonzalez, G. A. Montes, E. Puig, S. Johnson, K. Mengersen, and K. J. Gaston. Unmanned aerial vehicles (UAVs) and artificial intelligence revolutionizing wildlife monitoring and conservation. Sensors 2016, 16, 97. [Google Scholar] [CrossRef]
  7. S. J. Kim, Y. Jeong, S. Park, K. Ryu, and G. Oh. A survey of drone use for entertainment and AVR (augmented and virtual reality). in Augmented reality and virtual reality: empowering human, place and business, Springer, 2017, pp. 339–352.
  8. L. F. Gonzalez, G. A. Montes, E. Puig, S. Johnson, K. Mengersen, and K. J. Gaston. Unmanned aerial vehicles (UAVs) and artificial intelligence revolutionizing wildlife monitoring and conservation. Sensors 2016, 16, 97. [Google Scholar] [CrossRef]
  9. Y. -K. Wang, S.-E. Wang, and P.-H. Wu. Spike-event object detection for neuromorphic vision. IEEE Access 2023, 11, 5215–5230. [Google Scholar] [CrossRef]
  10. V. Chamola, V. Hassija, V. Gupta, and M. Guizani. A comprehensive review of the COVID-19 pandemic and the role of IoT, drones, AI, blockchain, and 5G in managing its impact. Ieee access 2020, 8, 90225–90265. [Google Scholar] [CrossRef]
  11. S. Makam, B. K. Komatineni, S. S. Meena, and U. Meena. Unmanned aerial vehicles (UAVs): an adoptable technology for precise and smart farming. Discover Internet of Things 2024, 4, 12. [Google Scholar] [CrossRef]
  12. S. Sudhakar, V. Vijayakumar, C. S. Kumar, V. Priya, L. Ravi, and V. Subramaniyaswamy. Unmanned Aerial Vehicle (UAV) based Forest Fire Detection and monitoring for reducing false alarms in forest-fires. Comput Commun 2020, 149, 1–16. [Google Scholar] [CrossRef]
  13. Khan, S. Gupta, and S. K. Gupta. Emerging UAV technology for disaster detection, mitigation, response, and preparedness. J Field Robot 2022, 39, 905–955. [Google Scholar] [CrossRef]
  14. Z. Chen. Application of UAV remote sensing in natural disaster monitoring and early warning: an example of flood and mudslide and earthquake disasters. Highlights in Science, Engineering and Technology 2024, 85, 924–933. [Google Scholar] [CrossRef]
  15. Abate De Mey. Event Cameras – An Evolution in Visual Data Capture. https://robohub.org/event-cameras-an-evolution-in-visual-data-capture.
  16. W. Shariff, M. S. Dilmaghani, P. Kielty, M. Moustafa, J. Lemley, and P. Corcoran. Event cameras in automotive sensing: A review. IEEE Access 2024, 12, 51275–51306. [Google Scholar] [CrossRef]
  17. B. Chakravarthi, A. A. Verma, K. Daniilidis, C. Fermuller, and Y. Yang. Recent event camera innovations: A survey. in European Conference on Computer Vision, Springer, 2024, pp. 342–376.
  18. K. Iddrisu, W. Shariff, P. Corcoran, N. E. O’Connor, J. Lemley, and S. Little. Event camera-based eye motion analysis: A survey. IEEE Access 2024, 12, 136783–136804. [Google Scholar] [CrossRef]
  19. D. Gehrig and D. Scaramuzza. Low-latency automotive vision with event cameras. Nature 2024, 629, 1034–1040. [Google Scholar] [CrossRef]
  20. Fortune Business Insights. Unmanned Aerial Vehicle [UAV] Market Size, Share, Trends & Industry Analysis, By Type (Fixed Wing, Rotary Wing, Hybrid), By End-use Industry, By System, By Range, By Class, By Mode of Operation, and Regional Forecast, 2024–2032. https://www.fortunebusinessinsights.com/industry-reports/unmanned-aerial-vehicle-uav-market-101603.
  21. T. Li, J. Liu, W. Zhang, Y. Ni, W. Wang, and Z. Li. Uav-human: A large benchmark for human behavior understanding with unmanned aerial vehicles. in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2021, pp. 16266–16275.
  22. G. Gallego et al.. Event-based vision: A survey. IEEE Trans Pattern Anal Mach Intell 2020, 44, 154–180. [Google Scholar]
  23. Mitrokhin, C. Fermüller, C. Parameshwara, and Y. Aloimonos. Event-based moving object detection and tracking. in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2018, pp. 1–9.
  24. D. Falanga, K. Kleber, and D. Scaramuzza. Dynamic obstacle avoidance for quadrotors with event cameras. Sci Robot 2020, 5, eaaz9712. [Google Scholar] [CrossRef]
  25. N. J. Sanket et al.. Evdodgenet: Deep dynamic obstacle dodging with event cameras. in 2020 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2020, pp. 10651–10657.
  26. J. P. Rodríguez-Gómez, R. Tapia, M. del M. G. Garcia, J. R. Martínez-de Dios, and A. Ollero. Free as a bird: Event-based dynamic sense-and-avoid for ornithopter robot flight. IEEE Robot Autom Lett 2022, 7, 5413–5420. [Google Scholar] [CrossRef]
  27. D. Cazzato and F. Bono. An application-driven survey on event-based neuromorphic computer vision. Information 2024, 15, 472. [Google Scholar] [CrossRef]
  28. T. Stoffregen, G. Gallego, T. Drummond, L. Kleeman, and D. Scaramuzza. Event-based motion segmentation by motion compensation. in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 7244–7253.
  29. C. Brandli, R. Berner, M. Yang, S.-C. Liu, and T. Delbruck. A 240× 180 130 db 3 µs latency global shutter spatiotemporal vision sensor. IEEE J Solid-State Circuits 2014, 49, 2333–2341. [Google Scholar] [CrossRef]
  30. D. Gehrig, A. Loquercio, K. G. Derpanis, and D. Scaramuzza. End-to-end learning of representations for asynchronous event-based data. in Proceedings of the IEEE/CVF international conference on computer vision, 2019, pp. 5633–5643.
  31. J. Wan et al.. Event-based pedestrian detection using dynamic vision sensors. Electronics (Basel) 2021, 10, 888. [Google Scholar]
  32. P. Lichtsteiner, C. Posch, and T. Delbruck. A 128×128 120 dB 15 µs latency asynchronous temporal contrast vision sensor. IEEE J Solid-State Circuits 2008, 43, 566–576. [Google Scholar] [CrossRef]
  33. E. Mueggler, H. Rebecq, G. Gallego, T. Delbruck, and D. Scaramuzza. The event-camera dataset and simulator: Event-based data for pose estimation, visual odometry, and SLAM. Int J Rob Res 2017, 36, 142–149. [Google Scholar] [CrossRef]
  34. C. Posch, D. Matolin, and R. Wohlgenannt. An asynchronous time-based image sensor. in 2008 IEEE International Symposium on Circuits and Systems (ISCAS), IEEE, 2008, pp. 2130–2133.
  35. D. Joubert, A. Marcireau, N. Ralph, A. Jolley, A. Van Schaik, and G. Cohen. Event camera simulator improvements via characterized parameters. Front Neurosci 2021, 15, 702765. [Google Scholar] [CrossRef]
  36. M. Beck et al.. An extended modular processing pipeline for event-based vision in automatic visual inspection. Sensors 2021, 21, 6143. [Google Scholar] [CrossRef]
  37. D. P. Moeys et al.. Color temporal contrast sensitivity in dynamic vision sensors. in 2017 IEEE International Symposium on Circuits and Systems (ISCAS), IEEE, 2017, pp. 1–4.
  38. C. Scheerlinck, H. Rebecq, T. Stoffregen, N. Barnes, R. Mahony, and D. Scaramuzza. CED: Color event camera dataset. in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.
  39. D. Moher, A. Liberati, J. Tetzlaff, D. G. Altman, and P. Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. International journal of surgery 2010, 8, 336–341. [Google Scholar] [CrossRef]
  40. E. Mueggler, H. Rebecq, G. Gallego, T. Delbruck, and D. Scaramuzza. The event-camera dataset and simulator: Event-based data for pose estimation, visual odometry, and SLAM. Int J Rob Res 2017, 36, 142–149. [Google Scholar] [CrossRef]
  41. R. Vidal, H. Rebecq, T. Horstschaefer, and D. Scaramuzza. Ultimate SLAM? Combining events, images, and IMU for robust visual SLAM in HDR and high-speed scenarios. IEEE Robot Autom Lett 2018, 3, 994–1001. [Google Scholar] [CrossRef]
  42. G. Gallego, H. Rebecq, and D. Scaramuzza. A unifying contrast maximization framework for event cameras, with applications to motion, depth, and optical flow estimation. in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 3867–3876.
  43. E. Mueggler, G. Gallego, H. Rebecq, and D. Scaramuzza. Continuous-time visual-inertial odometry for event cameras. IEEE Transactions on Robotics 2018, 34, 1425–1440. [Google Scholar] [CrossRef]
  44. H. Rebecq, T. Horstschäfer, G. Gallego, and D. Scaramuzza. Evo: A geometric approach to event-based 6-dof parallel tracking and mapping in real time. IEEE Robot Autom Lett 2016, 2, 593–600. [Google Scholar]
  45. J. Conradt. On-board real-time optic-flow for miniature event-based vision sensors. Institute of Electrical and Electronics Engineers Inc., 2015, pp. 1858–1863. [CrossRef]
  46. M. Liu and T. Delbrück. Adaptive time-slice block-matching optical flow algorithm for dynamic vision sensors. BMVA Press, 2018. [Online]. Available: https://www.scopus.com/inward/record.uri?eid=2-s2. 8508.
  47. J. Zhang, Y. J. Zhang, Y. Hu, B. Zhang, and Q. Gao. Research on Unmanned Aerial Vehicle vision-aid landing with Dynamic vision sensor. B. Xu and K. Mou, Eds., Institute of Electrical and Electronics Engineers Inc., 2019, pp. 965–969. [CrossRef]
  48. H. Stuckey, A. Al-Radaideh, L. Sun, and W. Tang. A Spatial Localization and Attitude Estimation System for Unmanned Aerial Vehicles Using a Single Dynamic Vision Sensor. IEEE Sens J 2022, 22, 15497–15507. [Google Scholar] [CrossRef]
  49. Z. Jianguo, W. Pengfei, H. Sunan, X. Cheng, and T. S. Huat Rodney. Stereo Depth Estimation Based on Adaptive Stacks from Event Cameras. in IECON Proceedings (Industrial Electronics Conference), 2023. [CrossRef]
  50. N. Escudero, M. W. Hardt, and G. Inalhan. Enabling UAVs night-time navigation through Mutual Information-based matching of event-generated images. in AIAA/IEEE Digital Avionics Systems Conference - Proceedings, Institute of Electrical and Electronics Engineers Inc., 2023. [CrossRef]
  51. T. 845 LNEE. 2023. [CrossRef]
  52. J. Zhao, W. Zhang, Y. Wang, S. Chen, X. Zhou, and F. Shuang. EAPTON: Event-based Antinoise Powerlines Tracking with ON/OFF Enhancement. in Journal of Physics: Conference Series, 2024. [CrossRef]
  53. F. Panetsos, G. C. Karras, and K. J. Kyriakopoulos. Aerial Transportation of Cable-Suspended Loads with an Event Camera. IEEE Robot Autom Lett 2024, 9, 231–238. [Google Scholar] [CrossRef]
  54. H. Rebecq, R. Ranftl, V. Koltun, and D. Scaramuzza. Events-to-video: Bringing modern computer vision to event cameras. in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 3857–3866.
  55. I. Maqueda, A. Loquercio, G. Gallego, N. García, and D. Scaramuzza. Event-based vision meets deep learning on steering prediction for self-driving cars. in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 5419–5427.
  56. Z. Zhu, L. Yuan, K. Chaney, and K. Daniilidis. EV-FlowNet: Self-supervised optical flow estimation for event-based cameras. arXiv, arXiv:1802.06898.
  57. Mitrokhin, C. Ye, C. Fermüller, Y. Aloimonos, and T. Delbruck. EV-IMO: Motion segmentation dataset and learning pipeline for event cameras. in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2019, pp. 6105–6112.
  58. Mitrokhin, C. Fermüller, C. Parameshwara, and Y. Aloimonos. Event-based moving object detection and tracking. in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2018, pp. 1–9.
  59. X. 13606 LNAI. 2022. [CrossRef]
  60. A. Hay et al.. Noise-Tolerant Identification and Tuning Approach Using Deep Neural Networks for Visual Servoing Applications. IEEE Transactions on Robotics 2023, 39, 2276–2288. [Google Scholar] [CrossRef]
  61. X. Wang et al.. Event Stream-Based Visual Object Tracking: A High-Resolution Benchmark Dataset and A Novel Baseline. in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE Computer Society, 2024, pp. 19248–19257. [CrossRef]
  62. Y. Alkendi, O. A. Hay, M. A. Humais, R. Azzam, L. D. Seneviratne, and Y. H. Zweiri. Dynamic-Obstacle Relative Localization Using Motion Segmentation with Event Cameras. 2024, pp. 1056–1063. [CrossRef]
  63. L. Salt, G. Indiveri, and Y. Sandamirskaya. Obstacle avoidance with LGMD neuron: Towards a neuromorphic UAV implementation. in Proceedings - IEEE International Symposium on Circuits and Systems, Institute of Electrical and Electronics Engineers Inc., 2017. [CrossRef]
  64. J. -H. Yoon and A. Raychowdhury. NeuroSLAM: A 65-nm 7.25-to-8.79-TOPS/W Mixed-Signal Oscillator-Based SLAM Accelerator for Edge Robotics. IEEE J Solid-State Circuits 2021, 56, 66–78. [Google Scholar] [CrossRef]
  65. Y. Sandamirskaya, M. Kaboli, J. Conradt, and T. Celikel. Neuromorphic computing hardware and neural architectures for robotics. Sci Robot 2022, 7, eabl8419. [Google Scholar] [CrossRef]
  66. M. Davies et al.. Advancing neuromorphic computing with loihi: A survey of results and outlook. Proceedings of the IEEE 2021, 109, 911–934. [Google Scholar] [CrossRef]
  67. S. Lele and A. Raychowdhury. Fusing frame and event vision for high-speed optical flow for edge application. in 2022 IEEE International Symposium on Circuits and Systems (ISCAS), IEEE, 2022, pp. 804–808.
  68. P. Kirkland, G. Di Caterina, J. Soraghan, Y. Andreopoulos, and G. Matich. UAV Detection: A STDP Trained Deep Convolutional Spiking Neural Network Retina-Neuromorphic Approach. in Lecture Notes in Computer Science, I. V. Tetko, P. Karpov, F. Theis, and V. Kurková, Eds., Springer Verlag, 2019, pp. 724–736. [CrossRef]
  69. R. K. Stagsted, A. Vitale, J. Binz, A. Renner, L. B. Larsen, and Y. Sandamirskaya. Towards neuromorphic control: A spiking neural network based PID controller for UAV. in Robotics: Science and Systems, M. Toussaint, A. Bicchi, and T. Hermans, Eds., MIT Press Journals, 2020. [CrossRef]
  70. L. Zanatta, A. Di Mauro, F. Barchi, A. Bartolini, L. Benini, and A. Acquaviva. Directly-trained spiking neural networks for deep reinforcement learning: Energy efficient implementation of event-based obstacle avoidance on a neuromorphic accelerator. Neurocomputing 2023, 562, 126885. [Google Scholar] [CrossRef]
  71. D. Li et al.. Taming Event Cameras With Bio-Inspired Architecture and Algorithm: A Case for Drone Obstacle Avoidance. IEEE Trans Mob Comput 2025, 24, 4202–4216. [Google Scholar] [CrossRef]
  72. R. Vidal, H. Rebecq, T. Horstschaefer, and D. Scaramuzza. Ultimate SLAM? Combining events, images, and IMU for robust visual SLAM in HDR and high-speed scenarios. IEEE Robot Autom Lett 2018, 3, 994–1001. [Google Scholar] [CrossRef]
  73. Y. Wang, B. Shao, C. Zhang, J. Zhao, and Z. Cai. REVIO: Range- and Event-Based Visual-Inertial Odometry for Bio-Inspired Sensors. Biomimetics, 2022; 4. [Google Scholar] [CrossRef]
  74. W. Guan, P. Chen, Y. Xie, and P. Lu. PL-EVIO: Robust Monocular Event-Based Visual Inertial Odometry with Point and Line Features. IEEE Transactions on Automation Science and Engineering 2024, 21, 6277–6293. [Google Scholar] [CrossRef]
  75. Y. Wu et al.. FlyTracker: Motion Tracking and Obstacle Detection for Drones Using Event Cameras. in Proceedings - IEEE INFOCOM, Institute of Electrical and Electronics Engineers Inc., 2023. [CrossRef]
  76. X. 14261 LNCS. 2023. [CrossRef]
  77. Y. Q. Han, X. H. Yu, H. Luan, and J. L. Suo. Event-Assisted Object Tracking on High-Speed Drones in Harsh Illumination Environment. DRONES, 2024; 1. [Google Scholar] [CrossRef]
  78. L. Sun, Y. Li, X. Zhao, K. Wang, and H. Guo. Event-RGB Fusion for Insulator Defect Detection Based on Improved YOLOv8. Institute of Electrical and Electronics Engineers Inc., 2024, pp. 794–802. [CrossRef]
  79. D. Hannan, R. Arnab, G. Parpart, G. T. Kenyon, E. Kim, and Y. Watkins. Event-To-Video Conversion for Overhead Object Detection. in Proceedings of the IEEE Southwest Symposium on Image Analysis and Interpretation, Institute of Electrical and Electronics Engineers Inc., 2024, pp. 89–92. [CrossRef]
  80. U. G. Gamage et al.. Event-based Civil Infrastructure Visual Defect Detection: ev-CIVIL Dataset and Benchmark. arXiv, arXiv:2504.05679.
  81. Safa, T. Verbelen, I. Ocket, A. Bourdoux, F. Catthoor, and G. G. E. Gielen. Fail-Safe Human Detection for Drones Using a Multi-Modal Curriculum Learning Approach. IEEE Robot Autom Lett 2022, 7, 303–310. [Google Scholar] [CrossRef]
  82. S. Lele, Y. Fang, A. Anwar, and A. Raychowdhury. Bio-mimetic high-speed target localization with fused frame and event vision for edge application. Front Neurosci 2022, 16, 1010302. [Google Scholar] [CrossRef]
  83. Jones et al. A neuromorphic SLAM architecture using gated-memristive synapses. Neurocomputing 2020, 381, 89–104. [Google Scholar] [CrossRef]
  84. X. J. Cai et al.. TrinitySLAM: On-board Real-time Event-image Fusion SLAM System for Drones. ACM Trans Sens Netw, 2024; 6. [CrossRef]
  85. X. Zhang et al. Dynamic Obstacle Avoidance for Unmanned Aerial Vehicle Using Dynamic Vision Sensor. in Lecture Notes in Computer Science, L. Iliadis, A. Papaleonidas, P. Angelov, and C. Jayne, Eds., Springer Science and Business Media Deutschland GmbH, 2023, pp. 161–173. [CrossRef]
  86. Z. Wan et al.. A Fast and Safe Neuromorphic Approach for Obstacle Avoidance of Unmanned Aerial Vehicle. in Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics, Institute of Electrical and Electronics Engineers Inc., 2024, pp. 1963–1968. [CrossRef]
  87. X. Hu, Z. X. Hu, Z. Liu, X. Wang, L. Yang, and G. Wang. Event-Based Obstacle Sensing and Avoidance for an UAV Through Deep Reinforcement Learning. in Lecture Notes in Computer Science, L. Fang, D. Povey, G. Zhai, T. Mei, and R. Wang, Eds., Springer Science and Business Media Deutschland GmbH, 2022, pp. 402–413. [CrossRef]
  88. L. Salt, D. Howard, G. Indiveri, and Y. Sandamirskaya. Parameter Optimization and Learning in a Spiking Neural Network for UAV Obstacle Avoidance Targeting Neuromorphic Processors. IEEE Trans Neural Netw Learn Syst 2020, 31, 3305–3318. [Google Scholar] [CrossRef] [PubMed]
  89. Elamin, A. El-Rabbany, and S. Jacob. Event-based visual/inertial odometry for UAV indoor navigation. Sensors 2024, 25, 61. [Google Scholar] [CrossRef] [PubMed]
  90. Safa, T. Verbelen, I. Ocket, A. Bourdoux, F. Catthoor, and G. G. E. Gielen. Fail-safe human detection for drones using a multi-modal curriculum learning approach. IEEE Robot Autom Lett 2021, 7, 303–310. [Google Scholar] [CrossRef]
  91. Rueckauer and T. Delbruck. Evaluation of event-based algorithms for optical flow with ground-truth from inertial measurement sensor. Front Neurosci 2016, 10, 176. [Google Scholar]
  92. J. Yin, A. Li, T. Li, W. Yu, and D. Zou. M2dgr: A multi-sensor and multi-scenario slam dataset for ground robots. IEEE Robot Autom Lett 2021, 7, 2266–2273. [Google Scholar]
  93. J. Delmerico, T. Cieslewski, H. Rebecq, M. Faessler, and D. Scaramuzza. Are we ready for autonomous drone racing? the UZH-FPV drone racing dataset. in 2019 International Conference on Robotics and Automation (ICRA), IEEE, 2019, pp. 6713–6719.
  94. T. Stoffregen, G. Gallego, T. Drummond, L. Kleeman, and D. Scaramuzza. Event-based motion segmentation by motion compensation. in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 7244–7253.
  95. Z. Zhu, D. Thakur, T. Özaslan, B. Pfrommer, V. Kumar, and K. Daniilidis. The multivehicle stereo event camera dataset: An event camera dataset for 3D perception. IEEE Robot Autom Lett 2018, 3, 2032–2039. [Google Scholar] [CrossRef]
  96. Kueng, E. Mueggler, G. Gallego, and D. Scaramuzza. Low-latency visual odometry using event-based feature tracks. in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2016, pp. 16–23.
  97. N. J. Sanket et al.. Evdodgenet: Deep dynamic obstacle dodging with event cameras. in 2020 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2020, pp. 10651–10657.
  98. M. Gehrig, W. Aarents, D. Gehrig, and D. Scaramuzza. Dsec: A stereo event camera dataset for driving scenarios. IEEE Robot Autom Lett 2021, 6, 4947–4954. [Google Scholar] [CrossRef]
  99. L. Burner, A. Mitrokhin, C. Fermüller, and Y. Aloimonos. Evimo2: an event camera dataset for motion segmentation, optical flow, structure from motion, and visual inertial odometry in indoor scenes with monocular or stereo algorithms. arXiv, arXiv:2205.03467.
  100. J. Delmerico, T. Cieslewski, H. Rebecq, M. Faessler, and D. Scaramuzza. Are we ready for autonomous drone racing? the UZH-FPV drone racing dataset. in 2019 International Conference on Robotics and Automation (ICRA), IEEE, 2019, pp. 6713–6719.
  101. H. Rebecq, D. Gehrig, and D. Scaramuzza. Esim: an open event camera simulator. in Conference on robot learning, PMLR, 2018, pp. 969–982.
  102. A. Koubâa, Ed., Robot Operating System (ROS), vol. 1. Springer, 2017.
Figure 1. Reported Drones or UAV accidents by year of occurrence [15].
Figure 7. Top Global Research Institutions Publishing on Event-Based Vision and UAV Technologies.
Table 1. Types of Event Cameras.
Type Operation Gaps
Dynamic Vision Sensors (DVS) [29] Detecting variations in brightness is the sole method used by DVS, the most popular kind of event camera. When the amount of light in the scene varies enough, each pixel in a DVS independently monitors the scene and initiates an event. With their high temporal resolution and lack of motion blur, DVS sensors work especially well in situations involving rapid movement. DVS has a number of benefits over conventional high-speed cameras, one of which is their remarkably low data rate, which qualifies them for real-time applications. Despite these capabilities, integrating DVS sensors with UAVs remains a challenge, especially regarding real-time processing and data synchronization [23]. A lack of standardized datasets also makes it difficult to evaluate the performance of DVS camera-based UAV applications [30].
Asynchronous Time-based Image Sensors (ATIS) [31] ATIS combines the capability of capturing absolute intensity levels with event detection. Not only can ATIS record events that are prompted by brightness variations, but it can also record the scene's actual brightness at particular times. This hybrid technique makes it possible to reconstruct intensity images alongside event data, enabling richer information acquisition that is especially helpful for applications requiring both temporal precision and intensity information. Data from an event-based ATIS camera can be noisy, especially in low-light conditions, so an efficient noise-filtering model is needed to address this [32].
Dynamic and Active Pixel Vision Sensors (DAVIS) [33] DAVIS sensors combine traditional active pixel sensors (APS) with DVS capability. Because of its dual-mode functionality, DAVIS may be used as an event-based sensor to identify changes in brightness or as a conventional camera to record full intensity frames. This dual-mode capacity makes DAVIS adaptable to a variety of scenarios, including those in which high-speed motion must be monitored while retaining the ability to capture periodic full-frame photos. Combining APS and DVS capabilities, however, poses challenges in complex data integration and sensor fusion [34].
Colour Event Cameras [35] Colour event cameras are one of the more recent innovations that extend the functionality of traditional DVS by capturing colour information. These sensors enable the camera to record colour events asynchronously by detecting changes in intensity across various colour channels using a modified pixel architecture. This breakthrough enables event cameras to be utilized in more complicated visual settings where colour separation is critical. There is a scarcity of comprehensive dataset repositories specifically for training and evaluating models that use these cameras [36].
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.