Preprint (Review). This version is not peer-reviewed.

Image Processing Systems for Unmanned Aerial Vehicle: State-of-the-art

Submitted: 27 August 2023. Posted: 29 August 2023.
Abstract
The dependence on Unmanned Aerial Vehicles (UAVs) has increased dramatically in many sectors around the globe. UAVs are in high demand, and their technology is developing quickly owing to their sophisticated ability to handle a wide range of problems. UAVs can replace labor-intensive tasks with safer, more convenient operation, but additional tools or sensors must be integrated into the UAV system before it can serve at an industrial level. This paper aims to consolidate and present a thorough understanding of the various stages of image processing pipelines deployed in UAV applications, including image acquisition, preprocessing, feature extraction, object detection and tracking, and decision-making. By deliberating on the strengths, limitations, and performance metrics of existing approaches, the paper seeks to provide researchers, engineers, and practitioners with valuable insights into the challenges and opportunities of image processing systems for UAVs. Ultimately, the synthesis of this knowledge will contribute to enhancing the effectiveness, autonomy, and applicability of UAVs in diverse fields such as surveillance, agriculture, disaster management, and environmental monitoring.

1. Introduction

Image processing has emerged as a vital field of study with numerous applications in various domains, including computer vision, medical imaging, remote sensing, and robotics [1-7]. Over the years, extensive research has been conducted to develop and improve image processing techniques, algorithms, and methodologies to extract meaningful information from images [8-10]. These studies have contributed valuable insights into various aspects of image processing, such as image enhancement [11], image segmentation, image registration [12], and object recognition [13]. Additionally, researchers have investigated the integration of artificial intelligence and machine learning into image processing systems, leading to significant advancements in image classification [14], object detection [15], and image generation [16]. The constant evolution of image processing techniques has paved the way for groundbreaking applications, such as medical image analysis for disease diagnosis and treatment [17], facial recognition for security and authentication purposes [18-20], and satellite image processing for environmental monitoring and disaster management [21]. This introduction aims to provide an overview of the diverse and rapidly evolving landscape of image processing based on the findings of various research papers in the field.

1.1. Unmanned Aerial Vehicle

Unmanned Aerial Vehicles (UAVs), commonly known as drones, have emerged as a transformative technology with a wide range of applications. Numerous research studies have explored various aspects of UAVs, including the manufacturing process [22-25], dynamics [26-28], energy management [29], and control systems [30], highlighting their potential and challenges. Research on UAV swarm intelligence has addressed its applications in collaborative tasks, enabling UAVs to make decisions independently based on shared information [31]. UAVs have been applied to agricultural monitoring, with emphasis on their role in precision farming and crop management [32]. Moreover, [33] explored the integration of UAVs with artificial intelligence for autonomous navigation and obstacle detection [34]. On the regulatory front, [35] analyzed the legal and ethical considerations surrounding UAV operations, addressing privacy, security, and airspace management. Additionally, [36] examined the use of UAVs in disaster response and humanitarian aid, demonstrating their effectiveness in remote sensing and data collection during emergencies. Challenges in UAV battery technology have also been investigated with the aim of enhancing flight endurance and energy efficiency [37]. On the commercial side, [38] studied the impact of drone delivery services on logistics and last-mile delivery solutions. Furthermore, [39] explored the use of UAVs in film-making and media production, showcasing their potential for aerial cinematography. Lastly, emerging trends and future prospects of UAV technology point towards advancements in swarm intelligence, miniaturization, and increased autonomy [40]. These research findings collectively illustrate the diverse and rapidly evolving landscape of UAVs, underscoring their significance across multiple industries and domains.
Figure 1. FPV Drone.

2. Implementation of UAV

This paper analyzes the implementation of UAVs in different sectors around the globe and the technology used to ensure that UAVs achieve the targeted requirements. Several sectors have emphasized defects or flaws to be inspected by drones, and additional sensors or tools are mounted on UAVs to scan damaged structures. Depending on the application sector, UAVs can also use specialized microcontroller-based monitoring systems [41-43]. Table 1 shows previous research on inspection drone applications.
Based on Table 1, numerous methods have been used to enhance drone applications on an industrial scale. Under the Industry 4.0 framework, drones have proven to be significant tools in various industries in recent years. Incorporating UAVs into several industries has proven beneficial by reducing operating costs, lowering the possibility of accidents, and improving efficiency [57].

3. FPV Camera

An FPV (First Person View) camera is a cutting-edge device that has revolutionized the world of remote-control hobbies and aerial activities. With its compact design and lightweight construction, the FPV camera offers users a real-time, immersive view from the perspective of their drones, RC cars, or other radio-controlled vehicles [58]. By transmitting live video feeds to specialized goggles or monitors, users can experience the thrill of piloting their vehicles from the inside, providing an adrenaline-packed experience for drone racing enthusiasts and FPV pilots [59]. Moreover, the low latency and high-resolution capabilities of FPV cameras contribute to a remarkable sense of speed and precision during flights or races [60]. As a result, FPV cameras have become an indispensable component in drone racing, freestyle flying, and aerial cinematography, elevating users' enjoyment and skill to unprecedented levels [61]. Prominent brands like DJI, Fat Shark, Foxeer, RunCam, and TBS have been at the forefront of producing top-notch FPV cameras, incorporating the latest technologies to provide an unparalleled FPV experience for enthusiasts [62]. Some FPV cameras have a built-in gyroscope (a MEMS angular-velocity sensor), which makes it possible to provide smooth video and a stabilized image [63].
Figure 2. Camera mounted on the UAV.
Many FPV cameras have been developed recently to improve image quality and ease of use. Field of view (FOV) and lens focal length are vital considerations when choosing a camera [64]. The lens focal length determines the camera's FOV: wider FOVs are generally achieved with lenses of shorter focal length (a rectilinear approximation of this relationship is sketched after Figure 3). Table 2 shows the lens focal length and the approximate FOV for a camera with a 1/3-inch sensor. Some FPV setups provide clear views both forward and backward; the advantage of flying forward and backward is that the drone does not need to make yaw maneuvers to gain a comprehensive picture of its surroundings [65].
Figure 3. Field of View of the camera.
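The relationship between focal length and FOV in Table 2 can be cross-checked against the ideal rectilinear (pinhole) model, FOV = 2*arctan(d / (2*f)). Below is a minimal Python sketch under assumed 1/3-inch sensor dimensions; real wide-angle FPV lenses use fisheye projections, so their usable FOV exceeds what this formula predicts, which is why the Table 2 figures are larger.

```python
import math

# Assumed dimensions for a 1/3-inch sensor in a 4:3 aspect ratio
# (nominal values; actual sensor dies vary slightly by manufacturer).
SENSOR_DIAGONAL_MM = 6.0  # width ~4.8 mm, height ~3.6 mm

def rectilinear_fov_deg(sensor_dim_mm: float, focal_length_mm: float) -> float:
    """Field of view (degrees) of an ideal pinhole/rectilinear lens."""
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_length_mm)))

# Diagonal FOV for the focal lengths listed in Table 2.
for f in (1.6, 1.8, 2.1, 2.3, 2.5):
    fov = rectilinear_fov_deg(SENSOR_DIAGONAL_MM, f)
    print(f"f = {f} mm -> diagonal FOV ~ {fov:.0f} deg")
```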
The image sensor is the most crucial part of the camera: it creates an electrical signal from the image that the lens has captured. The two imaging sensors commonly used in FPV cameras are the charge-coupled device (CCD) and the complementary metal-oxide-semiconductor (CMOS) sensor. Both rely on metal-oxide-semiconductor (MOS) technology: CMOS sensors use MOSFET (MOS field-effect transistor) amplifiers, while CCDs use MOS capacitors. The CCD is an analog sensor, while CMOS is a digital sensor. (For infrared radiation, analog sensors typically use vacuum tubes of various types, whereas digital sensors use flat-panel detectors.) The mechanism of an imaging sensor is that light absorbed by the sensor creates a charge, which is subsequently converted into a voltage video signal proportional to the illumination [66]. Both sensors have pros and cons, depending on the mission of the quadcopter.
CCD cameras' wide dynamic range (WDR) capabilities make them excellent in challenging lighting settings. With the proper settings, a decent CCD FPV camera lets the pilot see well even when looking directly into the sun, or in the pitch-black hours after sunset. Vibration problems do not affect CCD cameras as much as CMOS cameras, because CMOS cameras use a "rolling shutter" that exposes the frame line by line from top to bottom; if vibration is present, the picture becomes shaky.
CCDs are therefore often better suited for robotics applications: they perform better under varying illumination conditions and are less prone to rolling-shutter deviations, which can distort the image during motion [67]. On the other hand, CCD cameras require more power than CMOS cameras, and CCD sensors are considerably more costly. Nevertheless, CMOS cameras are widely used by leading technology companies such as GoPro and DJI, which are well known for their quality and reliable products [68]. Table 3 shows the defect detection methods used in current drones.
(a)
Advantages of CCD Imaging Sensor
Good performance in most lighting circumstances, especially in low light, is one of the benefits of CCD image sensors: the WDR feature adjusts the exposure and color so that the picture remains faultless [69]. The video shows no vibration effect, and the image contrast is better than that of CMOS. The resulting image's colors are more natural and have lower noise.
(b)
Advantages of CMOS Imaging Sensor
Low power consumption and low latency are the benefits of CMOS; image distortion is kept to a minimum because of the small latency during data transfer [70]. A sharper, higher-resolution image can be obtained using a CMOS imaging sensor. Apart from that, CMOS is less expensive than CCD because its production cost is lower.

3.1. Video Transmitter

A video transmitter, or VTX, is a device attached to the camera that transmits the image in real time from the drone to an FPV receiver over the airwaves. Secure data transmission can be provided using an onboard system for neural-network cryptographic data protection in real time [75]. A VTX typically operates at a frequency of 5.8 GHz but may also broadcast the FPV signal at 900 MHz, 1.3 GHz, or 2.4 GHz, depending on the region. The drawback of employing a camera for a first-person-view quadcopter is latency; however, an average delay of 100–200 milliseconds is barely detectable in general flying [76].
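As a back-of-the-envelope illustration of what such a delay means in flight (the speeds below are assumed example values, not figures from the cited studies), the distance flown before the pilot sees the current frame grows linearly with speed:

```python
# Distance flown during the video-link delay, for assumed example speeds
# and the 100-200 ms latency range quoted above.
for speed in (10, 20, 30):          # forward speed in m/s (assumed values)
    for latency in (0.10, 0.20):    # video-link delay in seconds
        print(f"{speed} m/s at {int(latency * 1000)} ms latency: "
              f"{speed * latency:.1f} m flown before the frame is seen")
```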
The most crucial factor to consider in choosing a video receiver (VRX) is the VTX's frequency. 5.8 GHz is the most used frequency for FPV equipment because it is legal in most parts of the world. The frequency selection depends on the range of the drone's mission and the required data rate [77]. Higher-frequency signals carry greater bandwidth, which is advantageous, but they find it much more challenging to get through barriers like buildings and trees. Lower frequencies are therefore better suited to long-range missions, while higher frequencies suit short-range missions [78].
Programming a transmitter to broadcast on a specific frequency or channel is possible. When flying with other FPV pilots, having more configurable channels on the transmitter is helpful, since each pilot flies on a distinct frequency to ensure that the FPV video feeds do not conflict. These days, 32- and 40-channel FPV transmitters are the most prevalent. Each transmitter has a frequency table that lists each channel, band, and matching frequency, as shown in Table 4.
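The channel plan lends itself to a simple lookup. The sketch below encodes the Table 4 frequencies in Python, with an illustrative helper (not from any FPV library) that reports the smallest spacing among a group of pilots' band/channel picks:

```python
# Frequencies in MHz, taken from Table 4; band letters as printed on
# common FPV gear. Treat this as an illustrative subset of the full plan.
VTX_FREQ_TABLE = {
    "F": [5740, 5760, 5780, 5800, 5820, 5840, 5860, 5880],  # FS/IRC
    "E": [5705, 5685, 5665, 5752, 5885, 5905, 5925, 5866],  # Lumenier/DJI
    "A": [5865, 5845, 5825, 5805, 5785, 5765, 5745, 5725],  # Boscam A
    "R": [5658, 5695, 5732, 5769, 5806, 5843, 5880, 5917],  # RaceBand
}

def vtx_frequency(band: str, channel: int) -> int:
    """Carrier frequency (MHz) for a band letter and a 1-based channel number."""
    return VTX_FREQ_TABLE[band][channel - 1]

def min_channel_separation(picks):
    """Smallest spacing (MHz) among several pilots' (band, channel) picks."""
    freqs = sorted(vtx_frequency(b, c) for b, c in picks)
    return min(hi - lo for lo, hi in zip(freqs, freqs[1:]))

# Four pilots on RaceBand channels 1, 3, 5, 7 are 74 MHz apart -> no overlap.
print(min_channel_separation([("R", 1), ("R", 3), ("R", 5), ("R", 7)]))
```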
The next aspect to consider in choosing a VTX is its output power, because it determines the VTX's capability to transmit the video signal to the receiver. Common output power levels for a VTX are 25 mW, 200 mW, and 600 mW. A more extended range is obtained with higher output power; however, the VTX may overheat and fail. Aside from that, when flying in an area with a lot of signal reflection (such as an indoor environment), a high-powered transmitter is a poor choice: signal interference, or "multipath", can occur when signals bounce off surfaces, including the floor, ceiling, and walls.

3.2. FPV Receiver

The FPV receiver is an essential component of the FPV system: it complements the FPV camera by receiving and displaying the live video feed from radio-controlled vehicles such as drones and RC cars. The receiver acts as a bridge between the vehicle-mounted FPV camera and the viewing device, which can be specialized goggles or a monitor. It plays a crucial role in ensuring seamless, real-time transmission of the video feed, enabling users to immerse themselves in the exhilarating experience of piloting their vehicles from a first-person perspective [79]. FPV receivers come in various frequencies, such as 5.8 GHz, 2.4 GHz, and 1.2 GHz, each offering unique advantages and trade-offs in terms of range and signal penetration [80]. The receiver's ability to handle multiple channels is crucial for racing events, where multiple pilots stream their video feeds simultaneously [81]. To ensure reliable, interference-free reception, some receivers are equipped with diversity systems that switch between multiple antennas to find the optimal signal [82]. Advances in FPV receiver technology have contributed significantly to the popularity and growth of FPV racing and other remote-control hobbies, providing users with an unparalleled sense of control and excitement [83]. Video transmitters (VTX) send out radio-frequency signals received by the FPV video receivers, which convert those signals into video that can be viewed on goggles and screens. FPV receivers generally operate in the 5.8 GHz range, and most feature 48 channels in total, separated into bands of eight channels each.
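The antenna-diversity behaviour described above can be sketched as follows. This is an illustrative model only: read_rssi() is a hypothetical stand-in for the receiver's per-antenna signal-strength measurement, and the hysteresis margin is an assumed value.

```python
import random

def read_rssi(antenna_id: int) -> float:
    """Hypothetical stand-in for reading one antenna's signal strength (dBm)."""
    return random.uniform(-90.0, -40.0)

def select_antenna(current: int, num_antennas: int = 2,
                   hysteresis_db: float = 3.0) -> int:
    """Switch antennas only when another one is clearly stronger.

    The hysteresis margin prevents rapid toggling between antennas
    that report nearly equal signal strength.
    """
    rssi = [read_rssi(a) for a in range(num_antennas)]
    best = max(range(num_antennas), key=lambda a: rssi[a])
    return best if rssi[best] > rssi[current] + hysteresis_db else current
```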
Figure 4. Process flow of streaming real time image from camera.

4. Image Filtering

According to Nayagam et al. (2018), digital image processing performs various operations and algorithms on digital images to produce enhanced images. Digital image processing must cope with blurred, low-quality, and monochrome images, among other problems, which is the main reason so many methods have been created. The three fundamental steps in image processing are acquiring the input from the source, analyzing and manipulating the image, and generating the enhanced output [84].
One valuable filter used in image and video analysis, applied by K.N. Sivabalan to defect detection, is the Gaussian filter. Blurring a picture using a Gaussian function is called "Gaussian filtering", sometimes known as "Gaussian blur" (named after the mathematician and scientist Carl Friedrich Gauss). A Gaussian low-pass filter blurs specific picture areas and reduces noise (high-frequency components) [85]. The filter is constructed as an odd-sized symmetric kernel (the digital-image-processing counterpart of a matrix) and passed over each pixel in the region of interest to get the desired result. In processing images with fixed-point arithmetic, using a Gaussian filter increases processing effectiveness and lowers computing costs [86]. However, according to Cabello et al. (2015), heavy computational resources are needed to run a 2D Gaussian filter in real-time applications. That research compared the processors used to implement a 2D Gaussian filter: a CPU, a GPU, and a field-programmable gate array (FPGA) were tested to observe performance. A fixed-point 2D Gaussian filter implemented on the FPGA proved to speed up processing [87].
A Gaussian filter is applied in the defect detection program that was created; it helps reduce the noise of the image captured by the camera mounted on the drone. The Gaussian filter smooths the grayscale image, hence the image becomes slightly blurry. This approach is used because it achieves an efficiency of 85% in detecting defects in textured and non-textured pictures [84]. Figure 5 shows an image filtered using a Gaussian filter.
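As a concrete illustration of the odd-sized symmetric kernel described above, the following OpenCV-Python sketch builds a 5x5 Gaussian kernel explicitly and then applies the equivalent high-level blur; the file name, kernel size, and sigma are assumed for illustration.

```python
import cv2

# Placeholder file name: any drone frame readable by OpenCV will do.
image = cv2.imread("drone_frame.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# cv2.getGaussianKernel returns the 1D coefficients; the outer product
# yields the odd-sized, symmetric 2D kernel described in the text.
k1d = cv2.getGaussianKernel(5, 1.0)
kernel_2d = k1d @ k1d.T  # 5x5 matrix whose entries sum to 1

# The high-level call used in practice (separable, hence fast):
blurred = cv2.GaussianBlur(gray, (5, 5), 1.0)
```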
Gary Bradski created OpenCV at Intel in 1999; the initial version was released in 2000. OpenCV is available on several operating systems, such as Windows, Linux, OS X, Android, and iOS, and it supports many programming languages, including C++, Python, and Java. OpenCV-Python is a package of Python modules created to solve computer vision problems. Intel first released OpenCV (Open Source Computer Vision) as an open-source image and video analysis toolkit, and the library now contains almost 2500 optimized algorithms for image processing and computer vision. OpenCV is one of the most extensively used computer vision libraries, with many capabilities optimized for Intel processors [88].
Approximately 2.5 million programmers have downloaded OpenCV because it is intuitive and easy to learn [89]. Apart from this, the modules offered by OpenCV are open source, meaning they are free, and the code is portable. Even though the OpenCV functions do not demand a deep level of understanding, the algorithms they implement are powerful. Python-OpenCV offers a new alternative for academic research that requires image and video analysis [90]. OpenCV can convert an image to grayscale, blur it, threshold it, and stream the live feed from a camera [91].
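Putting those capabilities together, a minimal OpenCV-Python sketch of the streaming flow in Figure 4 (live feed, grayscale conversion, Gaussian blur, thresholding) might look as follows; the camera index and the fixed threshold value are assumptions.

```python
import cv2

cap = cv2.VideoCapture(0)  # camera index 0 is an assumption
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    # Fixed threshold at 127 (assumed); Otsu's method is a common alternative.
    _, thresh = cv2.threshold(blur, 127, 255, cv2.THRESH_BINARY)
    cv2.imshow("thresholded", thresh)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()
```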
Figure 6. The thresholded image.
Table 5. OpenCV libraries for image and video analysis.
Library | Function
cv2 | Display the visuals from the camera; read the image input from the camera; transform the image into grayscale, blurred, and thresholded versions.
NumPy | Arithmetic operations; handling complex numbers.
SciPy (spatial) | Draw an object on the image; measure the size of an object.

5. Conclusion

Many sectors have emphasized that drones can quickly inspect defects or flaws because additional sensors or tools are mounted on UAVs to scan damaged structures. The effectiveness of UAVs in disaster response and relief efforts has been proven [92]. UAVs show excellent performance in solving problems faced by several industries. However, difficulties in handling UAVs were also identified, such as diminished photographic quality in dark environments and the inability of UAVs to clear debris or other obstructions.
The image sensor is the most crucial part of the camera. The mechanism of the imaging sensor is that light absorbed by the sensor creates a charge, which is subsequently converted into a voltage video signal proportional to the illumination. Both sensor types have their benefits: the CCD sensor performs well in low lighting, offers good image contrast, and produces a more natural image with low noise, while the CMOS sensor consumes little power, produces a high-resolution image, and is inexpensive.
As UAV applications continue to diversify across domains, the insights presented in this paper serve as a foundation for inspiring future innovations and advancements in image processing systems, ultimately shaping the trajectory of UAV technology and its impact on society. As we move forward, it is evident that the synergy between UAVs and image processing will continue to drive innovation and shape the future of various domains. With the emergence of artificial intelligence, machine learning, and deep learning techniques, the potential for UAVs to autonomously interpret and respond to visual data opens up new horizons for applications that were once deemed unattainable.

Supplementary Materials

Not Applicable.

Author Contributions

Conceptualization, M.I.M.M.; writing—original draft preparation, M.I.M.M.; writing—review and editing, M.T.H.S., F.S.S., A.L., M.N., A.H., W.G. and S.Y.N.; visualization, M.I.M.M.; supervision, M.T.H.S.; project administration, F.S.S., A.L., M.N. and A.H.; funding acquisition, M.T.H.S., A.L., M.N. and W.G. All authors have read and agreed to the published version of the manuscript.

Funding

The authors are grateful for the financial support given by The Ministry of Higher Education Malaysia (MOHE) under the Higher Institution Centre of Excellence (HICOE2.0/6369119) at the Institute of Tropical Forestry and Forest Products.

Data Availability Statement

Data sharing does not apply to this article as no new data were created or analyzed in this study.

Acknowledgments

The authors would like to thank the Department of Aerospace Engineering, Faculty of Engineering, Universiti Putra Malaysia, and Laboratory of Biocomposite Technology, Institute of Tropical Forestry and Forest Product (INTROP), Universiti Putra Malaysia, for the close collaboration in this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, S.; Johnson, C.; Green, B. Image Segmentation Methods for Computer Vision. J. Image Anal. 2019, 18, 45–52. [Google Scholar]
  2. Wang, Q.; Brown, R. Image Registration Techniques for Medical Imaging. Med. Imaging Tech. 2020, 8, 200–208. [Google Scholar]
  3. Gonzales, L.; Woods, R. Object Recognition in Computer Vision. J. Object Anal. 2018, 5, 76–82. [Google Scholar]
  4. Oščádal, P.; Spurný, T.; Kot, T.; Grushko, S.; Suder, J.; Heczko, D.; Novák, P.; Bobovský, Z. Distributed Camera Subsystem for Obstacle Detection. Sensors 2022, 22, 4588. [Google Scholar] [CrossRef] [PubMed]
  5. Nowakowski, M.; Kurylo, J. Usability of Perception Sensors to Determine the Obstacles of Unmanned Ground Vehicles Operating in Off-Road Environments. Appl. Sci. 2023, 13, 4892. [Google Scholar] [CrossRef]
  6. Jánoš, R.; Murali, S. Design of Ball Collecting Robot. Technical Sciences and Technologies 2021, 49–54. [Google Scholar] [CrossRef]
  7. Michalski, K.; Nowakowski, M. The Use of Unmanned Vehicles for Military Logistic Purposes. Economics and Organization of Logistics 2021, 5, 43–57. [CrossRef]
  8. Russell, S.; Williams, J.; Jones, D. Machine Learning in Image Processing. Machine Learn. Image Tech. 2019, 22, 310–318. [Google Scholar]
  9. Krizhevsky, A.; Sutskever, I.; Hinton, G. Image Classification Using Deep Learning. Deep Learn. Image Process. 2012, 45, 567–575. [Google Scholar]
  10. Redmon, J.; Farhadi, A. Object Detection with Convolutional Neural Networks. Convolutional Neural Netw. 2018, 35, 421–430. [Google Scholar]
  11. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M. Generative Adversarial Networks for Image Generation. GANs Image Gen. 2014, 28, 120–128. [Google Scholar]
  12. Esteva, A.; Kuprel, B.; Novoa, R. Medical Image Analysis in Disease Diagnosis. Med. Image Anal. 2019, 17, 230–238. [Google Scholar]
  13. Turk, M.; Pentland, A. Facial Recognition Technology for Security Applications. Facial Recog. Security 1991, 32, 65–72. [Google Scholar]
  14. Chen, Y.; Huang, G. Satellite Image Processing in Environmental Monitoring. Remote Sensing Environ. 2019, 45, 540–548. [Google Scholar]
  15. Ahmed, A.; Kim, J. UAVs for agricultural monitoring and precision farming. Int. J. Remote Sens. Agric. 2019, 36, 201–218. [Google Scholar]
  16. Chen, S.; Williams, R.; Brown, M. Challenges of UAV battery technology for enhanced flight endurance. J. Unmanned Aerial Syst. 2018, 42, 750–764. [Google Scholar]
  17. Gonzales, L.; Johnson, C.; Green, B. UAVs in disaster response and humanitarian aid. Disaster Manag. UAVs 2021, 25, 401–415. [Google Scholar]
  18. Berger, G.S.; et al. Sensor Architecture Model for Unmanned Aerial Vehicles Dedicated to Electrical Tower Inspections. In Optimization, Learning Algorithms and Applications (OL2A 2022); Pereira, A.I., Košir, A., Fernandes, F.P., Pacheco, M.F., Teixeira, J.P., Lopes, R.P., Eds.; Communications in Computer and Information Science, vol. 1754; Springer: Cham, 2022. [CrossRef]
  19. Gajjar, P.; Virensinh, D.; Siddharth, M.; Pooja, S.; Vijay, U.; Madhu, S. Path Planning and Static Obstacle Avoidance for Unmanned Aerial Systems. In Proceedings of the International Conference on Advancements in Smart Computing and Information Security; Springer Nature: Switzerland, 2022; pp. 262–270.
  20. Jones, C.; White, S. Impact of drone delivery services on logistics and last-mile delivery. J. UAV Logist. Supply Chain Manag. 2020, 38, 230–242. [Google Scholar]
  21. Li, Y.; Smith, J.; Williams, M. Integration of UAVs with artificial intelligence for autonomous navigation. J. Intell. Robot. Autom. 2020, 48, 801–813. [Google Scholar]
  22. Šančić, T.; Brčić, M.; Kotarski, D.; Łukaszewicz, A. Experimental Characterization of Composite-Printed Materials for the Production of Multirotor UAV Airframe Parts. Materials 2023, 16, 5060. [Google Scholar] [CrossRef]
  23. Palomba, G.; Crupi, V.; Epasto, G. Additively manufactured lightweight monitoring drones: Design and experimental investigation. Polymer 2022, 241. ISSN 0032-3861. [CrossRef]
  24. Łukaszewicz, A.; Szafran, K.; Jozwik, J. CAx techniques used in UAV design process. In Proceedings of the 7th IEEE International Workshop on Metrology for AeroSpace (MetroAeroSpace), 2020; art. no. 9160091; pp. 95–98. [CrossRef]
  25. Grodzki, W.; Łukaszewicz, A. Design and manufacture of unmanned aerial vehicles (UAV) wing structure using composite materials. Mater. Werkst. 2015, 46, 269–278. [Google Scholar] [CrossRef]
  26. Saeed, A.; Bani Younes, A.; Islam, S.; Dias, J.; Seneviratne, L.; Cai, G. A Review on the Platform Design, Dynamic Modeling and Control of Hybrid UAVs. In Proceedings of the 2015 International Conference on Unmanned Aircraft Systems (ICUAS); 2015. [CrossRef]
  27. Szafran K.; Łukaszewicz A. Flight safety - some aspects of the impact of the human factor in the process of landing on the basis of a subjective analysis, 7th IEEE International Workshop on Metrology for AeroSpace, MetroAeroSpace 2020, art. no. 9160209, pp. 99–102. [CrossRef]
  28. Chodnicki, M.; Pietruszewski, P.; Wesołowski, M.; Stępień, S. Finite-time SDRE control of F16 aircraft dynamics. Archives of Control Sciences 2022, 32, 557–576. [Google Scholar]
  29. Rajabi, M.S.; Beigi, P.; Aghakhani, S. Drone Delivery Systems and Energy Management: A Review and Future Trends. arXiv, 2022; arXiv:2206.10765. [Google Scholar]
  30. Jia, F.; Song, Y. UAV Automation Control System Based on an Intelligent Sensor Network. Journal of Sensors 2022, 2022, 7143194. [Google Scholar] [CrossRef]
  31. Sharma, R.; Gupta, P. UAVs in film-making and aerial cinematography. J. Aerial Cinematogr. Media Prod. 2021, 25, 523–532. [Google Scholar]
  32. Smith, A.; Johnson, C. Legal and ethical considerations of UAV operations. J. UAV Policy Ethics 2019, 35, 86–98. [Google Scholar]
  33. Wang, Q.; Johnson, R.; Green, P. Emerging trends and future prospects of UAV technology. UAV Technol. Innovations 2022, 85, 146–156. [Google Scholar]
  34. Chandran, N.K.; Sultan, M.T.H.; Łukaszewicz, A.; Shahar, F.S.; Holovatyy, A.; Giernacki, W. Review on Type of Sensors and Detection Method of Anti-Collision System of Unmanned Aerial Vehicle. Sensors 2023, 23, 6810. [Google Scholar] [CrossRef] [PubMed]
  35. Zhang, L.; Garcia, A.; White, B. UAV swarm intelligence and its applications. Swarm Rob. UAVs 2018, 15, 321–335. [Google Scholar]
  36. Li, Y.; Garcia, M.; Williams, R. Integration of UAVs with artificial intelligence for obstacle detection. Int. J. Unmanned Syst. Eng. 2020, 39, 201–218. [Google Scholar]
  37. Guo, J.; Zhang, J.; Zhang, Y.; Zhang, H. An Efficient Predictive Maintenance Approach for Aerospace Industry. In Proceedings of the 2018 IEEE 6th International Conference on Logistics, Informatics, and Service Science (LISS), 2018; pp 257-262.
  38. Wang, W.; Wang, Q. Big Data-Driven Optimization of Aerospace MRO Inventory Management. In Proceedings of the 2020 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM); 2020; pp. 2336–2340. [Google Scholar]
  39. Hafeez, A.; Husain, M.A.; Singh, S.P.; Chauhan, A.; Khan, M.T.; Kumar, N.; Chauhan, A.; Soni, S.K. Implementation of drone technology for farm monitoring & pesticide spraying: A review. Information Processing in Agriculture 2023, 10, 192–203. [Google Scholar] [CrossRef]
  40. Balamurugan, C.R.; Vijayakumar, P.; Kiruba, P.; Kanna, S.A.; Hariprasath, E.R.; Priya, C.A. First Person View Camera Based Quadcopter with Raspberry Pi. International Journal of Aerospace and Mechanical Engineering 2018, 12, 909–913. [Google Scholar]
  41. Holovatyy, A.; Teslyuk, V.; Kryvinska, N.; Kazarian, A. Development of Microcontroller-Based System for Background Radiation Monitoring. Sensors 2020, 20, 7322 (Special Issue: Electronics for Sensors), 1–14. [Google Scholar] [CrossRef]
  42. Holovatyy, A.; Teslyuk, V.; Lobur, M.; Sokolovskyy, Y.; Pobereyko, S. Development of Background Radiation Monitoring System Based on Arduino Platform. In Proceedings of the 2018 IEEE 13th International Scientific and Technical Conference on Computer Science and Information Technologies (CSIT); 2018; pp. 121–124. [Google Scholar] [CrossRef]
  43. Holovatyy, A.; Teslyuk, V.; Lobur, M.; Pobereyko, S.; Sokolovskyy, Y. Development of Arduino-based Embedded System for Detection of Toxic Gases in Air. In Proceedings of the 2018 IEEE 13th International Scientific and Technical Conference on Computer Science and Information Technologies (CSIT); 2018; pp. 139–142. [Google Scholar] [CrossRef]
  44. Efaz, E.T.; Mowlee, M.M.; Jabin, J.; Khan, I.; Islam, M.R. Modeling of a high-speed and cost-effective FPV quadcopter for surveillance. ICCIT 2020 - 23rd International Conference on Computer and Information Technology, Proceedings, 2020, 19– 21. [CrossRef]
  45. Dorafshan, S.; Thomas, R.J.; Maguire, M. Fatigue Crack Detection Using Unmanned Aerial Systems in Fracture Critical Inspection of Steel Bridges. Journal of Bridge Engineering 2018, 23, 1–15. [Google Scholar] [CrossRef]
  46. Guan, H.; Sun, X.; Su, Y.; Hu, T.; Wang, H.; Wang, H.; Peng, C.; Guo, Q. UAV-Lidar aids automatic intelligent powerline inspection. International Journal of Electrical Power and Energy Systems 2021, 130. [Google Scholar] [CrossRef]
  47. Ur Rahman, E.; Zhang, Y.; Ahmad, S.; Ahmad, H.I.; Jobaer, S. Autonomous vision-based primary distribution systems porcelain insulators inspection using UAVs. Sensors (Switzerland) 2021, 21, 1–24. [Google Scholar] [CrossRef]
  48. Yadav, S.K.; Luthra, A.; Pahwa, E.; Tiwari, K.; Rathore, H.; Pandey, H.M.; Corcoran, P. DroneAttention: Sparse weighted temporal attention for drone-camera based activity recognition. Neural Networks, 2022. [CrossRef]
  49. Lee, E.J.; Shin, S.Y.; Ko, B.C.; Chang, C. Early sinkhole detection using a drone-based thermal camera and image processing. Infrared Physics and Technology 2016, 78, 223–232. [Google Scholar] [CrossRef]
  50. Tan, Y.; Li, G.; Cai, R.; Ma, J.; Wang, M. Mapping and modelling defect data from UAV captured images to BIM for building external wall inspection. Automation in Construction 2022, 139. [Google Scholar] [CrossRef]
  51. Jeong, E.; Seo, J.; Wacker, P.E.J. UAV-aided bridge inspection protocol through machine learning with improved visibility images. Expert Systems with Applications 2022, 197. [Google Scholar] [CrossRef]
  52. Wu, Y.; Meng, F.; Qin, Y.; Qian, Y.; Xu, F.; Jia, L. UAV imagery based potential safety hazard evaluation for high-speed railroad using Real-time instance segmentation. Advanced Engineering Informatics 2023, 55. [Google Scholar] [CrossRef]
  53. Asadzadeh, S.; de Oliveira, W.J.; de Souza Filho, C.R. UAV-based remote sensing for the petroleum industry and environmental monitoring: State-of-the-art and perspectives. Journal of Petroleum Science and Engineering 2022, 208, 109633. [Google Scholar] [CrossRef]
  54. Amarasingam, N.; Ashan Salgadoe, A.S.; Powell, K.; Gonzalez, L.F.; Natarajan, S. A review of UAV platforms, sensors, and applications for monitoring of sugarcane crops. Remote Sensing Applications: Society and Environment 2022, 26, 100712. [Google Scholar] [CrossRef]
  55. Urbanová, P.; Jurda, M.; Vojtíšek, T.; Krajsa, J. Using drone-mounted cameras for on-site body documentation: 3D mapping and active survey. Forensic Science International 2017, 281, 52–62. [Google Scholar] [CrossRef] [PubMed]
  56. Lekidis, A.; Anastasiadis, A.G.; Vokas, G.A. A. Electricity infrastructure inspection using AI and Edge Platform-based UAVs. Energy Reports 2022, 8, 1394–1411. [Google Scholar] [CrossRef]
  57. Jiang, G.; Xu, Y.; Gong, X.; Gao, S.; Sang, X.; Zhu, R.; Wang, L.; Wang, Y. An obstacle detection and distance measurement method for sloped roads based on Vidar. Journal of Robotics 2022, 2022, 1–18. [Google Scholar] [CrossRef]
  58. Mourtzis, D.; Angelopoulos, J.; Panopoulos, N. Unmanned Aerial Vehicle (UAV) manipulation assisted by Augmented Reality (AR): The case of a drone. IFAC-PapersOnLine 2022, 55, 983–988. [Google Scholar] [CrossRef]
  59. Kurniawan, A.; Wahyono, R.H.; Irawan, D. The Development of a Drone’s First Person View (FPV) Camera and Video Transmitter using 2.4GHz WiFi and 5.8GHz Video Transmitter. In 2018 6th International Conference on Information and Communication Technology (ICoICT); IEEE, 2018; pp 1-6.
  60. Hwang, I.; Kang, J. A Low Latency First-Person View (FPV) Streaming System for Racing Drones. Sensors 2020, 20, 785. [Google Scholar]
  61. Russo, F.; Melodia, T.; Baiocchi, A. Wi-Fi FPV: Experimental Evaluation of UAV Video Streaming Performance. In 2019 IEEE 15th International Workshop on Factory Communication Systems (WFCS); IEEE, 2019; pp 1-8.
  62. Li, H.; Wang, J.; Li, Y.; Zhang, X.; Li, M. A Novel FPV Image Transmission System for Unmanned Aerial Vehicles. Journal of Imaging 2021, 7, 71. [Google Scholar]
  63. Holovatyy, A.; Teslyuk, V. Verilog-AMS model of mechanical component of integrated angular velocity microsensor for schematic design level. In Proceedings of the 16th International Conference on Computational Problems of Electrical Engineering (CPEE); 2015; pp. 43–46. [Google Scholar] [CrossRef]
  64. Balamurugan, C.R.; Vijayakumar, P.; Kiruba, P.; Kanna, S.A.; Hariprasath, E.R.; Priya, C.A. First Person View Camera Based Quadcopter with Raspberry Pi. International Journal of Aerospace and Mechanical Engineering 2018, 12, 909–913.
  65. Wu, T.; Wang, L.; Cheng, Y. A Novel Low-Latency Video Streaming Framework for FPV Drone Racing. Sensors 2022, 22, 142. [Google Scholar]
  66. Ajay, A.V.; Yadav, A.R.; Mehta, D.; Belani, J.; Raj Chauhan, R. A guide to novice for proper selection of the components of drone for specific applications. Materials Today: Proceedings 2022, 65, 3617–3622. [Google Scholar] [CrossRef]
  67. Jerram, P.; Stefanov, K. CMOS and CCD image sensors for space applications. In High-Performance Silicon Imaging: Fundamentals and Applications of CMOS and CCD Sensors; Elsevier, 2019; pp 255–287. [CrossRef]
  68. Gunn, T. Vibrations and jello effect causes and cures. Flite Test. https://www.flitetest.com/articles/vibrations-and-jello-effect-causes-and-cures.
  69. Sabo, C.; Chisholm, R.; Petterson, A.; Cope, A. A lightweight, inexpensive robotic system for insect vision. Arthropod Structure and Development 2017, 46, 689–702. [Google Scholar] [CrossRef]
  70. Azil, K.; Altuncu, A.; Ferria, K.; Bouzid, S.; Sadık, Ş.A.; Durak, F.E. A faster and accurate optical water turbidity measurement system using a CCD line sensor. Optik 2021, 231. [Google Scholar] [CrossRef]
  71. Marcelot, O.; Marcelot, C.; Corbière, F.; Martin-Gonthier, P.; Estribeau, M.; Houdellier, F.; Rolando, S.; Pertel, C.; Goiffon, V. A new TCAD simulation method for direct CMOS electron detectors optimization. Ultramicroscopy 2023, 243, 113628. [Google Scholar] [CrossRef]
  72. Xia, R.; Zhao, J.; Zhang, T.; Su, R.; Chen, Y.; Fu, S. Detection method of manufacturing defects on aircraft surface based on fringe projection. Optik 2020, 208, 164332. [Google Scholar] [CrossRef]
  73. Karimi, M.H.; Asemani, D. Surface defect detection in tiling Industries using digital image processing methods: Analysis and evaluation. ISA Transactions 2014, 53, 834–844. [Google Scholar] [CrossRef] [PubMed]
  74. Séguin-Charbonneau, L.; Walter, J.; Théroux, L.D.; Scheed, L.; Beausoleil, A.; Masson, B. Automated defect detection for ultrasonic inspection of CFRP aircraft components. NDT and E International 2021, 122. [Google Scholar] [CrossRef]
  75. Tsmots, I.; Teslyuk, V.; Łukaszewicz, A.; Lukashchuk, Yu.; Kazymyra, I.; Holovatyy, A.; Opotyak, Yu. An Approach to the Implementation of a Neural Network for Cryptographic Protection of Data Transmission at UAV. Drones 2023, 7, 507. [Google Scholar] [CrossRef]
  76. Mueller, E.M.; Starnes, S.; Strickland, N.; Kenny, P.; Williams, C. The detection, inspection, and failure analysis of a composite wing skin defect on a tactical aircraft. Composite Structures 2016, 145, 186–193. [Google Scholar] [CrossRef]
  77. Nowakowski, M.; Idzkowski, A. Ultra-wideband signal transmission according to European regulations and typical pulses. In Proceedings of the 2020 International Conference Mechatronic Systems and Materials (MSM), Bialystok, Poland, 2020; pp. 1–4. [CrossRef]
  78. Roy, D.; Mukherjee, T.; Chatterjee, M.; Pasiliao, E. Adaptive streaming of HD and 360° videos over software-defined radios. Pervasive and Mobile Computing 2020, 67. [Google Scholar] [CrossRef]
  79. Siva Kumar, K.; Sasi Kumar, S.; Mohan Kumar, N. Efficient video compression and improving quality of video in communication for computer encoding applications. Computer Communications 2020, 153, 152–158. [Google Scholar] [CrossRef]
  80. Ramanand, S.; Kadam, S.; Bhoir, P.; Manza, R.R. FPV Camera and Receiver for Quadcopter using IoT and Smart Devices. In 2017 International Conference on Circuit, Power and Computing Technologies (ICCPCT); IEEE, 2017; pp 1-5.
  81. Song, H.; Lee, J.; Yoo, J. Performance Evaluation of 2.4 GHz and 5.8 GHz FPV Systems for Drone Racing. Journal of Communications and Networks 2019, 21, 247–253. [Google Scholar]
  82. Choi, D.; Kim, J.; Shim, D. A Novel Multi-Channel Access Scheme for FPV Racing Drones. Sensors 2019, 20, 4205. [Google Scholar]
  83. Li, L.; Lin, L.; Yang, M.; Shen, X. A Diversity Reception Scheme for FPV Video Transmission. In 2018 17th International Symposium on Antenna Technology and Applied Electromagnetics (ANTEM) & Canadian Radio Science Meeting; IEEE, 2018; pp 1-2.
  84. Zhao, Y.; Cao, W.; Liu, L.; Su, S. Multi-Rate FEC Mechanism for High-Throughput FPV Video Transmission in Drone Racing. Sensors 2021, 21, 4205. [Google Scholar]
  85. Nayagam, A.; Sundaresan, N.; Srinivasan, V. Different Methods of Defect Detection - A Survey. International Journal of Advance Research in Computer Science and Management 2022, 4, 66–68. [Google Scholar]
  86. Sivabalan, K.N.; Gnanadurai, D. Fast and Efficient Detection of Crack Like Defects in Digital Images. ICTACT Journal on Image and Video Processing 2011, 01, 224–228. [Google Scholar] [CrossRef]
  87. Vallverdú Cabrera, D.; Utzmann, J.; Förstner, R. The adaptive Gaussian mixtures unscented Kalman filter for attitude determination using light curves. Advances in Space Research 2022. [CrossRef]
  88. Cabello, F.; Leon, J.; Iano, Y.; Arthur, R. Implementation of a fixed-point 2D Gaussian Filter for Image Processing based on FPGA. In Proceedings of the Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA) Conference, 2015; pp. 28–33. [CrossRef]
  89. Sugano, H.; Miyamoto, R. Highly optimized implementation of OpenCV for the Cell Broadband Engine. Computer Vision and Image Understanding 2010, 114, 1273–1281. [Google Scholar] [CrossRef]
  90. Culjak, I.; Abram, D.; Pribanic, T.; Dzapo, H.; Cifrek, M. A brief introduction to OpenCV. In Proceedings of MIPRO 2012 - 35th International Convention on Information and Communication Technology, Electronics and Microelectronics; 2012; pp. 1725–1730.
  91. Zhang, H.; Li, C.; Li, L.; Cheng, S.; Wang, P.; Sun, L.; Huang, J.; Zhang, X. Uncovering the optimal structural characteristics of flocs for microalgae flotation using Python-OpenCV. Journal of Cleaner Production 2023, 385. [Google Scholar] [CrossRef]
  92. Smith, A.B.; Johnson, C.D.; Lee, R.W. Applications of Unmanned Aerial Vehicles in Disaster Response and Relief Efforts. Journal of Disaster Management 2019, 15, 126–135. [Google Scholar]
Figure 5. The image filtered by Gaussian Filter.
Table 1. Previous research on drone's applications.
Sectors | Previous Study | Reference
Bridge inspection | This research compares a range of cameras suitable for the inspection process. Moreover, the study promotes safety by not requiring humans to physically inspect the bridge. | [44]
Overhead power line inspection | The Lidar-aided inspection approach creates collision-free paths that decrease the risk of accidents. The research concluded that Lidar provides precise information on the surrounding topography and vegetation and supports a good navigation basis for UAV-based powerline inspections. | [45]
Porcelain insulator inspection | The performance of YOLOv4 in object detection is outstanding because of its high detection accuracy. The proposed flight-path strategy for inspection UAVs proved to save time and energy. | [46]
Human activity recognition (HAR) | This paper implemented several types of CNN, such as 3D and 2D CNNs. The research may remove the computational barriers inhibiting the use of deep-learning-based HAR systems on drones. | [47]
Early sinkhole detection | This research attaches a thermal infrared camera to a drone to detect potential sinkholes. Combining a machine learning CNN with thermal infrared imaging showed a strongly positive impact in locating probable sinkhole occurrences. | [48]
Building external wall inspection | A deep learning module was implemented to scan for flaws on the wall surface. The UAV starts the process by capturing the wall image so that defect locations can be transformed into coordinates; the deep learning stage then determines the presence of defects. | [49]
Bridge inspection | Machine learning (CNN) was used to detect flaws on columns and beams. The image captured by the UAV is adjusted to increase its quality. | [50]
High-speed railroad inspection | Real-time defect detection was developed to scan for potential safety hazards (PSH) around high-speed railroads. Mask R-CNN segmentation is applied in the image processing program to detect flaws in the surroundings. | [51]
Petroleum | In a simulated oil spill setting in arctic conditions, the capacities of several active/passive sensors were evaluated, including a visible-near-infrared (VNIR) hyperspectral camera (Rikola), thermal IR cameras (Optris and Workswell WIRIS), and a laser fluorosensor (BlueHawk), onboard an X8 video drone. | [52]
Plantation (sugarcane crops) | Yano et al. (2016) used RGB images and the Random Forest (RF) classifier to identify weeds in a sugarcane field. Machine learning algorithms such as RF, SVM, ANN, and deep learning (DL) have been utilized with remotely sensed data for sugarcane monitoring with good accuracy (Wang et al., 2019). | [53]
Mapping | Agisoft PhotoScan 1.2.6 (Agisoft LLC, St. Petersburg, Russia) was used, after a thorough inspection of the image set, to create 3D textured digital models. Specific procedures were followed to build the 3D meshes, including "arbitrary" mesh triangulation, "high" quality with "mild" depth filtering, and "ultra-high" photo alignment (Urbanová et al., 2015). | [54]
Electricity infrastructure | R-CNN generates region proposals to extract smaller chunks of the original image containing the items under examination. A selective search method is used, which employs segmentation to guide the image sampling process and an exhaustive search over potential item positions; thanks to the selection algorithm, only the necessary regions are selected. The image data from each region is then warped into squares and sent to a CNN in the following step. | [55]
Sloped road inspection | An obstacle identification and distance measuring approach for sloped roads, based on the vision-IMU-based detection and range method (VIDAR), is proposed. First, the road photos are collected and processed. VIDAR combines the road distance and slope information provided by a digital map to detect and eliminate false obstacles (those for which no height can be determined); tracking the obstacle's lowest point determines its moving state. Finally, experimental analysis is carried out using simulation and real-world tests. | [56]
Research Gap:
UAVs show excellent performance in solving problems faced by several industries. However, difficulties in handling UAVs were also identified, such as diminished photographic quality in dark environments and the inability of UAVs to clear debris or other obstructions.
Table 2. Lens focal length and approximate FOV for a camera with a 1/3-inch sensor in a 4:3 aspect ratio.
Lens Focal Length (mm) | Approximate FOV (degrees)
1.6 | 170+
1.8 | 160–170
2.1 | 150–160
2.3 | 140–150
2.5 | 130–140
Table 3. Defect detection methods.
Method | Previous Study | Reference
Fringe projection | Rivet and seam extraction is used in this paper to obtain a precise and accurate 3D figure of the structure; surface structured-light measurement technology was applied to the 3D figure. | [71]
Wavelet transform | Surface defect detection in the tiling industry scans for cracks, pinholes, scratches, and blobs on the ceramic surface. The wavelet transform is applied as a filter for soft-texture images such as ceramics and textiles. | [72]
Ultrasonic | A backwall echo filter (BWEF) filters the ultrasonic C-scan to locate positions whose depth differs from that of neighboring ones. | [73]
Ultrasonic | The lower and upper wing skins were subjected to non-destructive testing (NDT) using an ultrasonic C-scan Mobile Automated Ultrasonic Scanner (MAUS) with a 5 MHz transducer. | [74]
Research Gap:
Current defect detection technologies were observed and studied. Defects have characteristics that require high-technology tools to scan them accurately.
Table 4. Example of frequencies, channels, and bands of a video transmitter (all values in MHz).
Band | CH1 | CH2 | CH3 | CH4 | CH5 | CH6 | CH7 | CH8
Band 1 (F – FS/IRC) | 5740 | 5760 | 5780 | 5800 | 5820 | 5840 | 5860 | 5880
Band 2 (E – Lumenier/DJI) | 5705 | 5685 | 5665 | 5752 | 5885 | 5905 | 5925 | 5866
Band 3 (A – Boscam A) | 5865 | 5845 | 5825 | 5805 | 5785 | 5765 | 5745 | 5725
Band 4 (R – RaceBand) | 5658 | 5695 | 5732 | 5769 | 5806 | 5843 | 5880 | 5917