Preprint
Article

This version is not peer-reviewed.

Design and Field Validation of a Modular Vision-Guided UAV System for Real-Time Adaptive Vegetative Restoration

Submitted: 09 April 2026
Posted: 10 April 2026


Abstract
Vegetative restoration in degraded landscapes requires deployment strategies that can scale while adapting to heterogeneous terrain conditions. Conventional aerial seeding is typically performed in an open-loop manner, where seeds are distributed uniformly without accounting for local suitability for plant establishment. This paper describes a modular, unmanned aerial vehicle (UAV)-independent system for vision-guided aerial seeding, integrating onboard sensing, embedded processing, and real-time actuation within a closed-loop framework. The system combines a downward-facing visible-spectrum camera, a lightweight embedded computing unit, and a custom seed-dispensing mechanism organized in a perception–decision–actuation pipeline. Terrain suitability is evaluated in real time using three convolutional neural network (CNN) models and a conventional color-based greenness ratio method, enabling classification of sowable and non-sowable areas based on soil exposure, vegetation density, and obstacle presence. A confidence-based decision strategy, combined with temporal filtering, reduces noisy measurements, while an altitude-adaptive pulse-width modulation (PWM) controller regulates seed release to maintain a target seed density across varying flight heights. Field experiments conducted under semi-arid conditions show that terrain classification accuracy exceeds 85%, with inference latency below 100 ms per frame on an embedded Jetson Nano platform. In addition, the proposed control strategy maintains consistent seed density across different altitudes. These results indicate that onboard perception can be effectively coupled with adaptive aerial actuation, enabling more selective and efficient UAV-based vegetative restoration.

1. Introduction

Vegetative restoration is fundamental to mitigating soil erosion, recovering biodiversity, and enhancing water conservation, particularly in the Chihuahuan Desert. This biologically diverse semi-arid region is shared by the United States and Mexico, covering at least 450,000 km² [1]. The territory has experienced extensive land degradation: studies have reported widespread loss of perennial grass cover and trees due to long-term grazing pressure, land-use change, and increasing climatic aridity [2]. One consequence of this degradation is a reduction in groundwater availability; for example, a drop of 1 meter in groundwater levels was recorded in 2020 alone [3]. Technological innovation and knowledge transfer are essential to address these challenges; effective management practices are crucial for the sustainable use of grasslands and semi-arid woodlands, where re-seeding is a costly but viable option for restoring degraded areas [4].
Effective restoration contributes not only to ecological stability but also to climate change mitigation through natural carbon capture and improved land productivity [5]. Conventional reforestation practices often rely on manual direct seeding, in which seeds are distributed by field workers, resulting in high labor demands and spatial distribution variability [6]. In addition, reforestation operations frequently employ ground-based machinery such as tractor-mounted planting systems, which improve productivity but remain constrained by terrain accessibility and operational costs [7]. These limitations have motivated the exploration of aerial and autonomous approaches for scalable restoration. Unmanned aerial vehicles (UAVs), also known as drones, are increasingly used for vegetative restoration and aerial seeding due to their efficiency, cost-effectiveness, and ability to access challenging terrain [8]. Despite these advances, most UAV-based restoration approaches rely on predefined flight plans and fixed-rate seeding strategies, assuming homogeneous terrain conditions. Such open-loop approaches do not account for the fine-scale spatial variability in soil exposure, vegetation density, and obstacle presence, all of which strongly influence seed establishment success.
The integration of perception sensors, e.g., LiDAR and cameras, offers an opportunity to overcome these limitations by enabling UAVs to interpret their environment in real time [9]. To complement the use of sensors, recent advances in lightweight convolutional neural network (CNN) algorithms running on embedded devices and deployable on UAV platforms enable online perception-driven decision-making. However, most existing computer vision UAV applications in agriculture and environmental applications focus on post-flight analysis, vegetation mapping, and monitoring, rather than closing the control loop between perception and actuation during flight.
This manuscript presents a real-time vision-guided UAV system for precision vegetative restoration, capable of autonomously identifying suitable sowing areas and dispensing seeds based on a closed-loop approach. The proposed system integrates a red, green and blue (RGB) camera, a LiDAR sensor, an embedded computer, and a custom seeding device for real-time dispensing based on terrain conditions. Field experiments were conducted to validate the proposed real-time adaptive sowing approach by evaluating the system’s detection accuracy, seed density dispersion, and inference latency. The main contributions of this work can be summarized as:
  • The design and implementation of a closed-loop UAV-based system that tightly integrates real-time visual perception with autonomous, context-aware seed dispensing.
  • A modular and drone-independent system architecture that enables straightforward integration with a wide range of commercial UAV platforms.
  • A comparative evaluation of different computer vision algorithms, assessing their suitability for onboard deployment through systematic performance analysis.
  • The experimental validation under field conditions, quantifying terrain detection accuracy, onboard inference latency, and seed density precision.

3. Materials and Methods

This section details the system architecture, hardware components and functional modules, followed by the terrain classification methods, and seed deployment strategy used in the experimental evaluation. These elements are presented in sufficient detail to ensure reproducibility and to support a comprehensive evaluation under field conditions.

3.1. System Architecture

The proposed system was developed as a modular, UAV-independent restoration payload, designed to operate as a self-contained, closed-loop unit that can be integrated into multiple aerial platforms without modifying their native flight control architecture. Unlike conventional UAV-based restoration systems tightly coupled to specific drone firmware or proprietary interfaces, the presented architecture functions as an autonomous perception–decision–actuation module mounted onto a UAV carrier.
To validate platform independence, the system was implemented and experimentally evaluated on two distinct multirotor platforms: a commercial DJI Matrice 100 (DJI Technology Co., Ltd., Shenzhen, China) [39], and an open-source Holybro X650 (Holybro Ltd., Hong Kong, China) [40]. In both configurations, the restoration module operated independently of the flight stabilization system, demonstrating hardware and software portability. The system’s functional modules were added to the UAV carrier platform, as presented in Figure 1.
During normal flight operation, the UAV carrier maintains stable flight while the perception unit continuously acquires ground imagery. The embedded computing unit processes the imagery in real time and determines terrain suitability. When sowable terrain is detected, a command is transmitted to the seed-dispensing mechanism, which executes controlled releases. Simultaneously, telemetry data are transmitted to the ground computer for monitoring. The proposed system comprises four interconnected functional modules, along with the drone and its remote control, as depicted in Figure 2.
  • UAV carrier platform: The UAV serves exclusively as a mobility and stabilization carrier, providing lift, navigation, and hover control. The restoration module does not interface directly with the internal flight-control loops and does not require firmware modification.
  • UAV remote control: The remote-control system is used exclusively for navigation, positioning, and safety override.
  • Perception unit: The perception unit is responsible for acquiring environmental information required for terrain classification and altitude estimation. The RGB camera, model U20CAM-OV2719, (InnoMaker, Shenzhen, China) [41], continuously captures ground imagery during flight, while a LeddarOne single-element LiDAR sensor (LeddarTech Inc., Québec City, QC, Canada) [42] measures the current distance from the ground. The perception unit functions as the system’s environmental interface, supplying real-time data to the embedded computing module.
  • Embedded computer: The embedded computing unit serves as the core decision-making module of the system. The embedded system was implemented using a Jetson Nano developer kit (NVIDIA Corporation, Santa Clara, CA, USA) [43] for onboard neural network inference and high-level perception tasks, together with an ESP32 microcontroller (Espressif Systems, Shanghai, China) [44] responsible for low-level control of the seed dispensing mechanism and peripheral communication. The CNN model produces a binary classification output (sowable / non-sowable terrain). Based on this decision and the current altitude, a pulse width modulation (PWM) signal is sent to dispense the seeds.
  • Seed-dispensing mechanism: The seed-dispensing mechanism constitutes the actuation module of the system. PWM signals generated by the control unit drive a MG995 servomotor (TowerPro, Shenzhen, China) [45] that rotates an internal propeller within the dispenser, metering the seed flow and releasing seeds in discrete batches during each actuation cycle. The duration of the PWM command determines the number of propeller rotations and, consequently, the number of seed batches dispensed, enabling adaptive control of seed density during aerial deployment.
  • Ground computer: The ground computer functions as a supervisory and monitoring station. It does not participate in the closed-loop control process but provides real-time telemetry visualization and system status monitoring. Communication between the UAV and the ground computer is established via a LoRa wireless link using RYLR998 modules (REYAX Technology, Taipei, Taiwan) [46], enabling low-power, long-range transmission of telemetry data, such as terrain classification confidence, sensor measurements, and seed deployment events, during flight operations. LoRa communication was selected for its extended range (5–10 km) and low power consumption.

3.2. Autonomous Sowing Strategy

Seed deployment is driven by real-time terrain perception, where classification confidence and flight conditions determine when and how seeds are released. Unlike fixed-rate aerial seeding approaches, the present method integrates visual inference, temporal filtering, and altitude-aware actuation to achieve context-sensitive seed placement, following emerging closed-loop UAV paradigms that link perception, decision-making, and adaptive intervention in precision agriculture and environmental applications [17,47,48]. The strategy reduces sensitivity to noisy predictions while remaining compatible with UAV flight dynamics. The autonomous sowing logic comprises three principal components: confidence-based decision-making, temporal filtering, and synchronization with UAV kinematics and altitude.
Seed deployment is conditioned on the output probability of the convolutional neural network (CNN). For each processed frame, the CNN produces a confidence score $P_s \in [0, 1]$, representing the likelihood that the observed terrain is classified as sowable. A deployment event is triggered only when
$$P_s \geq \tau,$$
where τ is a predefined confidence threshold. For the performed experiments, the threshold was selected empirically to balance false positives (seed waste) and false negatives (missed viable terrain). By enforcing a minimum confidence level, the system avoids releasing seeds under uncertain classification conditions, thereby reducing unnecessary dispersion onto unsuitable surfaces such as dense vegetation, rocky areas, or obstacles.
To reduce short-term fluctuations in the terrain classification output, a first-order low-pass filter based on exponential averaging was applied. The filtered value is computed as
$$P_s^f(t) = \alpha P_s(t) + (1 - \alpha) P_s^f(t-1),$$
where $P_s$ is the current sowable-terrain confidence value, $P_s^f$ is the filtered value, and $\alpha$ is a smoothing coefficient controlling the filter response. This temporal filtering improves the stability of the deployment decision by preventing spurious seed release caused by noisy frame-by-frame predictions, without introducing significant latency. The trigger condition therefore becomes
$$P_s^f \geq \tau.$$
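A minimal Python sketch of this filtered trigger is shown below; the values of α and τ are illustrative placeholders, since the paper selects them empirically:

```python
class SowingTrigger:
    """Exponentially smoothed confidence gate for seed release."""

    def __init__(self, alpha=0.4, tau=0.8):
        self.alpha = alpha       # smoothing coefficient (illustrative value)
        self.tau = tau           # confidence threshold (illustrative value)
        self.p_filtered = 0.0    # P_s^f(t-1), initialized pessimistically

    def update(self, p_s):
        """Feed one CNN confidence P_s in [0, 1]; return True when sowing triggers."""
        self.p_filtered = self.alpha * p_s + (1 - self.alpha) * self.p_filtered
        return self.p_filtered >= self.tau
```

With these placeholder values, a single spurious high-confidence frame after low readings raises the filtered score only to about 0.38, below the threshold, so no seeds are released; sustained high confidence over several consecutive frames is required to trigger deployment.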
The sowing mechanism must operate consistently with UAV motion to maintain spatial deployment accuracy. The horizontal displacement d of the UAV during the perception–decision–actuation cycle is given by:
$$d = v \cdot T_s,$$
where $v$ is the UAV horizontal velocity and $T_s$ is the total perception–decision–actuation period. To limit spatial error, the system operates under a constrained flight velocity such that $d$ remains below the target deployment tolerance.
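Rearranged, this constraint gives an upper bound on forward speed for a given placement tolerance. A quick check with assumed numbers (a 0.3 m tolerance and a 0.2 s cycle, neither taken from the paper):

```python
def max_velocity(tolerance_m, cycle_period_s):
    """Largest horizontal speed v such that d = v * T_s stays within tolerance."""
    return tolerance_m / cycle_period_s

# With the assumed 0.3 m tolerance and a 0.2 s perception-decision-actuation
# cycle, forward speed is capped at 1.5 m/s, matching the flight speed used
# in the field trials.
```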
Additionally, seed dispersion characteristics vary with altitude. As altitude increases, the footprint of the seed spread expands due to gravitational drop time and wind effects. To compensate, the actuation signal is modulated based on altitude measurements from the LiDAR sensor. Specifically, the PWM command $D$ driving the servomotor is adjusted as
$$D = f(h),$$
where $h$ is the altitude above ground level. At higher altitudes, the PWM activation duration is increased, releasing a greater quantity of seeds per actuation cycle; at lower altitudes, shorter activation intervals maintain spatial precision and avoid excessive seed concentration. This altitude-adaptive mechanism ensures consistent ground coverage despite variations in flight height.
The proposed autonomous sowing strategy is summarized in Algorithm 1. Terrain classification confidence, altitude, and flight velocity were integrated into a perception-driven decision logic that modulates PWM actuation for context-aware seed deployment.
Algorithm 1 Autonomous Vision-Based Sowing Strategy
 1: Initialize confidence threshold τ
 2: Initialize low-pass filter coefficient α
 3: Initialize closed-loop sampling period T_s
 4: while UAV in flight, every T_s seconds do
 5:     Capture RGB frame
 6:     P_s ← CNN_Inference(frame)
 7:     P_s^f ← Low_Pass_Filter(P_s)
 8:     h ← Altitude_Measurement()
 9:     if P_s^f ≥ τ then
10:         ▷ Altitude-based PWM
11:         D ← f(h)
12:         Activate_Servo(D)
13:         Log deployment event
14:     end if
15:     Transmit telemetry to ground station
16: end while
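For reference, Algorithm 1 can be exercised offline by replacing the perception and actuation calls with stubs. The confidence sequence, filter constants, and the linear PWM map below are placeholders for illustration, not values from the field system:

```python
def sowing_loop(confidences, altitudes, alpha=0.4, tau=0.8, activate_servo=None):
    """Offline run of Algorithm 1 over pre-recorded CNN confidences."""
    events = []
    p_f = 0.0                                    # filtered confidence P_s^f
    for p_s, h in zip(confidences, altitudes):   # one iteration per period T_s
        p_f = alpha * p_s + (1 - alpha) * p_f    # low-pass filter
        if p_f >= tau:                           # confidence gate
            duty = min(1.0, 0.2 + 0.05 * h)      # placeholder altitude-based PWM map
            if activate_servo is not None:
                activate_servo(duty)             # drive the seed dispenser
            events.append((h, duty))             # log deployment event
    return events
```

Running it with eight consecutive 0.9-confidence frames at 5 m altitude, the filter crosses the 0.8 threshold on the fifth frame, producing four deployment events rather than eight, which illustrates how the filter trades a short warm-up delay for robustness to noise.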

3.3. Vision-Based Terrain Classification

This study evaluates four inference approaches for real-time terrain classification to support autonomous seed deployment:
  • Customized CNN model.
  • EfficientNet-B0 CNN model [49].
  • MobileNetV2 CNN model [50].
  • Color-based greenness ratio method.
All methods were implemented to output a binary decision (sowable vs. non-sowable) and were tested under the same experimental conditions to enable a fair comparison. The evaluation focuses on both classification performance (accuracy, precision, recall, F1-score) and embedded feasibility (latency, model size, and runtime memory), which are critical for closed-loop operation onboard UAV platforms.

3.3.1. Customized CNN

A compact CNN architecture was designed to maximize inference speed and robustness under embedded constraints. The model consists of stacked convolutional blocks followed by a final dense classifier producing the sowable probability. The architecture is optimized for onboard operation through reduced parameter count and efficient input resizing. This model serves as the primary onboard inference engine due to its favorable latency–accuracy trade-off. The customized CNN is a 4-block convolutional feature extractor followed by a large fully connected classifier trained from scratch using Adam with adaptive learning rate scheduling and model checkpointing, as observed in Figure 3.
A customized CNN model was used for evaluation, providing a lightweight architecture specifically adapted to the dataset’s characteristics and the classification task. Unlike large pretrained networks, a custom CNN can be designed with fewer layers and parameters, allowing faster training and lower computational requirements while still capturing the most relevant visual features of the images. Including a customized CNN in the evaluation also enables a baseline comparison with more complex architectures, such as EfficientNet-B0 and MobileNetV2, helping assess whether simpler models can achieve comparable performance in the specific application.

3.3.2. EfficientNet-B0

EfficientNet is a high-performance CNN architecture that achieves strong accuracy at a relatively lower computational cost, making it suitable for embedded applications. It uses compound scaling, which balances the network’s depth, width, and input resolution to achieve high performance with fewer parameters. In this system, EfficientNet-B0, the EfficientNet family’s baseline model, was implemented using transfer learning with ImageNet-pretrained weights to leverage previously learned visual features.
The model was adapted to the specific application by removing its original classification layer and adding a custom classification head composed of a GlobalAveragePooling layer, a Dropout layer to reduce overfitting, and a Dense softmax layer for two-class prediction. Input images were resized to 224×224 pixels and preprocessed using EfficientNet’s normalization function, while data augmentation techniques were applied to improve generalization. During training, the EfficientNet feature extractor was kept frozen, and only the new classification layers were trained, allowing the system to efficiently learn to classify images into the defined categories while reducing training time and computational cost. EfficientNet was used because it provides high classification accuracy while maintaining computational efficiency.

3.3.3. MobileNetV2

MobileNet is considered an embedded-friendly deep model based on depthwise separable convolutions. MobileNetV2 was implemented as a lightweight convolutional neural network to perform efficient image classification while maintaining low computational requirements.
In this work, transfer learning was applied by loading a MobileNetV2 model pretrained on ImageNet with an input size of 224 × 224, while removing the original classification layers. The base network was frozen to preserve its pretrained feature extraction capabilities, and a new classification head consisting of a GlobalAveragePooling2D layer and a Dense layer with a softmax activation was added for binary classification. The dataset images were loaded using ImageDataGenerator, which also applied data augmentation techniques such as rotation, zoom, horizontal flipping, shifts, brightness variation, and channel shifts to improve generalization. Evaluating MobileNetV2 provided insight into whether a lighter model could achieve comparable performance while offering advantages in speed and computational efficiency for practical field applications.

3.3.4. Color-Based Greenness Ratio Method

A non-learning baseline was implemented using hue, saturation and value (HSV) color-space thresholding to estimate vegetation presence. Each RGB frame is converted to HSV, and “green” pixels are detected using predefined hue/saturation/value ranges. The vegetation ratio is computed as:
$$R_g = \frac{N_{green}}{N_{total}},$$
where $N_{green}$ is the number of pixels classified as green and $N_{total}$ is the total pixel count.
First, the input RGB image is preprocessed by adjusting brightness, saturation, and gamma to reduce illumination variations and enhance color contrast. The image is then converted from the RGB color space to HSV, which separates color information from brightness and makes green tones easier to detect. A green mask is generated by selecting pixels whose HSV values fall within predefined ranges associated with vegetation colors, and morphological operations are applied to remove noise and improve the quality of the mask. The algorithm then counts the detected green pixels and computes the vegetation (greenness) ratio, defined as the number of green pixels divided by the total number of pixels in the image. This ratio indicates the vegetation level: values below 0.05 indicate bare ground (non-sowable), values between 0.05 and 0.5 indicate moderate vegetation (sowable), and values above 0.5 indicate dense vegetation (non-sowable).
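The core of this pipeline (excluding the morphological cleanup, which typically relies on OpenCV) can be sketched with Python's standard library alone; the HSV thresholds below are illustrative stand-ins for the calibrated ranges used onboard:

```python
import colorsys

def greenness_ratio(pixels):
    """Fraction of 'green' pixels; pixels is an iterable of (r, g, b) in [0, 1]."""
    pixels = list(pixels)
    green = 0
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        # illustrative vegetation range: hue near green, minimum saturation/value
        if 0.20 <= h <= 0.45 and s >= 0.25 and v >= 0.20:
            green += 1
    return green / len(pixels)

def classify(ratio):
    """Decision thresholds from the text: <0.05 bare, 0.05-0.5 moderate, >0.5 dense."""
    return "sowable" if 0.05 <= ratio <= 0.5 else "non-sowable"
```

For example, a patch that is 30% grass-green pixels is classified as sowable, while fully bare or fully vegetated patches are rejected.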
This method was included as a computationally lightweight and deterministic baseline suitable for deployment on the Jetson Nano. Its negligible memory footprint, low power consumption, and minimal inference latency make it attractive for strict real-time UAV applications. Although less robust than CNN-based approaches, it provides a lower bound for embedded performance and enables systematic evaluation of the accuracy–efficiency trade-off.

4. Results

This section presents the results obtained from field experiments conducted to evaluate the proposed vision-guided UAV system. The analysis includes terrain classification performance, real-time inference behavior on the embedded platform, and seed dispensing characteristics under different operating conditions. A comparative evaluation of the implemented methods is provided, along with an analysis of seed dispersion as a function of flight altitude to validate the proposed density-control approach.

4.1. Training CNN Models

All three CNN models were trained under identical preprocessing, dataset partitioning, and augmentation protocols to ensure a fair architectural comparison. The dataset consisted of a total of 500 RGB images (250 sowable and 250 non-sowable), split into 350 training and 150 validation samples. They were acquired from the onboard UAV camera across multiple semi-arid field locations. Terrain classes were defined according to operational seeding criteria, distinguishing sowable surfaces (loam soil and light vegetation) from non-sowable areas (rocky terrain, dense vegetation, and obstacles). A consistent preprocessing pipeline and identical augmentation policies were applied to mitigate overfitting and environmental variability. Although the dataset is moderate in size, its balanced composition, domain consistency, and controlled training framework provide a reliable basis for evaluating the performance of embedded real-time terrain classification. Given that the primary objective of this work is to evaluate real-time onboard feasibility and comparative architectural performance rather than large-scale generalization across geographic regions, the dataset size is sufficient to assess relative model behavior under controlled, representative conditions.
Figure 4, Figure 5 and Figure 6 show the training/validation loss and accuracy results for the Customized CNN, EfficientNet-B0, and MobileNetV2 models, respectively. All three architectures converged stably within approximately 15 epochs. EfficientNet-B0 achieved the highest validation accuracy (93%), followed by the customized CNN (91%) and MobileNetV2 (85%). Training and validation curves remained closely aligned across models, indicating limited overfitting despite the moderate dataset size.
Although EfficientNet-B0 achieved the highest validation accuracy, its increased computational complexity may impose higher latency and power consumption on the Jetson Nano. MobileNetV2 offered improved efficiency but reduced classification performance. The customized CNN provided the best overall balance between accuracy, computational cost, and real-time compatibility, supporting its suitability for real-time embedded deployment.

4.2. Experimental Setup

The proposed UAV-based vegetative restoration system was evaluated in an experimental field located at 28.675412° N, 106.080464° W, a site representative of semi-arid conditions. The terrain includes areas representing
  • sowable surfaces: exposed loam soil and sparse grass, and
  • non-sowable surfaces: compacted ground, denser vegetation, and artificially introduced obstacles.
This controlled variability allowed repeatable evaluation of classification and deployment behavior. The experimental field and the route followed by the drone are illustrated in Figure 7. Experiments were conducted under low wind conditions (below 1 m/s), and the drone traveled approximately 400 meters.
Visual detection performance was quantified using four primary metrics: accuracy, precision, recall, and F1-score. Accuracy evaluates the ability of the onboard classifier to provide correct predictions among all predictions,
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN},$$
where $TP$ is the number of true positives, $TN$ the number of true negatives, $FP$ the number of false positives, and $FN$ the number of false negatives. Precision measures how many of the positive predictions made by the model are actually correct, thereby reducing unnecessary seed deployment,
$$\text{Precision} = \frac{TP}{TP + FP}.$$
Recall measures the proportion of truly sowable areas that are correctly detected; a false negative corresponds to a sowable area that does not receive seeds,
$$\text{Recall} = \frac{TP}{TP + FN}.$$
The F1-score, which balances precision and recall and is informative when classes are imbalanced, is defined as
$$F_1\text{-score} = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}.$$
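These four metrics reduce to a few lines of Python given the confusion-matrix counts; the counts in the comment are made-up numbers for illustration, not field results:

```python
def detection_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1-score from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# Example with made-up counts tp=80, tn=90, fp=10, fn=20:
# accuracy = 0.85, precision ~ 0.889, recall = 0.80, F1 ~ 0.842
```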
A total of 4 flight trials were conducted for each visual detection strategy. Each flight followed a predefined linear trajectory at a constant altitude of 5 meters and controlled velocity of 1.5 m/s to ensure consistent data acquisition. Flights were performed using the DJI Matrice 100 and Holybro X650 to validate platform independence.
Figure 7. The left side shows vegetation characteristics at the time of the experiments; the right side illustrates the route followed by the UAV during the system evaluation. Red zones represent non-sowable areas.
Figure 8. System field evaluation mounted on a DJI Matrice 100 drone (left) and a Holybro X650 drone (right), with four images analyzed by the algorithms and classified as sowable or non-sowable.

4.3. Image Classification Field Evaluation

The performance of the proposed UAV-based vegetative restoration system was evaluated in terms of terrain classification accuracy, real-time embedded feasibility, and spatial sowing precision. Results are presented comparatively across the four evaluated inference methods: Customized CNN, EfficientNet-B0, MobileNetV2, and the color-based greenness ratio (HSV-GR) method.
Table 1 summarizes the classification performance of the four methods on the validation dataset. EfficientNet-B0 achieved the highest validation accuracy, followed closely by the customized CNN. MobileNetV2 showed slightly lower performance, while the HSV-GR method demonstrated the lowest classification robustness, particularly under heterogeneous illumination conditions.
EfficientNet-B0 demonstrated the lowest false-negative rate, improving the detection of sowable terrain. The customized CNN maintained balanced true-positive and true-negative rates, indicating stable discrimination between soil and vegetation patches. MobileNetV2 exhibited a moderate increase in false negatives, while the HSV-GR method showed higher false positives under variable lighting and mixed soil–vegetation textures.
Real-time performance was evaluated by measuring inference latency directly on the Jetson Nano to reflect realistic embedded deployment conditions. Inference latency represents the processing time required for terrain classification per frame; the results are shown in Table 2.
HSV-GR exhibits negligible latency. Customized CNN provided the best deep-learning real-time performance, while MobileNetV2 was a close second. EfficientNet-B0 operates at approximately 4 FPS, which limits its suitability for closed-loop real-time deployment under typical UAV flight conditions. Performance variability across flights remained within ±20%, indicating stable behavior under repeated trials.
EfficientNet-B0 exhibited the highest memory consumption, while MobileNetV2 and the customized CNN remained within stable operating margins. The HSV-GR method required minimal computational resources, as observed in Table 3.
Results show that while EfficientNet-B0 achieved the highest classification accuracy, the customized CNN provided the best balance between accuracy, latency, and embedded feasibility. MobileNetV2 offered improved computational efficiency but reduced robustness under heterogeneous terrain conditions. The HSV-GR method exhibited minimal computational cost but significantly lower classification reliability.

4.4. Seed Dispensing Analysis

The seeds used in this study are from the Mexican pinyon pine (Pinus cembroides), a small to medium-sized pine tree native to dry and semi-arid regions of North America. The seeds received a pelletized treatment, in which they were coated with protective materials to form pellets that improve handling, protection, and planting efficiency. The system seed container has a capacity of 500 grams of pelletized pinyon pine seeds.
Experimental measurements showed an inverse relationship between release height and ground seed density: as release height increases, the dispersion footprint widens and fewer seeds land per unit area. Figure 9 shows the observed expansion of the dispersion footprint at higher altitudes and provides a practical calibration model for the seed density controller.
Considering that each batch releases approximately 18 seeds, the controller provides a practical mechanism to compensate for the wider dispersion footprint observed at higher altitudes by using the following control rule:
$$B(k) = \frac{\rho_{sp}}{\rho(h)},$$
where $B(k)$ is the number of batches released at instant $k$, $\rho_{sp}$ is the target density (seeds/m²), and $\rho(h)$ is the density produced by one batch at height $h$.
Using the calibrated density model, the required number of dispensing batches was computed for different target seeding densities. Results show that the batch count increases with release altitude to compensate for the larger dispersion footprint. This behavior is reflected in the controller’s structure, where the batch number remains constant over a height interval and increases when the expected density falls below the target threshold. For example, to achieve a target density of 10 seeds/m², the controller requires approximately 1, 2, 2, 3, 3, 4, and 4 batches at release heights of 2 m, 4 m, 6 m, 8 m, 10 m, 12 m, and 14 m, respectively.
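This stepwise behavior can be reproduced with a ceiling-rounded version of the control rule. The per-batch density table below is hypothetical, chosen only so the resulting batch counts match the sequence quoted above; it is not the paper's measured calibration:

```python
import math

# Hypothetical calibration: seeds/m^2 deposited by ONE 18-seed batch at each
# release height (illustrative values, not the paper's measured model).
DENSITY_PER_BATCH = {2: 12.0, 4: 7.0, 6: 5.5, 8: 4.0, 10: 3.6, 12: 3.0, 14: 2.6}

def batches_required(target_density, height_m):
    """Whole batches needed to reach the target density at a given height."""
    return math.ceil(target_density / DENSITY_PER_BATCH[height_m])
```

For a 10 seeds/m² target this yields 1, 2, 2, 3, 3, 4, and 4 batches at 2 m through 14 m; the count stays constant over a height interval and only increases when the per-batch density falls below the next integer boundary.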
Figure 10. Altitude-adaptive batch controller derived from the calibrated seed density model. The figure illustrates the number of required seed batches as a function of UAV release height for target densities of 5, 10, 15, and 20 seeds/m². Higher release heights produce wider dispersion footprints, requiring additional seed batches to maintain the desired ground seed density.

5. Discussion

Achieving low inference latencies, i.e., less than 100 ms, is critical for maintaining spatial consistency in closed-loop UAV seed deployment. During forward flight, any delay between terrain perception and actuation results in larger horizontal displacements, which affect seeding performance. At typical operational speeds, even small increases in latency can significantly shift the seed release location, thereby reducing placement accuracy. Maintaining low and stable latency, therefore, ensures that terrain classification results are translated into precise physical deployment actions. In parallel, achieving field classification accuracy above 85% indicates that the perception system remains robust under realistic outdoor variability, including heterogeneous soil textures, sparse vegetation patterns, and natural surface irregularities. In restoration contexts, classification errors have both ecological and economic implications: false positives lead to unnecessary seed expenditure, whereas false negatives reduce restoration coverage in suitable areas. In practice, both low latency and sufficiently high classification accuracy are required to achieve reliable deployment.
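The latency-to-displacement relationship is a simple product of forward speed and processing delay. A minimal sketch, using the mean latencies reported in Table 2 and an assumed forward speed of 5 m/s (the speed is an illustrative value, not one reported in the paper):

```python
# Horizontal displacement accumulated between terrain perception and
# seed release. The 5 m/s forward speed is an assumed operational value;
# the latencies are the mean values reported in Table 2.
def release_offset_m(speed_mps: float, latency_ms: float) -> float:
    """Distance travelled by the UAV while one frame is being processed."""
    return speed_mps * latency_ms / 1000.0

for name, latency in [("Customized CNN", 63.29),
                      ("MobileNetV2", 78.80),
                      ("EfficientNet-B0", 241.36)]:
    print(f"{name:16s} {latency:7.2f} ms -> "
          f"{release_offset_m(5.0, latency):.2f} m offset")
```

At this assumed speed, staying under the 100 ms budget keeps the release offset below half a meter, whereas the slowest model drifts past a full meter per frame.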
Beyond perception performance, the system also enables altitude-adaptive control of seed dispensing density. Experimental measurements demonstrated that seed density decreases with increasing release height due to the expansion of the dispersion footprint. By estimating the UAV altitude during flight and adjusting the number of dispensing batches accordingly, the system can compensate for this effect and maintain a desired seed density on the ground. This capability transforms the dispenser from a fixed-rate mechanism into an adaptive actuator, allowing consistent seeding performance across different flight altitudes and operational conditions.
Despite these promising results, several limitations should be acknowledged. RGB-based terrain classification remains sensitive to lighting variability, including shadows, specular reflections, and diurnal illumination changes. Furthermore, vegetation appearance in semi-arid ecosystems varies seasonally, with color and texture changes occurring between wet and dry periods that may affect model generalization. Because the dataset used in this study was collected within a limited temporal window, system performance under extreme seasonal transitions remains uncertain. In addition, although the experimental site provided representative semi-arid surface variability, it does not fully capture the complexity of natural degraded landscapes characterized by uneven terrain, variable slopes, and heterogeneous vegetation patches. Future experiments conducted across multiple seasons and restoration sites would further strengthen the ecological robustness of the proposed approach.
From an ecological perspective, by enabling context-aware seed placement, the system reduces indiscriminate broadcasting and promotes targeted restoration, potentially improving seed–soil contact and germination success in water-limited ecosystems such as the Chihuahuan Desert. The selection of RGB sensing rather than multispectral imaging reflects a deliberate trade-off between ecological information richness and embedded feasibility. While multispectral cameras provide valuable indicators of vegetation health (e.g., near-infrared–based vegetation indices), they typically add significant payload weight, power consumption, and computational demands, limiting real-time onboard processing on small UAV platforms. The RGB + CNN approach, therefore, offers a scalable, cost-effective, and real-time compatible solution that balances ecological relevance with operational practicality, while leaving open the possibility of integrating lightweight spectral sensing technologies in future system iterations.
Overall, the results demonstrate that real-time vision-guided sowing is technically feasible and ecologically meaningful under semi-arid field conditions. The achieved balance between classification accuracy, low inference latency, spatial deployment precision, and altitude-adaptive seed density control supports the viability of closed-loop UAV-based vegetative restoration systems. Although environmental variability and seasonal changes remain challenges, the proposed framework establishes a foundation for scalable, context-aware ecological restoration using autonomous aerial platforms.

6. Conclusions

A modular, vision-guided UAV system for adaptive vegetative restoration was developed and tested under field conditions. The proposed system enables context-aware seed deployment by combining RGB-based terrain classification with an altitude-adaptive seed dispensing mechanism. Unlike conventional aerial seeding approaches that operate in open-loop configurations, the developed system uses real-time image analysis to identify suitable sowing areas and trigger seed release accordingly. The architecture is platform-independent and was successfully implemented on two UAV platforms, demonstrating the flexibility and modularity of the proposed design.
Experimental validation under semi-arid field conditions demonstrated that the system can operate reliably in realistic outdoor environments. The terrain classification module achieved field accuracies above 85%, while maintaining inference latency below 100 ms per frame on an embedded Jetson Nano platform. This latency enables real-time operation and ensures spatial consistency between perception and seed deployment during forward flight. In addition, experimental measurements of seed dispersion revealed a clear relationship between release height and ground seed density. A power-law model was derived to describe this relationship, allowing the system to implement altitude-adaptive batch control of the dispensing mechanism. By adjusting the number of seed batches based on UAV altitude, the system can maintain a target ground seed density despite variations in release height.
From an ecological perspective, the proposed framework enables a transition from uniform aerial broadcasting to precision vegetative restoration, in which seeds are deployed selectively in areas of the terrain with favorable conditions for plant establishment. This capability has the potential to reduce seed waste, improve restoration efficiency, and enhance seed–soil contact in water-limited ecosystems such as the Chihuahuan Desert. Furthermore, the use of lightweight RGB sensing and embedded inference provides a cost-effective and scalable solution compatible with small UAV platforms, making the approach suitable for large-scale restoration operations in remote or difficult-to-access landscapes.
Future work will focus on expanding the system’s ecological and operational capabilities. In particular, integrating multispectral sensing could improve terrain characterization and vegetation health assessment. Reinforcement learning strategies may further optimize seed deployment decisions by incorporating environmental feedback and dispersion models. Finally, long-term ecological monitoring will be necessary to evaluate germination success, plant establishment rates, and overall ecosystem recovery, thereby linking robotic deployment performance with measurable restoration outcomes.

Author Contributions

Conceptualization, C.L. and L.O.; methodology, C.L., L.O. and L.C.F.-H.; software, A.L.-M. and C.L.; validation, A.L.-M. and C.L.; formal analysis, C.L., L.O. and L.C.F.-H.; investigation, A.L.-M. and C.L.; resources, C.L.; data curation, A.L.-M. and C.L.; writing—original draft preparation, A.L.-M. and C.L.; writing—review and editing, C.L., L.O. and L.C.F.-H.; visualization, A.L.-M. and C.L.; supervision, C.L., L.O. and L.C.F.-H.; project administration, C.L.; funding acquisition, C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Chihuahua State Government (Mexico) through the "Fondo Estatal de Ciencia, Innovación y Tecnología", grant number FECTI/2024/CV-CDF/020.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to privacy agreements with reforestation partners that restrict public sharing of field-level sensor data.

Acknowledgments

During the preparation of this manuscript, the authors used ChatGPT (OpenAI, GPT-5.3) for language refinement and editing support. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
CNN Convolutional Neural Network
HSV Hue, Saturation and Value
LiDAR Light Detection and Ranging
NDVI Normalized Difference Vegetation Index
NIR Near Infrared
PWM Pulse Width Modulation
RGB Red, Green and Blue
UAV Unmanned Aerial Vehicle

Figure 1. Functional modules integrated into the drone platform: (a) perception unit, (b) embedded computing unit, (c) autonomous seed-dispensing mechanism.
Figure 2. Functional block diagram of the proposed modular UAV-independent vegetative restoration system. The architecture separates the UAV carrier platform from the closed-loop restoration payload, which integrates perception, embedded decision-making, and autonomous seed deployment. Telemetry is transmitted to a ground monitoring station, while manual flight control remains independent of the restoration subsystem.
Figure 3. Workflow of the customized Convolutional Neural Network (CNN) training pipeline. The diagram illustrates data augmentation, followed by four Conv2D–MaxPooling blocks for hierarchical feature extraction. The extracted features are flattened and passed through a 128-unit fully connected layer and a Softmax output layer for binary classification.
Figure 4. Customized CNN training and validation performance. Left panel shows the evolution of training and validation loss, while the right panel shows the corresponding accuracy curves.
Figure 5. EfficientNet-B0 training and validation performance. Left panel shows the evolution of training and validation loss, while the right panel shows the corresponding accuracy curves.
Figure 6. MobileNetV2 training and validation performance. Left panel shows the evolution of training and validation loss, while the right panel shows the corresponding accuracy curves.
Figure 9. Seed density ρ as a function of UAV release height h. Experimental measurements are shown as individual points. A power-law regression model was fitted to the experimental data, with a high coefficient of determination R², to describe the relationship between release height and seed density.
Table 1. Terrain classification performance comparison.
Method Accuracy Precision Recall F1-score
Customized CNN 0.8681 0.8689 0.8543 0.8616
EfficientNet-B0 0.8816 0.8747 0.8796 0.8771
MobileNetV2 0.7739 0.7255 0.8515 0.7835
HSV-GR Method 0.7402 0.8015 0.6106 0.6932
Table 2. Embedded real-time performance comparison on Jetson Nano.
Method Mean Latency (ms) Std. Dev. (ms) FPS
Customized CNN 63.29 12.46 15
EfficientNet-B0 241.36 19.98 4
MobileNetV2 78.80 10.84 12
HSV-GR Method 11.81 3.63 84
Table 3. Memory footprint comparison during inference.
Method Model Size (MB) Runtime Memory (MB)
Customized CNN 115.59 1,828.63
EfficientNet-B0 16.49 1,708.25
MobileNetV2 9.32 1,667.56
HSV-GR Method 0 Negligible
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.