Preprint
Review

This version is not peer-reviewed.

Deep Learning and Machine Learning Techniques for Apple Leaf Disease Recognition: A Systematic Review and Future Directions

Submitted: 13 May 2025
Posted: 14 May 2025


Abstract
This review explores the new generation of automated quality assessment of fruits and vegetables in post-harvest processing, based on machine vision, hyperspectral imaging, and deep learning applications. We examine the technical issues and challenges associated with implementing such technologies in quality control systems and their role in achieving efficiency and sustainability. We also point out the enabling role of AI, IoT, and big data in scalable, low-cost robotic solutions. By analysing the current state of research, this review highlights research gaps that need to be addressed and presents future directions for optimizing automated systems for post-harvest food quality assessment.

1. Introduction

1.1. Background and Importance of Disease Detection in Horticulture

Horticulture plays a vital role in global agriculture, with crops like apples contributing significantly to both the economy and food supply. However, apple crops face numerous diseases that can reduce yield and quality, leading to economic losses. Apple diseases such as apple scab, powdery mildew, and rust have historically been managed through chemical interventions, which, while effective, have substantial environmental and health costs. Timely detection of these diseases is crucial for minimizing the use of pesticides, improving crop yield, and enhancing food safety. As such, there is a growing need for efficient, reliable, and automated systems for disease identification. Traditional methods of disease detection, primarily based on visual inspections by skilled experts, are time-consuming, labour-intensive, and prone to human error. The adoption of modern technologies such as machine learning (ML) and deep learning (DL) offers a promising solution to address these challenges.

1.2. Overview of Machine Learning (ML) and Deep Learning (DL) Applications

Machine learning and deep learning, both forms of artificial intelligence, have become compelling tools for agricultural disease detection, and apple leaf disease recognition is one of their applications. Dharm et al. (2019) [1] examined several apple diseases and their detection methods and suggested that ML algorithms can effectively automate their identification. They pointed out that, combined with ML algorithms, computer vision techniques could enable quicker and more precise disease identification, reducing dependence on traditional techniques that require manual inspection. Convolutional neural networks (CNNs), a class of deep learning algorithms, have recently achieved remarkable progress in classification performance. Zhang et al. (2021) [2] developed a deep learning model for apple leaf diseases that outperformed traditional machine learning approaches, offering higher accuracy with lower computational complexity. These findings indicate the potential of deep learning to become a key technology in automated disease detection systems for horticultural crops.
AI methodologies, and machine learning in particular, have proved effective in identifying and classifying diseases in real time, as shown, for example, by Al-Wesabi et al. (2022) [3]. Their study used AI-based systems for the classification of apple leaf diseases and reported remarkable improvements in diagnostic accuracy. By employing sophisticated techniques such as CNNs and other deep learning models, their system enabled earlier and more accurate detection of diseases affecting apple crops (Al-Wesabi et al., 2022). Traditional plant scanning is very time-consuming, as it is practically impossible to inspect every plant in a large field, so intelligent AI methods that deliver the required accuracy at speed are key.
As increasingly reported in more recent literature, such as Yan et al. (2022) [4], many machine learning methods have been studied, again confirming the dominance of ML and DL models in the classification of diseases in apple orchards. The review by Kumar et al. (2022) emphasized how these approaches can be effectively employed to overcome challenges posed by large datasets and complex disease patterns. It focused on ML algorithms widely used in plant disease classification tasks, for example support vector machines (SVM) and decision trees. This body of research emphasizes the growing relevance and practical use of ML and DL in contemporary agriculture.

1.3. Challenges and Research Gaps

Despite these promising results, there are various challenges in applying machine learning and deep learning methods to apple leaf disease detection. Disease identification is difficult because several factors can affect the appearance of symptoms, chiefly environmental conditions, plant age, and disease progression. These variables hinder the creation of generalized models that perform well across varying orchards and geographical locations. Meanwhile, limited access to large, high-quality datasets containing well-labelled diseased and healthy samples of apple leaves remains a substantial hindrance to model development and evaluation.
CNNs have demonstrated phenomenal success in classifying diseases with outstanding accuracy; however, this success is intimately connected to access to large annotated datasets. This is frequently a constraint in agricultural setups, as data acquisition can be time-consuming and costly. A second issue is that training deep learning models is computationally intensive and time-consuming. Furthermore, real-time deployment of these models in the field is usually limited by the requirement for suitable hardware and infrastructure.
Additionally, although AI methods have achieved high disease detection accuracy, methods that can explain or interpret the behaviour of these models are not yet well established. Making these AI systems explainable and intelligible to farmers is a widely accepted requirement before their practical implementation in precision agriculture. Interpretability is important for the acceptance of these technologies: only if farmers trust the system by understanding its suggestions can they follow it and genuinely improve their disease management.
Finally, future work should consider the challenges of designing models that solve the multi-class disease classification problem, where several diseases appear on the same plant and a model must distinguish between them accurately. Kumar et al. (2022) [4] noted this as an area for significant improvement and proposed that future research include multi-class classifiers to better handle this complexity.
Figure 1. Distribution across each section.

2. Machine Learning and Deep Learning Techniques for Disease Detection

2.1. Traditional Machine Learning Techniques:

Conventional machine learning (ML) methods have been critical to the early detection of apple leaf diseases based on hand-crafted features (e.g., texture, shape, and color). Using ML models such as support vector machines (SVM) and k-nearest neighbours (k-NN), Singh and Gupta (2018) [5] achieved good classification accuracy in distinguishing apple scab from Marssonina coronaria. Sarker (2021) [6] provided a broader outline of ML algorithms and their real-world applications, noting their interpretability, efficiency, and relevance in agriculture-related domains. Meshram et al. (2021) [7] highlighted that traditional models are suitable for crop diseases with few annotated samples, reaffirming that ML is a key component of agricultural innovation, specifically in crop disease prediction and precision agriculture. Sandhu (2021) [8] used traditional ML methods such as logistic regression and random forest to detect diseases on apple leaves, further supporting the utility of classical machine learning in plant disease diagnostics. Though effective and resource-efficient, these methods depend on manual feature extraction and adapt poorly to more complex cases, which has driven the transition towards deep learning solutions.

2.1.1. Naïve Bayes:

Naïve Bayes is a simple yet powerful probabilistic machine-learning algorithm based on Bayes' theorem. It is efficient to implement and works well on both large and small datasets; although its assumption that features are conditionally independent given the class label rarely holds in the real world, it nevertheless produces good results. Naïve Bayes remains a popular classification method because it is fast, efficient, and performs very well on small datasets.
One study addressed automated detection and classification of diseases of apple, one of the crops most susceptible to diseases such as apple scab, apple rot, and apple blotch. These diseases often go undetected until post-processing, resulting in major economic losses. The proposed methodology extracts texture features using the Gray Level Co-occurrence Matrix (GLCM) and classifies them with a Naïve Bayes classifier. The results show that the classification model achieved an accuracy of 96.43%, demonstrating the potential of GLCM feature extraction and Naïve Bayes for automated apple disease identification and overall disease management [9]. Another work applied the Naïve Bayes algorithm to classify different apple fruit varieties in a non-destructive manner, addressing challenges of manual sorting such as high cost, subjectivity, and inconsistency. The research encompassed image acquisition, preprocessing, segmentation, and classification of apple varieties using Naïve Bayes. The prototype system developed in MATLAB achieved an accuracy of 91%, a sensitivity of 77%, a precision of 100%, and a specificity of 80%. Compared with other classification methods, Naïve Bayes outperformed MLP neural networks, fuzzy logic, and principal component analysis in accuracy, leading the researchers to conclude that it is an efficient, non-destructive way of identifying apple varieties [10]. A further study addressed the automatic detection and classification of apple diseases such as apple scab, apple rot, and apple blotch, which cause huge economic losses, emphasizing the need for accurate diagnosis prior to applying management practices. A classification accuracy of 96.43% was achieved using texture-based GLCM feature extraction with Naïve Bayes classification, showing that this method can robustly distinguish between healthy and diseased apples and providing an effective means for early disease detection in apple production [11].
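As an illustration of the GLCM-plus-Naïve-Bayes pipeline summarized above, the following Python sketch extracts co-occurrence texture features and fits a Gaussian Naïve Bayes classifier. It is not the cited authors' code; the folder layout, the 8-level quantization, and the chosen GLCM properties are assumptions made for illustration.

```python
# Minimal sketch: GLCM texture features + Gaussian Naive Bayes, assuming
# leaf images sorted into per-class folders (dataset/<class_name>/*.jpg).
import numpy as np
from pathlib import Path
from skimage.io import imread
from skimage.color import rgb2gray
from skimage.feature import graycomatrix, graycoprops
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

PROPS = ("contrast", "dissimilarity", "homogeneity", "energy", "correlation")

def glcm_features(img_path):
    """Quantize a leaf image to 8 gray levels and summarize its GLCM."""
    gray = rgb2gray(imread(img_path))
    quantized = (gray * 7).astype(np.uint8)            # 8 gray levels: 0..7
    glcm = graycomatrix(quantized, distances=[1], angles=[0, np.pi / 2],
                        levels=8, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel() for p in PROPS])

X, y = [], []
for class_dir in Path("dataset").iterdir():            # hypothetical layout
    for img in class_dir.glob("*.jpg"):
        X.append(glcm_features(img))
        y.append(class_dir.name)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

In practice, the number of gray levels, pixel distances, and angles would be tuned to the imaging conditions of the orchard data.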
Yet another study applied Naïve Bayes to apple fruit variety classification as a solution to the high cost, subjectivity, and inconsistency of manual sorting. The system, developed in MATLAB 2015a, performed image acquisition, pre-processing, segmentation, and classification of apple varieties. The results showed an accuracy of 91%, a sensitivity of 77%, and a precision and specificity of 100% and 80%, respectively. In comparison, Naïve Bayes outperformed methods such as MLP neural networks, fuzzy logic, and principal component analysis. The study concluded that Naïve Bayes is a simple, non-destructive, and effective algorithm for apple variety classification [12].

2.1.2. Support Vector Machine (SVM):

The Support Vector Machine (SVM) is a widely used supervised machine learning model for classification tasks. It works by identifying the hyperplane that separates different classes of data points with the greatest margin. SVM performs particularly well in high-dimensional spaces and remains effective when the number of dimensions exceeds the number of samples.
One task where SVM has shown good performance is discriminating diseased from healthy plant leaves based on visual features. It can be trained on texture, color, and shape features to classify apple leaf diseases, grapevine diseases, and citrus plant infections, for example. Its generalization ability and robustness against overfitting make it well suited to small and medium-sized agricultural datasets.
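The following sketch illustrates, under stated assumptions, how an RBF-kernel SVM of the kind used in these studies can be trained on handcrafted leaf descriptors; synthetic features generated by make_classification stand in for real color, texture, and shape measurements extracted upstream.

```python
# Minimal sketch, not tied to any one cited study: an RBF-kernel SVM over
# handcrafted leaf features, with standard scaling since SVMs are scale-sensitive.
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder features: 500 "leaves", 20 descriptors, 3 disease classes.
X, y = make_classification(n_samples=500, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

# The pipeline applies scaling inside each cross-validation fold.
svm_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
print("mean CV accuracy:", cross_val_score(svm_clf, X, y, cv=5).mean())
```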
One study presented a technique to classify apple leaves (healthy versus rot-diseased) based mainly on shape features, using two classification techniques: Gradient Boosting and the Support Vector Classifier (SVC). A total of 1,813 apple leaf images were collected and divided into a 70% training set and a 30% test set. The findings show that the approach predicts better than traditional methods, with Gradient Boosting reaching 87% accuracy and SVC 91%. The two techniques are compared using a confusion matrix, which clearly illustrates the superiority of SVC. Such classification helps to quickly identify diseased leaves and minimize the spread of infection, thereby increasing crop yield. The model is evaluated primarily on key performance metrics such as accuracy, precision, and recall [13].
Black rot and cedar apple rust reduce apple yield every year, with a tremendous impact on both the apple industry and the wider economy. One paper proposed a system to identify diseases in infected apple leaves by integrating machine learning and image processing, efficiently classifying infected and uninfected leaves. Preprocessing is the first step, involving several image processing techniques such as the Otsu thresholding algorithm and histogram equalization. Image segmentation isolates the infected regions, and a multiclass SVM identifies the disease type from the leaf image with 96% accuracy over 500 images; the system also reports what portion of the total area of the diseased leaf is infected [14]. Another study focuses on the automated detection of three apple disorders, Alternaria, apple black spot, and the apple leaf miner pest, using image processing techniques. The research employs Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs) for disease classification. After collecting leaf samples and capturing images under controlled lighting, k-means clustering was used to identify infected regions in the images. Feature extraction followed, and the results showed that SVMs outperformed ANNs in disease classification accuracy. This approach offers a more rapid, cost-effective alternative to traditional manual disease detection, especially on large farms [15]. Leaf diseases are one of the biggest causes of crop loss in agricultural countries such as India, where almost 70% of the population relies on farming. Historically, disease detection has relied on the visual observations of agricultural professionals and traditional methods, which are expensive and labour-intensive. An automated system is proposed that uses image processing for the early detection of plant diseases to increase total crop yield. The system converts images from RGB to HSV color space and extracts the mean and standard deviation of color features, along with a bitmap for local color features. Healthy areas (green pixels) are masked before texture features are extracted, K-means clustering is used for segmentation, and texture features are computed with the Gray Level Co-occurrence Matrix (GLCM). Leaves are then classified as diseased or healthy using a Support Vector Machine (SVM), with precision and recall used to evaluate performance. This provides a quicker, more cost-efficient inspection method [16].
Identifying diseases in fruit takes expertise and experience. India is the second-largest producer of fruits, but manual inspection for disease detection leads to inconsistency and a lack of uniformity. One study introduced an image classification-based approach for accurate disease identification, applied specifically to apple fruit disease detection. Infected portions of fruit are detected using K-means clustering segmentation, followed by extraction of color, texture, and shape-based features from the segmented images. Healthy and infected fruits are classified using a Support Vector Machine (SVM), with a multiclass SVM classifier identifying the type of disease. This automated process enhances efficiency, accuracy, and consistency, replacing manual inspection, and the experimental results show a 99% classification accuracy [17]. Another work leverages computer vision techniques to automate plant disease identification in agriculture and horticulture, demonstrating the necessity of early identification of plant diseases for proper crop protection. Color and texture features of plant disease images are used to train Support Vector Machine (SVM) and Artificial Neural Network (ANN) classifiers, and a method for identification and classification of plant diseases with a reduced feature set is proposed. The results demonstrate that the SVM classifier is superior to the ANN, making it more appropriate for detecting and classifying plant diseases in agricultural and horticultural crops [18].

2.1.3. K-Means Clustering:

K-means clustering is an unsupervised machine learning algorithm used to group unlabelled data into K clusters such that data points within a cluster are similar to one another. K centroids are initialized and each data point is assigned to the closest centroid, yielding K clusters. In agricultural disease detection, K-means clustering is widely used for image segmentation, particularly to extract the affected parts of plant leaves. As a preprocessing step before classification, it helps distinguish healthy leaf regions from diseased ones based on differences in color or texture. It is simple and efficient even for large-scale image datasets in precision agriculture.
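A minimal sketch of this use of K-means for leaf segmentation is given below, assuming a single RGB photograph and using OpenCV's built-in implementation; the choice of K = 3 and the least-green heuristic for picking the diseased cluster are illustrative assumptions, not taken from any cited study.

```python
# Minimal sketch: cluster pixels of a leaf photo into K=3 groups (background,
# healthy tissue, lesions) and keep the least-green cluster as the candidate
# diseased region.
import numpy as np
import cv2

img = cv2.imread("leaf.jpg")                        # BGR image (assumed path)
pixels = img.reshape(-1, 3).astype(np.float32)

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(pixels, 3, None, criteria, 5,
                                cv2.KMEANS_PP_CENTERS)

# Heuristic: the diseased cluster tends to have the lowest green component.
diseased_cluster = int(np.argmin(centers[:, 1]))    # channel 1 = G in BGR
mask = (labels.ravel() == diseased_cluster).reshape(img.shape[:2])
cv2.imwrite("diseased_mask.png", mask.astype(np.uint8) * 255)
```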
One study addressed the identification of leaf spot disease on apple trees in Batu City, Indonesia, which has a prominent apple industry. Early diagnosis of this disease is very important for the quality and quantity of apples, and segmenting infected and healthy leaves from drone-captured images would support the apple sorting process. The first stage converts images from RGB to CIE Lab and applies the Perona-Malik diffusion filter to the reddish-greenish component to remove noise; the image is then segmented using a variational level set method. The experimental results indicate that the proposed method segments leaf spot disease well in most of the test images [19]. Another work presents an image processing-based solution for classifying apple fruit diseases, a major source of economic loss in agriculture, using a three-step process: (1) K-means-based image segmentation, (2) extraction of state-of-the-art features from the segmented images, and (3) classification of the images using a multi-class SVM. The experimental results show that the proposed method considerably increases the accuracy of disease detection and classification, achieving 93% classification accuracy and providing a reliable means of automating apple fruit disease diagnosis [20].
A different study aimed to design a software solution using image-processing techniques for the automatic detection and classification of plant leaf diseases, which is more cost-effective than expert observation, especially in developing nations. The approach consists of four components: colour transformation of the RGB leaf image into an independent colour space (Part 1), segmentation of the images using the K-means clustering technique (Part 2), calculation of texture-feature metrics in the segmented (infected) regions (Part 3), and classification of those features with a pre-trained neural network (Part 4). Testing was performed using leaf images from the Al-Ghor area in Jordan, and promising results were attained, with 93% precision in disease detection and classification. According to the paper, the neural network classifier was able to learn features and classify leaf diseases accurately, while K-means clustering segmented the RGB images effectively, making the approach suitable for deployment [21].

2.2. Deep Learning:

With automatic feature extraction and accurate classification from raw image data, deep learning has brought a revolutionary approach to plant disease detection. In apple leaf disease recognition specifically, deep learning models, especially Convolutional Neural Networks (CNNs), have outperformed classical methods in capturing complex spatial features and learning disease-specific patterns directly from raw leaf images. Traditional machine learning algorithms depend heavily on handcrafted features, whereas deep learning models learn features and relationships hierarchically through several layers, which helps them generalize better and avoid overfitting. These techniques become particularly useful in tasks with large datasets, where manual analysis is not feasible. For example, deep learning-based frameworks have shown good performance in detecting multiple types of apple leaf diseases against varied backgrounds, making them a powerful tool for disease management. Several recent studies have further improved model accuracy by employing architectures such as enhanced feature fusion networks and CNN models tuned for agricultural datasets. Zhang et al. (2021) [22], Doutoum and Tugrul (2025) [23], Banja et al. (2025) [24], Alsayed et al. (2021) [25], and Luo et al. (2021) [26] all confirm the applicability of deep learning for classifying apple leaf disease, albeit with different methodologies and model innovations.

2.2.1. Convolutional Neural Networks (CNN):

In apple leaf disease detection, CNNs are critical for learning the complex patterns and textures associated with different diseases directly from leaf images. They remove the need for detailed manual feature extraction and improve accuracy considerably. CNNs are a well-known method for agricultural diagnostics, as they are robust, scalable, and achieve high accuracy even for subtle disease symptoms.
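For orientation, a minimal CNN classifier of the kind discussed in this subsection is sketched below; the layer sizes, input resolution, and four-class head are illustrative and do not correspond to any specific cited architecture.

```python
# Minimal sketch of a small CNN classifier for apple leaf images.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_leaf_cnn(num_classes=4, input_shape=(224, 224, 3)):
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255),                 # normalize pixel values
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_leaf_cnn()
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```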
One work presents efficient apple leaf disease (ALD) identification in a wild environment using an enhanced convolutional neural network, EfficientNet-MG. The model improves the traditional EfficientNet structure with a multi-scale feature fusion (MSFF) strategy and the GELU activation function, integrating both shallow and deep convolutional layers. Evaluated on the AppleLeaf9 dataset, the proposed method achieved 99.11% accuracy with a reduced parameter size of 8.42 million compared with five classical CNN models. Consequently, EfficientNet-MG provides a lightweight, accurate, and robust option for ALD identification in smart agriculture [27]. Another CNN-based deep learning method targets the automatic identification of diseased leaves. The model applies data augmentation methods such as zoom, rescaling, and horizontal flipping, together with basic image processing to clarify patterns. The CNN is trained on a plant leaf image dataset with categorical cross-entropy as the loss function and Adam as the optimizer. The model reports its accuracy for both the training and testing stages, supporting its validation as a real-time disease-prevention system for agriculture [28].
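A sketch of the augmentation settings mentioned above (zoom, rescaling, horizontal flip) with the Adam optimizer and categorical cross-entropy is shown below; the directory layout, image size, and hyperparameter values are assumptions for illustration.

```python
# Sketch of the augmentation described above, assuming images organised in
# train/<class>/ folders.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(rescale=1.0 / 255, zoom_range=0.2,
                               horizontal_flip=True, validation_split=0.2)

train_flow = train_gen.flow_from_directory(
    "train", target_size=(224, 224), batch_size=32,
    class_mode="categorical", subset="training")
val_flow = train_gen.flow_from_directory(
    "train", target_size=(224, 224), batch_size=32,
    class_mode="categorical", subset="validation")

# "model" would be a CNN such as the one sketched earlier, compiled with Adam
# and categorical cross-entropy:
# model.fit(train_flow, validation_data=val_flow, epochs=20)
```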
Detecting diseases in apple leaves at an early stage reduces damage to plant health and farm production. One apple-leaf classification model is an ensemble of several pre-trained models, DenseNet121, EfficientNetB7, and EfficientNet NoisyStudent, separating healthy leaves from apple scab, apple cedar rust, and multiple diseases. Image augmentation techniques were used to enlarge the training corpus and further improve model accuracy. On the validation dataset, leaves with multiple diseases were identified with 90% accuracy, while the overall model reached 96.25% accuracy [29]. This architecture enables real-time and accurate detection of pests and diseases, making large-scale agricultural deployment possible. Another work performs apple leaf disease recognition with a CNN using hybrid attention and BiLSTM: the ResNet18 architecture is extended with a hybrid attention module and an improved classifier structure to yield the AppleNet model. Experimental results demonstrate that AppleNet obtains a recognition accuracy of 94.66%, 2.47% higher than ResNet18. The modifications slightly increase training time, but ablation experiments validate their effectiveness, and AppleNet outperforms other advanced models in both recognition accuracy and training time. These results underscore the promise of deep learning for rapid and smart plant disease identification [30].
Another method is an ensemble combining pre-trained DenseNet121, EfficientNetB7, and EfficientNet NoisyStudent to identify diseases on apple tree leaves, classifying them as healthy, apple scab, apple cedar rust, or multiple diseases. Dataset augmentation using image processing methods improved accuracy, giving 96.25% on the validation dataset and 90% accuracy on leaves with multiple diseases; the method shows significant results across different metrics and provides reliable early disease detection and accurate plant health monitoring in agriculture [31]. Another study examines the performance of Convolutional Neural Networks (CNNs) for early-stage apple leaf disease detection. The model processes images through filtering, compression, and generation techniques, using data from the PlantVillage dataset containing both healthy and diseased apple leaf images to simulate a large training set. The trained CNN model gives a remarkably high test accuracy of 98.54% across all classes, showing great potential for early disease detection and proper disease management in apple plantations [32]. A further deep learning model detects diseases in apple trees using a multilayer CNN. Trained on the FGVC8 dataset from the Plant Pathology 2021 Kaggle competition, the model outperforms traditional machine learning algorithms such as decision trees, logistic regression, and random forests, achieving an accuracy of 91%, a precision of 89%, a recall of 85%, and an F1-score of 88.34%. These findings establish the advantage of deep learning over traditional methods in the early detection of plant diseases, which is crucial to guarantee high production yields in agriculture [33].
Another CNN model recognizes different apple leaf diseases, including scab, black rot, and cedar rust, from the freely accessible PlantVillage dataset, and reports a confusion matrix for better classification analysis. Data augmentation techniques (shift, shear, scaling, zoom, and flip) were used to enlarge the dataset without requiring additional images. Achieving a classification accuracy of 98%, the proposed CNN also has lower storage and computational needs than existing deep CNN models, making it especially suitable for deployment on hand-held devices and an effective solution for real-time disease detection in agriculture [34]. There is also a survey of deep learning in agriculture, especially plant disease detection from leaf images. It examines the pre-processing methods, CNN architectures, frameworks, and optimization algorithms used in previous studies, discusses datasets and performance metrics for model evaluation, and offers a comparative overview of different strategies. The results showcase the pros and cons of various models and provide useful references for researchers focusing on deep learning for plant disease recognition and classification; the survey helps in choosing appropriate models for a given dataset and experimental conditions [35].
One study proposes an enhanced 15-layer Convolutional Neural Network (CNN) for automated detection and classification of apple leaf diseases. The model serves as a CNN base for the Single Shot Detector (SSD) algorithm, which improves the precision of disease identification. The proposed model achieved an accuracy of 96.62%, superior to comparable AlexNet and ResNet-50 models, offering a potential solution for enhancing fruit quality and minimizing human error in disease detection [36]. Another work stresses the significance of precise diagnosis of apple leaf diseases to minimize yield and economic losses. Apples are a vital crop, rich in antioxidants and low in calories, but the diseases that affect them can cause huge economic losses, and the abundant information in image datasets makes computer vision an effective tool to classify diseases accurately. That study therefore systematically reviewed deep learning approaches for detecting apple leaf diseases, analysing 45 articles published from 2016 to 2024 and presenting the state of the art, trends, and future research directions [37]. Fine-grained classification, unbalanced data distribution, and color distortion in the image acquisition environment challenge both crop disease detection and crop baking stage judgment tasks. One proposed solution embeds a self-attention mechanism inside a recurrent GAN to obtain higher levels of image perception and information capture: when the self-attention module is included in CycleGAN, the model can better seize the internal correlations in the image data, and a new enhanced loss function further improves performance. Experimentally, image quality improved markedly, with the peak signal-to-noise ratio increasing by 2.13% and 3.55% for tobacco and tea leaf disease images, respectively, and the structural similarity index increasing by 1.16% and 2.48% [38]. Another model is a Deep Convolutional Neural Network (DCNN) for detecting apple leaf diseases from the PlantVillage dataset, using image data augmentation and annotation to improve accuracy. This model, with an accuracy of 99.31% and minimal training time, outperformed AlexNet, VGG-16, InceptionV3, MobileNetV2, ResNet50, and DenseNet121. It also showed a rapid testing time of 5.1 ms per image, making real-time disease detection possible, achieved higher precision, recall, and F1-score than the other models, and used Grad-CAM visualization to confirm its reliability [39].
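Grad-CAM, mentioned above as a reliability check, can be reproduced with a short routine such as the following sketch; it assumes a Keras CNN like the one sketched earlier, and the last-convolutional-layer name is a hypothetical placeholder.

```python
# Minimal Grad-CAM sketch: highlight which leaf regions drove the predicted class.
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer="conv2d_2"):
    """image: float32 array of shape (1, H, W, 3), already preprocessed."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image)
        class_score = tf.reduce_max(preds[0])            # score of top class
    grads = tape.gradient(class_score, conv_out)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))      # channel importance
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)
    cam = tf.nn.relu(cam) / (tf.reduce_max(cam) + 1e-8)  # normalize to [0, 1]
    return cam.numpy()                                    # upsample to overlay
```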
Another study highlights shortcomings of existing plant disease detection methods, such as high computational cost, low accuracy, and dependence on expert knowledge, and proposes a neural network (NN)-based system for detecting apple leaf diseases, specifically apple black spot, Alternaria, and Minoz blight. The technique combines image processing for feature extraction with a traditional NN for classifying the extracted features, and is shown to achieve better accuracy and efficiency than conventional approaches that rely on handcrafted features and simplistic classifiers. Highly accurate and competitive scores are reported, making the solution suitable for smart agriculture applications that require timely and accurate identification of each disease, as supported by experimental results [40]. Another work presents an enhanced version of the AlexNet architecture, a lightweight CNN with high accuracy, to identify five apple leaf diseases. Notable innovations are dilated convolutions for enlarged receptive fields with fewer parameters, parallel convolutions for multi-scale feature extraction, and a shortcut connection using 3×3 convolutions to handle nonlinearities. A channel attention mechanism improves channel feature representation while reducing the influence of background noise, and global pooling replaces the fully connected layers to keep the features intact while shrinking the model. With a recognition accuracy of 97.36% and a size of only 5.87 MB, the proposed model outperforms five others in robustness and suitability for resource-constrained applications [41].
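The channel attention idea referred to above can be illustrated with a squeeze-and-excitation style block; the sketch below is generic, and its reduction ratio is an assumed value rather than a parameter taken from the cited model.

```python
# Sketch of a squeeze-and-excitation style channel-attention block:
# global pooling -> bottleneck MLP -> per-channel sigmoid gates.
from tensorflow.keras import layers

def channel_attention(feature_map, reduction=8):
    """Re-weight feature-map channels to emphasise disease-relevant ones."""
    channels = feature_map.shape[-1]
    squeezed = layers.GlobalAveragePooling2D()(feature_map)
    excited = layers.Dense(channels // reduction, activation="relu")(squeezed)
    gates = layers.Dense(channels, activation="sigmoid")(excited)
    gates = layers.Reshape((1, 1, channels))(gates)
    return layers.Multiply()([feature_map, gates])
```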
Tugrul et al. (2022) review 100 studies on plant leaf disease detection using Convolutional Neural Networks (CNNs). Their paper emphasizes the power of CNNs for early and accurate disease detection and, more generally, for precision agriculture based on image data. It addresses issues such as species separation and data scarcity, and compares different CNN architectures, concluding that deep CNNs (DCNNs) are the most effective. The review also lists CNN strengths, such as automation and scalability, and weaknesses such as intensive data and compute demands. Future directions involve more lightweight CNN structures, hybrid models, and more diverse datasets to further practical deployment in agriculture [42]. Liu et al. developed a deep convolutional neural network based on a modified AlexNet to detect four common apple leaf diseases (Mosaic, Rust, Brown spot, and Alternaria). The model was trained on 13,689 images, achieving an accuracy of 97.62% while using over 51 million fewer parameters than the standard AlexNet. They also applied synthetic image generation for robustness, yielding a 10.83% performance gain. The high accuracy and fast convergence of the model indicate its usefulness in practical disease control for apple cultivation [43]. A two-stage deep learning-based system detects apple leaf disease in real time from a dataset of roughly 9,000 expert-annotated RGB images of fungus-infected leaves covering major foliar diseases. The first stage uses a lightweight classification model to label leaves as healthy, diseased, or damaged; the second stage, activated only when a disease is present, localizes the disease symptoms. Both models use transfer learning and data augmentation methods such as rotation, noise, and cut-out to increase robustness and prevent overfitting. The system detected diseases with an accuracy of 88% and a mean Average Precision (mAP) of 42%, and could identify diseases even on small spots, showing its potential as a practical tool for farmers [44].
Another work performs plant disease detection and classification using image analysis with an involution neural network and a self-attention mechanism, covering several crops such as apple, grape, and corn. The method relies on visual characteristics produced by various pathogens and uses digital leaf images from the PlantVillage dataset. In contrast to crop-specific models, this model generalizes across 8 crops comprising 23 disease classes, reducing reliance on expert diagnosis and manual pathogen analysis and providing a more robust and scalable solution. Experimental results demonstrate its effectiveness, with an average classification accuracy of 98.73% (κ = 98.04) compared with other methods [45]. There is also an ensemble model for thorough and automatic detection of apple leaf diseases, intended to help monitor large farm areas and minimize economic loss by maintaining crop health. The ensemble contains DenseNet121, EfficientNetB7, and NoisyStudent EfficientNet and classifies leaf images into four classes: healthy, apple scab, apple cedar rust, and multiple diseases. Multiple image augmentation methods diversified the dataset and in turn increased accuracy. On the validation set the model reached 96.25% accuracy and detected leaves with multiple diseases with an accuracy of up to 90%, suggesting the method can be deployed effectively in agriculture [46].
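The soft-voting idea behind these DenseNet/EfficientNet ensembles can be sketched as follows; EfficientNetB0 stands in for the heavier EfficientNetB7, NoisyStudent weights (which are not bundled with Keras) are omitted, and per-backbone input preprocessing is left out for brevity, so this is an illustrative skeleton rather than the cited ensemble.

```python
# Sketch: fine-tunable backbones vote by averaging their softmax outputs.
import tensorflow as tf
from tensorflow.keras import layers, models, applications

NUM_CLASSES = 4  # healthy, scab, cedar rust, multiple diseases

def classifier(backbone_fn, name):
    base = backbone_fn(include_top=False, weights="imagenet",
                       input_shape=(224, 224, 3), pooling="avg")
    out = layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
    return models.Model(base.input, out, name=name)

members = [classifier(applications.DenseNet121, "densenet121"),
           classifier(applications.EfficientNetB0, "efficientnetb0")]

inp = layers.Input(shape=(224, 224, 3))
avg = layers.Average()([m(inp) for m in members])   # soft-voting ensemble
ensemble = models.Model(inp, avg)
```

Each member would normally be trained separately on augmented leaf images before the averaged model is used for inference.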
To address the shortcomings in speed and accuracy of traditional detection methods, another work proposed a high-precision apple leaf disease detection method based on an optimized Faster R-CNN. Multi-scale feature extraction was improved by integrating Res2Net with a feature pyramid network, RoIPool was upgraded to RoIAlign to enable precise extraction of proposals, and soft non-maximum suppression was applied to the extracted proposals for precise target detection. Experimental results on an annotated apple leaf disease dataset show that the average precision of the proposed model reaches 63.1%, outperforming existing traditional object detection methods, which gives the method great practical value for real-time agricultural disease monitoring [47]. Another study examines the accuracy of CNNs for early disease classification of apple leaves, which is difficult even for experts. The researchers expanded the dataset and optimized training using image filtering, compression, and generation techniques on the PlantVillage dataset, which consists of 2,561 labelled images. The CNN model showed a high overall accuracy of 98.54%, with good performance across all disease and healthy classes, validating the system as a trustworthy tool for early detection and prevention of disease in apple farming [48]. A VGG16-based model identifies apple leaf diseases (scab, frogeye spot, and cedar rust); it adopts a global average pooling layer in place of the fully connected layer, which reduces parameters, and inserts a batch normalization layer to improve the convergence rate, with transfer learning used to reduce training time. The proposed model achieves an accuracy of 99.01%, with an 89% reduction in model parameters and a 6.3% increase in recognition accuracy compared with the classical VGG16, making it a faster and more accurate solution for apple leaf disease detection [49]. A further method identifies diseases in apple tree leaves using a CNN-based Inception-v3 model. It applies Canny edge detection and watershed transformation to obtain accurate segmentation and thus better recognition of the diseased parts, and the use of real-world collected data makes the model more relevant to real application scenarios. Implemented with exploratory data analysis and stratified 5-fold cross-validation, the model achieved a precision of 84.60%, a recall of 87.40%, an F1-score of 85.00%, and an accuracy of 94.76%, outpacing the current state of the art in apple leaf disease classification [50].
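Soft non-maximum suppression, used in the Faster R-CNN variant above, can be written compactly; the sketch below implements the Gaussian-decay form with illustrative default parameters and is not the cited implementation.

```python
# Sketch of Gaussian soft-NMS: instead of discarding overlapping boxes,
# decay their scores according to IoU with the current top detection.
import numpy as np

def iou(box, boxes):
    """box: (4,) array, boxes: (N, 4) array, both in [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """boxes: (N, 4) float array, scores: (N,) float array."""
    boxes, scores = boxes.copy(), scores.copy()
    keep = []
    while len(boxes) > 0:
        top = int(np.argmax(scores))
        keep.append((boxes[top], scores[top]))        # keep current best box
        overlap = iou(boxes[top], boxes)
        scores = scores * np.exp(-(overlap ** 2) / sigma)   # Gaussian decay
        mask = scores > score_thresh
        mask[top] = False                              # drop the kept box
        boxes, scores = boxes[mask], scores[mask]
    return keep
```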
A further challenge lies in diagnosing and classifying apple crop diseases in remote areas with limited internet connectivity. One study proposes a lightweight Deep Neural Network (DNN)-based solution that can run on edge devices such as mobile phones, Raspberry Pi, and Jetson Nano. Several DNN models (including a basic CNN, AlexNet, and EfficientNet Lite) were evaluated on various performance, efficiency, and resource metrics. With transfer learning, the best-performing model, an EfficientNet DNN, was chosen, achieving a test accuracy of 85%. This solution offers an economical and comprehensive tool for farmers in developing regions to monitor crop health without demanding heavy computation [51].
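One common route to the kind of edge deployment described above is TensorFlow Lite conversion with post-training quantization, sketched below with a tiny stand-in network in place of a trained classifier; the file names and model are illustrative assumptions.

```python
# Sketch: convert a (stand-in) trained leaf-disease model to TensorFlow Lite
# for deployment on mobile phones, Raspberry Pi, or similar edge devices.
import tensorflow as tf
from tensorflow.keras import layers, models

# Stand-in for an already-trained leaf-disease classifier.
leaf_model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(4, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(leaf_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # post-training quantization
with open("leaf_disease.tflite", "wb") as f:
    f.write(converter.convert())
```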
Another study identifies apple leaf disease at an early stage using CNNs. The model uses image filtering, compression, and generation techniques to augment a dataset of healthy and diseased apple leaf images from the PlantVillage dataset. The trained CNN model, with an accuracy of 98.54% on 2,561 labelled images, identifies diseased and healthy samples across all classes and can serve as a useful system for early disease detection in apple trees [52].
Another study found that early, accurate detection of apple leaf diseases can help reduce yield and economic losses. Apples are among the most common and important fruit trees, and they are highly susceptible to diseases that affect their production. Deep learning approaches combined with large public image datasets have become powerful tools in computer vision applications for the detection and classification of diseases. A total of 45 published papers (2016-2024) were systematically reviewed, focusing on the state of the art and existing opportunities in apple leaf disease detection and classification [53].
One paper introduces Federated Learning (FL) and how it works with the Internet of Things (IoT). FL tackles important issues including data privacy, bandwidth restrictions, and device heterogeneity, which are essential in such scenarios. Specifically, the paper discusses why IoT devices should adopt FL, the techniques that allow them to do so, the challenges in IoT, and the various applications in which FL is proving useful. It also reviews potential future research directions and open issues, serving as a roadmap for advancing FL in the context of IoT [54].
Ripe apple detection and the ability to reach the fruit without obstruction are needed for robotic harvesting in modern, vertically trained orchards. A machine vision system employing a low-cost Kinect V2 sensor was established for detection, with background objects filtered using depth features. Day and night images were collected at a Scifresh apple orchard with dense foliage. Two Faster R-CNN-based architectures, ZFNet and VGG16, were used to detect fruit in Original-RGB and Foreground-RGB images. The best mean average precision (AP) was 0.893 (with VGG16) on Foreground-RGB images, a 2.5% increase in fruit detection accuracy over the Original-RGB images. The results indicate enhanced detection accuracy with a depth filter and minimal compromise in speed, an approach useful for robotic harvesting in fruiting-wall apple orchards [55].
Another study used convolutional neural networks (CNNs) to investigate the effectiveness of transfer learning for the early detection of apple scab. Although datasets of infected fruits and leaves were obtained, data acquisition was slow and time-consuming, as the appearance of scab was inconsistent and occurred at different times. To mitigate these challenges, transfer learning was employed to improve the efficiency of model training. Learning from scratch was compared with transfer learning, and statistical analysis was used to quantify the effect. The results indicated that transfer learning significantly improved model performance, making it a more effective method for apple scab detection and supporting the trend of smart horticulture towards increasing production while reducing pesticide use [56].
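A transfer-learning setup in the spirit of this study can be sketched as follows; the MobileNetV2 backbone, the frozen-feature strategy, and the two-class head are illustrative assumptions rather than the authors' configuration.

```python
# Minimal transfer-learning sketch: an ImageNet-pretrained backbone is frozen
# and only a small scab / healthy classification head is trained.
import tensorflow as tf
from tensorflow.keras import layers, models, applications

base = applications.MobileNetV2(include_top=False, weights="imagenet",
                                input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                      # reuse ImageNet features as-is

model = models.Sequential([
    base,
    layers.Dropout(0.2),
    layers.Dense(2, activation="softmax"),  # scab vs healthy
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed
```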
India, being an agriculture-based country, depends on agriculture for nearly 80% of its rural population. While advances have been made in agricultural machinery and soil analysis, technology is still needed to improve crop yield, disease identification, and the efficient use of pesticides and fertilizers. Apple, a major crop, suffers from many diseases that can decrease yield; timely detection and treatment can lead to greater yield, lower expense, and more affordable apples for the entire population. A tailored deep convolutional neural network (CNN) is proposed in one study to identify foliar diseases in apple trees from leaf images. The dataset includes 3,642 images of apple leaves from four categories (Healthy, Multiple Diseases, Rust, and Scab). At 30 epochs, the CNN reported accuracy scores of 0.9376 and 0.9290 on the training and validation sets, respectively. These results suggest that the model can accurately predict foliar diseases in apple leaves and could become part of a mobile application or IoT device for automatic disease detection in modern apple orchards [57].
Smart and precise horticulture aims to improve yield and product quality, decrease pesticide use, and ultimately enhance food security. One study addresses detecting apple scab at an early stage with the help of mobile phones and artificial intelligence, i.e., convolutional neural networks (CNNs). Two data types were collected: images of scab-infected fruits and of leaves from apple trees. As data acquisition is often high-dimensional and time-consuming, and scab appears as a probabilistic rather than deterministic process, transfer learning was applied as an efficient training method. The best datasets for apple scab detection with transfer learning were determined, along with their effect compared with training from scratch; statistical analysis confirmed that transfer learning substantially enhanced CNN performance at a significance level of 0.05 [58]. As the fourth most produced fruit worldwide, apples are important in agriculture, but recognizing apple diseases is a major issue, causing economic losses and food safety problems. One group proposed the Dual-Branch Model DBCoST, which integrates Convolutional Neural Networks (CNNs) and the Swin Transformer: CNNs learn local features while transformers capture global information, combining the best of both worlds. A feature fusion module (FFM), which uses a residual module and an improved Squeeze-and-Excitation (SE) attention mechanism, is proposed to fuse and retain local and global information effectively. The model also deals with difficulties such as overlapping apple branches and leaves and occluded fruits, all of which complicate spotting diseases. Experimental results demonstrate that DBCoST achieves superior accuracy (97.32%), recall (97.33%), precision (97.40%), and F1 score (97.36%) compared with other models, with disease-specific accuracy exceeding 96% for each category, confirming its superior and balanced performance on apple leaf diseases [59]. Fruits are an important part of our daily diet, but diseased fruits pose a great risk to human health as well as to farmers' economies. A CNN model in one study is developed to identify apple fruit diseases using the PlantVillage dataset; after testing several configurations, the proposed model consists of three convolution layers and three max pooling layers followed by two densely connected layers. Conventional machine learning algorithms, along with the pre-trained models VGG16 and InceptionV3, were examined for comparison. The results indicate that the proposed CNN model surpasses the pre-trained models and traditional methods in accuracy, specificity, F1 score, AUC-ROC, and execution time, achieving 99% accuracy while using only 20% of the space of the pre-trained models and performing inference in less than 1 second, whereas the pre-trained models demand a minimum of 30 seconds [60]. Conventional approaches for leaf disease classification depend on hand-crafted features such as color and texture, which might not capture discriminative characteristics. To address these shortcomings, deep learning models like VGG and ResNet have been used, but without spatial attention they may not focus on the correct regions, i.e., the diseased lesions on the leaves. Yu et al. introduce a Region-of-Interest-Aware Deep Convolutional Neural Network (ROI-aware DCNN) to promote feature discriminability and thereby improve classification performance. The model comprises two subnetworks: an ROI subnetwork for extracting features from the leaf and diseased areas, and a classification subnetwork for enhanced accuracy. The ROI subnetwork is trained on a new image set in which background, leaf, and disease areas are segmented; the two subnetworks are then connected and trained end to end. Experimental findings show that the ROI-aware DCNN achieves better discriminative power and classification performance than conventional models such as VGG, ResNet, and SqueezeNet for detecting leaf diseases [61]. Apples are widely grown but very prone to disease, so early reporting is important to prevent damage, and disease detection is critical in the production process for fruit trees such as apples. The dataset used in the corresponding study is PlantVillage, which contains healthy and infected images of apple leaves; the training dataset was optimized using various image filtering, compression, and generation techniques, and the CNN model obtained a high overall accuracy of 98.54%, successfully classifying the different disease classes from 2,561 labelled images [62].
Figure 3. K-means clustering.
Figure 4. Working of CNN.
Diseases of apple trees have long been a threat to orchard farmers, and deep learning-based approaches are emerging solutions for detecting them. One paper proposes a DF-Tiny-YOLO model as a deep learning-based target detection system to detect apple leaf diseases quickly and accurately. The changes include a DenseNet-based feature reuse strategy to enhance feature propagation and detection accuracy, Resize and Reorganization (Reorg) operations to reduce computation and enhance feature fusion, and a 1×1 and 3×3 convolution kernel structure for dimensionality reduction. Evaluated on 1,404 images of four common apple leaf diseases, DF-Tiny-YOLO achieved a mAP of 99.99%, an intersection over union (IoU) of 90.88%, and a detection speed of 280 FPS, performing better than the Tiny-YOLO and YOLOv2 models and giving a quick and effective solution for disease identification [63]. Another work employs Convolutional Neural Networks (CNNs) for automatic detection and classification of diseases, nutrient deficiencies, and herbicide damage from apple tree leaf images, aiming to eliminate dependency on individual experts, an approach with cost and scale limitations. CNNs were trained on a newly established dataset of 2,539 images spanning six known disorders and were able to match or exceed experts in disease and damage detection, with the CNN model producing an accuracy of 97.3% on a hold-out set [64]. A further method effectively classifies common apple leaf diseases, specifically Mosaic, Rust, Brown spot, and Alternaria leaf spot, using deep CNNs, overcoming limitations of existing research that relies on complex image preprocessing without guaranteeing high recognition rates. Using a dataset of 13,689 images of diseased apple leaves, the model achieves an overall accuracy of 97.62% while reducing the number of parameters by 51,206,928 compared with the standard AlexNet, and demonstrates effective disease control with faster convergence and a 10.83% accuracy improvement when pathological images are generated [65]. A multi-model LSTM-based pre-trained convolutional neural network (MLP-CNN) model identifies plant diseases and pests, mainly focusing on apple trees. The hybrid model combines Long Short-Term Memory (LSTM) networks with pre-trained CNNs such as AlexNet, GoogleNet, and DenseNet201 to extract features via transfer learning; the extracted deep features are then passed to LSTM layers to build a robust detection model, and the final class labels are obtained by majority voting over the individual predictions of the three LSTM branches. The dataset consists of images of 29 different apple diseases and pests from Turkey, and the measured performance shows that the ensemble model provides comparable or better results than other pre-trained deep architectures [66].
One deep learning method performs real-time detection of five common apple leaf diseases: Alternaria leaf spot, Brown spot, Mosaic, Grey spot, and Rust. A novel apple leaf disease dataset (ALDD), based on data augmentation and image annotation, is presented, and the INAR-SSD model is proposed by improving a convolutional neural network with the GoogLeNet Inception structure and Rainbow concatenation. The INAR-SSD model achieves 78.80% detection performance on a hold-out testing set drawn from a dataset of 26,377 images, demonstrating its capability for effective apple disease detection [67]. Another study introduces a refined VGG16-based model for detecting three prevalent apple leaf diseases, namely scab, frogeye spot, and cedar rust. It replaces the fully connected layer with a global average pooling layer, which reduces the number of parameters, and adds a batch normalization layer for faster convergence, with transfer learning used to reduce training time. The experimental results indicate that the refined model reaches an accuracy of 99.01% while saving 89% of the model parameters, gaining 6.3% in accuracy, and reducing training time by 0.56% compared with the original VGG16, providing a more effective and precise way of identifying apple leaf diseases [68]. A new convolutional neural network (CNN)-based approach diagnoses and classifies diseases in apple leaves, achieving high accuracy despite a smaller dataset by leveraging contrast-stretching pre-processing and fuzzy C-means (FCM) clustering. The model achieved 98% accuracy with 400 apple leaf images used in training and validation (200 healthy, 200 diseased); its capacity to reach such performance with so little data is due to the novel pre-processing and clustering methodologies employed, providing a more efficient alternative for apple leaf disease identification [69]. DF-Tiny-YOLO, a deep learning-based model for the fast and accurate detection of apple leaf diseases, uses DenseNet as its backbone for feature reuse to improve feature propagation and detection accuracy, followed by Resize and Reorg operations and convolution kernel compression to optimize the model. Evaluated on 1,404 images, it achieved a mAP of 99.99%, an IoU of 90.88%, and a detection speed of 280 FPS, a great improvement in accuracy and speed over Tiny-YOLO and YOLOv2 [70]. A deep learning-based convolutional neural network (CNN) with batch normalization classifies plant diseases, with the objective of creating an automatic diagnostic system for leaf diseases. The proposed model is end-to-end, accurate, inexpensive, and user-friendly for disease detection from leaf images. Using a public dataset of 20,654 images of 15 plant diseases, the model produced a testing accuracy of 96.4% with a testing loss of 0.168, showing that it generalizes well to multiclass plant disease classification and has applications in real-world agricultural systems [71].
A multiclass apple classification model was built in Keras as an ensemble of pre-trained DenseNet121, EfficientNetB7, and EfficientNet NoisyStudent networks to classify apple tree leaves as healthy, apple scab, apple cedar rust, or multiple diseases, with dataset augmentation used to increase accuracy. On the validation split of the dataset, the ensemble reached an accuracy of 96.25%, and accuracy on leaves with multiple diseases was 90%. The proposed model performs well across multiple metrics and is likely to play an important role in timely and accurate plant disease detection in the agricultural sector [72]. A novel Faster R-CNN approach to apple leaf disease detection addresses certain drawbacks of traditional methods: it combines Res2Net with feature pyramid networks and replaces RoIPool with RoIAlign to improve the accuracy of feature extraction from candidate regions, and it applies soft non-maximum suppression for more accurate detection. The improved technique achieved an average precision of 63.1%, better than other object detection methods, and experimental results showed that this approach is a suitable option for apple leaf disease recognition in real agricultural settings [73].

2.2.2. Deep CNN Models:

Deep Convolutional Neural Networks (Deep CNNs) have been refined with newer architectures, such as ResNet, MobileNet, V-Net, BAM-Net, and HSSNet, that exploit the abstract spatial hierarchies present in images and have been applied to apple leaf disease detection. The residual connections in ResNet allow deeper networks to be trained by addressing vanishing gradients, making it suitable for training on large-scale leaf image datasets. MobileNet is designed for efficient computation on mobile and edge devices, balancing accuracy against resource consumption, which makes it attractive for real-time field monitoring systems. V-Net and BAM-Net improve the segmentation and classification of leaf diseases, respectively, by leveraging attention mechanisms and multi-scale feature fusion, further enhancing detection accuracy and robustness, while HSSNet incorporates hybrid attention mechanisms that extract more specialized features and further improve disease detection accuracy. These models greatly reduce the human intervention required at this stage, enabling timely disease detection and better crop management, although their effectiveness remains subject to the quality of the training data and the computational resources available to deploy them.
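To make the residual-connection idea concrete, the following minimal sketch (written in PyTorch purely for illustration, not taken from any of the cited models) shows a basic residual block in which the input is added back to the output of two stacked convolutions, the mechanism that lets gradients bypass the convolutional path in very deep networks:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal ResNet-style block: output = ReLU(F(x) + x)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x                      # skip connection
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + identity              # residual addition eases gradient flow
        return self.relu(out)

# Example: a 64-channel feature map from a hypothetical batch of leaf images
x = torch.randn(8, 64, 56, 56)
block = ResidualBlock(64)
print(block(x).shape)  # torch.Size([8, 64, 56, 56])
```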
A novel approach uses a ResNet50-based Deep Learning Convolutional Neural Network (DLCNN) with Ant Colony Optimization (ACO) for apple leaf disease detection. The approach provides better feature extraction and selection, resulting in better performance than other methods, and has been exhaustively tested on an indigenously built dataset, demonstrating its efficacy. It can serve as a reliable solution for precision farming of apple orchards and ultimately contribute to the quality and quantity of crops [74].
The rapid development of deep learning has benefited the automatic detection of apple leaf disease; however, the large number of model parameters and the time-consuming annotation required by commonly used deep learning algorithms have drawn criticism. In one two-stage approach, MobileNet and DeepLabv3+ were used for leaf segmentation in the first stage, reaching a precision of 99.40% and an MIoU of 99.20%. In the second stage, channel and spatial attention were used to enhance feature extraction for disease spot segmentation, reaching 98.66% accuracy under supervised learning and 96.56% under semi-supervised learning with 40% labelled data; the 22 MB model is deployed in a WeChat mini-program to support efficient and reliable disease detection for agricultural practitioners [75]. An automatic codling moth monitoring system based on a smart trap and an on-site analytical model was also developed; for rural deployment, the trap transmits only detection results, effectively saving energy. Using 430 labelled sticky pad images (8,142 moths, 5,458 other insects, and 8,177 other objects), the model reached over 99% accuracy, facilitating sustainable apple production and tailored pest management [76]. Machine and deep learning models have also been compared for apple leaf disease classification (healthy leaf, grey spot, black star, and cedar rust); the researchers combined image segmentation with an SVM classifier and state-of-the-art CNN architectures such as ResNet and VGG. A VGG-based model yielded the second-highest accuracy of 95%, while ResNet-18 achieved the highest accuracy of 98.5% among the models with fewer layers, showing good performance in disease identification [77]. Another approach develops a modified deep learning model for classifying apple leaf diseases by combining the Convolutional Block Attention Module (CBAM) with the ResNet-101 architecture. The ResNet-101+CBAM model shows superior accuracy to the ResNet-101 baseline in diagnosing four classes: 'Brown spot', 'Grey spot', 'Health', and 'Rust'. While the benefits were minimal for 'Powdery mildew' because its visual features resemble those of other classes, the attention mechanism improved overall performance, confirming the model's applicability in precision agriculture [78].
'Orchard Guard' is an AI model for apple leaf disease detection built with transfer learning using models such as AlexNet, DenseNet121, ResNet-50, and MobileNetV2. Hyperparameter optimization was performed on the MobileNetV2 model, and the best model, applied to 3,175 images from four classes (Apple Scab, Black Rot, Cedar Rust, and Healthy), reached an accuracy of 99.36%; the system has realistic potential for on-site disease control in the field [79]. MCDCNet is a multi-scale constrained deformable convolution network for detecting apple leaf diseases of different scales and deformable geometries. To improve feature discriminability, a dual-branch convolutional structure is used, and geometry-sensitive detection is introduced through deformable convolutions with constrained offset intervals; a feature fusion module combines the branch outputs for robust classification. MCDCNet achieves 66.8% accuracy, surpassing state-of-the-art models by 3.85% in complex natural environments [80]. A deep learning-based segmentation model using DenseNet121 with ImageNet weights and an added top layer improved apple leaf disease detection, a task made harder by the similar visual symptoms of early-stage diseases; the Seg+DenseNet121 method reaches a high accuracy of 99.06% for exact identification of apple diseases compared with classical ML models [81]. BAM-Net is a ConvNeXt-T-based deep learning model with a bilateral-filter MSRCR preprocessing step and aggregate coordinate attention for identifying apple leaf diseases in complex orchards; to enhance disease feature discrimination, a multi-scale feature refinement module (MFRM) is incorporated. With an accuracy of 95.64% on a custom dataset and strong generalization on PlantVillage data, BAM-Net demonstrates robustness in crop disease detection scenarios [82]. Another study proposes HSSNet, an advanced object detection model developed for tiny targets, especially tomato and apple leaf diseases, under complex natural conditions. The model is built on the YOLOv7-tiny architecture and employs an H-SimAM attention mechanism to emphasize the foreground, an SP-BiFormer block to enhance tiny-target detection, and the SIOU loss to decrease prediction deviation. On the TTALDD-4 dataset, HSSNet surpassed standard detectors in both accuracy (mAP: 85.04%, AR: 67.53%) and speed (83 FPS), making it a real-time solution for accurate apple leaf disease detection [83]. Finally, one model uses the EfficientNet-B0 architecture but replaces the depth-wise convolutions with grouped convolutions to better identify patterns in leaf images; it overcomes the limitations of traditional deep learning models and reports 100% accuracy in both training and testing, exhibiting strong generalization and detection ability [84].
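As an illustration of the attention modules several of these models rely on (e.g., the CBAM-based variants), the sketch below shows a simplified channel-plus-spatial attention block in PyTorch; it follows the spirit of CBAM but is an assumption-level simplification rather than the exact module used in the cited papers:

```python
import torch
import torch.nn as nn

class SimpleCBAM(nn.Module):
    """Simplified channel + spatial attention, in the spirit of CBAM."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze spatial dims, re-weight each channel.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: re-weight each location from pooled channel statistics.
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_mlp(x)                        # channel re-weighting
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.max(dim=1, keepdim=True).values], dim=1)
        return x * self.spatial(pooled)                    # spatial re-weighting

feat = torch.randn(4, 256, 28, 28)   # hypothetical backbone feature map
print(SimpleCBAM(256)(feat).shape)   # torch.Size([4, 256, 28, 28])
```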
A lightweight CNN model is proposed to address the high computational demand and overfitting issues of larger networks. Using PlantVillage data and augmentation techniques, the model predicts Scab, Black rot, and Cedar rust with a 98% success rate; its fewer layers and smaller storage needs make it suitable for deployment on resource-constrained devices such as mobile phones [85]. BCTNet is another architecture used for multi-scale apple leaf spot detection in natural environments. It uses a Bole convolution module, a cross-attention module, and a bidirectional transposition feature pyramid network to improve the extraction of high-level and low-level feature maps and to reduce background interference, reaching 85.23% accuracy and a detection speed of 33 FPS. The proposed method exceeds the performance of existing models and can support the automation of apple disease detection and optimized pesticide usage; the data for [86] are available on GitHub. There are also lightweight deep learning models for detecting apple leaf diseases in natural surroundings. The structure of YOLOX-Nano is refined with several advanced techniques, such as an asymmetric ShuffleBlock and a CSP-SA module for enhanced feature extraction and blueprint-separable convolution (BSConv) to improve parameter utilization. The resulting YOLOX-ASSANano model, with only 0.83 MB of parameters, achieves 91.08% mAP on a custom multi-scene apple leaf disease dataset and 58.85% mAP on the PlantDoc dataset at a speed of 122 FPS, providing a practical, implementable solution for fast, real-time apple disease detection that may also be applicable to other plant diseases [87].
Another paper suggests a novel ResNeXt-based deep model for detecting fungal diseases of apple crops. Transfer learning with Inception and ResNet architectures was first applied to a dataset of 9,395 images covering four disease types, but those models proved insufficient on their own. The proposed ResNeXt model, with improved preprocessing and segmentation that help handle imbalanced data and focus on the diseased area, yielded an accuracy of 98.94%, a recall of 99.2%, a precision of 99.4%, and an F1 score of 99.2%. The study indicates that DL techniques can be utilized in early crop disease detection to improve yield and quality [88].

2.2.3. Region-Based Convolutional Neural Network (R-CNN):

R-CNN (Regions with Convolutional Neural Network features) combines two deep learning components, a region proposal mechanism and a CNN classifier, for object detection. It generates candidate regions, classifies them, and refines object localization, which can substantially improve detection and localization accuracy, including in agricultural disease detection tasks.
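For a concrete sense of how a region-based detector is used in practice, the hedged sketch below loads the off-the-shelf Faster R-CNN detector from torchvision (a COCO-pretrained model, not any of the customized networks reviewed here) and runs it on a dummy image; in a real apple-leaf pipeline the detection head would be fine-tuned on annotated lesion boxes:

```python
import torch
import torchvision

# Faster R-CNN with a ResNet-50 FPN backbone (COCO-pretrained weights,
# available in recent torchvision versions).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# A single RGB "leaf image" as a float tensor in [0, 1]; real input would be loaded from file.
image = torch.rand(3, 512, 512)

with torch.no_grad():
    predictions = model([image])  # list with one dict per input image

boxes = predictions[0]["boxes"]    # proposed and refined bounding boxes (x1, y1, x2, y2)
scores = predictions[0]["scores"]  # confidence per detection
labels = predictions[0]["labels"]  # predicted class indices
print(boxes.shape, scores.shape, labels.shape)
```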
One study detects apple leaf diseases using Faster R-CNN with a combination of feature pyramid networks and multiscale feature extraction to improve the detection of small objects against complex backgrounds. It adopts Res2Net, improves the feature pyramid networks for effective feature extraction, and replaces RoIPool with RoIAlign to enhance region accuracy, with soft non-maximum suppression used for additional detection refinement. The method provided a mean average precision of 63.1%, better than other object detection methods, underscoring its promise for practical agricultural applications as an accurate and efficient way to detect and manage apple leaf disease [89]. Another work presents a hybrid contrast stretching method for a real-time apple leaf disease classification system: infected areas are detected by Mask R-CNN, and features are extracted by pre-trained CNN models. A new feature selection method based on Kapur's entropy with a multi-class SVM (EaMSVM) is proposed for classification. Using an ensemble subspace discriminant analysis (ESDA) classifier on the Plant Village dataset, the system reached an accuracy of 96.6%, illustrating the performance of the proposed framework relative to previous methodologies [90].
A multi-class apple detection method based on the Faster Region-Convolutional Neural Network (Faster R-CNN) was developed for dense-foliage fruiting-wall trees. The approach classifies apples into four classes: non-occluded, leaf-occluded, branch/wire-occluded, and fruit-occluded. On 800 images augmented to 12,800, the mean average precision scores were 0.909, 0.899, 0.858, and 0.848 for the respective categories, with an overall score of 0.879. Processing took 0.241 seconds per image on average, providing a solid basis for effective fruit detection and for improving robotic picking strategies [91,92].

2.2.4. Transfer-Learning:

In simple terms, transfer learning means that a machine learning model developed for one task is reused as the starting point for a model on a second task. Previously trained models are adapted to the new task using a relatively small dataset. In agricultural disease detection, for example, this allows models trained on large image datasets to be applied in customized applications such as leaf disease classification; a minimal illustrative sketch is given after Figure 5.
Figure 5. Working of Transfer learning.
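A minimal transfer-learning sketch in PyTorch is shown below: an ImageNet-pretrained ResNet-50 backbone is frozen and only a new classification head is trained. The four-class setup and hyperparameters are illustrative assumptions, not the configuration of any specific study cited here:

```python
import torch
import torch.nn as nn
from torchvision import models

# Adapt an ImageNet-pretrained ResNet-50 to, e.g., 4 leaf classes
# (healthy, scab, cedar rust, black rot) -- the class count is illustrative.
num_classes = 4
model = models.resnet50(weights="IMAGENET1K_V2")

# Freeze the pretrained backbone so only the new head is trained on the small dataset.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a new classifier head.
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of leaf images.
images = torch.randn(16, 3, 224, 224)
labels = torch.randint(0, num_classes, (16,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```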
In one study, a deep learning based multi-class classification model called AppleNet is proposed for real-world apple leaf disease classification. Using transfer learning, the ResNet-50 model, pretrained on ImageNet, extracts visual features from a novel dataset of 2,897 images captured in natural conditions with background noise on apple surfaces, which increases robustness while reducing training costs. The classification accuracy attained by AppleNet was 96.00%, surpassing other pretrained and baseline models by an average margin of 21.54%. For precision horticulture, the model performs well, detecting a range of apple diseases based on color, texture, and shape [93].
One study proposed a fine-tuned EfficientNetB3 for early recognition of eleven apple foliar diseases using a dataset of 23,187 RGB images. The EfficientNetB3 model surpassed InceptionResNetV2, ResNet50, AlexNet, and VGG16 (all fine-tuned with extra layers) in precision, recall, accuracy, and F1-score, yielding 86% precision, 88% recall, and an 86% F1-score with a batch size of 32, 10 epochs, and Adam, SGD, and Adagrad optimization; this method allows accurate, rapid diagnosis of foliar diseases, aiding sustainable apple crop management and improved yield [94]. A reliable deep learning-based framework for plant leaf disease recognition combines transfer learning with handcrafted features: a deep feature descriptor is extracted and fused with traditional local texture features for better representation, and centre loss is used to improve feature discriminability by minimizing intra-class and maximizing inter-class variations. The proposed method was validated on two apple leaf datasets and one coffee leaf dataset, showing classification accuracies of 99.79%, 92.59%, and 97.12%, respectively, confirming that the framework captures fine-grained features useful for accurate identification of plant diseases [95]. Other studies present a methodology for apple leaf disease classification based on transfer learning with the EfficientNetV2S architecture: the model captures deep features and makes predictions through a classifier block, class imbalance is handled with runtime data augmentation, and hyperparameters related to resolution, learning rate, and epochs are tuned. Evaluated on the PlantVillage apple leaf disease subset, the model attained 99.21% accuracy, outperforming existing methods and demonstrating its effectiveness for automated diagnosis of foliar diseases [96]. Another work proposes a transfer learning approach for early plant disease detection, testing images from varied sources to validate model generalizability; with a common architecture and training setup, the best-performing optimizer was Adagrad, giving 92% training, 91% validation, and 91% testing accuracy. The model is deployed in a mobile app and works well in field conditions, addressing a real on-the-ground precision farming use case [97]. AppleNet, a deep learning-based multi-class classification model designed for apple disease detection, leverages transfer learning with a ResNet-50 architecture pretrained on ImageNet to extract robust features and achieves high accuracy (96%) on a custom dataset of 2,897 real-world images with background noise. Hyperparameters were fine-tuned for optimal performance, and comparative evaluations with baseline and other pretrained models reveal AppleNet's superior performance, with an average accuracy gain of 21.54%, highlighting the benefits of transfer learning and realistic data collection in plant disease diagnosis [98].
One study examined early apple scab detection using mobile phones and CNN-based artificial intelligence to support the smart horticulture goals of increased yield and reduced pesticide use. Two datasets of infected apple images were used, and because data collection is time-consuming and scab appearance varies, transfer learning was employed. The research compared transfer learning with training from scratch, and statistical analysis at a 0.05 significance level confirmed the superior performance of transfer learning in enhancing CNN accuracy for scab detection applications [99]. A parallel framework for real-time apple leaf disease detection addresses the challenge of low-contrast imagery that hinders accurate classification: a hybrid contrast stretching method enhances image visibility, followed by infected region detection using Mask R-CNN; simultaneously, pre-trained CNN models extract features from the enhanced images, and Kapur's entropy with MSVM (EaMSVM) selects the optimal features. The framework is evaluated on the Plant Village dataset, achieving a top classification accuracy of 96.6% using an ensemble subspace discriminant analysis (ESDA) classifier, and comparative analysis confirms its superiority over existing approaches in disease identification [100]. Another research paper provides a systematic review of 45 articles (2016–2024) dedicated to apple leaf disease detection using computer vision and deep learning. It demonstrates how automated image-based diagnosis can help distinguish diseases with similar symptoms, minimize pesticide application, and reduce crop loss; the work analyzes datasets, deep learning strategies, and state-of-the-art architectures, and outlines the main advances and research gaps in this field [101].
One study fine-tunes VGG-16 to classify apple and grape leaves across several disease classes, including Black Measles, Leaf Blight, Phyllosticta blight, Scab, Black rot, and Cedar rust. The approach uses transfer learning, keeping most layers of the pretrained model fixed and training only the final layer, which reduces training duration by 98.9%; the method quickly and accurately identifies and classifies diseases with a classification accuracy of 97.87%, faster and more accurate than manual diagnosis of apple and grape leaf diseases [102]. Another study builds a custom model on pre-trained networks (ResNet18, AlexNet, GoogLeNet, and VGG16) to classify apple tree leaves as healthy, black rot, apple cedar rust, or apple scab, applying image enhancement techniques to improve performance and reaching 97.25% accuracy on the validation dataset; the findings illustrate how the model can support real-time surveillance of plant health as a viable, cost-effective approach to large-scale agricultural disease monitoring [103]. A further study addresses apple surface defect detection with a weight-contrast transfer MobileNetV3 (WC-MobileNetV3) model; a new dataset with thermal, infrared, and visible apple surface defect images was built, and substantial performance gains were achieved. Compared to MobileNetV3, WC-MobileNetV3 yielded improvements of 16% in accuracy, 14.68% in precision, 14.4% in recall, and 15.39% in F1-score, and it significantly outperformed classical networks such as AlexNet, ResNet50, DenseNet169, and EfficientNetV2 on all metrics, offering accurate and efficient detection for online apple grading [104]. Another work proposes a deep convolutional neural network (DCNN)-based approach for early diagnosis and identification of apple tree leaf diseases (ATLDs): the model combines DenseNet and Xception architectures, replaces the fully connected layers with global average pooling, and classifies apple leaf diseases with a Support Vector Machine. Trained on images from both laboratories and fields, the model reaches an accuracy of 98.82%, superior to Inception-v3, MobileNet, VGG-16, and Xception; it converges faster, has fewer parameters, and offers a more robust solution, which is promising for intelligent apple cultivation systems to control ATLDs [105]. Yet another approach employs transfer learning, fine-tuning the EfficientNetV2S architecture to obtain features that enhance apple leaf disease classification, with data augmentation used to combat class imbalance and improve generalization. The study systematically investigates hyperparameters such as input resolution, learning rate, and epochs to achieve the best performance, reaching 99.21% accuracy and surpassing existing models; these findings indicate that the proposed model could be integrated into apple disease detection systems as an efficient and effective method for automatic apple leaf disease classification, greatly increasing productivity and disease management in apple cultivation [106].
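Several of the studies above pair a pretrained CNN used as a fixed feature extractor with a classical classifier such as an SVM. The sketch below illustrates that general pattern with DenseNet121 features and scikit-learn's SVC; the data are random stand-ins and the class count is an assumption, not the pipeline of any cited paper:

```python
import numpy as np
import torch
from torchvision import models
from sklearn.svm import SVC

# Pretrained CNN as a fixed feature extractor (classifier head removed).
backbone = models.densenet121(weights="IMAGENET1K_V1")
backbone.classifier = torch.nn.Identity()   # network now outputs 1024-d feature vectors
backbone.eval()

def extract_features(images: torch.Tensor) -> np.ndarray:
    with torch.no_grad():
        return backbone(images).numpy()

# Dummy stand-ins for preprocessed leaf images and their labels (4 hypothetical classes).
train_imgs, train_labels = torch.randn(40, 3, 224, 224), np.random.randint(0, 4, 40)
test_imgs = torch.randn(8, 3, 224, 224)

svm = SVC(kernel="rbf", C=10.0)
svm.fit(extract_features(train_imgs), train_labels)
print(svm.predict(extract_features(test_imgs)))
```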
Another research effort proposed a novel automatic fire blight disease detection and segmentation system for infected apple orchard areas based on intelligent sensing technologies and deep learning. The sensitivity of different processing indices to fire blight infection was assessed through analysis of UAV multispectral images and ground camera RGB images, revealing that the RVI had the highest sensitivity to fire blight infection. A Mask Region-Convolutional Neural Network (Mask R-CNN) model, trained on 880 images and tested on 110, achieved a precision of 92.8% and a recall of 91.2%, proving its efficacy in detecting and segmenting infected canopies. These findings show the promise of this approach for site-specific application with minimal intervention in apple orchards [107].
An improved lightweight MobileNetV2 model for crop disease identification is proposed for optimal accuracy and efficiency on mobile and edge devices with limited computing resources. This model improves on MobileNetV2 with an improved Bottleneck structure, reduces operation channels through point-by-point convolution, and introduces the RepMLP (re-parameterized multilayer perceptron) module to capture long-distance dependencies; a channel-attention mechanism and the Hardswish activation function further improve recognition accuracy. On the PlantVillage dataset the model reaches 99.53% accuracy, 0.3% higher than the original ResNet50, with 59% fewer parameters than ResNet50 and an 8.5% improvement in inference speed, making it useful for deploying crop disease identification on edge and mobile devices [108]. The study "Apple Leaf Disease Detection using DenseNet121 Transfer Learning Model" introduces an apple leaf disease detection approach employing the deep convolutional neural network (CNN) architecture of DenseNet121, trained on a large-scale dataset and fine-tuned for disease classification; by leveraging knowledge from a broader dataset, transfer learning improves the reliability and accuracy of the model in diagnosing diseases. Liu et al. [109] proposed an approach based on YOLOv4 that uses fewer layers to achieve better disease detection in apple leaves while maintaining a low computational cost and training time, an efficient solution for practical applications in agriculture. Another research work investigates deep convolutional neural networks (CNNs) for detecting and identifying plant diseases from plant leaves. To overcome the high computational cost of standard CNN models, depth-wise separable convolution was introduced, reducing both the number of parameters and the computational requirements. The models were trained on 14 plant species and 38 disease classes and produced high classification accuracies: 98.42% for InceptionV3, 99.11% for InceptionResNetV2, 97.02% for MobileNetV2, and 99.56% for EfficientNetB0. The results surpass traditional handcrafted-feature approaches in both training time and accuracy, and because MobileNetV2 optimizes its parameters for mobile device deployment, it is a promising candidate for real-time disease detection in agricultural systems [110]. Another paper, carried out as part of the European Union Horizon 2020 OPTIMA project, aims at automated detection of downy mildew, apple scab, and Alternaria leaf blight through deep convolutional neural networks (CNNs) applied to RGB images, with the goal of improving disease detection in commercial fields within an integrated pest management (IPM) framework. The model was evaluated in open-set (field) and closed-set (controlled) scenarios: the closed-set setting resulted in higher F1 scores of 66.3% (downy mildew), 45.1% (apple scab), and 42.1% (Alternaria), whereas in the open-set scenario the F1 scores fell substantially to 34.8%, 5.5%, and 4.2%, respectively. The study emphasized open-set evaluation, which is critical for a realistic picture of model performance in conditions closer to the real world; the dataset is available online for researchers [111].
The Deep Leaf Disease Prediction Framework (DLDPF) proposed in this paper is a hybrid method for leaf disease detection that combines Convolutional Neural Networks (CNNs) with transfer learning using pre-trained models such as AlexNet and GoogLeNet. The framework uses the CIDCNN-TL algorithm, with the classification implemented in Keras and TensorFlow. The DLDPF is compared with deep learning models such as AlexNet, GoogLeNet, VGGNet16, and ResNet20 on apple leaf datasets, and the experimental results suggest that DLDPF outperforms these models for automatic leaf disease prediction, demonstrating its potential for improving precision agriculture in India [112].
Here, a transfer learning method based on the VGG19 model is applied to apple leaf disease classification, yielding a high accuracy of 98.71% on the validation data. Fine-tuning a pre-trained model in this way is well suited to precision agriculture, as it supports early detection of diseases; such development not only aids sustainable orchard management but also alleviates economic losses and improves disease management. Future work might optimise the model further and evaluate it under dynamic agricultural conditions, encouraging better crop management strategies [113].
A deep learning-based multi-class classification model for apple plant disease detection named AppleNet is proposed. The model uses ResNet-50 with transfer learning as its feature extractor, building on the rich visual representations learned from ImageNet. Using a dataset of 2,897 real-world images expanded with data augmentation, AppleNet achieved a classification accuracy of 96.00%, an improvement of 21.54% over other pre-trained and baseline models; the study demonstrates the efficiency of transfer learning and provides a better solution for effective and precise detection of apple diseases in the field [114]. Another comparison covered three deep learning models, ReXNet-150, EfficientNet, and ResNet-18, for apple leaf disease detection. Notably, ReXNet-150 achieved an F1-score of 0.988, precision of 0.989, and recall of 0.989 using only 1,730 high-definition images with transfer learning, while EfficientNet and ResNet-18 reached F1-scores of 0.966 and 0.977, respectively. The ReXNet-150 model can be embedded into a TensorFlow Lite-based Android application that detects diseases in real time with an accuracy of 98.9%, making it deployable for agricultural purposes; this implementation highlights the potential for mobile deployment in precision agriculture, with subsequent efforts aimed at multi-crop detection and real-time video analysis [115].

2.2.5. Lightweight Models:

Lightweight models are designed to run on hardware with limited computational or memory capacity by reducing the number of parameters and computations. Models such as MobileNet and SqueezeNet strike a trade-off between performance and efficiency, which suits real-time applications in domains like agriculture, where devices with limited computing resources are used for disease detection [198].
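The parameter savings behind MobileNet-style lightweight models come largely from replacing standard convolutions with depthwise separable convolutions. The short PyTorch sketch below compares the parameter counts of the two forms for one illustrative layer configuration (the channel sizes are assumptions, not taken from any cited model):

```python
import torch.nn as nn

def count_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

in_ch, out_ch, k = 128, 256, 3

# Standard convolution: one large kernel bank over all input channels.
standard = nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=1)

# Depthwise separable convolution: per-channel spatial filtering (groups=in_ch)
# followed by a 1x1 pointwise convolution that mixes channels.
separable = nn.Sequential(
    nn.Conv2d(in_ch, in_ch, kernel_size=k, padding=1, groups=in_ch),  # depthwise
    nn.Conv2d(in_ch, out_ch, kernel_size=1),                          # pointwise
)

print("standard :", count_params(standard))   # ~295k parameters
print("separable:", count_params(separable))  # ~34k parameters
```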
One study was the first to adopt the GSE_YoloV5s model for the detection of apple leaf disease. Using a Ghost Bottleneck instead of the CSP Bottleneck and an SE attention module, the proposed approach reduces the computational load while improving small lesion detection; it outperforms the baseline YOLOv5s model in both speed and accuracy with 40% fewer parameters (AP = 83.4%) [116].
Agriculture is a lifeline for mankind, providing food and income, and various diseases affect different fruits and crops. Mosaic, Frogeye leaf spot, and other pathological conditions affect apple tree leaves and reduce production; because infection can limit apple growth and harm national assets, image processing is used to detect apple leaf disease once the crop is damaged, allowing a clear separation between infected and healthy leaves. Traditionally, these diseases were inspected by eye, but because leaves often look alike, visual inspection can yield misleading results, delayed decisions, and unreliable timing, in addition to requiring manual labour. One paper therefore proposes a CNN model and an accompanying algorithm, based on Inception v3 and trained on a purpose-built sample, to analyse infected leaves and identify the disease present [117]. The EADD-YOLO model was proposed to identify a diverse range of apple leaf diseases and to overcome issues such as large parameter counts, slow detection speed, and poor performance on tiny, dense spots. A ShuffleNet inverted residual module builds a lightweight backbone that decreases parameters and floating-point operations (FLOPs), improving computational efficiency at little cost in detection precision; a depth-wise convolution-based feature-learning module was added to the neck network to enhance feature extraction and fusion; a coordinate attention module highlights key locations to improve detection of objects of different sizes in different scenes; and the CIoU loss is replaced by the SIoU loss function to improve bounding box localization precision [118]. Another study examines the spatiotemporal relationship between eco-environmental quality and land-use carbon emissions (LCE) in Qingdao City, China, from 2005 to 2019. The study develops an improved LCE model, integrating the remote sensing ecological index (RSEI) and decoupling theory with Google Earth Engine (GEE) and GIS platforms, using raster datasets for carbon emissions, remotely sensed environmental variables, and land-use changes. The main findings are: 1) from 2005 to 2019, RSEI increased from 0.4365 to 0.5378; 2) LCE increased from 4.028 million tonnes in 2005 to 7.929 million tonnes in 2019; and 3) four decoupling states were identified: weak decoupling, expansive negative decoupling, strong decoupling, and strong negative decoupling. These trends indicate the need for spatially managed policies to balance carbon emission reductions and sustainable development [119]. ALAD-YOLO is an apple leaf disease detection model designed for high efficiency and accuracy, balancing detection accuracy against speed. Using a dataset of 2,748 diseased apple leaf images captured under different environmental conditions, three optimizations are incorporated into the model: the basic blocks of MobileNet-V3s, coordinate attention (CA) in the backbone, and group convolutions in the SPPCSPC module. These adaptations reduce the model footprint while preserving detection performance.
Experimental results show that the proposed ALAD-YOLO achieves a mAP of 90.2% (9.3% higher than YOLOv5s) while reducing floating-point operations (FLOPs) to 6.1 G, a reduction of 9.7 G compared with YOLOv5s. This speed-accuracy trade-off makes the model a good candidate for apple leaf disease detection and potentially other applications [120]. An AlexNet-based convolutional neural network (CNN) was developed for detecting five diseases on apple tree leaves. The model uses dilated convolution to extract coarse-grained disease features with wide receptive fields and few parameters; several 3 × 3 convolution shortcut connections are added to introduce more nonlinearity; an attention mechanism is applied to the output modules to improve the fitting of channel features and minimize the impact of background noise; and global pooling replaces the two fully connected layers, which reduces the number of model parameters while retaining the integrity of the features. The resulting model is 5.87 MB in size and achieves a recognition accuracy of 97.36%; compared with five other models, it is practical, lightweight, and detects apple leaf diseases with very high accuracy [121].
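To illustrate two of the tricks mentioned above, dilated convolutions (for wider receptive fields without extra parameters) and global average pooling in place of fully connected layers, the following is a minimal, hypothetical classification head in PyTorch; it is not the architecture of the cited AlexNet-based model:

```python
import torch
import torch.nn as nn

class DilatedGapHead(nn.Module):
    """Illustrative head: dilated convolutions widen the receptive field,
    and global average pooling replaces heavy fully connected layers."""
    def __init__(self, in_channels: int, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 128, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=3, padding=4, dilation=4),
            nn.ReLU(inplace=True),
        )
        self.gap = nn.AdaptiveAvgPool2d(1)          # global average pooling
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        x = self.gap(self.features(x)).flatten(1)
        return self.classifier(x)

feat = torch.randn(2, 64, 32, 32)       # hypothetical mid-level feature map
print(DilatedGapHead(64)(feat).shape)   # torch.Size([2, 5])
```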
A new lightweight convolutional neural network based on RegNet was designed for fast and efficient classification of apple leaf diseases. The dataset consists of 2,141 images covering rust, scab, ring rot, and Panonychus ulmi damage as well as healthy leaves, and the approach is compared with state-of-the-art CNNs, including ShuffleNet, EfficientNet-B0, MobileNetV3, and even Vision Transformer. With a learning rate of 0.0001, RegNet-Adam obtains a remarkable 99.8% accuracy on the validation set and 99.23% on the test set, surpassing all other pre-trained models; by utilizing transfer learning, the study shows that the suggested method enables rapid and precise detection of apple leaf diseases [122]. MEAN-SSD is a lightweight CNN model better suited to real-time detection of apple leaf diseases on mobile devices. It is trained on a custom dataset, AppleDisease5, containing simple- and complex-background images of five common apple leaf diseases: Alternaria blotch, Brown spot, Mosaic, Grey spot, and Rust. The MEAN (Mobile End AppleNet) block is proposed to decrease the number of operations by restructuring the standard 3 × 3 convolution to require less space, and the Apple-Inception module, inspired by GoogLeNet's Inception module, substitutes the MEAN block for all 3 × 3 convolutions. Experimental results show MEAN-SSD reaching 83.12% mAP at 12.53 FPS, demonstrating its suitability for efficient and precise disease detection on mobile deployments [123]. Another work uses a lightweight YOLOv8n-based model to detect apple leaf diseases, with an emphasis on mobile and embedded devices. The major updates involve substituting standard Conv layers with GhostConv and part of the C2f structure with C3Ghost to lower the model's parameter count while increasing performance. The model also introduces a Global Attention Mechanism (GAM) to enhance lesion detection and an improved Bi-Directional Feature Pyramid Network (BiFPN) spanning the sub-networks to optimize feature fusion, allowing useful features from other layers to be integrated, which is a common challenge for small lesions. Experiments yield a 32.9% reduction in computational complexity, a 39.7% reduction in model size (3.8 M), and a 3.4% performance improvement (mAP@0.5 of 86.9%). Compared with YOLOv7-Tiny, YOLOv6, YOLOv5s, and YOLOv3-Tiny, the proposed YOLOv8n-GGi model performs better in detection accuracy, model size, and overall performance, making it well suited to real-time apple disease detection on mobile and embedded devices [124].
This paper presents MGA-YOLO, a mobile device-based one-stage detection model for on-site, real-time diagnosis of apple leaf disease. It adopts the Ghost module to reduce FLOPs and parameters, combines CBAM to improve feature extraction, and employs Mobile Inverted Residual Bottleneck Convolution; a new dataset, ALDOD (8,838 images), enables training against complex backgrounds. MGA-YOLO reaches a mAP of 94.0% with augmentation, a model size of 10.34 MB, and 12.5 FPS on mobile devices, making it highly suitable for in-field disease detection [125].
In this study, LAD-Net, a lightweight and real-time model, was proposed for the early detection of apple leaf diseases, including aphid damage, rust, and powdery mildew, on mobile devices. It employs AD Convolution to decrease parameters and LAD-Inception for improved multiscale feature extraction. Overall performance is high despite a tiny model size (1.25 MB) and fast detection (15.2 ms on a Huawei P40), producing 98.58% accuracy with LR-CBAM and global pooling, which makes LAD-Net practical for mobile-based detection [126].
The research proposes RepDI, a lightweight apple leaf disease identification model trained solely on apple leaf images, which uses structural re-parameterization and depth-wise separable convolution to optimize the model for CPU devices. It is designed with a multi-branch structure and parallel dilated attention to capture features over large receptive fields. Tested on the challenging Real-ALD dataset, RepDI achieved 98.92% accuracy and the fastest CPU inference among comparable lightweight models, marking it as a strong contender for real-world agricultural applications [127]. The main contribution of another study is an optimized RegNet model for recognizing similar diseases that occur simultaneously on the same leaf; by manipulating training strategies, using data augmentation, and varying background complexity, the results indicate that transfer learning combined with offline data expansion substantially improves classification performance. The two datasets yielded accuracies of 93.85% and 99.23%, showing promising generalization and robustness in complex field contexts [85]. Another model, ALAD-YOLO, is based on YOLOv5s and achieves lightweight yet accurate apple leaf disease detection; it is implemented with MobileNet-V3s blocks, coordinate attention, and efficient convolution modules (group, depth-wise, Ghost) to decrease model size and computational complexity, using data (2,748 images) recorded under challenging environmental conditions. With only 6.1 GFLOPs, ALAD-YOLO achieved an accuracy of 90.2%, far better than YOLOv5s in precision and efficiency [129]. To deploy models on mobile devices, the ELM-YOLOv8n model introduces major innovations for apple leaf disease detection in complex environments: it integrates the Fasternet Block into the YOLOv8n architecture to reduce parameters and computational load, incorporates Efficient Multi-Scale Attention (EMA) to improve feature extraction against complex backgrounds, and introduces a DESCS-DH head to better capture edge details across scales. The model improves localization of small targets by using the NWD loss function instead of CIoU. Achieving 96.7% mAP@0.5 and a 94.0% F1-score, it greatly outperforms the baseline YOLOv8n while reducing parameters and computation by 44.8% and 39.5%, respectively [130]. The YOLOv8-GSSW model is a lightweight, enhanced YOLOv8n for apple leaf disease detection: incorporating the GSConv module in the backbone reduces computational complexity and parameter count by 15.6%, the Slim-Neck architecture with SA attention enhances channel assignment and correlation, and, to address sample imbalance and convergence problems of the CIoU loss, it is replaced with WIoU. The result is a small model with 2.7 M parameters and a size of 5.4 MB that achieves 87.3% mAP while remaining suitable for deployment on edge devices [131].
To improve the precision of small apple leaf spot detection in complex orchard environments, detection methods based on YOLOv5-Res and the lightweight YOLOv5-Res4 are proposed in this study. The ResBlock module integrates Inception and ResNet ideas for multiscale feature extraction, and the C4 module lowers parameter counts and enhances small-target detection. Results show mAP@0.5 improvements of +2.8% (YOLOv5-Res) and +2.2% (YOLOv5-Res4), with model sizes of about 10.8 MB and 2.4 MB and parameter reductions of 22% and 38.3% compared with YOLOv5s and YOLOv5n, respectively [132]. Apple-Net offers a novel approach to apple leaf disease detection based on YOLOv5, introducing a Feature Enhancement Module (FEM) to enhance multi-scale outputs and adding Coordinate Attention (CA) to improve detection efficiency. Addressing disease diversity and semantic limitations, Apple-Net achieves superior performance, with 95.9% mAP@0.5 and 93.1% precision, besting four traditional models and demonstrating robustness for intelligent agricultural applications [133]. A-Net is an extension of YOLOv5 that implements effective apple leaf disease detection by adding the Wise-IoU loss function, with its attention-based dynamic focusing, and by substituting RepVGG blocks for the original convolution modules to improve speed and accuracy. The model effectively suppresses the growth of error weights and obtains 92.7% accuracy and 92.0% mAP@0.5, superior to other detection models and usable in practical agricultural settings [134]. To reduce parameters and speed up processing for apple leaf disease detection, an improved TPH-YOLOv5-MobileNetV3 algorithm is constructed with the SimAM attention module and MobileNetV3. The proposed model achieves high precision (91.41%), recall (91.94%), F1-score (91.67%), and mAP (94.26%) with only 4.5 M parameters, allowing faster detection (17.05 ms per image); it offers an efficient balance between recognition accuracy and speed compared with other models and can be used for real-time, non-destructive apple leaf disease detection [135]. YOLOv5-CBAM-C3TR integrates attention mechanisms and a Transformer encoder into YOLOv5 for better apple leaf disease detection, achieving a mAP@0.5 of 73.4%, precision of 70.9%, and recall of 69.5% for diseases such as Alternaria blotch, Grey spot, and Rust, and boosting mAP by 8.25% over the baseline YOLOv5 with only a few extra parameters. It can distinguish visually similar diseases growing in close proximity in the orchard, achieving 93.1% accuracy for Alternaria blotch and 89.6% for Grey spot, making it a promising advance in disease detection technology for the apple industry [136]. An improved YOLOv5s is also suitable for multi-scale apple leaf disease detection in complex natural scenes: the model adopts BiFPN to exploit multi-scale features with high accuracy, as well as a Transformer and CBAM attention mechanism to robustly express disease features and reduce background interference, reaching a mAP of 84.3% with a throughput of 8.7 images per second on an octa-core CPU. It improves substantially in speed and accuracy over SSD, Faster R-CNN, and YOLOX, while also performing well in the presence of noise; this lightweight, accurate model is well suited to deployment on mobile devices, facilitating early intervention for apple leaf diseases [137].
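Since most of these detectors are reported in terms of mAP@0.5 and IoU-based losses (CIoU, SIoU, WIoU), a small worked example of the underlying intersection-over-union computation may be helpful; the boxes below are made-up pixel coordinates, not values from any cited experiment:

```python
def box_iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A predicted lesion box versus a ground-truth annotation (pixel coordinates).
pred, truth = (30, 40, 110, 120), (35, 50, 115, 125)
print(f"IoU = {box_iou(pred, truth):.2f}")  # ~0.73; detections with IoU >= 0.5 count as hits in mAP@0.5
```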
This study presents an enhanced YOLOv5 framework that performs apple detection on images captured directly in farms, overcoming the inherent limitations of image noise, blurriness, and complex backgrounds. To boost feature quality and recognize smaller objects, the proposed model applies an adaptive pooling scheme together with a set of feature-augmenting modules, and it employs a custom loss function to enhance bounding box accuracy, leading to detection results with precision, recall, and F1-score values of 0.97, 0.99, and 0.98, respectively. This approach represents a substantial advance in apple detection and is well suited to automating the apple harvesting process [138].
FSM-YOLO is an enhanced convolutional neural network (CNN) model for detecting apple leaf diseases in unstructured environments. The model introduces an Adaptive Feature Enhancement Module (AFEM) to improve feature extraction, adds a Spatial Context-aware Attention (SCAA) module for in-depth modelling of spatial relationships in the image, and adopts Multi-kernel Mixed Convolution (MKMC) to obtain features at different scales. Compared with YOLOv8s, FSM-YOLO improves mAP@0.5 by 2.7%, precision by 2.0%, and recall by 4.0%, and it outperforms existing algorithms on six datasets, demonstrating that it is both robust and suitable for plant disease detection [139]. Another work implements YOLO (You Only Look Once) networks for early disease detection and health monitoring of apple trees infected by Apple scab, Black rot, and Cedar apple rust. The paper proposes a system relying on custom-made drones to capture video images of the trees and emphasises the need for continuous surveillance so that agricultural losses can be reduced through early detection; it also assesses the advantages and disadvantages of disease detection approaches, including YOLOv3 [139], along with image labelling and data training/testing [140].
To identify pickable and unpickable apples in apple tree images for robotic pickers, this paper proposed a lightweight apple detection method based on YOLOv5s, featuring BottleneckCSP-2, an SE attention module, improved feature map fusion, and more effective anchor box sizes. The recognition recall, precision, mAP, and F1-score from the experimental results were 91.48%, 83.83%, 86.75%, and 87.49%, respectively. Compared with YOLOv5s, YOLOv3, YOLOv4, and EfficientDet-D0, the proposed model obtained higher mAP and faster recognition speed, making it well suited to real-time targeted detection of apples for robotic systems [141]. After analysing apple leaf diseases, another paper proposes an enhanced detection model based on YOLOv5 that introduces a Feature Enhancement Module (FEM) and coordinate attention (CA) to cover greater disease diversity and improve detection accuracy; the FEM enriches the multi-scale outputs, and CA supports effective detection. Experimental results show that Apple-Net achieves a higher mAP@0.5 (95.9%) and precision (93.1%) than four classic detection models, demonstrating its efficacy in apple leaf disease identification [142].
TPH-YOLOv5-MobileNetV3 is an enhanced apple leaf disease detection algorithm obtained by improving the dataset and using the SimAM attention module together with the MobileNetV3 architecture to shrink the network. The model has 4,537,842 parameters and achieves precision, recall, F1, and mAP of 91.41%, 91.94%, 91.67%, and 94.26%, respectively. The trained model is 11.8 MB and runs at 17.05 ms per image, reducing both the number of parameters and the detection time compared with models such as YOLOv5 and TPH-YOLOv5 [143].

2.2.6. Hybrid Models

A hybrid model merges the strengths of multiple machine learning or deep learning techniques. In agriculture, hybrid models combine traditional methods (for example, SVM) with deep learning techniques (for example, CNNs and transfer learning). By improving feature extraction and effectively managing diverse datasets, these models balance accuracy and robustness, especially in complex tasks such as disease detection. One hybrid CNN-LSTM model classifies apple leaves with foliar diseases while avoiding the disadvantages of standalone CNNs and LSTMs, such as overfitting, class imbalance, and gradient issues; the CNN extracts features and the LSTM captures sequential patterns, achieving 98.00% accuracy, 95.00% specificity, 96.00% sensitivity, and 94.00% AUC on the dataset. The combined CNN-LSTM results reveal better performance than the individual CNN and LSTM in accurately identifying both disease-positive and disease-negative apple leaf images, suggesting robust CNN-LSTM-based apple disease detection solutions [144].
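As a toy illustration of the CNN-LSTM hybrid idea, the sketch below lets a small CNN extract spatial features from a leaf image and an LSTM aggregate them row by row before classification; the architecture and shapes are assumptions for demonstration, not the model of [144]:

```python
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    """Toy hybrid: a small CNN extracts features, an LSTM aggregates them row by row."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=128, batch_first=True)
        self.fc = nn.Linear(128, num_classes)

    def forward(self, x):
        feat = self.cnn(x)                       # (B, 64, H', W')
        seq = feat.mean(dim=3).permute(0, 2, 1)  # treat each row as a time step: (B, H', 64)
        _, (hidden, _) = self.lstm(seq)
        return self.fc(hidden[-1])               # classify from the last hidden state

imgs = torch.randn(4, 3, 128, 128)               # dummy leaf image batch
print(CNNLSTMClassifier()(imgs).shape)           # torch.Size([4, 2])
```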
An innovative apple recognition and 3D localization solution was developed for robotic harvesting in cluttered orchard environments. Object detection was combined with Deep SORT for apple tracking and counting, using YOLO variants (YOLOv4, YOLOv5, YOLOv7) and EfficientDet models applied to images captured with a RealSense D455 RGB-D camera under varying speeds and camera angles. Results show that YOLOv7 achieves the best mAP@0.5 (0.905), while EfficientDet had the lowest RMSE (1.54 cm) at 15° and 0.098 m/s; the system improves apple localization and counting accuracy under dynamic field conditions for robotic arm deployment [145]. Another study proposes MEAN-SSD, a real-time CNN-based model customized for mobile devices and specifically designed to detect apple leaf diseases. Focusing on five common diseases (Alternaria blotch, Brown spot, Mosaic, Grey spot, and Rust), the AppleDisease5 dataset, containing both simple and complex backgrounds, was used for training. Notable developments of the architecture include the MEAN block, which replaces the standard 3×3 convolution to promote efficiency, and the Apple-Inception module, which augments GoogLeNet's Inception structure with MEAN blocks. MEAN-SSD attains 83.12% mAP at 12.53 FPS, suitable for accurate, real-time mobile deployment in agricultural environments [146]. Another method focuses on deep learning-based detection of apple surface anthracnose lesions: CycleGAN and traditional augmentation techniques are used for data expansion to compensate for the limited data caused by sporadic disease occurrence, and the YOLOv3 model is combined with DenseNet to better exploit low-resolution feature layers, improving feature utilization and detection accuracy. Experimental results show that the improved model surpasses Faster R-CNN with VGG16 and the original YOLOv3 trained for the same number of epochs while achieving real-time detection, providing a robust and effective approach for early orchard-based anthracnose detection using optical sensors [147]. A further paper gives an overview of existing AI-based methods for plant disease detection, classifying them as machine learning or deep learning techniques and describing their capabilities, shortcomings, and the datasets and metrics used for evaluation. It cites challenges that remain, including data scarcity and the ability to deploy decisions in real time, and ends with recommendations for future studies to improve accuracy and application in the agricultural domain [148].
Other work discusses how ML and DL contribute significantly to the early detection and management of biotic and abiotic stresses in plants. These techniques analyze large datasets (e.g., TERRA-REF) collected with high-throughput equipment (UAVs, satellites, and hyperspectral imaging) to predict stress responses; the key observations concern AI in stress phenotyping, trait identification, and resilience improvement through diagnostic mechanisms [149]. Setu et al. (2024) propose a machine learning approach for combined binary classification of diabetes using pathological data from 768 patients. The methodology combines four algorithms—Fuzzy C-Means, K-means, a Fuzzy Inference System (FIS), and a Support Vector Machine (SVM)—with an entropy-based probability fusion technique; feature reduction was conducted with MLR before model combination, and logistic regression was used to account for outliers. The integrated approach achieves about 94% detection accuracy, successfully combining various ML methods for enhanced diagnostic efficacy [150]. Another study classifies apple leaf diseases by applying machine vision and machine learning to extract color and texture features from images: apple scab, black rot, and cedar apple rust were detected with high accuracies of 95.83% and 96.68% using Gaussian process regression and SVM classifiers, respectively, which shows the potential of image-based feature extraction and classification in disease analysis [151].
Table 2. Comparison of Models.
Approach | Model / Paper | Accuracy (%) | F1-score (%) | Precision (%)
CNN | CNN baseline (various) | 91–97 | 88–96 | 90–95
YOLOv5 | MEAN-SSD, YOLOv5 variations | 97.9–99.6 | 97.7–99.5 | 97–99.7
Transfer Learning | EfficientNet, MobileNet | 98.6–99.7 | 97.7–99.6 | 97.1–99.8
Hybrid Lightweight | Efficient-ECANet, PDICNet | 99.71 | 97.77 | 99.41
SVM | Traditional ML + Features | 85–93 | 80–91 | 83–90
Naive Bayes | Traditional ML | 80–88 | 75–86 | 78–87
Machine Vision | Color/texture/shape-based | 75–90 | 70–88 | 72–89

3. Machine Vision and Image Processing:

Machine vision plays an indispensable role in agricultural applications, especially for disease detection, grading, and crop management. Cameras and sensors capture visual data, and image processing techniques analyze and enhance these data to generate meaningful insights. Thresholding, edge detection, segmentation, and feature extraction help identify specific patterns (such as disease symptoms on leaves) or classify fruit quality. With this technology, automated systems can make accurate decisions, optimizing farming processes while minimizing manual labour [199].
Figure 6. Workflow of Machine Vision.
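The machine vision operations described above (thresholding, segmentation, feature extraction) can be prototyped in a few lines with OpenCV; the sketch below segments dark, lesion-like regions and reports simple per-spot features. The file name and thresholds are hypothetical choices for illustration:

```python
import cv2

# Minimal machine-vision pipeline: segment dark lesion-like regions on a leaf image
# and extract simple shape/colour features for a downstream classifier.
image = cv2.imread("leaf.jpg")                         # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# Otsu thresholding separates candidate lesion pixels from the brighter leaf background.
_, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Contours of the segmented regions give per-spot shape and colour features.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area = cv2.contourArea(c)
    if area < 50:                                      # skip tiny noise specks
        continue
    x, y, w, h = cv2.boundingRect(c)
    mean_colour = cv2.mean(image[y:y + h, x:x + w])[:3]
    print(f"spot area={area:.0f}px  bbox=({x},{y},{w},{h})  mean BGR={mean_colour}")
```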
One review examines how computer-assisted technologies, specifically machine learning (ML), deep learning (DL), and other domain-specific approaches, are currently impacting disease detection in apple crops. It evaluates the methods on metrics such as accuracy, precision, recall, F1-score, speed, and error rate, and finds that CNN-based DL models are significantly more effective than standalone or combined ML approaches, performing especially well on real-time 3D image data. It also draws attention to the limited research on post-harvest disease detection and introduces biotic and abiotic stressors as influential factors in disease incidence and detection accuracy for crops such as apple [152]. A novel thermal image-based computer vision method was proposed for segmenting unhealthy apple tree leaves using neural networks optimized for embedded systems, with a comparative study across standard embedded platforms, GPU-accelerated embedded systems, and PCs. The final Intersection over Union (IoU) score for segmentation was 0.814, confirming satisfactory segmentation performance; these favourable outcomes demonstrate that fine-tuned neural networks running on embedded hardware provide a practical, energy-efficient solution for real-time detection of diseased leaves in precision agriculture and underline the promise of low-power, field-deployable technologies in orchard settings [153]. Another work proposes an image processing technique for classifying apple diseases based on color, texture, and shape features: K-means clustering is used for infected region segmentation, followed by feature extraction and fusion, and a multi-class Support Vector Machine (SVM) classifies apples as blotch, rot, scab, or healthy. Results suggest that combined features outperform individual ones, whereas shape features are less effective when used alone, yielding a strong classification scheme that highlights the value of combining color and texture characteristics for identifying apple diseases [154]. A survey of machine learning-based approaches for detecting apple leaf and fruit diseases notes that early detection of outbreaks such as black rot, black measles, leaf blight, and mites is critical to prevent economic losses and maintain sustainability in agriculture. It highlights the need to develop algorithms specific to different plant types and applications, with extensive feature extraction (leaf area, width, length, etc.) to increase accuracy, identifies gaps in current research, and proposes a system to guide future work on leaf disease detection [155].
For example, one study offers a method that corrects the uneven distribution of brightness in fruit images caused by lighting or limitations of the vision system. The method converts the light inhomogeneity of spherical objects into a uniform distribution, enabling easier extraction of defective areas with a global threshold. Using 100 apple images as experimental data, it achieved a classification rate of 94.0%, highlighting the simplicity and effectiveness of the approach; the algorithm can also be applied to other spherical fruits [156]. In another study, a neural network was trained to classify apples as having scab, bitter rot, black rot, or being healthy, providing a low-cost apple disease diagnosis system based on color and texture features extracted from apple images. Only 60% of the dataset was used for training the multi-layer perceptron, with the remainder reserved for testing; a two-layer network with eight neurons per layer produced the highest accuracy of 73.7%, offering a workable method for apple disease classification [157]. A 19-layer convolutional neural network (CNN) for classifying Marssonina coronaria and apple scab diseases of apple leaves was trained on a dataset of 50,000 leaf images from apple farms across Himachal Pradesh and Uttarakhand, with image augmentation applied to further enlarge the dataset. Its accuracy of around 99.2% exceeded that of CNN models with fewer layers and of traditional machine learning classifiers such as support vector machines, k-nearest neighbours, random forests, and logistic regression [158]. DaSNet-V2 is a deep learning-based one-stage detector for robotic fruit harvesting in orchards that performs fruit detection, fruit instance segmentation, and branch semantic segmentation within a single network architecture. For computational efficiency, a lightweight backbone network (LW-net) is integrated into the approach. Tested on RGB-D images of an apple orchard, it reached a detection F1 score of 0.844, fruit instance segmentation of 0.858, and branch semantic segmentation of 0.795; 3D visualization of the results shows DaSNet-V2's robustness and real-time efficiency in real-world orchard settings [159].
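The CNN classifiers discussed above follow the standard convolution/pooling/classification pattern; the compact PyTorch sketch below (far shallower than the 19-layer network of [158]) shows that structure for a four-class leaf problem. The class count, input size, and layer widths are assumptions.

```python
import torch
import torch.nn as nn

class SmallLeafCNN(nn.Module):
    """A deliberately small CNN: three conv blocks, global pooling, linear head."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallLeafCNN(num_classes=4)          # e.g. scab, rot, blotch, healthy (assumed classes)
logits = model(torch.randn(8, 3, 224, 224))  # dummy batch of 224x224 RGB leaf images
print(logits.shape)                          # torch.Size([8, 4])
```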
A large body of work covers pest detection and prediction, the diseases involved, and the crop types infected by pests. One such review focuses on the application of machine learning, deep learning, and domain-specific methods for disease detection in apple crops, analysing the strengths and weaknesses of numerous methods against performance metrics of accuracy, precision, recall, F1-score, speed, and error rate. Machine learning and deep learning each have limitations in disease detection; however, combining deep learning, particularly convolutional neural networks (CNNs), with real-time and 3D image data shows greater potential for broader adoption. The work also investigates the biotic and abiotic factors influencing apple disease incidence, providing insights into factors that may enhance detection [160]. Another proposed method helps apple growers detect apple leaf diseases with a deep learning algorithm trained on Mosaic disease and Black Rot images, diseases that decrease the yield and quality of apples. The method uses rotated bounding boxes and proposes a ProbIoU loss function to improve the precision of model predictions. To address class imbalance, the study applied data augmentation after combining the PlantVillage dataset with an on-site orchard dataset from Shandong, China. Modules of the EfficientNetV2 architecture, such as FusedMBConv and S-MBConv, reduce information loss, while the SimAM attention mechanism improves feature extraction; depth-wise separable convolution and CAF further enhance feature fusion. The proposed model achieved 93.3% mAP@0.5, 88.7% precision, and 89.6% recall under the MS COCO evaluation protocol, and the experimental results validate its improved AP score, showing its potential for early disease detection and better disease management in apple orchards [161].
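A hedged sketch of the backbone idea in [161] is given below: fine-tuning an ImageNet-pretrained EfficientNetV2-S from torchvision for apple leaf disease classes. The rotated-box head, ProbIoU loss, SimAM attention, and CAF fusion of the original detector are not reproduced; the five-class setup and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone; the final linear layer is replaced for 5 assumed disease classes.
model = models.efficientnet_v2_s(weights="IMAGENET1K_V1")
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 5)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 384, 384)         # dummy batch; real inputs would be leaf crops
labels = torch.randint(0, 5, (4,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```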

4. Applications of Advanced Imaging Techniques:

Hyperspectral imaging, 3D imaging, multispectral imaging, and other advanced imaging techniques have profoundly changed how agriculture can be monitored, offering detailed information on crop health and environmental conditions. Hyperspectral imaging extends well beyond the spectral range visible to the human eye and senses minute changes in vegetation physiology, making it well suited to detecting disease or nutrient deficiency and excess. 3D imaging, meanwhile, provides accurate elevation data that can assist with crop monitoring, yield estimation, and even robotic harvesting, while multispectral imaging captures data in multiple distinct wavelength bands to assess crop health and identify potential stress responses. Coupling these techniques with machine learning or artificial intelligence allows real-time, accurate decision-making for precision farming. One study examined the potential of hyperspectral spectroradiometer data for detecting Venturia inaequalis infections in apple leaves. The objectives were to differentiate infected leaves from healthy ones in both resistant and susceptible cultivars, identify the developmental stage at which detection is possible, and pinpoint the spectral regions most effective for differentiation. Using Partial Least Squares Discriminant Analysis (PLS-DA) for classification, the study found that hyperspectral data could effectively predict infected leaves, with the best results in the spectral domain around 1500 nm and in the visible region, particularly during the well-developed infection stage. These findings suggest that hyperspectral data could aid early stress detection and inform management strategies in apple orchards [162]. Another study combined hyperspectral imaging with machine learning to detect apple proliferation symptoms in leaves. Over two growing seasons (2019-2020), 1160 leaf samples were collected and analyzed using a dual-camera setup covering spectral bands from 400 nm to 2500 nm, with PCR analysis providing reference data. The research highlights the effectiveness of spectral indices and machine learning models such as rRBF, which achieved 97.1% accuracy for single-variety samples in controlled environments; for multiple varieties, classification accuracy reached 73.1%, improving to 75.1% when spatial data were included. Additionally, regression models based on spectral data predicted qPCR results with an RMSE of 14.491 phytoplasma per plant cell, demonstrating the potential of hyperspectral imaging and machine learning for early disease detection and management in apple orchards [163]. Hyperspectral imaging has also been used to detect water stress in young apple trees (Buckeye Gala) under different water treatments. Spectral data were captured across a 385–1000 nm wavelength range using a hyperspectral camera, alongside a spectral vegetation sensor and a digital camera. Spectral indices, including Red Edge NDVI (705 and 750 nm) and NDVI (680 and 800 nm), were strongly correlated with varying levels of water stress. The findings demonstrate that intelligent optical sensors can effectively detect plant stress and provide decision support for managing stress levels, improving productivity in agricultural settings [164].
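PLS-DA, used in the scab-detection study above [162], is commonly implemented by fitting PLS regression to one-hot class labels and assigning each sample to the column with the largest predicted score. The sketch below follows that convention with synthetic spectra; the band count, component number, and labels are placeholders rather than the study's data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 300))            # 200 leaves x 300 spectral bands (synthetic)
y = rng.integers(0, 2, size=200)           # 0 = healthy, 1 = infected (synthetic)
Y = np.eye(2)[y]                           # one-hot targets turn PLS regression into PLS-DA

X_tr, X_te, Y_tr, _, _, y_te = train_test_split(X, Y, y, random_state=0)
pls = PLSRegression(n_components=10).fit(X_tr, Y_tr)
pred = pls.predict(X_te).argmax(axis=1)    # assign each sample to the highest-scoring class
print("accuracy:", (pred == y_te).mean())
```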
In parallel, hyperspectral imagery has been investigated for detecting the various stages of Apple Marssonina Blotch (AMB), a fungal disease affecting apple trees. Given AMB's long latency period and its similarity to other diseases in its early stages, visible information alone is insufficient for accurate detection. The research applied the orthogonal subspace projection (OSP) unsupervised feature selection method to reduce redundancy in hyperspectral data. Ten optimal spectral bands were selected, predominantly in the near-infrared region, and used as input features for classifiers; the selected bands and classifiers achieved accuracies between 71.3% and 84.3%, demonstrating the potential of hyperspectral imaging and OSP for effective AMB detection [165]. Shortwave infrared (SWIR) hyperspectral imaging has also shown potential for early detection of apple scab infection. Close-range images were captured at 2–11 days post-inoculation (dpi) with a push-broom SWIR camera, and a PLS-DA classification model built at an advanced infection stage (day 11) was applied to images from earlier stages. The results suggest that hyperspectral data in the 1000–2500 nm spectral range can accurately distinguish infected from healthy leaves, with the water absorption band at 1940 nm among the most important spectral features for separating early infections [166]. More broadly, hyperspectral imaging has developed into an important technology for assessing the quality and safety of agricultural and horticultural products. Derived from remote sensing, it combines machine vision and point spectroscopy to improve image segmentation for detecting defects and contaminants. As a non-invasive technology, it couples spatial and spectral information to describe both the external physical and the internal biochemical characteristics of food and agricultural products. Hyperspectral imaging has found important applications in precision agriculture for classification and chromatic analysis, among other factors critical to global sustainability; the associated literature explores efforts to address such challenges and looks ahead to possible future directions for hyperspectral imaging [167].
Low-altitude RGB and multispectral images have also been assessed for detecting powdery mildew (PM) in apple orchards. UAS-based imagery was acquired with a consumer-grade RGB camera and a five-band multispectral sensor. RGB images classified with image processing algorithms achieved an accuracy of 76.4% in the PM detection task and showed a strong correlation (R² = 0.94) between PM clusters in the images and the clusters predicted by the model. Among seven vegetation indices (VIs), certain indices derived from the multispectral images successfully separated PM-infected and healthy leaves, with the Modified Simple Ratio (Red) and the Optimized Soil Adjusted Vegetation Index identified as the most discriminative VIs. These results show the feasibility of PM detection from high-resolution aerial images, but further work is required to scale detection to individual orchard blocks for targeted treatment [168]. A system has also been developed that autonomously navigates apple orchards and identifies early-stage diseases using hyperspectral, multispectral, and visible-range scanners; 2D LiDARs and RTK GNSS receivers support localization and obstacle detection. The system's goal is to reduce pesticide use and increase harvest yields, and its disease detection pipeline uses neural networks for plant segmentation and precise disease identification, providing an effective intervention and management tool for orchards [169]. Finally, a dataset from the H2020 OPTIMA project holds images and annotations for detecting diseases on crops from orchards, vineyards, and fields in France, Italy, and Spain. The images cover three diseases of three different crops: apple scab (apple), alternaria (carrot), and downy mildew (grape); the accompanying text files provide bounding-box locations of the diseases in a format suitable for object detection with YOLOv5, allowing rapid training and testing of disease detection models for agricultural scenarios [170].
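The vegetation indices mentioned above (NDVI, Red Edge NDVI, OSAVI, Modified Simple Ratio) are simple band-ratio computations; the sketch below evaluates some of them on synthetic reflectance arrays standing in for UAS multispectral bands. The reflectance values and the OSAVI scaling constant follow common formulations and are assumptions rather than the cited studies' settings.

```python
import numpy as np

rng = np.random.default_rng(0)
red = rng.uniform(0.05, 0.3, size=(128, 128))    # synthetic red-band reflectance
nir = rng.uniform(0.3, 0.6, size=(128, 128))     # synthetic near-infrared reflectance

ndvi = (nir - red) / (nir + red + 1e-9)                       # NDVI
osavi = 1.16 * (nir - red) / (nir + red + 0.16)               # Optimized Soil Adjusted VI
ratio = nir / (red + 1e-9)
msr = (ratio - 1) / (np.sqrt(ratio) + 1)                      # Modified Simple Ratio

print(f"NDVI {ndvi.mean():.2f}, OSAVI {osavi.mean():.2f}, MSR {msr.mean():.2f}")
```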
Apple scab, caused by Venturia inaequalis, is one of the most important pathogens affecting apple production, resulting in yield losses and considerable fungicide application. Traditional disease management consists of short-interval, high-frequency, uniform pesticide spraying across the orchard, which contributes to environmental contamination. Targeted application of fungicides based on reconnaissance and meteorological data can enable early detection of over-threshold apple scab infections and thus precise chemical applications, curbing pesticide usage and minimizing harm to the ecosystem. Remote sensing systems have considerable potential for identifying pathogens in their early stages; however, practical applications specific to apple scab monitoring in commercial plantings have not progressed because of a range of challenges including, but not limited to, large variations in symptoms, illumination conditions, and tree physiology, as well as other environmental stress factors. This line of research investigates the use of multispectral imagery for early apple scab detection under real orchard conditions [171].
Apple Proliferation is caused by the bacterium 'Candidatus Phytoplasma mali' and is responsible for major economic losses in commercial apple production. Routine identification depends on visual inspection by experts and molecular laboratory techniques, both of which are time- and resource-consuming. One study therefore investigated the potential of non-destructive, in-field spectral signature analysis to distinguish infected apple trees from healthy ones. Through multivariate statistical analysis, infected and healthy trees could be differentiated according to the spectral signatures of fresh leaves, and the wavelengths most relevant for differentiation were identified; the performance of the differentiation was influenced by factors such as sampling date and bacterial colonization behaviour [172]. Apple mosaic and its effects on fruit yield and quality are known to be severe, making rapid monitoring a prerequisite for management. One study employed hyperspectral imaging while simultaneously measuring anthocyanin content to characterize the spectral differences between healthy and infected apple leaves. The analysis showed higher reflectance in the 500–560 nm range as disease severity increased, and processing the spectral reflectance with the Gaussian1 wavelet transform markedly improved its correlation with anthocyanin content. The VPs-XGBoost model gave the best performance in estimating anthocyanin content, enabling efficient monitoring of the disease; these results provide a basis for large-scale remote sensing monitoring of apple mosaic disease [173]. Accurate monitoring of nitrogen content in apple trees is fundamental for optimizing growth, fruit quality, and yield (Dordas, 2009). Hyperspectral imaging was employed alongside machine learning (PLSR, SVR, XGBoost) to estimate total nitrogen content in apple leaves, with spectral binning (4, 8, and 16 bins) used to lower computational cost. The SVR-based model performed comparably to full-spectrum models, with 4- and 8-bin binning yielding the best results without sacrificing prediction quality, whereas 16-bin binning showed a decrease in performance due to the loss of spectral detail. These results could support optimized nitrogen management in apple production, maximizing both growth and yield, and can inform the development of multispectral sensors [174].
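In the spirit of the nitrogen-estimation workflow in [174], the sketch below shows spectral binning (averaging adjacent bands) followed by support vector regression with cross-validation; the data are synthetic, and the bin size, kernel, and regularization value are assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

def bin_spectra(spectra, bin_size=8):
    """Average adjacent bands (spectral binning) to reduce dimensionality."""
    n_bands = spectra.shape[1] - spectra.shape[1] % bin_size
    return spectra[:, :n_bands].reshape(spectra.shape[0], -1, bin_size).mean(axis=2)

rng = np.random.default_rng(1)
spectra = rng.normal(size=(150, 256))                 # 150 leaves x 256 bands (synthetic)
nitrogen = rng.normal(loc=2.5, scale=0.4, size=150)   # synthetic total-N values (%)

X = bin_spectra(spectra, bin_size=8)
scores = cross_val_score(SVR(kernel="rbf", C=10.0), X, nitrogen, cv=5, scoring="r2")
print("mean cross-validated R^2:", scores.mean())
```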
Near-infrared spectroscopy and multivariate data analysis have also been applied to discriminate between apple trees infected and not infected with 'Candidatus Phytoplasma mali'. Samples were collected from three orchards over two years, and principal component analysis and classification models were developed to analyze the spectral data. Model accuracy increased over the vegetation season and was best in autumn, and symptom detection was highly reliable even in asymptomatic leaves from infected trees, indicating that spectral analysis is a consistent method for identifying Apple Proliferation before any symptoms become visible. The differentiation was supported by several key wavelengths associated with lowered carbohydrate and nitrogen compounds, suggesting that this approach has potential for future applications in the modernization of agriculture and smart farming [175]. Vis–NIR spectroscopy has likewise been used to detect early plant stress in apple trees challenged with apple scab, waterlogging, and herbicides. Leaf spectral signatures were acquired with a spectroradiometer, and machine-learning models were built to detect stress and classify stress types 1–5 days post-exposure. The models performed well, with accuracies ranging from 0.94 to 1 for identifying the early presence of leaf stress from a large hyperspectral dataset, and the most relevant wavelengths were associated with photosynthesis (684 nm) and starch accumulation (1800–1900 nm) [20,21]. The results demonstrate the capacity of spectral technology and machine learning for early plant stress diagnosis, which can help reduce environmental impact by optimising resource use in agricultural settings [176].
The significance of remote sensing for land feature analysis, especially land cover mapping using IRS LISS IV multispectral data, has also been studied. This work focuses on the time- and cost-intensive nature of manual classification and proposes classification using algorithms, including supervised, unsupervised, and object-based classifiers, comparing two supervised methods: Maximum Likelihood and Support Vector Machine (SVM). In agreement with the literature, the study finds that machine learning methods such as SVM outperform Maximum Likelihood in accuracy for optical data classification, and it suggests a simple, efficient pipeline for high-resolution land cover analysis [177]. In orchard environments, occlusion and canopy structure are a challenge for fruit detection; a solution was proposed using an RGB-D camera system. The combined RGB and 3D-based algorithm correctly identified 100% of visible and 82% of partially occluded red and bicoloured apples, with a mean location error of less than 10 mm, and its quick processing time (<1 s for 20 apples) benefits robotic harvesting scenarios [178]. Dielectric permittivity characterization for non-destructive defect detection, evaluated on materials such as cucumber and cement mortar, was improved using 143 features across 13 frequencies (100 Hz–3.98 MHz), from which 36 differential electrical features were identified as crucial indicators. Principal component analysis (PCA) combined with classifiers such as MLP and Fisher discriminant analysis reached up to 95.4% accuracy, and using MLP or RBF classifiers the low-frequency loss factor achieved 100% accuracy, suggesting the possibility of an efficient and cost-effective detection system for watercore [179]. Finally, a fruit detection algorithm combining color, depth, and 3D shape information has been proposed for robotic harvesting: depth frames are segmented from RGB images, region growing is run for clustering, and M-estimator sample consensus detects the 3D fruit shape, with SVM classification on a global point cloud descriptor (GPCD) used to reduce false positives. The algorithm maintained high precision (0.8640–0.8880) and high recall (0.7620–0.8890) on pepper, eggplant, and guava datasets, demonstrating its efficiency and real-time capability [180].
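The PCA-plus-classifier pattern used in the dielectric watercore study [179] can be sketched as a scikit-learn pipeline; the feature matrix, labels, component count, and MLP size below are synthetic placeholders rather than the study's actual data or settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 36))        # 36 differential electrical features (synthetic)
y = rng.integers(0, 2, size=120)      # 0 = sound, 1 = watercore (synthetic labels)

pipe = make_pipeline(
    PCA(n_components=10),                                   # decorrelate and compress features
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000), # small multi-layer perceptron
)
print("mean CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```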
X-ray radiographic imaging has been used to identify internal defects in five apple cultivars (RD, GD, FJ, GS, BR) from Washington State. Four hundred to seven hundred apples of each cultivar, with and without defects (e.g., bruising, rot, insect damage, watercore), were imaged using film-based and online line-scanning systems in both axial and radial views. In comparisons with human inspection, still images showed good recognition (>50% detection, <10% false positives), and recognition of specific defects such as rot and watercore improved substantially once apple orientation was controlled within a fixed range. At commercial inspection speeds, however, recognition accuracy fell off dramatically. When automated machine vision replaces limited human inspection, the same X-ray inspection method can be applied to fruit defects and improve efficiency in high-throughput fruit sorting [181].
Internal apple defects have also been detected from X-ray images using GoogLeNet-based deep learning, achieving accuracies of up to 100% with robust and generalizable detection for practical applications in the agrifood industry [182]. X-ray imaging based on density differences is a non-destructive technique developed to detect mould core disease in apples; an SVM model trained on features extracted from the internal images achieved an average detection accuracy of 91.03% for mould-core apples and 95.95% for healthy apples [183]. X-ray dark-field radiography has been applied to the non-invasive detection of internal browning in controlled atmosphere (CA) stored 'Braeburn' apples. Exploiting microscale differences, it outperformed absorption radiography by 10% in early-stage disorder detection; machine learning analysis confirmed its superior diagnostic ability in the early stages, with similar performance later on, making the method a promising non-destructive means of assuring fruit quality [184]. Spectral disease indices have been published for detecting wheat leaf rust from reflectance data (450–1000 nm) collected from infected and healthy leaves. Symptom-specific reflectance was estimated via spectral mixture and RGB image analyses, and specific wavelengths and spectral vegetation indices (SVIs) detected the disease with high accuracy (R² = 0.94), allowing different stages of disease progression to be tracked with SVIs derived from the 605, 695, and 455 nm wavelengths [185]. In another study, multispectral imaging in reflection and fluorescence modes was applied to identify defects of three apple varieties; artificial neural networks performed two- and multi-class classification on 18 spectral images per apple. Honeycrisp performed best with combined modes, whereas Redcort and Red Delicious performed better with single-mode imaging, and the method allows non-invasive detection of a multitude of apple disorders [186].
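As a hedged illustration of the GoogLeNet-based X-ray defect detection in [182], the sketch below adapts torchvision's pretrained GoogLeNet to a binary defective/healthy head and runs a dummy batch; the class count, preprocessing, and replication of greyscale X-ray slices to three channels are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.googlenet(weights="IMAGENET1K_V1")      # ImageNet-pretrained GoogLeNet
model.fc = nn.Linear(model.fc.in_features, 2)          # defective vs. healthy (assumed head)

model.eval()
with torch.no_grad():
    xray_batch = torch.randn(2, 3, 224, 224)           # grey X-ray slices replicated to 3 channels
    print(model(xray_batch).shape)                     # torch.Size([2, 2])
```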
For example, Heyens and Valcke (2004) [187] used fluorescence imaging to investigate early infection patterns of Erwinia amylovora on apple leaves, enabling non-destructive observation of pathogen-host interactions during the early stages of fire blight infection; the work highlighted the factors contributing to disease development and the prospects of fluorescence imaging for early detection and management of fire blight in apples. Liu et al. [188] used hyperspectral imaging to assess apple mosaic virus (AMV) through a predictive model that estimates leaf chlorophyll content (LCC). Based on 360 infected leaves, optimal wavelengths were chosen through competitive adaptive reweighted sampling, resulting in an LCC inversion with a Boosting and Stacking model reaching a validation R² of 0.9644. Disease severity was classified with 98.89% accuracy, providing a non-destructive, quantitative, leaf-level diagnostic approach. Finally, a new strain-sensitive non-destructive testing (NDT) technique for composite structures has been presented, based on hyperspectral image analysis with a Band Elimination index. The approach produces matrices for smart damage classification and property prediction by filtering out specific spectral bands and analysing correlations between consecutive thermal images, and the results indicate that it can be used to detect and model different types of structural damage under different conditions [189].

5. Future Trends and Research Directions:

5.1. The Convergence of IoT with Edge Computing:

In agricultural disease monitoring, the convergence of IoT with edge computing [200] is finding new applications, for example in real-time, in-field analytics. It enables always-on environmental sensors, drones, and imaging systems to capture and process data with minimal latency and without dependence on cloud servers. One recent study by Saleheen et al. [190] proposed an IoT-based smart agriculture monitoring system and highlighted the importance of integrating sensors and processing units to minimize data-interpretation time and to deploy decisions faster and more locally. In addition, Neware and Khan (2018) [191] discussed software developments relating to satellite data analytics in agriculture, providing further evidence of the potential of integrating remote sensing and IoT. Jiang et al. (2021) [192] emphasized how well deep learning in conjunction with IoT platforms can produce highly accurate results in detecting diseases such as apple fruit diseases, illustrating that on-field sensors and real-time disease predictions work hand in hand with prevention regimes.
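A conceptual sketch of the edge-side pattern described above follows: images are scored locally and only compact alerts are sent upstream, which is the latency and bandwidth argument for pairing IoT sensors with edge computing. The model file, alert threshold, sampling interval, and publish function are all hypothetical.

```python
import time
import torch

model = torch.jit.load("leaf_disease_model.pt")   # hypothetical TorchScript export
model.eval()

def publish_alert(payload):
    # Placeholder for an MQTT/HTTP publish call to the farm gateway (hypothetical).
    print("ALERT:", payload)

for _ in range(3):                                 # a real deployment would loop indefinitely
    frame = torch.randn(1, 3, 224, 224)            # stand-in for a camera capture
    with torch.no_grad():
        prob_diseased = torch.softmax(model(frame), dim=1)[0, 1].item()
    if prob_diseased > 0.8:                        # alert threshold is an assumption
        publish_alert({"score": prob_diseased, "ts": time.time()})
    time.sleep(60)                                 # sample roughly once per minute
```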

5.2. Resource-Constrained Environments’ Lightweight Models:

Models must be lightweight and designed to balance performance and accuracy to enable deployment in operational agricultural systems with limited resources. Lightweight models such as MobileNet and ShuffleNet are optimized for running on edge devices like smartphones or low-power embedded systems. As pointed out by Neware and Khan (2018) [193], these models are especially beneficial in satellite- or drone-based image acquisition, where infrastructure limitations create a need for low-complexity models. Their low memory and energy requirements make on-the-spot disease detection and classification, in the field or in the laboratory, accessible to farmers in remote or low-income settings.
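The practical appeal of lightweight backbones can be seen by comparing parameter counts of torchvision's implementations, as in the short sketch below; the specific model variants chosen and the use of ResNet-50 as a contrast are illustrative assumptions.

```python
from torchvision import models

def n_params_millions(model):
    """Total trainable parameter count, in millions."""
    return sum(p.numel() for p in model.parameters()) / 1e6

print(f"MobileNetV2:  {n_params_millions(models.mobilenet_v2()):.1f} M parameters")
print(f"ShuffleNetV2: {n_params_millions(models.shufflenet_v2_x1_0()):.1f} M parameters")
print(f"ResNet-50:    {n_params_millions(models.resnet50()):.1f} M parameters (for contrast)")
```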

5.3. Explainable AI for Better Decision-Making:

Explainable AI (XAI) is important in precision agriculture because it helps stakeholders understand and trust model predictions. Ryo (2022) [194] reflects on how explainable models facilitate better model interpretation for complex agricultural datasets. Casement et al. (2024) [195] explain that XAI improves decision-making by providing insight into the reasoning behind AI predictions. Additionally, Ali et al. (2023) [196] consider trustworthiness a key outcome of explainability, which is salient in areas like agriculture where automated insights lead to actionable decisions. XAI frameworks not only make these systems accountable but also narrow the gap between what professionals (farmers and agronomists) understand and what AI systems do.
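One widely used XAI technique compatible with the CNN classifiers reviewed here is Grad-CAM, which weights the last convolutional feature map by the gradient of the predicted class to show which leaf regions drove the prediction. The sketch below is a minimal, self-contained version on an untrained ResNet-18 with a random input; the backbone, target layer, and input are assumptions, and it is not tied to any specific study above.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18().eval()          # untrained backbone, for illustration only
activations, gradients = {}, {}

def save_activation(_module, _inp, output):
    activations["feat"] = output
    # Capture the gradient flowing back into this feature map during backward()
    output.register_hook(lambda grad: gradients.update({"feat": grad}))

model.layer4.register_forward_hook(save_activation)

image = torch.randn(1, 3, 224, 224)       # stand-in for a preprocessed leaf image
score = model(image).max(dim=1).values    # score of the top predicted class
score.backward()

weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
cam = F.relu((weights * activations["feat"]).sum(dim=1))     # weighted sum of feature channels
cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],
                    mode="bilinear", align_corners=False)
print(cam.shape)  # (1, 1, 224, 224): saliency map to overlay on the input image
```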

6. Conclusion:

This systematic review evaluated recent trends in state-of-the-art techniques for the detection and classification of apple leaf diseases, covering ML, DL, transfer learning, lightweight architectures, and image processing techniques. Deep learning models achieved the best performance compared with other machine learning models, with accuracy and F1-scores of 95% or better, particularly for CNNs and YOLO-based object detection frameworks, which learn the spatial hierarchies and feature abstractions required to capture the subtle, early disease symptoms found in complex orchard environments.
Specifically, transfer learning approaches using pre-trained models such as ResNet, DenseNet, MobileNetV2, and VGG19 have demonstrated robust generalization to varied conditions, especially when labelled data are limited. Lightweight models such as MEAN-SSD, BAM-Net, MCDCNet, and BCTNet were highlighted for their favourable trade-off between computational cost and accuracy, making them suitable for real-time deployment on edge devices in the field.
Traditional machine learning (ML) methods such as Support Vector Machines (SVM), Naive Bayes, and k-means clustering retain their value in applications where simplicity, interpretability, or limited computational resources matter most; however, in handling high-dimensional image data and complex visual features, their performance so far falls short of DL-based methods. Moreover, machine vision systems and advanced imaging techniques (hyperspectral, multispectral, and X-ray imaging in particular) have contributed significantly to non-invasive disease monitoring and early stress detection, providing complementary value to AI-based methods.
In conclusion, CNN and YOLO architectures dominate in prediction accuracy and robustness, and transfer learning and attention mechanisms (CBAM, BAM, H-SimAM) have been shown to improve performance further. Future work should focus on Explainable AI (XAI) to enhance the transparency of, and trust in, classifiers, on hybrid models to increase detection robustness, and on scalable, cost-effective architectures that leverage IoT and edge computing for in-field diagnostics. The integration of AI and precision agriculture thus has the potential to transform disease management in apple farming through early diagnosis, targeted responses, and sustainable yield improvement.

References

  1. Dharm, Padaliya & Pandya, Parth & Bhavik, Patel & Vatsal, Salkiya & Yuvraj, Solanki & Darji, Mittal. (2019). A Review of Apple Diseases Detection and Classification. International Journal of Engineering and Technical Research. 8. 382-387.
  2. Zhang, Daping & Yang, Hongyu & Cao, Jiayu. (2021). Identify Apple Leaf Diseases Using a Deep Learning Algorithm. arXiv:2107.12598.
  3. Al-Wesabi, Fahd & Albraikan, Amani & Hilal, Anwer & Eltahir, Majdy & Hamza, Manar & Zamani, Abu. (2022). Artificial Intelligence Enabled Apple Leaf Disease Classification for Precision Agriculture. Computers, Materials and Continua. 70. 6223-6238. [CrossRef]
  4. S. Kumar, R. Kumar and M. Gupta, "Analysis of Apple Plant Leaf Diseases Detection and Classification: A Review," 2022 Seventh International Conference on Parallel, Distributed and Grid Computing (PDGC), Solan, Himachal Pradesh, India, 2022, pp. 361-365. [CrossRef]
  5. Singh, Swati & Gupta, Sheifali. (2018). Apple Scab and Marsonina Coronaria Diseases Detection in Apple Leaves Using Machine Learning. International Journal of Pure and Applied Mathematics. 1151-1166.
  6. Sarker, I.H. Machine Learning: Algorithms, Real-World Applications and Research Directions. SN COMPUT. SCI. 2, 160 (2021). [CrossRef]
  7. Meshram, V., Patil, K., Meshram, V., Hanchate, D., & Ramkteke, S. D. (2021). Machine learning in agriculture domain: A state-of-art survey. Artificial Intelligence in the Life Sciences, 1, 100010. [CrossRef]
  8. Sandhu, K. K. (2021). Apple leaves disease detection using machine learning approach. International Journal of Computer Science and Information Technology Research, 9(1), 127-135. Retrieved from www.researchpublish.com.
  9. Sugiarti, Y., Supriyatna, A., Carolina, I., Amin, R., & Yani, A. (2021, September). Model naïve Bayes classifiers for detection apple diseases. In 2021 9th International Conference on Cyber and IT Service Management (CITSM) (pp. 1-4). IEEE.
  10. Miriti, E. (2016). Classification of selected apple fruit varieties using Naive Bayes (Doctoral dissertation, University of Nairobi).
  11. Sumanto, Sumanto & Sugiarti, Yuni & Supriyatna, Adi & Carolina, Irmawati & Amin, Ruhul & Yani, Ahmad. (2021). Model Naïve Bayes Classifiers For Detection Apple Diseases. 1-4.
  12. Misigo, Ronald & Kirimi, Evans. (2016). Classification of selected apple fruit varieties using Naive Bayes. Indian Journal of Computer Science and Engineering (IJCSE). 7. 13.
  13. Aravind, K.R.N.V.V.D. & Shyry, S.Prayla & Felix, A Yovan. (2019). Classification of Healthy and Rot Leaves of Apple Using Gradient Boosting and Support Vector Classifier. International Journal of Innovative Technology and Exploring Engineering. 8. 2868-2872.
  14. S. Chakraborty, S. Paul and M. Rahat-uz-Zaman, "Prediction of Apple Leaf Diseases Using Multiclass Support Vector Machine," 2021 2nd International Conference on Robotics, Electrical and Signal Processing Techniques (ICREST), DHAKA, Bangladesh, 2021, pp. 147-151. [CrossRef]
  15. Omrani, E., Khoshnevisan, B., Shamshirband, S., Saboohi, H., Anuar, N. B., & Nasir, M. H. N. M. (2014). Potential of radial basis function-based support vector regression for apple disease detection. Measurement, 55, 512-519.
  16. Sivakamasundari, G., & Seenivasagam, V. (2018). Classification of leaf diseases in apple using support vector machine. International Journal of Advanced Research in Computer Science, 9(1), 261-265.
  17. Alagu, S., & BhoopathyBagan, K. Apple Fruit disease detection and classification using Multiclass SVM classifier and IP Webcam APP. International Journal of Management, Technology, and Engineering. ISSN 2249-7455.
  18. Pujari, D., Yakkundimath, R., & Byadgi, A. S. (2016). SVM and ANN based classification of plant diseases using feature reduction technique. IJIMAI, 3(7), 6-14.
  19. Anam, S. (2020, June). Segmentation of leaf spots disease in apple plants using particle swarm optimization and K-means algorithm. In Journal of Physics: Conference Series (Vol. 1562, No. 1, p. 012011). IOP Publishing.
  20. Tiwari, R., & Chahande, M. (2021). Apple fruit disease detection and classification using k-means clustering method. In Advances in Intelligent Computing and Communication: Proceedings of ICAC 2020 (pp. 71-84). Springer Singapore.
  21. Al Bashish, D., Braik, M., & Bani-Ahmad, S. (2011). Detection and classification of leaf diseases using K-means-based segmentation and. Information technology journal, 10(2), 267-275.
  22. Zhang, Daping & Yang, Hongyu & Cao, Jiayu. (2021). Identify Apple Leaf Diseases Using Deep Learning Algorithm. arXiv:2107.12598.
  23. Doutoum AS, Tugrul B. 2025. A systematic review of deep learning techniques for apple leaf diseases classification and detection. PeerJ Computer Science 11:e2655. [CrossRef]
  24. Banjar, A., Javed, A., Nawaz, M. et al. E-AppleNet: An Enhanced Deep Learning Approach for Apple Fruit Leaf Disease Classification. Applied Fruit Science 67, 18 (2025). [CrossRef]
  25. Alsayed, Ashwaq & Alsabei, Amani & Arif, Muhammad. (2021). Classification of Apple Tree Leaves Diseases using Deep Learning Methods. International Journal of Computer Network and Information Security. 21. 324. [CrossRef]
  26. Y. Luo, J. Sun, J. Shen, X. Wu, L. Wang and W. Zhu, "Apple Leaf Disease Recognition and Sub-Class Categorization Based on Improved Multi-Scale Feature Fusion Network," in IEEE Access, vol. 9, pp. 95517-95527, 2021. [CrossRef]
  27. Yang, Q., Duan, S., & Wang, L. (2022). Efficient Identification of Apple Leaf Diseases in the Wild Using Convolutional Neural Networks. Agronomy, 12(11), 2784. [CrossRef]
  28. Pradhan, Priyanka & Kumar, Brajesh & Mohan, Shashank. (2022). Comparison of various deep convolutional neural network models to discriminate apple leaf diseases using transfer learning. Journal of Plant Diseases and Protection. 129. [CrossRef]
  29. Bansal, P., Kumar, R., & Kumar, S. (2021). Disease Detection in Apple Leaves Using Deep Convolutional Neural Network. Agriculture, 11(7), 617. [CrossRef]
  30. Ni, J. (2024). Smart agriculture: An intelligent approach for apple leaf disease identification based on convolutional neural network. Journal of Phytopathology, 172(4). [CrossRef]
  31. Srivastav, Somya & Guleria, Kalpna & Sharma, Shagun. (2024). Apple Leaf Disease Detection using Deep Learning-based Convolutional Neural Network. 1-5. [CrossRef]
  32. Kaur, Arshleen & Chadha, Raman. (2023). An Optimized Ant Gradient Convolutional Neural Network for Disease Detection in Apple Leaves. 1-8. [CrossRef]
  33. B. Biswas and R. K. Yadav, "Multilayer Convolutional Neural Network Based Approach to Detect Apple Foliar Disease," 2023 2nd International Conference for Innovation in Technology (INOCON), Bangalore, India, 2023, pp. 1-5. [CrossRef]
  34. V. K. Vishnoi, K. Kumar, B. Kumar, S. Mohan and A. A. Khan, "Detection of Apple Plant Diseases Using Leaf Images Through Convolutional Neural Network," in IEEE Access, vol. 11, pp. 6594-6609, 2023. [CrossRef]
  35. Firdous, Saba & Akbar, Shahzad & Hassan, Syed Ale & Khalid, Aima & Gull, Sahar. (2023). Deep Convolutional Neural Network-based Framework for Apple Leaves Disease Detection. 1-6. [CrossRef]
  36. C. Thakur, N. Kapoor and R. Saini, "A Novel Framework of Apple Leaf Disease Detection using Convolutional Neural Network," 2023 International Conference on Inventive Computation Technologies (ICICT), Lalitpur, Nepal, 2023, pp. 491-496. [CrossRef]
  37. Tanwar, V.K., Sharma, B., & Anand, V. (2023). A Sophisticated Deep Convolutional Neural Network for Multiple Classification of Apple Leaf Diseases. 2023 International Conference on Research Methodologies in Knowledge Management, Artificial Intelligence and Telecommunication Engineering (RMKMATE), 1-6.
  38. Chen, Y., Pan, J. & Wu, Q. Apple leaf disease identification via improved CycleGAN and convolutional neural network. Soft Comput 27, 9773–9786 (2023). [CrossRef]
  39. Mahato, D.K., Pundir, A. & Saxena, G.J. An Improved Deep Convolutional Neural Network for Image-Based Apple Plant Leaf Disease Detection and Identification. J. Inst. Eng. India Ser. A 103, 975–987 (2022). [CrossRef]
  40. Sharma, V., Verma, A., Goel, N. (2022). A Modified Feature Optimization Approach with Convolutional Neural Network for Apple Leaf Disease Detection. In: Abraham, A., et al. Innovations in Bio-Inspired Computing and Applications. IBICA 2021. Lecture Notes in Networks and Systems, vol 419. Springer, Cham. [CrossRef]
  41. Fu L, Li S, Sun Y, Mu Y, Hu T, Gong H. Lightweight-Convolutional Neural Network for Apple Leaf Disease Identification. Front Plant Sci. 2022 May 24;13:831219. [CrossRef] [PubMed] [PubMed Central]
  42. Tugrul, B., Elfatimi, E., & Eryigit, R. (2022). Convolutional Neural Networks in Detection of Plant Leaf Diseases: A Review. Agriculture, 12(8), 1192. [CrossRef]
  43. Liu, B., Zhang, Y., He, D., & Li, Y. (2017). Identification of apple leaf diseases based on deep convolutional neural networks. Symmetry, 10(1), 11.
  44. Khan, A. I., Quadri, S. M. K., Banday, S., & Shah, J. L. (2022). Deep diagnosis: A real-time apple leaf disease detection system based on deep learning. computers and Electronics in Agriculture, 198, 107093.
  45. V. K. Vishnoi, K. Kumar, B. Kumar, S. Mohan and A. A. Khan, "Detection of Apple Plant Diseases Using Leaf Images Through Convolutional Neural Network," in IEEE Access, vol. 11, pp. 6594-6609, 2023. [CrossRef]
  46. Bansal, P., Kumar, R., & Kumar, S. (2021). Disease detection in apple leaves using deep convolutional neural network. Agriculture, 11(7), 617.
  47. Gong, X., & Zhang, S. (2023). A High-Precision Detection Method of Apple Leaf Diseases Using Improved Faster R-CNN. Agriculture, 13(2), 240. [CrossRef]
  48. Baranwal, S., Khandelwal, S., & Arora, A. (2019, February). Deep learning convolutional neural network for apple leaves disease detection. In Proceedings of international conference on sustainable computing in science, technology and management (SUSCOM), Amity University Rajasthan, Jaipur-India.
  49. Yan, Q., Yang, B., Wang, W., Wang, B., Chen, P., & Zhang, J. (2020). Apple leaf diseases recognition based on an improved convolutional neural network. Sensors, 20(12), 3535.
  50. Parashar, N., & Johri, P. (2024). Enhancing apple leaf disease detection: A CNN-based model integrated with image segmentation techniques for precision agriculture. International Journal of Mathematical, Engineering and Management Sciences, 9(4), 943.
  51. Agarwal, M., Kaliyar, R. K., & Gupta, S. K. (2022, July). Differential Evolution based compression of CNN for Apple fruit disease classification. In 2022 International Conference on Inventive Computation Technologies (ICICT) (pp. 76-82). IEEE.
  52. Çetiner, İ. (2025). AppleCNN: A new CNN-based deep learning model for classification of apple leaf diseases. Gümüşhane Üniversitesi Fen Bilimleri Dergisi, 15(1), 51-63.
  53. Kaur, A., Kukreja, V., Aggarwal, P., Thapliyal, S., & Sharma, R. (2024). Amplifying apple mosaic illness detection: Combining CNN and random forest models. In 2024 IEEE International Conference on Interdisciplinary Approaches in Technology and Management for Social Innovation (IATMSI), Gwalior, India (pp. 1-5). [CrossRef]
  54. S. Mehta, V. Kukreja and R. Gupta, "Empowering Precision Agriculture: Detecting Apple Leaf Diseases and Severity Levels with Federated Learning CNN," 2023 3rd International Conference on Intelligent Technologies (CONIT), Hubli, India, 2023, pp. 1-6. [CrossRef]
  55. Fu, Longsheng & Majeed, Yaqoob & Zhang, Xin & Karkee, Manoj & Zhang, Qin. (2020). Faster R–CNN–based apple detection in dense-foliage fruiting-wall trees using RGB and depth features for robotic harvesting. Biosystems Engineering. 197. 245-256.
  56. Kodors, Sergejs & Lacis, Gunars & Sokolova, Olga & Zhukov, Vitaliy & Apeinans, Ilmars & Bartulsons, Toms. (2021). Apple scab detection using CNN and Transfer Learning. Agronomy Research. 19. 507-519. [CrossRef]
  57. K. Sujatha, K. Gayatri, M. S. Yadav, N. C. Sekhara Rao and B. S. Rao, "Customized Deep CNN for Foliar Disease Prediction Based on Features Extracted from Apple Tree Leaves Images," 2022 International Interdisciplinary Humanitarian Conference for Sustainability (IIHC), Bengaluru, India, 2022, pp. 193-197. [CrossRef]
  58. Ziyi Yang and Minchen Yang "Apple leaf scab recognition using CNN and transfer learning", Proc. SPIE 13486, Fourth International Conference on Computer Vision, Application, and Algorithm (CVAA 2024), 134860D (9 January 2025). [CrossRef]
  59. Si, H., Li, M., Li, W., Zhang, G., Wang, M., Li, F., & Li, Y. (2024). A Dual-Branch Model Integrating CNN and Swin Transformer for Efficient Apple Leaf Disease Classification. Agriculture, 14(1), 142. [CrossRef]
  60. Agarwal, M., Kaliyar, R.K., Singal, G., & Gupta, S.K. (2019). FCNN-LDA: A Faster Convolution Neural Network model for Leaf Disease identification on Apple's leaf dataset. 2019 12th International Conference on Information & Communication Technology and System (ICTS), 246-251.
  61. Yu, Hee-Jin & Son, Chang-Hwan & Lee, Dong. (2020). Apple Leaf Disease Identification Through Region-of-Interest-Aware Deep Convolutional Neural Network. Journal of Imaging Science and Technology. 64. [CrossRef]
  62. Baranwal, Saraansh & Khandelwal, Siddhant & Arora, Anuja. (2019). Deep Learning Convolutional Neural Network for Apple Leaves Disease Detection. SSRN Electronic Journal. [CrossRef]
  63. Di, Jie & Li, Qing. (2022). A method of detecting apple leaf diseases based on improved convolutional neural network. PLOS ONE. 17. [CrossRef]
  64. Garcia Nachtigall, Lucas & Araujo, Ricardo & Nachtigall, Gilmar. (2016). Classification of Apple Tree Disorders Using Convolutional Neural Networks. 472-476. [CrossRef]
  65. Liu, B., Zhang, Y., He, D., & Li, Y. (2018). Identification of apple leaf diseases based on deep convolutional neural networks. Symmetry, 10(1), 11. [CrossRef]
  66. Türkoğlu, Muammer & Hanbay, Davut & Sengur, Abdulkadir. (2022). Multi-model LSTM-based convolutional neural networks for detection of apple diseases and pests. Journal of Ambient Intelligence and Humanized Computing. 13. [CrossRef]
  67. P. Jiang, Y. Chen, B. Liu, D. He and C. Liang, "Real-Time Detection of Apple Leaf Diseases Using Deep Learning Approach Based on Improved Convolutional Neural Networks," in IEEE Access, vol. 7, pp. 59069-59080, 2019. [CrossRef]
  68. Yan, Q., Yang, B., Wang, W., Wang, B., Chen, P., & Zhang, J. (2020). Apple leaf diseases recognition based on an improved convolutional neural network. Sensors, 20(12), 3535. [CrossRef]
  69. Yadav, D., Akanksha, Yadav, A.K. (2020). A novel convolutional neural network based model for recognition and classification of apple leaf diseases. Traitement du Signal, Vol. 37, No. 6, pp. 1093-1101. [CrossRef]
  70. Di, J., & Li, Q. (2022). A method of detecting apple leaf diseases based on improved convolutional neural network. PLOS ONE, 17(2), e0262629. [CrossRef]
  71. Albogamy, F. R. (2021). A Deep Convolutional Neural Network with Batch Normalization Approach for Plant Disease Detection. International Journal of Computer Science and Network Security, 21(9), 51–62. [CrossRef]
  72. Srinidhi, V & Sahay, Apoorva & Deeba, K.. (2021). Plant Pathology Disease Detection in Apple Leaves Using Deep Convolutional Neural Networks: Apple Leaves Disease Detection using EfficientNet and DenseNet. 1119-1127. [CrossRef]
  73. Sun, H., Xu, H., Liu, B., He, D., He, J., Zhang, H., & Geng, N. (2021). MEAN-SSD: A novel real-time detector for apple leaf diseases using improved light-weight convolutional neural networks. Computers and Electronics in Agriculture, 189, 106379. [CrossRef]
  74. Yatoo, A. (2024). Empowering Precision Agriculture: A Novel ResNet50 based PDICNet for Automated Apple Leaf Disease Detection. Journal of Electrical Systems, 20(7s), 2211–2220. [CrossRef]
  75. Yongjun Ding, Wentao Yang, Jingjing Zhang, An improved DeepLabV3+ based approach for disease spot segmentation on apple leaves, Computers and Electronics in Agriculture, Volume 231, 2025, 110041, ISSN 0168-1699. [CrossRef]
  76. Čirjak, D., Aleksi, I., Lemic, D., & Pajač Živković, I. (2023). EfficientDet-4 Deep Neural Network-Based Remote Monitoring of Codling Moth Population for Early Damage Detection in Apple Orchard. Agriculture, 13(5), 961. [CrossRef]
  77. X. Li and L. Rai, "Apple Leaf Disease Identification and Classification using ResNet Models," 2020 IEEE 3rd International Conference on Electronic Information and Communication Technology (ICEICT), Shenzhen, China, 2020, pp. 738-742. [CrossRef]
  78. Lin, Renyi. (2024). Apple leaf diseases recognition based on ResNet-101 and CBAM. Applied and Computational Engineering. 51. 256-266. [CrossRef]
  79. Banarase, S., & Shirbahadurkar, S. (2024). The Orchard Guard: Deep Learning powered apple leaf disease detection with MobileNetV2 model. Journal of Integrated Science and Technology, 12(4), 799. [CrossRef]
  80. Bin Liu, Xulei Huang, Leiming Sun, Xing Wei, Zeyu Ji, Haixi Zhang, MCDCNet: Multi-scale constrained deformable convolution network for apple leaf disease detection, Computers and Electronics in Agriculture, Volume 222, 2024, 109028, ISSN 0168-1699. [CrossRef]
  81. Nain, S., Mittal, N., Jain, A. (2024). Recognition of Apple Leaves Infection Using DenseNet121 with Additional Layers. In: Sharma, D.K., Peng, SL., Sharma, R., Jeon, G. (eds) Micro-Electronics and Telecommunication Engineering. ICMETE 2023. Lecture Notes in Networks and Systems, vol 894. Springer, Singapore. [CrossRef]
  82. Gao, Y., Cao, Z., Cai, W., Gong, G., Zhou, G., & Li, L. (2023). Apple Leaf Disease Identification in Complex Background Based on BAM-Net. Agronomy, 13(5), 1240. [CrossRef]
  83. Gao, X., Tang, Z., Deng, Y., Hu, S., Zhao, H., & Zhou, G. (2023). HSSNet: An End-to-End Network for Detecting Tiny Targets of Apple Leaf Diseases in Complex Backgrounds. Plants, 12(15), 2806. [CrossRef]
  84. Bhat, I.R., & Wani, M.A. (2023). Modified Grouped Convolution-Based EfficientNet Deep Learning Architecture for Apple Disease Detection. 2023 International Conference on Machine Learning and Applications (ICMLA), 1465-1472.
  85. Bi, C., Wang, J., Duan, Y. et al. MobileNet Based Apple Leaf Diseases Identification. Mobile Netw Appl 27, 172–180 (2022). [CrossRef]
  86. Yukai Zhang, Guoxiong Zhou, Aibin Chen, Mingfang He, Johnny Li, Yahui Hu, A precise apple leaf diseases detection using BCTNet under unconstrained environments, Computers and Electronics in Agriculture, Volume 212, 2023, 108132, ISSN 0168-1699. [CrossRef]
  87. Liu, S., Qiao, Y., Li, J., Zhang, H., Zhang, M., & Wang, M. (2022). An improved lightweight network for real-time detection of apple leaf diseases in natural scenes. Agronomy, 12(10), 2363. [CrossRef]
  88. Upadhyay, Nidhi & Gupta, Neeraj. (2024). Diagnosis of fungi affected apple crop disease using improved ResNeXt deep learning model. Multimedia Tools and Applications. 83. 1-20.
  89. Gong, X., & Zhang, S. (2023). A High-Precision Detection Method of Apple Leaf Diseases Using Improved Faster R-CNN. Agriculture, 13(2), 240. [CrossRef]
  90. Zia Ur Rehman, Muhammad & Khan, M. & Ahmed, Fawad & Damaševičius, Robertas & Naqvi, Syed & Nisar, Muhammad & Javed, Kashif. (2021). Recognizing apple leaf diseases using a novel parallel real-time processing framework based on MASK RCNN and transfer learning: An application for smart agriculture. IET Image Processing. 15. [CrossRef]
  91. Gao, F., Fu, L., Zhang, X., Majeed, Y., Li, R., Karkee, M., & Zhang, Q. (2020). Multi-class fruit-on-plant detection for apple in SNAP system using Faster R-CNN. Computers and Electronics in Agriculture, 176, 105634. [CrossRef]
  92. Alwaseela Abdalla et al., "Fine-tuning convolutional neural network with transfer learning for semantic segmentation of ground-level oilseed rape images in a field with high weed pressure," Computers and Electronics in Agriculture, vol. 167, 2019, 105091. [CrossRef]
  93. Assad, A., Bhat, M. R., Bhat, Z. A., Ahanger, A. N., Kundroo, M., Dar, R. A., ... & Dar, B. N. (2023). Apple diseases: detection and classification using transfer learning. Quality Assurance and Safety of Crops & Foods, 15(SP1), 27-37.
  94. Sulaiman, A., Anand, V., Gupta, S., Alshahrani, H., Reshan, M. S. A., Rajab, A., ... & Azar, A. T. (2023). Sustainable apple disease management using an intelligent fine-tuned transfer learning-based model. Sustainability, 15(17), 13228.
  95. Fan, X., Luo, P., Mu, Y., Zhou, R., Tjahjadi, T., & Ren, Y. (2022). Leaf image-based plant disease identification using transfer learning and feature fusion. Computers and Electronics in agriculture, 196, 106892.
  96. Rawat, P., & Singh, S. K. (2024, February). Apple leaf disease detection using transfer learning. In 2024 International Conference on Integrated Circuits and Communication Systems (ICICACS) (pp. 1-6). IEEE.
  97. Özden, C. (2021). Apple leaf disease detection and classification based on transfer learning. Turkish Journal of Agriculture and Forestry, 45(6), 775-783.
  98. Bhat, M. R., Assad, A., Dar, B. N., Ahanger, A. N., Kundroo, M., Dar, R. A., ... & Bhat, Z. A. (2023). Apple diseases: detection and classification using transfer learning. Quality Assurance and Safety of Crops & Foods, 15, 27-37.
  99. Kodors, S., Lacis, G., Sokolova, O., Zhukovs, V., Apeinans, I., & Bartulsons, T. (2021). Apple scab detection using CNN and Transfer Learning.
  100. Rehman, Z. U., Khan, M. A., Ahmed, F., Damaševičius, R., Naqvi, S. R., Nisar, W., & Javed, K. (2021). Recognizing apple leaf diseases using a novel parallel real-time processing framework based on MASK RCNN and transfer learning: An application for smart agriculture. IET Image Processing, 15(10), 2157-2168.
  101. Kumar, A., Nelson, L., & Gomathi, S. (2024, January). Transfer learning of vgg19 for the classification of apple leaf diseases. In 2024 2nd International Conference on Intelligent Data Communication Technologies and Internet of Things (IDCIoT) (pp. 1643-1648). IEEE.
  102. Nagaraju, Y., Swetha, S., & Stalin, S. (2020, December). Apple and grape leaf diseases classification using transfer learning via fine-tuned classifier. In 2020 IEEE International Conference on Machine Learning and Applied Network Technologies (ICMLANT) (pp. 1-6). IEEE.
  103. Wani, O. A., Zahoor, U., Shah, S. Z. A., & Khan, R. (2024). Apple leaf disease detection using transfer learning. Annals of Data Science, 1-10.
  104. Si, H., Wang, Y., Zhao, W., Wang, M., Song, J., Wan, L., ... & Sun, C. (2023). Apple surface defect detection method based on weight comparison transfer learning with MobileNetV3. Agriculture, 13(4), 824.
  105. Chao, X., Sun, G., Zhao, H., Li, M., & He, D. (2020). Identification of apple tree leaf diseases based on deep learning models. Symmetry, 12(7), 1065.
  106. Su, J., Zhang, M., & Yu, W. (2022, April). An identification method of apple leaf disease based on transfer learning. In 2022 7th International Conference on Cloud Computing and Big Data Analytics (ICCCBDA) (pp. 478-482). IEEE.
  107. Mahmud, M. S., He, L., Zahid, A., Heinemann, P., Choi, D., Krawczyk, G., & Zhu, H. (2023). Detection and infected area segmentation of apple fire blight using image processing and deep transfer learning for site-specific management. Computers and Electronics in Agriculture, 209, 107862.
  108. Jesupriya, J., Mageswari, P. U., & Alli, A. (2025, January). Deep Learning-Based Transfer Learning with MobileNetV2 for Crop Disease Detection. In 2025 International Conference on Intelligent and Innovative Technologies in Computing, Electrical and Electronics (IITCEE) (pp. 1-9). IEEE.
  109. Singh, R., Sharma, N., & Gupta, R. (2023, November). Apple leaf disease detection using densenet121 transfer learning model. In 2023 International Conference on Research Methodologies in Knowledge Management, Artificial Intelligence and Telecommunication Engineering (RMKMATE) (pp. 1-5). IEEE.
  110. Hassan, S. M., Maji, A. K., Jasiński, M., Leonowicz, Z., & Jasińska, E. (2021). Identification of plant-leaf diseases using CNN and transfer-learning approach. Electronics, 10(12), 1388.
  111. Polder, G., Blok, P. M., van Daalen, T., Peller, J., & Mylonas, N. (2025). A smart camera with integrated deep learning processing for disease detection in open field crops of grape, apple, and carrot. Journal of Field Robotics. [CrossRef]
  112. Reddy, T. & Rekha, K. (2021). Deep Leaf Disease Prediction Framework (DLDPF) with Transfer Learning for Automatic Leaf Disease Detection. 1408-1415. [CrossRef]
  113. Kumar, A., Nelson, L., & Gomathi, S. (2024). Transfer learning of VGG19 for the classification of apple leaf diseases. In 2024 2nd International Conference on Intelligent Data Communication Technologies and Internet of Things (IDCIoT), Bengaluru, India (pp. 1643-1648). [CrossRef]
  114. Ozden, Cevher. (2021). Apple leaf disease detection and classification based on transfer learning. TURKISH JOURNAL OF AGRICULTURE AND FORESTRY. 45. 775-783. [CrossRef]
  115. Santoso, C., Singadji, M., Purnama, D., Abdel, S., & Kharismawardani, A. (2024). Enhancing Apple Leaf Disease Detection with Deep Learning: From Model Training to Android App Integration. Journal of Applied Data Sciences, 6(1), 377-390. [CrossRef]
  116. Wang, Yunlu & Sun, Fenggang & Wang, Zhijun & Zhou, Zhongchang & Lan, Peng. (2022). Apple Leaf Disease Identification Method Based on Improved YoloV5. [CrossRef]
  117. Rishitha, T., Krishna Mohan, G. (2023). Apple Leaf Disease Prediction Using Deep Learning Technique. In: Jacob, I.J., Kolandapalayam Shanmugam, S., Izonin, I. (eds) Data Intelligence and Cognitive Informatics. Algorithms for Intelligent Systems. Springer, Singapore. [CrossRef]
  118. Zhu, S., Ma, W., Wang, J., Yang, M., Wang, Y., & Wang, C. (2023). EADD-YOLO: An efficient and accurate disease detector for apple leaf using improved lightweight YOLOv5. Frontiers in Plant Science, 14, 1120724. [CrossRef]
  119. Vivek Sharma, Ashish Kumar Tripathi, Himanshu Mittal, "DLMC-Net: Deeper lightweight multi-class classification model for plant leaf disease detection," Ecological Informatics, Vol. 75, 2023, 102025, ISSN 1574-9541. [CrossRef]
  120. Xu W and Wang R (2023) ALAD-YOLO:an lightweight and accurate detector for apple leaf diseases. Front. Plant Sci. 14:1204569. [CrossRef]
  121. Fu, L., Li, S., Sun, Y., Mu, Y., Hu, T., & Gong, H. (2022). Lightweight-convolutional neural network for apple leaf disease identification. Frontiers in Plant Science, 13, 831219.
  122. Li, L., Zhang, S., & Wang, B. (2021). Apple leaf disease identification with a small and imbalanced dataset based on lightweight convolutional networks. Sensors, 22(1), 173.
  123. Sun, H., Xu, H., Liu, B., He, D., He, J., Zhang, H., & Geng, N. (2021). MEAN-SSD: A novel real-time detector for apple leaf diseases using improved light-weight convolutional neural networks. Computers and Electronics in Agriculture, 189, 106379.
  124. Gao, L., Zhao, X., Yue, X., Yue, Y., Wang, X., Wu, H., & Zhang, X. (2024). A Lightweight YOLOv8 Model for Apple Leaf Disease Detection. Applied Sciences, 14(15), 6710.
  125. Wang, Y., Wang, Y., & Zhao, J. (2022). MGA-YOLO: A lightweight one-stage network for apple leaf disease detection. Frontiers in plant science, 13, 927424.
  126. Zhu, X., Li, J., Jia, R., Liu, B., Yao, Z., Yuan, A., ... & Zhang, H. (2022). Lad-net: A novel light weight model for early apple leaf pests and diseases classification. IEEE/ACM transactions on computational biology and bioinformatics, 20(2), 1156-1169.
  127. Zheng, J., Li, K., Wu, W., & Ruan, H. (2023). RepDI: A light-weight CPU network for apple leaf disease identification. Computers and Electronics in Agriculture, 212, 108122.
  128. Wang, B., Yang, H., Zhang, S., & Li, L. (2024). Identification of Multiple Diseases in Apple Leaf Based on Optimized Lightweight Convolutional Neural Network. Plants, 13(11), 1535.
  129. Xu, W., & Wang, R. (2023). ALAD-YOLO: An lightweight and accurate detector for apple leaf diseases. Frontiers in Plant Science, 14, 1204569.
  130. Wang, G., Sang, W., Xu, F., Gao, Y., Han, Y., & Liu, Q. (2025). An enhanced lightweight model for apple leaf disease detection in complex orchard environments. Frontiers in Plant Science, 16, 1545875.
  131. Zeng, W., Pang, J., Ni, K., Peng, P., & Hu, R. (2024). Apple leaf disease detection based on lightweight YOLOv8-GSSW. Applied Engineering in Agriculture, 40(5), 589-598.
  132. Sun, Z., Feng, Z., & Chen, Z. (2024). Highly Accurate and Lightweight Detection Model of Apple Leaf Diseases Based on YOLO. Agronomy, 14(6), 1331.
  133. Zhu, R., Zou, H., Li, Z., & Ni, R. (2022). Apple-Net: A model based on improved YOLOv5 to detect the apple leaf diseases. Plants, 12(1), 169.
  134. Liu, Z., Li, X. An improved YOLOv5-based apple leaf disease detection method. Sci Rep 14, 17508 (2024). [CrossRef]
  135. Li, Fengmei & Zheng, Yuhui & Liu, Song & Sun, Fengbo & Bai, Haoran. (2024). A Multi-objective Apple Leaf Disease Detection Algorithm Based on Improved TPH-YOLOV5. Applied Fruit Science. 66. 1-17. [CrossRef]
  136. Lv, M., & Su, W.-H. (2024). YOLOV5-CBAM-C3TR: an optimized model based on transformer module and attention mechanism for apple leaf disease detection. Frontiers in Plant Science, 14. [CrossRef]
  137. Li, H., Shi, L., Fang, S., & Yin, F. (2023). Real-Time Detection of Apple Leaf Diseases in Natural Scenes Based on YOLOv5. Agriculture, 13(4), 878. [CrossRef]
  138. Praveen Kumar S, Naveen Kumar K, Drone-based apple detection: Finding the depth of apples using YOLOv7 architecture with multi-head attention mechanism, Smart Agricultural Technology, Volume 5, 2023, 100311, ISSN 2772-3755. [CrossRef]
  139. Yan, Chunman & Yang, Kangyi. (2024). FSM-YOLO: Apple leaf disease detection network based on adaptive feature capture and spatial context awareness. Digital Signal Processing. 155. 104770. [CrossRef]
  140. Mathew, Midhun & Mahesh, Therese Yamuna. (2021). Determining The Region of Apple Leaf Affected by Disease Using YOLO V3. 1-4. [CrossRef]
  141. Yan, Bin & Pan, Fan & Lei, Xiaoyan & Liu, Zhijie & Yang, Fuzeng. (2021). A Real-Time Apple Targets Detection Method for Picking Robot Based on Improved YOLOv5. Remote Sensing. 13. 1619. [CrossRef]
  142. Chunman Yan, Kangyi Yang, FSM-YOLO: Apple leaf disease detection network based on adaptive feature capture and spatial context awareness, Digital Signal Processing, Volume 155, 2024, 104770, ISSN 1051-2004. [CrossRef]
  143. Li, F., Zheng, Y., Liu, S. et al. A Multi-objective Apple Leaf Disease Detection Algorithm Based on Improved TPH-YOLOV5. Applied Fruit Science 66, 399–415 (2024). [CrossRef]
  144. A. Haruna, I. A. Badi, L. J. Muhammad, A. Abuobieda and A. Altamimi, "CNN-LSTM Learning Approach for Classification of Foliar Disease of Apple," 2023 1st International Conference on Advanced Innovations in Smart Cities (ICAISC), Jeddah, Saudi Arabia, 2023, pp. 1-6. [CrossRef]
  145. Abeyrathna, R.M.Rasika D. & Nakaguchi, Victor & Minn, Arkar & Ahamed, Tofael. (2023). Apple Position Estimation for Robotic Harvesting Using YOLO and Deep-SORT Algorithms.
  146. Sun, Henan & Xu, Haowei & Bin, Liu & He, Dongjian & He, Jinrong & Zhang, Haixi & Geng, Nan. (2021). MEAN-SSD: A novel real-time detector for apple leaf diseases using improved light-weight convolutional neural networks. Computers and Electronics in Agriculture. 189. 106379. [CrossRef]
  147. Tian, Yunong & Yang, Guodong & Wang, Zhe & Li, En & Liang, Zize. (2019). Detection of Apple Lesions in Orchards Based on Deep Learning Methods of CycleGAN and YOLOV3-Dense. Journal of Sensors. 2019. [CrossRef]
  148. Mahmoud, Yasmin & Sakr, Nehal & Elmogy, Mohammed. (2023). Plant Disease Detection and Classification Using Machine Learning and Deep Learning Techniques: Current Trends and Challenges. 197-217. [CrossRef]
  149. Gou, C., Zafar, S., Hasnain, Z., Aslam, N., Iqbal, N., Abbas, S., Li, H., Li, J., Chen, B., Ragauskas, A. J., & Abbas, M. (2024). Machine and Deep Learning: Artificial Intelligence Application in Biotic and Abiotic Stress Management in Plants. Frontiers in bioscience (Landmark edition), 29(1), 20. [CrossRef]
  150. Hasan, S., Mahbub, R., & Islam, M. (2022). Disease detection of apple leaf with combination of color segmentation and modified DWT. Journal of King Saud University - Computer and Information Sciences, 34(9), 7212–7224. [CrossRef]
  151. A. Bracino, R. S. Concepcion, R. A. R. Bedruz, E. P. Dadios and R. R. P. Vicerra, "Development of a Hybrid Machine Learning Model for Apple (Malus domestica) Health Detection and Disease Classification," 2020 IEEE 12th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM), Manila, Philippines, 2020, pp. 1-6. [CrossRef]
  152. Sharma, M., & Jindal, V. (2023). Approximation techniques for apple disease detection and prediction using computer enabled technologies: A review. Remote Sensing Applications: Society and Environment, 32, 101038.
  153. Logashov, D., Shadrin, D., Somov, A., Pukalchik, M., Uryasheva, A., Gupta, H. P., & Rodichenko, N. (2021, June). Apple trees diseases detection through computer vision in embedded systems. In 2021 IEEE 30th International Symposium on Industrial Electronics (ISIE) (pp. 1-6). IEEE.
  154. Dubey, Shiv Ram & Jalal, Anand. (2016). Apple disease classification using color, texture and shape features from images. Signal Image and Video Processing. 10. 819-826. [CrossRef]
  155. A. Gargade and S. A. Khandekar, "A Review: Custard Apple Leaf Parameter Analysis and Leaf Disease Detection using Digital Image Processing," 2019 3rd International Conference on Computing Methodologies and Communication (ICCMC), Erode, India, 2019, pp. 267-271. [CrossRef]
  156. Li, J., Huang, W., & Guo, Z. (2013). Detection of defects on apple using B-spline lighting correction method. In PIAGENG 2013: Image Processing and Photonics for Agricultural Engineering (Vol. 8761, p. 87610L). SPIE. [CrossRef]
  157. Hossein Azgomi, Fatemeh Roshannia Haredasht, Mohammad Reza Safari Motlagh, "Diagnosis of some apple fruit diseases by using image processing and artificial neural network," Food Control, Vol. 145, 2023, 109484, ISSN 0956-7135. [CrossRef]
  158. Singh, Swati & Gupta, Isha & Gupta, Sheifali & Koundal, Deepika & Aljahdali, Sultan & Mahajan, Shubham & Pandit, Amit. (2021). Deep Learning Based Automated Detection of Diseases from Apple Leaf Images. Computers, Materials & Continua. 71. 1849-1866. [CrossRef]
  159. Kang, H., & Chen, C. (2020). Fruit detection, segmentation and 3D visualisation of environments in apple orchards. Computers and Electronics in Agriculture, 171, 105302. [CrossRef]
  160. Manish Sharma, Vikas Jindal, "Approximation techniques for apple disease detection and prediction using computer enabled technologies: A review," Remote Sensing Applications: Society and Environment, Vol. 32, 2023, 101038, ISSN 2352-9385. [CrossRef]
  161. Qiu, Z., Xu, Y., Chen, C., Zhou, W., & Yu, G. (2024). Enhanced Disease Detection for Apple Leaves with Rotating Feature Extraction. Agronomy, 14(11), 2602. [CrossRef]
  162. Delalieux, Stephanie & van Aardt, Jan & Keulemans, Wannes & Coppin, Pol. (2005). Detection of biotic stress (Venturia inaequalis) in apple trees using hyperspectral analysis.
  163. Knauer, U., Warnemünde, S., Menz, P., Thielert, B., Klein, L., Holstein, K., Runne, M., & Jarausch, W. (2024). Detection of Apple Proliferation Disease Using Hyperspectral Imaging and Machine Learning Techniques. Sensors (Basel, Switzerland), 24(23), 7774. [CrossRef]
  164. Kim, Yunseop & Glenn, D.M. & Park, Johnny & Ngugi, Henry & Lehman, Brian. (2011). Hyperspectral image analysis for water stress detection of apple trees. Computers and Electronics in Agriculture - COMPUT ELECTRON AGRIC. 77. 155-160. [CrossRef]
  165. Shuaibu, Mubarakat & Lee, W. S. & Schueller, John & Gader, Paul & Hong, Young & Kim, Sangcheol. (2018). Unsupervised hyperspectral band selection for apple Marssonina blotch detection. Computers and Electronics in Agriculture. 148. 45-53. [CrossRef]
  166. N. Gorretta, M. Nouri, A. Herrero, A. Gowen and J. -M. Roger, "Early detection of the fungal disease "apple scab" using SWIR hyperspectral imaging," 2019 10th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS), Amsterdam, Netherlands, 2019, pp. 1-4. [CrossRef]
  167. Ye, X., Abe, S. & Zhang, S. Estimation and mapping of nitrogen content in apple trees at leaf and canopy levels using hyperspectral imaging. Precision Agric 21, 198–225 (2020). [CrossRef]
  168. Chandel, Abhilash & Khot, Lav & Sallato, Bernardita. (2021). Apple powdery mildew infestation detection and mapping using high-resolution visible and multispectral aerial imaging technique. Scientia Horticulturae. 287. 110228. [CrossRef]
  169. Karpyshev, Pavel & Ilin, Valery & Kalinov, Ivan & Petrovsky, Alexander & Tsetserukou, Dzmitry. (2021). Autonomous Mobile Robot for Apple Plant Disease Detection based on CNN and Multi-Spectral Vision System. 157-162. [CrossRef]
  170. Blok, P. M., Polder, G., Peller, J., & van Daalen, T. (2022). OPTIMA - RGB colour images and multispectral images (including LabelImg annotations) (Version 1) [Data set]. Zenodo. [CrossRef]
  171. Alexander J. Bleasdale, J. Duncan Whyatt, Classifying early apple scab infections in multispectral imagery using convolutional neural networks, Artificial Intelligence in Agriculture, Volume 15, Issue 1, 2025, Pages 39-51, ISSN 2589-7217. [CrossRef]
  172. Barthel D, Cullinan C, Mejia-Aguilar A, Chuprikova E, McLeod BA, Kerschbamer C, Trenti M, Monsorno R, Prechsl UE, Janik K. Identification of spectral ranges that contribute to phytoplasma detection in apple trees - A step towards an on-site method. Spectrochim Acta A Mol Biomol Spectrosc. 2023 Dec 15;303:123246. Epub 2023 Aug 8. [CrossRef] [PubMed]
  173. Jiang, D., Chang, Q., Zhang, Z., Liu, Y., Zhang, Y., & Zheng, Z. (2023). Monitoring the Degree of Mosaic Disease in Apple Leaves Using Hyperspectral Images. Remote Sensing, 15(10), 2504. [CrossRef]
  174. Jang, S., Han, J., Cho, J., Jung, J., Lee, S., Lee, D., & Kim, J. (2024). Estimation of Apple Leaf Nitrogen Concentration Using Hyperspectral Imaging-Based Wavelength Selection and Machine Learning. Horticulturae, 10(1), 35. [CrossRef]
  175. Barthel, D., Dordevic, N., Fischnaller, S., Kerschbamer, C., Messner, M., Eisenstecken, D., Robatscher, P., & Janik, K. (2021). Detection of apple proliferation disease in Malus × domestica by near infrared reflectance analysis of leaves. Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy, 263, 120178. [CrossRef]
  176. Prechsl, U.E., Mejia-Aguilar, A. & Cullinan, C.B. In vivo spectroscopy and machine learning for the early detection and classification of different stresses in apple trees. Sci Rep 13, 15857 (2023). [CrossRef]
  177. Neware, Rahul. (2019). Comparative Analysis of Land Cover Classification Using ML and SVM Classifier for LISS-iv Data. [CrossRef]
  178. Nguyen, T. T., Vandevoorde, K., Wouters, N., Kayacan, E., De Baerdemaeker, J. G., & Saeys, W. (2016). Detection of red and bicoloured apples on tree with an RGB-D camera. Biosystems Engineering, 146, 33-44.
  179. Yin, Z., Zhao, C., Zhang, W., Guo, P., Ma, Y., Wu, H., ... & Lu, Q. (2025). Nondestructive detection of apple watercore disease content based on 3D watercore model. Industrial Crops and Products, 228, 120888.
  180. Lin, G., Tang, Y., Zou, X., Xiong, J., & Fang, Y. (2020). Color-, depth-, and shape-based 3D fruit detection. Precision Agriculture, 21, 1-17.
  181. Schatzki, T. F., Haff, R. P., Young, R., Can, I., Le, L. C., & Toyofuku, N. (1997). Defect detection in apples by means of X-ray imaging. Transactions of the ASAE, 40(5), 1407-1415.
  182. Tempelaere, A., Van Doorselaer, L., He, J., Verboven, P., Tuytelaars, T., & Nicolai, B. (2023). Deep Learning for Apple Fruit Quality Inspection using X-Ray Imaging. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 552-560).
  183. Liangliang, Y. A. N. G., & Fuzeng, Y. A. N. G. (2011). Apple internal quality classification using X-ray and SVM. IFAC Proceedings Volumes, 44(1), 14145-14150.
  184. He, J., Van Doorselaer, L., Tempelaere, A., Vignero, J., Saeys, W., Bosmans, H., ... & Nicolai, B. (2024). Nondestructive internal disorders detection of ‘Braeburn’apple fruit by X-ray dark-field imaging and machine learning. Postharvest Biology and Technology, 214, 112981.
  185. Delalieux, S., Auwerkerken, A., Verstraeten, W. W., Somers, B., Valcke, R., Lhermitte, S., ... & Coppin, P. (2009). Hyperspectral reflectance and fluorescence imaging to detect scab induced stress in apple leaves. Remote sensing, 1(4), 858-874.
  186. Ariana, D., Guyer, D. E., & Shrestha, B. (2006). Integrating multispectral reflectance and fluorescence imaging for defect detection on apples. Computers and electronics in agriculture, 50(2), 148-161.
  187. Heyens, K., & Valcke, R. (2004, July). Fluorescence imaging of the infection pattern of apple leaves with Erwinia amylovora. In X International Workshop on Fire Blight 704 (pp. 69-74).
  188. Liu, Y., Zhang, Y., Jiang, D., Zhang, Z., & Chang, Q. (2023). Quantitative assessment of apple mosaic disease severity based on hyperspectral images and chlorophyll content. Remote Sensing, 15(8), 2202.
  189. Baranowski, P., Mazurek, W., Wozniak, J., & Majewska, U. (2012). Detection of early bruises in apples using hyperspectral data and thermal imaging. Journal of Food Engineering, 110(3), 345-355.
  190. M. M. U. Saleheen, M. S. Islam, R. Fahad, M. J. B. Belal and R. Khan, "IoT-Based Smart Agriculture Monitoring System," 2022 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET), Kota Kinabalu, Malaysia, 2022, pp. 1–6. [CrossRef]
  191. Neware, Rahul & Khan, Amreen. (2018). SOFTWARE DEVELOPMENT FOR SATELLITE DATA ANALYTICS IN AGRICULTURE AREAS.
  192. Jiang, He & Li, Xiaoru & Safara, Fatemeh. (2021). IoT-based Agriculture: Deep Learning in Detecting Apple Fruit Diseases. Microprocessors and Microsystems. 104321. [CrossRef]
  193. R. Neware and A. Khan, "Survey on Classification Techniques Used in Remote Sensing for Satellite Images," 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, 2018, pp. 1860–1863. [CrossRef]
  194. Ryo, M. (2022). Explainable artificial intelligence and interpretable machine learning for agricultural data analysis. Artificial Intelligence in Agriculture, 6, 257–265. [CrossRef]
  195. Coussement, K., Abedin, M. Z., Kraus, M., Maldonado, S., & Topuz, K. (2024). Explainable AI for enhanced decision-making. Decision Support Systems, 184, 114276. [CrossRef]
  196. Ali, S., Abuhmed, T., El-Sappagh, S., Muhammad, K., Alonso-Moral, J. M., Confalonieri, R., Guidotti, R., Del Ser, J., Díaz-Rodríguez, N., & Herrera, F. (2023). Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence. Information Fusion, 99, 101805. [CrossRef]
  197. El Sakka, M., Ivanovici, M., Chaari, L., & Mothe, J. (2025). A Review of CNN Applications in Smart Agriculture Using Multimodal Data. Sensors, 25(2), 472. [CrossRef]
  198. Kondaveeti, Hari & Vatsavayi, Valli Kumari & Mangapathi, Srileakhana & Yasaswini, Reddy. (2023). Lightweight Deep Learning: Introduction, Advancements, and Applications. [CrossRef]
  199. Wiley, Victor & Lucas, Thomas. (2018). Computer Vision and Image Processing: A Paper Review. International Journal of Artificial Intelligence Research. 2. 22. [CrossRef]
  200. Isaac Christopher, L. (2023). The Internet of Things: Connecting a Smarter World.
Figure 2. Working of SVM.
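To make the SVM workflow depicted in Figure 2 concrete, the sketch below fits a linear SVM to synthetic two-class feature data and reports the learned separating hyperplane. The toy feature values, class labels, and scikit-learn parameters are illustrative assumptions, not the data behind the figure.

```python
# Minimal sketch (assumed toy data, not the review's dataset) of the SVM idea
# behind Figure 2: find the maximum-margin hyperplane w.x + b = 0 that
# separates "healthy" from "diseased" feature vectors.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)
healthy = rng.normal(loc=[0.0, 0.0], scale=0.6, size=(50, 2))   # class 0
diseased = rng.normal(loc=[3.0, 3.0], scale=0.6, size=(50, 2))  # class 1
X = np.vstack([healthy, diseased])
y = np.array([0] * 50 + [1] * 50)

svm = SVC(kernel="linear", C=1.0).fit(X, y)
print("hyperplane w:", svm.coef_[0], " bias b:", svm.intercept_[0])
print("number of support vectors:", len(svm.support_vectors_))
print("prediction for point (1.5, 1.5):", svm.predict([[1.5, 1.5]])[0])
```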
Table 1. Traditional Machine Learning Techniques with Accuracy, Advantages and Limitations.

| Algorithm used | Feature type used | Accuracy (%) | Advantages | Limitations |
| --- | --- | --- | --- | --- |
| Naive Bayes | Texture (GLCM) | 91–96.43 | Simple, fast, effective on small data | Assumes feature independence |
| Support Vector Machine (SVM) | Texture, color, shape | 87–96 | Robust, good generalization | Computationally expensive for large datasets |
| k-Nearest Neighbors (k-NN) | Texture, color | 90 | Simple, interpretable | Sensitive to noise, slow on large data |
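As a rough illustration of the classical pipelines summarized in Table 1, the sketch below extracts GLCM texture descriptors and compares Naive Bayes, SVM, and k-NN classifiers under cross-validation. The synthetic leaf patches, feature choices, and hyperparameters are assumptions; the sketch does not reproduce any specific study's reported accuracy.

```python
# Minimal sketch (synthetic patches and assumed hyperparameters) of the
# classical pipeline behind Table 1: GLCM texture features followed by
# Naive Bayes, SVM, and k-NN classification. Real studies use labelled
# leaf images rather than the random patches generated here.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def glcm_features(gray_patch):
    """Return GLCM contrast/homogeneity/energy/correlation for an 8-bit grayscale patch."""
    glcm = graycomatrix(gray_patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(60, 64, 64), dtype=np.uint8)  # stand-in leaf patches
labels = rng.integers(0, 3, size=60)                               # e.g. 0=healthy, 1=scab, 2=rust

X = np.vstack([glcm_features(p) for p in patches])
for name, clf in [("Naive Bayes", GaussianNB()),
                  ("SVM", SVC(kernel="rbf", C=10)),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
    print(name, "CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```

With real leaf images, the random patches would be replaced by segmented, grayscale-converted leaf regions, and the reported cross-validation scores would correspond to the accuracy ranges listed in Table 1.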