Results and Discussion
We built our neural networks using transfer learning. Five pre-trained architectures were evaluated: MobileNetV2, MobileNetV3, ResNet50, VGG-19, and DenseNet201.
Table 03.
Accuracy comparison of several models.
| Pre-trained Models | Test Accuracy |
| --- | --- |
| ResNet-50 | 96.49% |
| VGG-19 | 87% |
| MobileNet-v2 | 86.50% |
| MobileNet-v3 | 89% |
| DenseNet201 | 95.49% |
ResNet50 outperformed the other models, attaining a test accuracy of 96.49%.
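The transfer-learning setup described above can be sketched as follows. This is a minimal illustration assuming a TensorFlow/Keras pipeline with 224×224 RGB inputs and five output classes; the exact input size, optimizer, and head layers used in this work may differ.

```python
import tensorflow as tf

NUM_CLASSES = 5  # powdery_mildery, healthy, golmachi, bacterial_canker, anthracnose

# Load ResNet50 pre-trained on ImageNet, without its original classification head.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
base.trainable = False  # freeze the convolutional backbone for transfer learning

# Attach a new head for the five leaf-disease classes.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

The same recipe applies to the other backbones by swapping `ResNet50` for, e.g., `MobileNetV2` or `DenseNet201` from `tf.keras.applications`.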
Figure 12 shows the training and validation accuracy/loss curves for the ResNet50 model; the blue lines represent training metrics, while the orange lines show validation results.
Figure 07 shows the confusion matrix, which indicates that most diseases were correctly identified. Some misclassifications occurred: for example, powdery_mildery was twice misclassified as golmachi, and healthy was confused with powdery_mildery and golmachi. Despite these minor errors, overall performance was strong. The Precision, Recall, and F1-score metrics confirm the model's high accuracy, with class labels 0 to 4 representing powdery_mildery, healthy, golmachi, bacterial_canker, and anthracnose, respectively.
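As an illustration of how such a confusion matrix is tallied, the sketch below builds one from true and predicted label lists. The counts here are made up for the example and are not the values reported in Figure 07.

```python
# Labels follow the paper's ordering: 0=powdery_mildery, 1=healthy,
# 2=golmachi, 3=bacterial_canker, 4=anthracnose.
NUM_CLASSES = 5

def confusion_matrix(y_true, y_pred, num_classes=NUM_CLASSES):
    """Return matrix[i][j] = count of samples with true class i predicted as j."""
    matrix = [[0] * num_classes for _ in range(num_classes)]
    for t, p in zip(y_true, y_pred):
        matrix[t][p] += 1
    return matrix

# Toy example: two powdery_mildery (0) samples wrongly predicted as golmachi (2).
y_true = [0, 0, 0, 1, 2, 3, 4]
y_pred = [0, 2, 2, 1, 2, 3, 4]
cm = confusion_matrix(y_true, y_pred)
print(cm[0][2])  # -> 2 (off-diagonal cell: true class 0 predicted as class 2)
```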
Table 04.
Classification report for ResNet50.
| Class | Precision | Recall | F1-score | Support |
| --- | --- | --- | --- | --- |
| 0 | 0.93 | 0.95 | 0.94 | 40 |
| 1 | 1.00 | 0.93 | 0.96 | 40 |
| 2 | 0.91 | 0.97 | 0.94 | 40 |
| 3 | 1.00 | 1.00 | 1.00 | 40 |
| 4 | 1.00 | 0.97 | 0.99 | 40 |
| Accuracy | | | 0.96 | 200 |
| Macro avg | 0.97 | 0.97 | 0.97 | 200 |
| Weighted avg | 0.97 | 0.96 | 0.97 | 200 |
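The per-class figures reported in the classification reports are standard precision, recall, and F1 values derived from the confusion matrix. A minimal sketch of that derivation follows; the 3×3 matrix here is illustrative only, not the one in Figure 07.

```python
def per_class_metrics(cm, c):
    """Precision, recall, and F1 for class index c from confusion matrix cm
    (rows = true class, columns = predicted class)."""
    tp = cm[c][c]
    fp = sum(cm[r][c] for r in range(len(cm)) if r != c)  # predicted c, wrongly
    fn = sum(cm[c][r] for r in range(len(cm)) if r != c)  # true c, missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Illustrative 3-class matrix.
cm = [[8, 1, 1],
      [0, 9, 1],
      [2, 0, 8]]
p, r, f1 = per_class_metrics(cm, 0)
print(round(p, 2), round(r, 2), round(f1, 2))  # -> 0.8 0.8 0.8
```

The macro average is the unweighted mean of these per-class values, while the weighted average weights each class by its support (40 samples per class here).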
Figure 13 shows the training and validation accuracy/loss curves for the MobileNetV2 model; the blue lines show training metrics, while the orange lines indicate validation results.
Figure 08 shows the confusion matrix, which reveals that MobileNetV2 misclassified several samples. For instance, powdery_mildery was frequently predicted as golmachi, and healthy leaves were confused with powdery_mildery and golmachi. Similarly, anthracnose was mistaken for golmachi and bacterial_canker. Overall, the model performed worse than ResNet-50. The Precision, Recall, and F1-score values reflect this lower classification accuracy, with class labels 0 to 4 representing powdery_mildery, healthy, golmachi, bacterial_canker, and anthracnose, respectively.
Table 05.
Classification report for MobileNetV2.
| Class | Precision | Recall | F1-score | Support |
| --- | --- | --- | --- | --- |
| 0 | 0.79 | 0.82 | 0.80 | 40 |
| 1 | 1.00 | 0.80 | 0.89 | 40 |
| 2 | 0.71 | 0.88 | 0.79 | 40 |
| 3 | 0.93 | 1.00 | 0.96 | 40 |
| 4 | 0.97 | 0.82 | 0.89 | 40 |
| Accuracy | | | 0.86 | 200 |
| Macro avg | 0.88 | 0.86 | 0.87 | 200 |
| Weighted avg | 0.88 | 0.86 | 0.87 | 200 |
Figure 14 shows the training and validation accuracy/loss for the VGG-19 model, where blue lines represent training data and orange lines indicate validation data.
Figure 09 shows the confusion matrix, which reveals weaker classification performance. Notable errors include misclassifying golmachi as powdery_mildery (11 times) and several healthy samples as powdery_mildery or golmachi. Powdery_mildery and anthracnose were also confused with each other. Overall, the model performed considerably worse than ResNet-50. The Precision, Recall, and F1-score metrics confirm this, with labels 0 to 4 representing powdery_mildery, healthy, golmachi, bacterial_canker, and anthracnose, respectively.
Table 06.
Classification report for VGG-19.
| Class | Precision | Recall | F1-score | Support |
| --- | --- | --- | --- | --- |
| 0 | 0.69 | 0.95 | 0.80 | 40 |
| 1 | 1.00 | 0.80 | 0.89 | 40 |
| 2 | 0.89 | 0.62 | 0.74 | 40 |
| 3 | 0.93 | 1.00 | 0.96 | 40 |
| 4 | 0.93 | 0.97 | 0.95 | 40 |
| Accuracy | | | 0.87 | 200 |
| Macro avg | 0.89 | 0.87 | 0.87 | 200 |
| Weighted avg | 0.89 | 0.87 | 0.87 | 200 |
Figure 15 shows the training and validation accuracy/loss curves for the MobileNetV3 model, with blue lines showing training metrics and orange lines indicating validation results.
Figure 10 shows the confusion matrix, which reveals several misclassifications. For example, powdery_mildery was confused with healthy, golmachi, and bacterial_canker; healthy leaves were misclassified as powdery_mildery and golmachi; and anthracnose was frequently predicted as golmachi. Overall, the model performed worse than ResNet-50. The Precision, Recall, and F1-score values reflect this, with labels 0 to 4 corresponding to powdery_mildery, healthy, golmachi, bacterial_canker, and anthracnose, respectively.
Table 07.
Classification report for MobileNetV3.
| Class | Precision | Recall | F1-score | Support |
| --- | --- | --- | --- | --- |
| 0 | 0.82 | 0.80 | 0.81 | 40 |
| 1 | 0.97 | 0.85 | 0.91 | 40 |
| 2 | 0.77 | 0.93 | 0.84 | 40 |
| 3 | 0.93 | 0.97 | 0.95 | 40 |
| 4 | 1.00 | 0.90 | 0.95 | 40 |
| Accuracy | | | 0.89 | 200 |
| Macro avg | 0.90 | 0.89 | 0.89 | 200 |
| Weighted avg | 0.90 | 0.89 | 0.89 | 200 |
Figure 16 shows the training and validation accuracy/loss curves for the DenseNet201 model; blue lines show training metrics, while orange lines represent validation results.
Figure 11 shows the confusion matrix, which reveals several misclassifications, such as powdery_mildery identified as golmachi and healthy samples mislabeled as powdery_mildery or golmachi. Anthracnose was also confused with golmachi. Overall, DenseNet201 did not perform quite as well as ResNet-50. The Precision, Recall, and F1-score results support this, with class labels 0 to 4 representing powdery_mildery, healthy, golmachi, bacterial_canker, and anthracnose, respectively.
Table 08.
Classification report for DenseNet201.
| Class | Precision | Recall | F1-score | Support |
| --- | --- | --- | --- | --- |
| 0 | 0.93 | 0.97 | 0.95 | 40 |
| 1 | 1.00 | 0.82 | 0.90 | 40 |
| 2 | 0.87 | 1.00 | 0.93 | 40 |
| 3 | 1.00 | 1.00 | 1.00 | 40 |
| 4 | 1.00 | 0.97 | 0.99 | 40 |
| Accuracy | | | 0.95 | 200 |
| Macro avg | 0.96 | 0.95 | 0.95 | 200 |
| Weighted avg | 0.96 | 0.95 | 0.95 | 200 |