Preprint
Article

Diagnosis of Parkinson’s Disease Using Convolutional Neural Network by Hand Drawing Images


This version is not peer-reviewed

Submitted:

22 December 2023

Posted:

26 December 2023

Abstract
Neurodegenerative illnesses such as Parkinson’s disease (PD) have a substantial impact on the overall well-being of those affected. This study investigates and contrasts the capabilities of convolutional neural networks (CNNs) in detecting PD from hand-drawn spiral and wave images. Four pre-trained CNN models, MobileNet, ResNet50, EfficientNet-B1, and InceptionV3, were employed to classify the images. The findings demonstrate that MobileNet surpasses the other architectures, as evidenced by its F1-scores for the four classes: Spiral Normal (0.87), Spiral Parkinson (0.86), Wave Normal (0.97), and Wave Parkinson (0.97). MobileNet also achieved a remarkable accuracy of 0.92 in diagnosing PD, demonstrating its efficacy in extracting features from images. These results strengthen the application of deep learning methods to the early detection of PD and can help indicate the effectiveness of patient therapy and exercise, promising better patient outcomes through timely intervention and treatment.
Keywords: 
Subject: Public Health and Healthcare  -   Nursing

1. Introduction

Parkinson's disease (PD) is a neurological disorder usually characterized by symptoms including memory loss, cognitive decline, muscle weakness, nervousness, and tremor [1, 2]. The exact etiology of various rigidity syndromes, such as those related to trauma, inflammation, tumors, and drug use, remains uncertain, whereas the pathogenesis of Parkinson's disease is better established [3]. In addition, Parkinson's symptoms can be triggered by exposure to certain chemicals, indicating that the surrounding environment also influences the development of the disease [4]. This study takes advantage of the fact that tremor and muscle rigidity, the two most prevalent signs of Parkinson's disease, have a direct effect on the visual appearance of hand-drawn spirals and waves [5, 6].
Spiral and wave hand drawings have been proposed as non-invasive tests that can measure motor dysfunction in PD [7, 8]. Identifying PD from hand-drawn tasks is practical and straightforward for diagnosis because both sensory and motor symptoms may be present. Because of their sluggish motions and poor brain-hand coordination, people with Parkinson's disease frequently draw spirals and waves that are not precisely spiral or wave shaped [9]. The speed and pressure of the pen used to draw spirals have been found to be lower among persons with more severe Parkinson's disease [10]. Furthermore, spiral drawing has been utilized to evaluate the effect of therapy on the execution of motor functions [11]. Wave handwriting analysis has been proposed as a complementary approach to standard clinical evaluations, offering the potential to support earlier diagnosis of PD by identifying subtle signs and manifestations of the disease [12]. Additionally, the diagnosis of Parkinson's disease is strictly clinical and does not require laboratory testing; it is often made using the eyes, ears, and hands [13]. However, advancements in technology, especially artificial intelligence (AI), have facilitated the development of computer-based systems that leverage handwriting patterns as potential biomarkers for modeling PD.
AI provides substantial assistance in detecting diseases based on age-related activities. Hand drawing combined with deep learning has gained significant interest in recent years for the classification of Parkinson's disease. Convolutional neural networks (CNNs) have shown great potential in several medical applications, including accurately diagnosing Parkinson's disease [14-17]. This illustrates the broad scope of deep learning techniques for identifying and classifying the disease. CNNs have been used extensively for the classification of hand gestures and motions and have proven highly accurate in discriminating between drawings created by individuals with Parkinson's disease and those created by healthy individuals. One study developed a CNN-based spiral image classifier to detect early-stage PD with 85% accuracy [18]. However, there are few comparisons between spiral and wave categories classified by artificial intelligence, and such comparisons are important to support clinical diagnostic work for each category.
Furthermore, comparing CNN models is crucial for identifying the most suitable model for diagnosing PD from hand-drawn tasks [19]. The need for accurate and reliable diagnostic tools for PD is the reason this comparison is so important. Moreover, model performance may differ depending on specific features of the input data and the difficulty of the task [7]. Accurate classification of PD can be achieved with these methodologies, which exploit the ability of deep learning to extract relevant features from hand-drawn data.
The importance of comparing CNN models for medical diagnosis has been highlighted by several recently published papers. One study underlined the significance of adapting existing models to reduce training time, as well as the utilization of pre-trained CNNs through transfer learning and fine-tuning [20]. Similarly, the clinical value of CNN models was verified by comparing them with established guidelines in plantar pressure detection of foot problems [21]. In addition, broad performance evaluations of deep-learned, hand-crafted, and fused features with deep and traditional models have been emphasized in medical settings [22].
This study compares pre-trained CNN models, MobileNet, ResNet50, EfficientNet-B1, and InceptionV3, for automatic classification of PD hand-drawn images. MobileNet’s efficiency in mobile and embedded vision applications makes it suitable for limited computational resources, such as mobile devices used in diagnostic settings [23]. ResNet50 is recognized for its high classification accuracy, which makes it a strong candidate for tasks where precision is crucial, such as medical image classification [24]. EfficientNet-B1’s demonstrated accuracy and efficiency make it a promising option for classifying Parkinson’s hand-drawn images, especially considering its performance compared to other architectures [25]. InceptionV3’s efficient use of model parameters makes it a valuable contender, particularly in scenarios where computational resources must be used optimally [26]. Thus, comparing CNN models is crucial for identifying the most suitable model for diagnosing PD from hand-drawn tasks. This comparison allows for the evaluation of model performance, generalizability, and suitability for specific clinical applications, ultimately contributing to the development of accurate and reliable diagnostic tools for PD.

2. Materials and Methods

2.1. Image Dataset

Data used in this study were obtained from a publicly available dataset of spiral and wave drawings produced by healthy subjects and by patients with Parkinson’s disease (https://www.kaggle.com/datasets/kmader/parkinsons-drawings/). The dataset originates from the research by Zham et al. (2017) [27]. It includes 55 subjects across four classes: spiral normal, wave normal, spiral parkinson, and wave parkinson, and contains 204 images in total. We divided the dataset into 144 training images and 60 validation images. Data augmentation was used to compensate for the limited amount of data, since image augmentation improves deep learning training results [28, 29]. Therefore, this study applied augmentations such as rotation of 15°, zoom range of 0.2, width shift range of 0.2, and height shift range of 0.2 to simulate real-life variations in hand-drawn images.
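For illustration, a minimal sketch of how such an augmentation pipeline can be configured with TensorFlow/Keras is shown below. The `ImageDataGenerator` settings mirror the values reported above, while the directory path, the rescaling step, and the use of `flow_from_directory` are assumptions about the workflow rather than details reported in this study.

```python
# Hedged sketch: augmentation settings matching those reported above
# (rotation 15 degrees, zoom 0.2, width/height shift 0.2), assuming a
# TensorFlow/Keras workflow and a hypothetical "dataset/train" folder
# with one sub-directory per class.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,        # assumed normalization of pixel values to [0, 1]
    rotation_range=15,        # random rotation up to 15 degrees
    zoom_range=0.2,           # random zoom in/out by up to 20%
    width_shift_range=0.2,    # random horizontal shift up to 20% of width
    height_shift_range=0.2,   # random vertical shift up to 20% of height
)

train_generator = train_datagen.flow_from_directory(
    "dataset/train",          # hypothetical path; adjust to the actual layout
    target_size=(224, 224),   # image size used in this study
    color_mode="rgb",         # color mode used in this study
    batch_size=8,             # batch size used in this study
    class_mode="categorical", # four classes: spiral/wave x normal/parkinson
)
```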

2.2. Convolutional Neural Networks

In this study, we used models pre-trained on ImageNet. Pre-trained models are neural network models trained on large benchmark datasets; during training on ImageNet they learn to extract general features and patterns from diverse visual data [30]. Pre-trained CNNs serve as a valuable resource for transfer learning in computer vision tasks, enabling the reuse of learned visual representations from ImageNet for diverse applications, including medical image analysis, disease diagnosis, and object recognition [31]. We used ResNet50, MobileNet, EfficientNet-B1, and InceptionV3. The training parameters were a batch size of 8, an image size of 224 × 224, and RGB color mode. All experiments were carried out on Google Colaboratory with a T4 GPU runtime. ResNet50, MobileNet, EfficientNet-B1, and InceptionV3 are popular CNN architectures widely used in various applications, including image classification, medical image analysis, and disease diagnosis [32]. Each of these architectures has unique characteristics and design principles that differentiate them from one another [24].
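As an illustration of the transfer-learning setup just described (ImageNet weights, batch size 8, 224 × 224 RGB inputs), the sketch below loads a MobileNet backbone and attaches a four-class classification head. The frozen backbone, dropout layer, and choice of optimizer are assumptions for the sketch rather than details reported in this paper, and the same pattern applies to ResNet50, EfficientNet-B1, and InceptionV3 by swapping the base-model class.

```python
# Hedged sketch of transfer learning from ImageNet with TensorFlow/Keras.
import tensorflow as tf
from tensorflow.keras.applications import MobileNet

# Load the ImageNet-pretrained feature extractor without its 1000-class head.
base = MobileNet(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # assumption: keep pre-trained weights frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),                    # hypothetical regularization
    tf.keras.layers.Dense(4, activation="softmax"),  # Spiral/Wave x Normal/Parkinson
])

model.compile(
    optimizer="adam",                  # assumed optimizer; not reported in the paper
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# model.fit(train_generator, validation_data=val_generator, epochs=...)  # epoch count not reported
```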
ResNet50 is a CNN architecture renowned for its use of residual blocks, which enable the training of deep neural networks while effectively addressing the vanishing gradient problem (Figure 1). The model contains 50 layers and is extensively employed for image classification tasks owing to its capacity to capture complex elements within images [33]. MobileNet was developed explicitly for mobile and embedded vision applications to offer a compact and highly efficient architecture (Figure 2). It employs depthwise separable convolutions to decrease computational cost while preserving high precision, rendering it appropriate for resource-constrained contexts [34]. EfficientNet-B1 belongs to the EfficientNet family of models created using neural architecture search to improve accuracy and efficiency over previous CNNs (Figure 3). These models scale depth, width, and resolution in a balanced manner to maximize performance [35]. InceptionV3 is a CNN architecture built on inception modules (Figure 4), which enable the simultaneous analysis of image information at various spatial scales. The Inception family attained exceptional performance in the ImageNet Large-Scale Visual Recognition Challenge, and InceptionV3 has since been extensively utilized in various visual recognition tasks [36].
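Among these design choices, the computational advantage of MobileNet's depthwise separable convolutions is the easiest to quantify. The snippet below compares the parameter count of a standard 3 × 3 convolution with that of a depthwise separable one on the same hypothetical input; it is an illustrative comparison under assumed layer sizes, not code from this study.

```python
# Hedged illustration: parameter counts of a standard convolution versus a
# depthwise separable convolution mapping 64 channels to 128 channels (3x3 kernel).
import tensorflow as tf

inputs = tf.keras.Input(shape=(224, 224, 64))  # hypothetical feature map

standard = tf.keras.Model(
    inputs, tf.keras.layers.Conv2D(128, 3, padding="same")(inputs))
separable = tf.keras.Model(
    inputs, tf.keras.layers.SeparableConv2D(128, 3, padding="same")(inputs))

print("standard conv parameters: ", standard.count_params())   # 3*3*64*128 + 128 = 73,856
print("separable conv parameters:", separable.count_params())  # 3*3*64 + 64*128 + 128 = 8,896
```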
These network architectures are distinguished by their design ideas, computational efficiency, and performance characteristics. ResNet50 is renowned for its deep architecture and residual connections, MobileNet for its efficient and lightweight structure, EfficientNet-B1 for its well-balanced scaling, and InceptionV3 for its multi-scale parallel processing. Each architecture possesses unique advantages and may be suited for classifying PD.
In this study, we used four performance indicators. Accuracy quantifies the proportion of correctly identified examples out of the total instances [37]. Precision quantifies the ratio of correct positive predictions to all positive predictions generated by the model; it is calculated by dividing the number of true positives by the sum of true positives and false positives [38]. Recall, also referred to as sensitivity, is the ratio of correctly predicted positive instances to the total number of positive instances in the dataset; it is calculated by dividing the number of true positives by the sum of true positives and false negatives [39]. The F1-score, the harmonic mean of precision and recall, offers a balanced measure between these two metrics and is especially valuable when working with imbalanced datasets [40]. The formulas are given in the equations below:
$$\text{Precision} = \frac{TP}{TP + FP} \tag{1}$$
$$\text{Recall} = \frac{TP}{TP + FN} \tag{2}$$
$$\text{F1-Score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \tag{3}$$
In Equations (1) and (2), true positives (TP) are instances labeled as positive that are also classified as positive, while true negatives (TN) are instances labeled as negative that are classified as negative. False positives (FP) are instances classified as positive but labeled as negative in the dataset, and false negatives (FN) are instances classified as negative but labeled as positive in the dataset.
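Accuracy, the fourth indicator, is simply the fraction of correct predictions, (TP + TN) / (TP + TN + FP + FN). A brief sketch of how these per-class metrics can be computed on the validation set is given below using scikit-learn; the label arrays `y_true` and `y_pred` are placeholders standing in for the validation labels and the argmax of the model outputs.

```python
# Hedged sketch: per-class precision, recall, and F1-score for the four classes,
# assuming integer-encoded ground-truth labels (y_true) and model predictions (y_pred).
from sklearn.metrics import accuracy_score, classification_report

class_names = ["Spiral Normal", "Spiral Parkinson", "Wave Normal", "Wave Parkinson"]

# Placeholder labels; in practice these come from the validation generator and model.predict.
y_true = [0, 1, 2, 3, 0, 1, 2, 3]
y_pred = [0, 1, 2, 3, 0, 0, 2, 3]

print("Accuracy:", accuracy_score(y_true, y_pred))
print(classification_report(y_true, y_pred, target_names=class_names))
```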

3. Results

In this study, ResNet50, MobileNet, EfficientNet-B1, and InceptionV3 were trained for multiclass classification in the context of Parkinson's disease (PD) detection. The models covered four categories: Spiral Normal, Spiral Parkinson, Wave Normal, and Wave Parkinson, reflecting the diverse manifestations of PD-related motor impairment in drawing patterns. Covering all four categories allowed a thorough evaluation of each model's ability to distinguish between normal and Parkinsonian drawings, contributing to the robustness and reliability of the classification process. The findings include a detailed analysis of each model's performance across the specified categories, shedding light on their respective strengths and limitations and providing insight into the potential of deep learning models to capture the nuanced features of PD-related drawing patterns, ultimately contributing to the development of accurate and reliable diagnostic tools for PD.

3.1. Performances of Deep Learning

The study compared the effectiveness and performance of the four classification models on hand-drawn images for PD classification. According to the results presented in Table 1, the models achieved accuracies ranging from 0.80 to 0.92. The accuracy comparison depicted in Figure 5 highlights that MobileNet exhibited superior accuracy compared to the other models; the accuracy gap between MobileNet and ResNet50, EfficientNet-B1, and InceptionV3 ranged from approximately 0.09 to 0.12.

3.2. Precision, Recall and F1-Score

Table 2 presents the precision, recall, and F1-score for the four classes under the four models, providing valuable insight into their performance in classifying Parkinson's disease-related drawing patterns. Precision scores ranged from 0.74 for ResNet50 to a perfect 1.00 for MobileNet, as depicted in Figure 6. This indicates that MobileNet exhibited superior precision in identifying true positive predictions, reflecting its ability to minimize false positives and enhance the overall precision of PD classification.
Furthermore, the recall scores illustrated in Figure 7 spanned from 0.67 for ResNet50 to 1.00 for MobileNet, signifying the models' abilities to capture true positive instances while minimizing false negatives. MobileNet's perfect recall of 1.00 on the Wave Normal class indicates its proficiency in identifying all relevant instances of that class, highlighting its robust performance across the PD-related drawing patterns.
Additionally, the F1-scores shown in Figure 8 compare the four classes under the four models, with values ranging from 0.77 for ResNet50 to 0.97 for MobileNet. The F1-score, which considers both precision and recall, provides a comprehensive assessment of a model's ability to balance the two, and MobileNet exhibited a notably high F1-score. This indicates that MobileNet achieved a harmonious balance between precision and recall, resulting in high overall performance in classifying PD-related drawing patterns.

4. Discussion

Based on our results, a comparison of MobileNet, ResNet50, EfficientNet-B1, and InceptionV3 in classifying PD from the provided image dataset shows that MobileNet outperforms the other architectures, achieving an impressive accuracy of 0.92 in PD diagnosis. Furthermore, MobileNet was superior in classifying the four classes, with F1-scores ranging from 0.86 to 0.97. EfficientNet-B1 matched MobileNet in two classes, Spiral Normal and Spiral Parkinson, with F1-scores of 0.86-0.87. This research also shows the importance of transfer learning from ImageNet as a supporting and fine-tuning strategy for training CNN models; these pre-trained weights can contribute to MobileNet’s high accuracy in PD classification tasks [41]. Additionally, this work highlights the high classification results achieved by pre-trained CNNs, indicating their effectiveness in disease classification tasks [29].
Moreover, the study's incorporation of augmentation techniques, such as rotation by 15°, zoom range of 0.2, width shift range of 0.2, and height shift range of 0.2, contributed to the enhanced performance of the classification models. Augmentation techniques play a crucial role in expanding the diversity and variability of the training dataset, thereby enabling the models to learn robust and generalized features. By introducing variations in the training data through augmentation, the models become more adept at capturing and recognizing patterns, leading to improved accuracy and performance in PD classification. The augmentation techniques effectively enriched the training dataset, enabling the models to better adapt to variations and nuances in the hand-drawn images, ultimately contributing to the higher accuracy observed, particularly in the case of MobileNet [42].
A comparison of MobileNet, ResNet50, EfficientNet-B1, and InceptionV3 in accurately classifying PD from the provided image dataset highlights the potential of MobileNet as a promising architecture for PD diagnosis. MobileNet achieved the highest accuracy of 0.92, while ResNet50 achieved only 0.80 (Figure 5). MobileNet is explicitly designed for mobile and embedded vision applications, emphasizing efficiency without compromising performance [43]. ResNet50, on the other hand, is a deep residual network that focuses on residual function learning, making it suitable for complex image recognition tasks [33]. The performance gap between MobileNet and ResNet50 in PD classification aligns with the findings of Thu et al. (2023), who showed that a pre-trained MobileNet outperformed ResNet50 in pedestrian classification [44].
Based on Table 2, MobileNet’s precision, recall, and F1-score are above 0.80 for Parkinson’s disease classification from the hand-drawn image dataset, which can be attributed to several factors. MobileNet’s performance in this context is in line with the success of pre-trained deep learning models in various medical and image classification tasks with CNNs. Kaur et al. (2021) explored a CNN model based on magnetic resonance imaging of PD and achieved 89.23% accuracy [45], indicating the potential of deep learning approaches to accurately identify PD from such image data. Additionally, Fan and Sun (2022) explored the use of CNNs for early detection of PD from drawing movements and achieved 85% accuracy, further highlighting the applicability of deep learning techniques in this domain [18].
Additionally, MobileNet’s success in achieving high performance can be attributed to its architecture and feature extraction capabilities [46, 47]. MobileNet can effectively extract relevant features from hand-drawn images and distinguish patterns associated with Parkinson’s disease, which contributes to its high precision and recall scores. Transfer learning approaches, as discussed in the work of Baghdadi et al. (2022) [48], may also play an important role in improving the performance of MobileNet for Parkinson’s disease classification. Transfer learning allows a model to leverage knowledge gained from a source task to improve learning in a related target task, which is especially beneficial when working with limited datasets, such as hand-drawn images [49].
The utilization of the MobileNet model for diagnosing PD from handwriting, particularly spiral and wave patterns, holds significant promise for future applications. MobileNet, characterized by its lightweight and efficient architecture, has been widely recognized for its suitability in embedded vision applications, making it well-suited for processing handwriting data obtained from mobile devices [50]. Its efficiency, achieved through depthwise separable convolutions, enables the development of models that can effectively analyze and classify handwriting patterns associated with PD, thereby contributing to early detection and monitoring of the disease [51, 52]. Furthermore, the use of MobileNet-based models in conjunction with transfer learning techniques offers the potential to enhance the computational efficiency and accuracy of PD diagnosis from handwriting data, facilitating the integration of this approach into clinical practice [53].
Moreover, the application of the MobileNet model to PD diagnosis aligns with the growing interest in leveraging advanced technologies, such as deep learning and artificial intelligence, to develop non-invasive and accessible diagnostic tools for neurodegenerative diseases [34]. By harnessing the computational capabilities of MobileNet, researchers can explore the intricate features of handwriting, including dynamic characteristics and spatial patterns, to identify distinctive markers associated with PD [54]. Additionally, the potential integration of MobileNet models with other modalities, such as speech signals, presents an opportunity to create comprehensive diagnostic frameworks that encompass multiple data sources, thereby enhancing the accuracy and reliability of PD diagnosis [55]. The future utilization of MobileNet models for PD diagnosis from handwriting offers a pathway towards innovative, technology-driven approaches that can revolutionize the early detection and management of neurodegenerative conditions, ultimately improving patient outcomes and quality of care.
According to Figure 8, the similar F1-scores achieved by MobileNet and EfficientNet-B1 in predicting the Spiral Normal and Spiral Parkinson classes can be attributed to the effectiveness of the CNN architectures used in these models. The study by Sarvamangala and Kulkarni (2022) highlighted basic CNN design variants achieving state-of-the-art results in image-based classification tasks [56]. In addition, Elfatimi et al. (2022) demonstrated the high classification performance of the MobileNet architecture in a similar image classification task, indicating the effectiveness of this architecture in achieving accurate results [57]. Furthermore, Filatov and Yar (2022) showed that the EfficientNet-B1 architecture also performs well on tasks involving visually similar classes, which supports its ability to achieve high accuracy in multiclass classification [58].
Based on Table 2 and Figure 8, variations in F1-scores across the four classes (Spiral Normal, Spiral Parkinson, Wave Normal, Wave Parkinson) for MobileNet can be attributed to inherent differences in the characteristics and complexity of the classes. The F1-score, the harmonic mean of precision and recall, provides a balanced measure of model performance across classes, here in the range 0.86-0.97. In the context of skin cancer classification, a weighted average F1-score of 0.83 was reported, which highlights the importance of considering the F1-score in multiclass classification tasks [59]. Differences in F1-score for each class can be influenced by the specific features and patterns associated with that class. In the case of PD, spiral images remain visually similar under rotational augmentation, whereas wave images change in appearance under the same augmentation. In this study, we used several augmentation techniques to compensate for the limited dataset, which affects CNN performance and yields different F1-scores [29]. Nonetheless, this research demonstrates that MobileNet is suitable for classification tasks with small amounts of data.
The choice of the most appropriate architecture for PD classification may depend on factors such as the nature of the image dataset, the specific features relevant to PD diagnosis, and the computational resources available for model deployment. Therefore, although MobileNet demonstrated superior performance on the provided image dataset, further research and experiments are needed to validate its effectiveness across various PD image datasets and clinical settings. Another limitation of this study is that the diversity and heterogeneity of PD manifestations and progression across individuals may pose a challenge to developing universally applicable deep learning models. Variability in symptom presentation, disease subtypes, and comorbidities could limit the generalizability of deep learning-based diagnostic systems and their ability to capture the full spectrum of PD manifestations.

5. Conclusions

This study evaluated four CNN models to identify a suitable model for classifying Parkinson’s disease. MobileNet showed superior results in classifying four classes of hand-drawn images: Spiral Normal, Spiral Parkinson, Wave Normal, and Wave Parkinson. Deep learning with MobileNet has the advantage of improving predictions for the Wave Parkinson and Wave Normal classes. In addition, MobileNet and EfficientNet-B1 provide reliable prediction accuracy for the Spiral Normal and Spiral Parkinson classes from hand-drawn images. The accuracy of Parkinson’s diagnosis may benefit from the use of pre-trained CNN models. Precise prediction of Parkinson’s disease can provide information on therapy progression and help evaluate the effect of clinical programs on Parkinson’s patients.

Author Contributions

Conceptualization, PA and MA; methodology, PA; writing—original draft preparation, PA and MA. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no specific grant from any funding agency.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Acknowledgments

The authors wish to express gratitude to Prayitno, Fahni Haris, and Andika Wisnujati for their assistance.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Balestrino, R. and A. Schapira. "Parkinson disease." European journal of neurology 27 (2020): 27-42. [CrossRef]
  2. Connolly, B. S. and A. E. Lang. "Pharmacological treatment of parkinson disease: A review." Jama 311 (2014): 1670-83. [CrossRef]
  3. Bhat, S. R. Acharya, Y. Hagiwara, N. Dadmehr and H. Adeli. "Parkinson's disease: Cause factors, measurable indicators, and early diagnosis." Computers in biology and medicine 102 (2018): 234-41. [CrossRef]
  4. Goldman, S. M. "Environmental toxins and parkinson's disease." Annual review of pharmacology and toxicology 54 (2014): 141-64. [CrossRef]
  5. Hossain, M. S. and M. Shorfuzzaman. "Metaparkinson: A cyber-physical deep meta-learning framework for n-shot diagnosis and monitoring of parkinson's patients." IEEE Systems Journal. [CrossRef]
  6. Lima, A. A. F. Mridha, S. C. Das, M. M. Kabir, M. R. Islam and Y. Watanobe. "A comprehensive survey on the detection, classification, and challenges of neurological disorders." Biology 11 (2022): 469. [CrossRef]
  7. Varalakshmi, P. T. Priya, B. A. Rithiga, R. Bhuvaneaswari and R. S. J. Sundar. "Diagnosis of parkinson's disease from hand drawing utilizing hybrid models." Parkinsonism & Related Disorders 105 (2022): 24-31. [CrossRef]
  8. Vadalà, M. Vallelunga, L. Palmieri, B. Palmieri, J. C. Morales-Medina and T. Iannitti. "Mechanisms and therapeutic applications of electromagnetic therapy in parkinson’s disease." Behavioral and Brain Functions 11 (2015): 1-12. [CrossRef]
  9. Meiappane, A. and V. S. SR. "Design and implementation of end to end application for parkinson disease categorization." Journal of Coastal Life Medicine 11 (2023): 1556-63.
  10. Kraus, P. H. and A. Hoffmann. "Spiralometry: Computerized assessment of tremor amplitude on the basis of spiral drawing." Movement Disorders 25 (2010): 2164-70. [CrossRef]
  11. Toffoli, S. Lunardini, M. Parati, M. Gallotta, B. De Maria, L. Longoni, M. E. Dell'Anna and S. Ferrante. "Spiral drawing analysis with a smart ink pen to identify parkinson's disease fine motor deficits." Frontiers in neurology 14 (2023): 1093690. [CrossRef]
  12. Angelillo, M. T. Impedovo, G. Pirlo and G. Vessio. "Performance-driven handwriting task selection for parkinson’s disease classification." Presented at AI* IA 2019–Advances in Artificial Intelligence: XVIIIth International Conference of the Italian Association for Artificial Intelligence, Rende, Italy, –22, 2019, Proceedings 18, 2019. Springer, 281-93. 19 November. [CrossRef]
  13. Yaseen, M. U. Identification of cause of impairment in spiral drawings, using non-stationary feature extraction approach. 2012. [Google Scholar]
  14. Masud, M. Singh, G. S. Gaba, A. Kaur, R. Alroobaea, M. Alrashoud and S. A. Alqahtani. "Crowd: Crow search and deep learning based feature extractor for classification of parkinson’s disease." ACM Transactions on Internet Technology (TOIT) 21 (2021): 1-18. [CrossRef]
  15. Wang, W. Lee, F. Harrou and Y. Sun. "Early detection of parkinson’s disease using deep learning and machine learning." IEEE Access 8 (2020): 147635-46. [CrossRef]
  16. Noor, M. B. T. Z. Zenia, M. S. Kaiser, S. A. Mamun and M. Mahmud. "Application of deep learning in detecting neurological disorders from magnetic resonance images: A survey on the detection of alzheimer’s disease, parkinson’s disease and schizophrenia." Brain informatics 7 (2020): 1-21. [CrossRef]
  17. Ardhianto, P. -Y. Liau, Y.-K. Jan, J.-Y. Tsai, F. Akhyar, C.-Y. Lin, R. B. R. Subiakto and C.-W. Lung. "Deep learning in left and right footprint image detection based on plantar pressure." Applied Sciences 12 (2022): 8885. [CrossRef]
  18. Fan, S. and Y. Sun. "Early detection of parkinson’s disease using machine learning and convolutional neural networks from drawing movements." Presented at CS & IT Conference Proceedings, 2022. 12. [CrossRef]
  19. Purushotham, S. Meng, Z. Che and Y. Liu. "Benchmarking deep learning models on large healthcare datasets." Journal of biomedical informatics 83 (2018): 112-34. [CrossRef]
  20. Kumar, A. Kim, D. Lyndon, M. Fulham and D. Feng. "An ensemble of fine-tuned convolutional neural networks for medical image classification." IEEE journal of biomedical and health informatics 21 (2016): 31-40. [CrossRef]
  21. Ardhianto, P. B. R. Subiakto, C.-Y. Lin, Y.-K. Jan, B.-Y. Liau, J.-Y. Tsai, V. B. H. Akbari and C.-W. Lung. "A deep learning method for foot progression angle detection in plantar pressure images." Sensors 22 (2022): 2786. [CrossRef]
  22. Muzammel, M. Salam and A. Othmani. "End-to-end multimodal clinical depression recognition using deep neural networks: A comparative analysis." Computer Methods and Programs in Biomedicine 211 (2021): 106433. [CrossRef]
  23. Yu, W. and P. Lv. "An end-to-end intelligent fault diagnosis application for rolling bearing based on mobilenet." IEEE Access 9 (2021): 41925-33. [CrossRef]
  24. Pusparani, Y. -Y. Lin, Y.-K. Jan, F.-Y. Lin, B.-Y. Liau, P. Ardhianto, I. Farady, J. S. R. Alex, J. Aparajeeta and W.-H. Chao. "Diagnosis of alzheimer’s disease using convolutional neural network with select slices by landmark on hippocampus in mri images." IEEE Access. [CrossRef]
  25. Oloko-Oba, M. and S. Viriri. "Ensemble of efficientnets for the diagnosis of tuberculosis." Computational Intelligence and Neuroscience, 2021. [CrossRef]
  26. Mujahid, M. Rustam, R. Álvarez, J. Luis Vidal Mazón, I. d. l. T. Díez and I. Ashraf. "Pneumonia classification from x-ray images with inception-v3 and convolutional neural network." Diagnostics 12 (2022): 1280. [CrossRef]
  27. Zham, P. K. Kumar, P. Dabnichki, S. Poosapadi Arjunan and S. Raghav. "Distinguishing different stages of parkinson’s disease using composite index of speed and pen-pressure of sketching a spiral." Frontiers in neurology (2017): 435. [CrossRef]
  28. Tsai, J.-Y. Y.-J. Hung, Y. L. Guo, Y.-K. Jan, C.-Y. Lin, T. T.-F. Shih, B.-B. Chen and C.-W. Lung. "Lumbar disc herniation automatic detection in magnetic resonance imaging based on deep learning." Frontiers in Bioengineering and Biotechnology 9 (2021): 708137. [CrossRef]
  29. Ardhianto, P. -Y. Tsai, C.-Y. Lin, B.-Y. Liau, Y.-K. Jan, V. B. H. Akbari and C.-W. Lung. "A review of the challenges in deep learning for skeletal and smooth muscle ultrasound images." Applied Sciences 11 (2021): 4021. [CrossRef]
  30. Kornblith, S. Shlens and Q. V. Le. "Do better imagenet models transfer better?" Presented at Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019. 2661-71.
  31. Kim, H. E. Cosa-Linan, N. Santhanam, M. Jannesari, M. E. Maros and T. Ganslandt. "Transfer learning for medical image classification: A literature review." BMC medical imaging 22 (2022): 69. [CrossRef]
  32. Wang, J. Zhu, S.-H. Wang and Y.-D. Zhang. "A review of deep learning on medical image analysis." Mobile Networks and Applications 26 (2021): 351-80. [CrossRef]
  33. Mascarenhas, S. and M. Agarwal. "A comparison between vgg16, vgg19 and resnet50 architecture frameworks for image classification." Presented at 2021 International conference on disruptive technologies for multi-disciplinary research and applications (CENTCON), 2021. IEEE, 1, 96-99. [CrossRef]
  34. Sae-Lim, W. Wettayaprasit and P. Aiyarak. "Convolutional neural networks using mobilenet for skin lesion classification." Presented at 2019 16th international joint conference on computer science and software engineering (JCSSE), 2019. IEEE, 242-47. [CrossRef]
  35. Yadav, P. Menon, V. Ravi, S. Vishvanathan and T. D. Pham. "Efficientnet convolutional neural networks-based android malware detection." Computers & Security 115 (2022): 102622. [CrossRef]
  36. Minarno, A. E. Aripa, Y. Azhar and Y. Munarko. "Classification of malaria cell image using inception-v3 architecture." JOIV: International Journal on Informatics Visualization 7 (2023): 273-78. [CrossRef]
  37. Kawasaki, Y. Uga, S. Kagiwada and H. Iyatomi. "Basic study of automated diagnosis of viral plant diseases using convolutional neural networks." Presented at Advances in Visual Computing: 11th International Symposium, ISVC 2015, Las Vegas, NV, USA, -16, 2015, Proceedings, Part II 11, 2015. Springer, 638-45. 14 December. [CrossRef]
  38. Ajayi, O. G. and J. Ashi. "Effect of varying training epochs of a faster region-based convolutional neural network on the accuracy of an automatic weed classification scheme." Smart Agricultural Technology 3 (2023): 100128. [CrossRef]
  39. Ferentinos, K. P. "Deep learning models for plant disease detection and diagnosis." Computers and electronics in agriculture 145 (2018): 311-18. [CrossRef]
  40. Kamilaris, A. and F. X. Prenafeta-Boldú. "Deep learning in agriculture: A survey." Computers and electronics in agriculture 147 (2018): 70-90. [CrossRef]
  41. Falconí, L. G. Pérez and W. G. Aguilar. "Transfer learning in breast mammogram abnormalities classification with mobilenet and nasnet." Presented at 2019 international conference on systems, signals and image processing (IWSSIP), 2019. IEEE, 109-14. [CrossRef]
  42. Frid-Adar, M. Diamant, E. Klang, M. Amitai, J. Goldberger and H. Greenspan. "Gan-based synthetic medical image augmentation for increased cnn performance in liver lesion classification." Neurocomputing 321 (2018): 321-31. [CrossRef]
  43. Howard, A. G. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto and H. Adam. "Mobilenets: Efficient convolutional neural networks for mobile vision applications." arXiv preprint arXiv:1704.04861 (2017). [CrossRef]
  44. Thu, M. Suvonvorn and N. Kittiphattanabawon. "Pedestrian classification on transfer learning based deep convolutional neural network for partial occlusion handling." International Journal of Electrical and Computer Engineering (IJECE) 13 (2023): 2812-26. [CrossRef]
  45. Kaur, S. Aggarwal and R. Rani. "Diagnosis of parkinson’s disease using deep cnn with transfer learning and data augmentation." Multimedia Tools and Applications 80 (2021): 10113-39. [CrossRef]
  46. Nan, Y. Ju, Q. Hua, H. Zhang and B. Wang. "A-mobilenet: An approach of facial expression recognition." Alexandria Engineering Journal 61 (2022): 4435-44. [CrossRef]
  47. Sajid, M. Z. Qureshi, Q. Abbas, M. Albathan, K. Shaheed, A. Youssef, S. Ferdous and A. Hussain. "Mobile-hr: An ophthalmologic-based classification system for diagnosis of hypertensive retinopathy using optimized mobilenet architecture." Diagnostics 13 (2023): 1439. [CrossRef]
  48. Baghdadi, N. A. Malki, S. F. Abdelaliem, H. M. Balaha, M. Badawy and M. Elhosseini. "An automated diagnosis and classification of covid-19 from chest ct images using a transfer learning-based convolutional neural network." Computers in biology and medicine 144 (2022): 105383. [CrossRef]
  49. Gupta, J. Pathak and G. Kumar. "Deep learning (cnn) and transfer learning: A review." Presented at Journal of Physics: Conference Series, 2022. IOP Publishing, 2273, 012029. [CrossRef]
  50. Khasoggi, B. Ermatita and S. Sahmin. "Efficient mobilenet architecture as image recognition on mobile and embedded devices." Indonesian Journal of Electrical Engineering and Computer Science 16 (2019): 389-94. [CrossRef]
  51. Lin, Y. Zhang and X. Yang. "A low memory requirement mobilenets accelerator based on fpga for auxiliary medical tasks." Bioengineering 10 (2022): 28. [CrossRef]
  52. Hartanto, C. A. and A. Wibowo. "Development of mobile skin cancer detection using faster r-cnn and mobilenet v2 model." Presented at 2020 7th International Conference on Information Technology, Computer, and Electrical Engineering (ICITACEE), 2020. IEEE, 58-63. [CrossRef]
  53. Ogundokun, R. O. Misra, A. O. Akinrotimi and H. Ogul. "Mobilenet-svm: A lightweight deep transfer learning model to diagnose bch scans for iomt-based imaging sensors." Sensors 23 (2023): 656. [CrossRef]
  54. Kassani, S. H. H. Kassani, M. J. Wesolowski, K. A. Schneider and R. Deters. "Deep transfer learning based model for colorectal cancer histopathology segmentation: A comparative study of deep pre-trained models." International Journal of Medical Informatics 159 (2022): 104669. [CrossRef]
  55. AKGÜN, D. T. KABAKUŞ, Z. K. ŞENTÜRK, A. ŞENTÜRK and E. KÜÇÜKKÜLAHLI. "A transfer learning-based deep learning approach for automated covid-19diagnosis with audio data." Turkish Journal of Electrical Engineering and Computer Sciences 29 (2021): 2807-23. [CrossRef]
  56. Sarvamangala, D. and R. V. Kulkarni. "Convolutional neural networks in medical image understanding: A survey." Evolutionary intelligence 15 (2022): 1-22. [CrossRef]
  57. Elfatimi, E. Eryigit and L. Elfatimi. "Beans leaf diseases classification using mobilenet models." IEEE Access 10 (2022): 9471-82. [CrossRef]
  58. Filatov, D. and G. N. A. H. Yar. "Brain tumor diagnosis and classification via pre-trained convolutional neural networks." arXiv preprint arXiv:2208.00768 (2022).
  59. Chaturvedi, S. S. V. Tembhurne and T. Diwan. "A multi-class skin cancer classification using deep convolutional neural networks." Multimedia Tools and Applications 79 (2020): 28477-98. [CrossRef]
Figure 1. Network Architecture of ResNet50.
Figure 2. Network Architecture of MobileNet.
Figure 3. Network Architecture of EfficientNet-B1.
Figure 4. Network Architecture of InceptionV3.
Figure 5. Average accuracy performance of various CNN models for PD diagnosis.
Figure 6. Precision performance of the four classes across the CNN models.
Figure 7. Recall performance of the four classes across the CNN models.
Figure 8. F1-Score performance of the four classes across the CNN models.
Table 1. Accuracy performance of ResNet50, MobileNet, EfficientNet-B1 and InceptionV3.
Model | Class | Accuracy
ResNet50 | Spiral Normal, Spiral Parkinson, Wave Normal, Wave Parkinson | 0.80
MobileNet | Spiral Normal, Spiral Parkinson, Wave Normal, Wave Parkinson | 0.92
EfficientNet-B1 | Spiral Normal, Spiral Parkinson, Wave Normal, Wave Parkinson | 0.83
InceptionV3 | Spiral Normal, Spiral Parkinson, Wave Normal, Wave Parkinson | 0.82
Table 2. Precision, Recall and F1-Score Performances.
Model | Class | Precision | Recall | F1-Score
ResNet50 | Spiral Normal | 0.74 | 0.93 | 0.82
ResNet50 | Spiral Parkinson | 0.91 | 0.67 | 0.77
ResNet50 | Wave Normal | 0.80 | 0.80 | 0.80
ResNet50 | Wave Parkinson | 0.80 | 0.80 | 0.80
MobileNet | Spiral Normal | 0.82 | 0.93 | 0.87
MobileNet | Spiral Parkinson | 0.92 | 0.80 | 0.86
MobileNet | Wave Normal | 0.94 | 1.00 | 0.97
MobileNet | Wave Parkinson | 1.00 | 0.93 | 0.97
EfficientNet-B1 | Spiral Normal | 0.92 | 0.80 | 0.86
EfficientNet-B1 | Spiral Parkinson | 0.82 | 0.93 | 0.87
EfficientNet-B1 | Wave Normal | 0.85 | 0.73 | 0.79
EfficientNet-B1 | Wave Parkinson | 0.76 | 0.87 | 0.81
InceptionV3 | Spiral Normal | 0.78 | 0.93 | 0.85
InceptionV3 | Spiral Parkinson | 0.92 | 0.73 | 0.81
InceptionV3 | Wave Normal | 0.85 | 0.73 | 0.79
InceptionV3 | Wave Parkinson | 0.76 | 0.87 | 0.81
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permit the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.