Submitted:
26 July 2023
Posted:
28 July 2023
Abstract
Keywords:
1. Introduction
- Asymmetry: (A) the two halves of the lesion match, or (B) one half does not match the other.
- Border: (C) regular edges, or (D) irregular or blurred edges.
- Color: (E) consistent shades, or (F) different shades.
- Diameter: (G) the lesion is smaller than 6 mm, or (H) the lesion is larger than 6 mm.
characteristics of lesions.
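The ABCD criteria above amount to counting warning signs. A minimal sketch of that rule-of-thumb check (the class and function names are illustrative, not part of the paper's method):

```python
from dataclasses import dataclass

@dataclass
class LesionFeatures:
    asymmetric: bool        # (B) one half does not match the other
    irregular_border: bool  # (D) irregular or blurred edges
    varied_color: bool      # (F) different shades
    diameter_mm: float      # lesion diameter in millimetres

def abcd_warning_signs(f: LesionFeatures) -> int:
    """Count how many of the four ABCD warning signs a lesion exhibits (0-4)."""
    return sum([
        f.asymmetric,
        f.irregular_border,
        f.varied_color,
        f.diameter_mm > 6.0,  # (H) larger than 6 mm
    ])

# Example: an asymmetric 7 mm lesion with regular border and uniform colour
lesion = LesionFeatures(True, False, False, 7.0)
print(abcd_warning_signs(lesion))  # 2
```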
- The proposed method applies to any image (dermoscopic or photographic) of pigmented skin lesions, using You Only Look Once (YOLOv5) and ResNet50.
- The suggested system classifies samples and reports a probability for each class.
- It operates directly on skin-color images acquired at different sizes.
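The per-class probabilities mentioned above are typically obtained by applying a softmax over the classifier's raw outputs. A minimal sketch (the logit values are hypothetical; the class labels follow the HAM10000 categories used later in the paper):

```python
import math

CLASSES = ["akiec", "bcc", "bkl", "df", "mel", "nv", "vasc"]

def softmax(logits):
    """Convert raw classifier scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw outputs from the classification head
logits = [0.2, 0.1, 0.4, 0.0, 2.5, 1.1, 0.3]
probs = softmax(logits)
prediction = CLASSES[probs.index(max(probs))]
print(prediction)  # mel
```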
2. Related Work
3. Proposed Melanoma Detection Technique
3.1. Preprocessing
3.2. The Structure of the YOLOv5-S Model
4. Experimental Results
4.1. Dataset
4.2. Experimental Platform
4.3. Performance Metrics
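The metrics reported in the tables below (precision, recall, and the Dice similarity coefficient, DSC) can be computed from per-class counts of true positives, false positives, and false negatives. A minimal sketch, not the authors' code; the counts are illustrative:

```python
def precision(tp: int, fp: int) -> float:
    """Fraction of positive predictions that are correct."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Fraction of actual positives that are detected."""
    return tp / (tp + fn)

def dsc(tp: int, fp: int, fn: int) -> float:
    """Dice similarity coefficient, equivalent to the F1 score."""
    return 2 * tp / (2 * tp + fp + fn)

# Hypothetical counts for one class
tp, fp, fn = 95, 2, 5
print(round(precision(tp, fp) * 100, 1))  # 97.9
print(round(recall(tp, fn) * 100, 1))     # 95.0
print(round(dsc(tp, fp, fn) * 100, 1))    # 96.4
```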
4.4. Results
5. Discussion
6. Conclusion
Supplementary Materials
Author Contributions
Data Availability Statement
Conflicts of Interest
References
- Park, S. Biochemical, Structural and Physical Changes in Aging Human Skin, and Their Relationship. 2022, 23, 275–288. [CrossRef]
- Liu, L.; Tsui, Y.Y.; Mandal, M. Skin Lesion Segmentation Using Deep Learning with Auxiliary Task. 2021, 7. [CrossRef]
- Islami, F.; Guerra, C.E.; Minihan, A.; Yabroff, K.R.; Fedewa, S.A.; Sloan, K.; Wiedt, T.L.; Thomson, B.; Siegel, R.L.; Nargis, N. American Cancer Society's Report on the Status of Cancer Disparities in the United States, 2021. 2022, 72, 112–143.
- Saleem, S.M.; Abdullah, A.; Ameen, S.Y.; Sadeeq, M.A.M.; Zeebaree, S.R.M. Multimodal Emotion Recognition Using Deep Learning. 2021, 2. [CrossRef]
- Hosny, K.M.; Kassem, M.A.; Foaud, M.M. Skin Cancer Classification Using Deep Learning and Transfer Learning; IEEE: Cairo, Egypt, 2018. [Google Scholar] [CrossRef]
- Premaladha, J.; Ravichandran, K. Novel Approaches for Diagnosing Melanoma Skin Lesions Through Supervised and Deep Learning Algorithms. 2016, 40, 1–12.
- Lee, H.; Chen, Y.-P.P. Image Based Computer Aided Diagnosis System for Cancer Detection. 2015, 42. [CrossRef]
- Goyal, M.; Oakley, A.; Bansal, P.; Dancey, D.; Yap, M.H. Skin Lesion Segmentation in Dermoscopic Images with Ensemble Deep Learning Methods. 2019, 8, 4171–4181.
- Dargan, S.; Kumar, M.; Ayyagari, M.R.; Kumar, G. A Survey of Deep Learning and Its Applications: A New Paradigm to Machine Learning. 2020, 27. [Google Scholar] [CrossRef]
- Codella, N.; Cai, J.; Abedini, M.; Garnavi, R.; Halpern, A.; Smith, J.R. Deep Learning, Sparse Coding, and SVM for Melanoma Recognition in Dermoscopy Images. In Proceedings of the International Workshop on Machine Learning in Medical Imaging; Springer: Munich, Germany, 2015. [CrossRef]
- Gessert, N.; Sentker, T.; Madesta, F.; Schmitz, R.; Kniep, H.; Baltruschat, I.; Werner, R.; Schlaefer, A. Skin Lesion Diagnosis Using Ensembles, Unscaled Multi-crop Evaluation and Loss Weighting. 2018.
- Waheed, Z.; Waheed, A.; Zafar, M.; Riaz, F. An Efficient Machine Learning Approach for the Detection of Melanoma Using Dermoscopic Images; IEEE: Islamabad, Pakistan, 2017. [Google Scholar]
- Roy, S.; Meena, T.; Lim, S.-J. Demystifying Supervised Learning in Healthcare 4.0: A New Reality of Transforming Diagnostic Medicine. 2022, 12, 2549. [Google Scholar]
- Srivastava, V.; Kumar, D.; Roy, S. A Median Based Quadrilateral Local Quantized Ternary Pattern Technique for the Classification of Dermatoscopic Images of Skin Cancer. 2022, 102, 108259.
- Tschandl, P.; Rosendahl, C.; Kittler, H. The HAM10000 Dataset, a Large Collection of Multi-source Dermatoscopic Images of Common Pigmented Skin Lesions. 2018, 5, 1–9. [CrossRef]
- The HAM10000 Dataset, a Large Collection of Multi-source Dermatoscopic Images of Common Pigmented Skin Lesions. Available online: https://github.com/ptschandl/HAM10000_dataset.
- Romero Lopez, A.; Giro Nieto, X.; Burdick, J.; Marques, O. Skin Lesion Classification from Dermoscopic Images Using Deep Learning Techniques; ACTA Press: Innsbruck, Austria, 2017. [Google Scholar]
- Dreiseitl, S.; Ohno-Machado, L.; Kittler, H.; Vinterbo, S.A.; Billhardt, H.; Binder, M. A Comparison of Machine Learning Methods for the Diagnosis of Pigmented Skin Lesions. 2001, 34, 28–36. [CrossRef]
- Hekler, A.; Utikal, J.; Enk, A.; Solass, W.; Schmitt, M.; Klode, J.; Schadendorf, D.; Sondermann, W.; Franklin, C. Deep Learning Outperformed 11 Pathologists in the Classification of Histopathological Melanoma Images. 2019, 118. [CrossRef]
- Pham, T.-C.; Luong, C.-M.; Visani, M.; Hoang, V.-D. Deep CNN and Data Augmentation for Skin Lesion Classification. In Proceedings of the Intelligent Information and Database Systems; Springer: Dong Hoi City, Vietnam, 2018. [Google Scholar] [CrossRef]
- Yu, L.; Chen, H.; Dou, Q.; Qin, J.; Heng, P.-A. Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks. 2017, 36. [CrossRef]
- Li, Y.; Shen, L. Skin Lesion Analysis Towards Melanoma Detection Using Deep Learning Network. 2018, 18. [CrossRef]
- Seeja, R.D.; Suresh, A. Deep Learning Based Skin Lesion Segmentation and Classification of Melanoma Using Support Vector Machine (SVM). 2019, 20. [CrossRef]
- Nasiri, S.; Helsper, J.; Jung, M.; Fathi, M. Depict Melanoma Deep-class: A Deep Convolutional Neural Networks Approach to Classify Skin Lesion Images. 2020, 21. [CrossRef]
- Inthiyaz, S.; Altahan, B.R.; Ahammad, S.H.; Rajesh, V.; Kalangi, R.R.; Smirani, L.K.; Hossain, M.A.; Rashed, A.N.Z. Skin Disease Detection Using Deep Learning. Advances in Engineering Software; Elsevier, 2023; Vol. 175. [Google Scholar]
- Mohammed, M.A.; Lakhan, A.; Abdulkareem, K.H.; Garcia-Zapirain, B. A Hybrid Cancer Prediction Based on Multi-omics Data and Reinforcement Learning State Action Reward State Action (SARSA). 2023, 154, 106617. [Google Scholar]
- Wu, X.; Sahoo, D.; Hoi, S.C.H. Recent Advances in Deep Learning for Object Detection. 2020, 396. [CrossRef]
- Tan, M.; Pang, R.; Le, Q.V. EfficientDet: Scalable and Efficient Object Detection. In Proceedings of the Computer Vision and Pattern Recognition; IEEE: Seattle, Washington, USA, 2020.
- Thuan, D. Evolution of Yolo Algorithm and Yolov5: The State-of-the-art Object Detection Algorithm, 2021.
- Jung, H.-K.; Choi, G.-S. Improved Yolov5: Efficient Object Detection Using Drone Images Under Various Conditions. 2022, 12, 7255.
- Liu, K.; Tang, H.; He, S.; Yu, Q.; Xiong, Y.; Wang, N. Performance Validation of YOLO Variants for Object Detection; Proceedings of the 2021 International Conference on bioinformatics and intelligent computing: Harbin, China, 2021. [Google Scholar] [CrossRef]
- Wang, C.-Y.; Liao, H.-Y.M.; Wu, Y.-H.; Chen, P.-Y.; Hsieh, J.-W.; Yeh, I.-H. CSPNet: A New Backbone That Can Enhance Learning Capability of CNN; IEEE Computer Society: Seattle, WA, USA, 2020. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. 2015, 37. [CrossRef]
- Hafiz, A.M.; Bhat, G.M. A Survey on Instance Segmentation: State of the Art. 2020, 9. [CrossRef]
- Redmon, J.; Farhadi, A. Yolov3: An Incremental Improvement. 2018.
- Gao, H.; Huang, H. Stochastic Second-order Method for Large-scale Nonconvex Sparse Learning Models; International Joint Conferences on Artificial Intelligence Organization: Cincinnati, Ohio, USA, 2021. [Google Scholar]
- Alsaade, F.W.; Aldhyani, T.H.; Al-Adhaileh, M.H. Developing a Recognition System for Diagnosing Melanoma Skin Lesions Using Artificial Intelligence Algorithms. 2021, 1–20.
- Ali, S.; Miah, S.; Miah, S.; Haque, J.; Rahman, M.; Islam, K. An Enhanced Technique of Skin Cancer Classification Using Deep Convolutional Neural Network with Transfer Learning Models. 2021, 5. [CrossRef]
- Khaledyan, D.; Tajally, A.; Sarkhosh, A.; Shamsi, A.; Asgharnezhad, H.; Khosravi, A.; Nahavandi, S. Confidence Aware Neural Networks for Skin Cancer Detection. 2021. [Google Scholar] [CrossRef]
- Chang, C.-C.; Li, Y.-Z.; Wu, H.-C.; Tseng, M.-H. Melanoma Detection Using XGB Classifier Combined with Feature Extraction and K-means SMOTE Techniques. 2022, 12, 1747.
- Kawahara, J.; Hamarneh, G. Fully Convolutional Neural Networks to Detect Clinical Dermoscopic Features. 2019, 23. [CrossRef]
- Khan, M.A.; Akram, T.; Zhang, Y.; Sharif, M. Attributes Based Skin Lesion Detection and Recognition: A Mask RCNN and Transfer Learning-based Deep Learning Framework. 2021, 143. [CrossRef]
- Chaturvedi, S.S.; Gupta, K.; Prasad, P.S. Skin Lesion Analyzer: An Efficient Seven-way Multi-class Skin Cancer Classification Using Mobilenet; Springer: Singapore, 2021. [Google Scholar] [CrossRef]








| Reference | Proposed Technique | Accuracy | Limitation |
|---|---|---|---|
| Premaladha et al. [6] | Segmentation using Otsu's normalized algorithm followed by classification | SVM (90.44%), DCNN (92.89%), and Hybrid AdaBoost (91.73%) | Uses only three classes of skin cancer lesions |
| Codella et al. [10] | Melanoma recognition using DL, sparse coding, and SVM | 93.1% | Needs deeper features and more melanoma cases |
| Waheed et al. [12] | Diagnosing melanoma using the color and texture of different types of lesions | SVM (96.0%) | Needs more attributes of skin lesions |
| Hekler et al. [19] | Classifying histopathological melanoma using a DCNN | 68.0% | Uses low-resolution images and cannot differentiate between the melanoma and nevi classes |
| Pham et al. [20] | Classification using a DCNN | AUC (89.2%) | Low sensitivity |
| Yu et al. [21] | Segmentation and classification using DCNN and FCRN | AUC (80.4%) | Insufficient training data |
| Li and Shen [22] | Two FCRNs for melanoma segmentation and classification | AUC (91.2%) | Overfitting in AUC and low segmentation quality |
| Seeja and Suresh [23] | Segmenting data using form, color, and texture variables, then classification using SVM, RF, KNN, and NB | SVM (85.1%), RF (82.2%), KNN (79.2%), and NB (65.9%) | Low classification accuracy |
| Nasiri et al. [24] | Melanoma classification using a 19-layer CNN | 75.0% | Accuracy needs improvement |
| Split | Vasc | Nv | Mel | Df | Bkl | Bcc | Akiec |
|---|---|---|---|---|---|---|---|
| All images | 142 | 6705 | 1113 | 115 | 1099 | 514 | 327 |
| Train | 115 | 5360 | 891 | 92 | 879 | 300 | 262 |
| Test | 27 | 1345 | 222 | 23 | 220 | 214 | 65 |

| Parameter | First run | Second run | Definition |
|---|---|---|---|
| epochs | 300 | 100 | Number of complete passes the learning algorithm makes over the training set |
| batch_size | 16 | 32 | How many training instances are used in a single iteration |
| lr0 | 0.001 | 0.001 | Initial learning rate (SGD=1E-2, Adam=1E-3) |
| lrf | 0.2 | 0.2 | Final OneCycleLR learning rate (lr0 * lrf) |
| momentum | 0.937 | 0.937 | SGD momentum/Adam beta1 |
| warmup_epochs | 3.0 | 3.0 | Warmup epochs (fractions ok) |
| weight_decay | 0.0005 | 0.0005 | Optimizer weight decay 5e-4 |
| warmup_momentum | 0.8 | 0.8 | Warmup initial momentum |
| warmup_bias_lr | 0.1 | 0.1 | Warmup initial bias learning rate |
| box | 0.05 | 0.05 | Box loss gain |
| cls | 0.5 | 0.5 | Class loss gain |
| cls_pw | 1.0 | 1.0 | Cls BCELoss positive_weight |
| obj | 1.0 | 1.0 | Obj loss gain (scale with pixels) |
| obj_pw | 1.0 | 1.0 | Obj BCELoss positive_weight |
| anchor_t | 4.0 | 4.0 | Anchor-multiple threshold |
| iou_t | 0.20 | 0.20 | IoU training threshold |
| scale | 0.5 | 0.5 | Image scale (+/- gain) |
| shear | 0.0 | 0.0 | Image shear (+/- deg) |
| perspective | 0.0 | 0.0 | Image perspective (+/- fraction), range 0-0.001 |
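As the table notes, the final OneCycleLR learning rate is the product lr0 * lrf. A quick sketch using the first-run values, assuming a linear decay from lr0 to lr0 * lrf over the training epochs (one common YOLOv5 schedule; shown here for illustration only):

```python
def linear_lr(epoch: int, epochs: int, lr0: float, lrf: float) -> float:
    """Learning rate at a given epoch under a linear decay from lr0 to lr0*lrf."""
    frac = epoch / epochs
    return lr0 * ((1 - frac) * (1 - lrf) + lrf)

lr0, lrf, epochs = 0.001, 0.2, 300   # first-run values from the table
start = linear_lr(0, epochs, lr0, lrf)        # begins at lr0
end = linear_lr(epochs, epochs, lr0, lrf)     # ends at lr0 * lrf = 0.0002
```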
| Class | Precision (%) | Recall (%) | DSC (%) | mAP@0.5 (%) | mAP@0.5:0.95 (%) | Accuracy (%) |
|---|---|---|---|---|---|---|
| AKIEC | 99.1 | 94.9 | 96.9 | 99.7 | 95.2 | 95.2 |
| BKL | 95.3 | 96.8 | 96.0 | 95.3 | 94.5 | 96.1 |
| VASC | 97.0 | 95.6 | 96.2 | 98.7 | 95.5 | 97.2 |
| BCC | 97.1 | 97.6 | 97.3 | 97.5 | 96.4 | 97.3 |
| DF | 98.7 | 99.5 | 99.0 | 94.3 | 94.8 | 98.8 |
| NV | 100.0 | 98.6 | 99.2 | 96.4 | 99.5 | 98.1 |
| MEL | 98.8 | 100.0 | 99.3 | 98.2 | 98.6 | 100.0 |
| Average | 98.1 | 97.5 | 97.7 | 97.1 | 96.3 | 97.5 |
| Class | Precision (%) | Recall (%) | DSC (%) | mAP@0.5 (%) | mAP@0.5:0.95 (%) | Accuracy (%) |
|---|---|---|---|---|---|---|
| AKIEC | 100.0 | 96.7 | 98.3 | 98.9 | 99.7 | 98.8 |
| BKL | 98.2 | 98.2 | 98.2 | 97.6 | 94.9 | 98.9 |
| VASC | 98.8 | 99.6 | 99.1 | 97.9 | 97.9 | 99.4 |
| BCC | 97.1 | 96.9 | 96.9 | 99.5 | 99.1 | 99.7 |
| DF | 99.6 | 98.9 | 99.2 | 98.6 | 96.2 | 100.0 |
| NV | 100.0 | 100.0 | 100.0 | 96.2 | 100.0 | 99.8 |
| MEL | 99.9 | 100.0 | 99.9 | 99.8 | 98.9 | 100.0 |
| Average | 99.0 | 98.6 | 98.8 | 98.3 | 98.7 | 99.5 |
| Reference | Year | Method | Precision (%) | Recall (%) | DSC (%) | Accuracy (%) | Dataset |
|---|---|---|---|---|---|---|---|
| Nasiri et al. [24] | 2020 | KNN | 73.0 | 55.0 | 79.0 | 67.0 | ISIC |
| | | SVM | 58.0 | 47.0 | 66.0 | 62.0 | |
| | | CNN | 77.0 | 73.0 | 78.0 | 75.0 | |
| Alsaade et al. [37] | 2021 | CNN | 81.2 | 92.9 | 87.5 | 97.5 | PH2 |
| Ali et al. [38] | 2021 | CNN | 96.5 | 93.6 | 95.0 | 91.9 | HAM10000 |
| Khaledyan et al. [39] | 2021 | Ensemble Bayesian Networks | 88.6 | 73.4 | 90.7 | 83.6 | HAM10000 |
| Chang et al. [40] | 2022 | XGB classifier | 97.4 | 87.8 | 90.5 | 94.1 | ISIC |
| Kawahara et al. [41] | 2019 | FCNN | 97.6 | 81.3 | 93.0 | 98.0 | ISIC |
| Khan et al. [42] | 2021 | Mask RCNN | 88.5 | 88.5 | 88.6 | 93.6 | ISIC |
| Chaturvedi et al. [43] | 2020 | MobileNet | 83.0 | 83.0 | 89.0 | 83.1 | HAM10000 |
| Proposed model | 2022 | YOLOv5+ResNet50 | 99.0 | 98.6 | 98.8 | 99.5 | HAM10000 |
| Database | Description |
|---|---|
| PH2 | |
| ISIC | |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).