Preprint

This version is not peer-reviewed.

Skin Cancer Detection and Classification Through Medical Image Analysis Using EfficientNet

A peer-reviewed article of this preprint also exists.

Submitted: 08 July 2025
Posted: 25 July 2025


Abstract
Skin cancer is one of the most frequently occurring and life-threatening forms of cancer globally, highlighting the importance of timely and precise diagnosis to enhance treatment success. Automated skin lesion classification has greatly benefited from deep learning methods, especially convolutional neural networks (CNNs). This study utilizes the EfficientNet-B0 architecture, a lightweight yet robust CNN, to develop a reliable multi-class skin cancer classifier using the HAM10000 dermoscopic image dataset. To ensure compatibility with the pre-trained EfficientNet-B0 model, images were uniformly scaled to 224×224 pixels and normalized with ImageNet statistics to achieve consistent dimensions and brightness levels. To address class imbalance, minority classes such as actinic keratoses, basal cell carcinoma, dermatofibroma, and vascular lesions were augmented to 1,000 images each, whereas the majority class, melanocytic nevi (nv), was reduced to 1,300 images. This resulted in a balanced dataset comprising 7,512 images distributed across seven classes. Initially, transfer learning was applied by freezing the base layers and fine-tuning the final layer, achieving 77.39% accuracy. Further full-network fine-tuning improved accuracy to 89.36%. Test-time augmentation (TTA) with flips increased performance to 90.16%, and integrating TTA with Monte Carlo Dropout and additional augmentations boosted the final accuracy to 92.29%. These results highlight the potential of EfficientNet-B0 for skin lesion classification. The improved classification model can aid healthcare professionals in early diagnosis, ultimately enhancing patient care and reducing the burden on healthcare systems.

1. Introduction

Skin cancer continues to be one of the most widespread and life-threatening types of cancer globally, making early and precise diagnosis essential for effective treatment. Advancements in medical image analysis have increasingly turned to deep learning techniques, offering improved accuracy and efficiency in the detection and classification of various skin cancer types. Ashfaq et al. [1] proposed ’DermaVision’, a deep learning-based platform for precise skin cancer diagnosis and classification, highlighting the potential of DL models in real-time healthcare applications. Similarly, Kavitha et al. [2] implemented deep learning techniques to detect and classify skin cancer using various convolutional neural networks (CNNs), showing high precision on benchmark datasets. Naeem et al. [3] provided a comprehensive overview of malignant melanoma classification using DL, analyzing datasets, performance metrics, and challenges in real-world deployment.
Transfer learning has proven to be an effective strategy for improving diagnostic performance. For example, Balaha and Hassan [4] enhanced classification accuracy by combining deep transfer learning with the sparrow search optimization technique. Alotaibi and AlSaeed [5] further enhanced model performance using deep attention mechanisms combined with transfer learning. In a comparative analysis, Djaroudib et al. [6] emphasized that data quality plays a more critical role than data quantity in training effective transfer learning models. Numerous review articles have been published to assess existing approaches. Nazari and Garcia [7] presented an in-depth analysis of automated skin cancer detection methods based on clinical imagery, while Naqvi et al. [8] focused on deep learning methods and their limitations. Naseri and Safaei [9] presented a systematic literature review on melanoma diagnosis and prognosis using both ML and DL techniques, underscoring the importance of robust datasets and ensemble learning. Magalhaes et al. [10] synthesized DL techniques for skin cancer detection and outlined key research directions.
Model integration and ensemble techniques have gained attention for improving diagnostic reliability. Imran et al. [11] proposed a method combining decisions from multiple deep learners, achieving high classification accuracy. Moturi et al. [12] leveraged CNN techniques for efficient melanoma detection, while Kreouzi et al. [13] developed a deep learning approach to distinguish malignant melanoma from benign nevi using dermoscopic images. To address the multi-class nature of skin lesions, Tahir et al. [14] introduced DSCC_Net, a DL model capable of multi-class classification using dermoscopy images. Similarly, Naeem et al. [15] developed SNC_Net, which integrates handcrafted and DL-based features for enhanced skin cancer detection. Zia Ur Rehman et al. [16] employed explainable DL to classify skin cancer lesions, making AI predictions interpretable for clinicians. Karki et al. [17] combined segmentation, augmentation, and transfer learning techniques to improve early skin cancer detection. Gouda et al. [18] employed CNNs for classifying lesion images and reported promising results on standard datasets. Traditional ML methods also contribute; Natha and Rajeswari [19] used classification models such as SVM and Random Forests to detect cancer from extracted image features. Although most studies focus on skin cancer, Das et al. [20] explored brain cancer prediction using CNNs and chatbot integration for smart healthcare, indicating the broader applicability of these techniques. Ashafuddula and Islam [21] proposed intensity value-based estimation combined with CNNs for differentiating melanoma and nevus moles. Lastly, Rashad et al. [22] demonstrated an automated skin cancer screening system using deep learning techniques, highlighting its potential for scalable screening solutions.
This paper’s motivation stems from the growing concern around the world about the rising incidence of skin cancer, which is influenced by environmental factors like pollution and UV radiation. Ensuring effective treatment and improving patient survival outcomes depend heavily on timely and accurate diagnosis. Using dermoscopic images, this study suggests a pre-clinical, AI-based diagnostic method for detecting skin cancer and its subtypes. The model seeks to promote early intervention, lessen the strain on healthcare systems, and raise public health awareness by enabling prompt medical consultation or providing reassurance in benign cases.
The objectives of the current research are as follows:
  • Early Detection Improvement: To develop an automated system that facilitates the early identification of skin cancer, enabling prompt intervention and thereby increasing patient survival rates.
  • Accessibility Enhancement: To create a scalable diagnostic tool deployable in resource-limited settings with minimal dermatologist access.
  • Clinical Decision Support: To assist medical professionals by improving classification accuracy for diagnostically challenging lesions (especially melanoma) while reducing inter-observer variability.
  • System Efficiency: To optimize model performance for integration into mobile health apps and clinical workflows without compromising computational efficiency.
The contribution and novelty of the research are as follows:
  • Hybrid Training Strategy: Introduces a progressive fine-tuning method that combines transfer learning, full-network optimization, and uncertainty-aware inference (TTA + Monte Carlo Dropout). It achieves 92.29% accuracy on the HAM10000 dermoscopic image dataset, roughly 15 percentage points better than baseline frozen-layer transfer learning.
  • Web-Based Diagnostic Interface: A web-based platform allows non-experts to upload images and receive instant predictions with confidence scores, addressing accessibility gaps.
  • Clinical-Grade Data Handling: Strategic oversampling (akiec/bcc/df/vasc → 1,000 images each) and downsampling (nv → 1,300) improves rare-class recall by 15–20% while maintaining 92.29% overall accuracy.
  • Lightweight Architecture: The EfficientNet-B0 implementation ensures high performance suitable for real-time clinical use and edge devices.
The structure of the paper is as follows: Section 2 outlines the materials and methodology, Section 3 provides a detailed discussion of the results, and Section 4 concludes the study.

2. Materials and Methods

This section discusses the materials and methods used in the current research. Figure 1 illustrates the overall process and classification approach. Figure 2 depicts the workflow for classifying skin lesions, which combines deep learning with uncertainty quantification. When a user uploads a picture of a skin lesion, the system preprocesses it by resizing it to 224×224 pixels and normalizing it with ImageNet statistics. Five transformations (original, horizontal flip, vertical flip, color jitter, and a 20° rotation) are then applied to this standardized input as part of Test-Time Augmentation (TTA), which simulates real-world variations to improve model robustness. Each augmented image is processed by the EfficientNet-B0 model. During inference, Monte Carlo (MC) Dropout is used to measure prediction uncertainty: for every transformed image, the model makes ten stochastic forward passes with dropout layers kept active, producing a variety of outputs. The softmax probabilities from all passes are gathered and averaged together with the TTA probabilities, and the MC Dropout variance is computed to estimate prediction confidence. To improve clinical interpretability, the model reports both the classification and the associated uncertainty, generating a confidence score for the skin lesion class with the highest average probability (out of seven: akiec, bcc, bkl, df, mel, nv, and vasc).
Figure 1. Proposed Architecture.
Figure 2. Proposed sequence diagram for single skin lesion image classification.

2.1. Data Preparation and Initialization

To classify skin lesion images, high-resolution dermoscopic images were gathered from publicly accessible medical datasets. The data were then carefully preprocessed: resizing and normalization were applied to guarantee consistency in image size and quality. To facilitate robust model training, stratified splitting and data augmentation were used to achieve a balanced class distribution.

2.1.1. Data Collection and Description

The HAM10000 dataset was sourced from Kaggle, comprising a total of 10,015 dermoscopic images categorized into seven diagnostic classes as shown in Figure 3:
  • Actinic keratoses and intraepithelial carcinoma (akiec): Precancerous or early-stage malignant lesions with 327 images.
  • Basal cell carcinoma (bcc): A common form of skin cancer represented by 514 images.
  • Benign keratosis-like lesions (bkl): Includes benign growths like seborrheic keratoses with 1,099 images.
  • Dermatofibroma (df): A rare, benign fibrous skin tumor with 115 images.
  • Melanoma (mel): A highly dangerous skin cancer with 1,113 representative images.
  • Melanocytic nevi (nv): Benign moles dominating the dataset with 6,705 images.
  • Vascular lesions (vasc): Includes blood vessel-related lesions like angiomas, with 142 images.

2.1.2. Data Preprocessing and Balancing

Given the class imbalance, image augmentation was applied to minority classes (akiec, bcc, df, vasc) to increase their samples to 1,000 images each. This was achieved using Keras’ ImageDataGenerator, applying random transformations such as rotation, zoom, shifts, and flips. The majority class ’nv’ was downsampled to 1,300 images through random selection to avoid bias. Classes bkl and mel were retained at their original counts. The final dataset consisted of 7,512 images with improved class balance.
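The minority-class oversampling above can be sketched with Keras’ ImageDataGenerator. The parameter values below are illustrative assumptions: the paper names only the transform types (rotation, zoom, shifts, flips), not their ranges.

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Random transforms of the kinds listed above; the exact ranges here are
# illustrative, not the paper's actual settings.
augmenter = ImageDataGenerator(
    rotation_range=20,
    zoom_range=0.1,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
    vertical_flip=True,
    fill_mode="nearest",
)

# Stand-in for one dermoscopic image (H, W, C); in practice, images would be
# read from each minority-class folder and augmented copies saved back until
# the class reaches 1,000 samples.
image = np.random.rand(224, 224, 3)
augmented = augmenter.random_transform(image)
```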

2.1.3. Dataset Splitting

The balanced dataset was split into training (90%), validation (5%), and testing (5%) subsets via stratified sampling to maintain proportional class distribution across sets. This resulted in 6,760 training, 376 validation, and 376 test images.
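A 90/5/5 stratified split of this kind can be obtained with two chained scikit-learn calls; the helper below is our sketch, not the paper’s code.

```python
from sklearn.model_selection import train_test_split

def stratified_90_5_5(samples, labels, seed=42):
    """Split into 90% train, 5% val, 5% test, stratified by class label."""
    train_x, hold_x, train_y, hold_y = train_test_split(
        samples, labels, test_size=0.10, stratify=labels, random_state=seed)
    # Split the 10% holdout in half, again stratified, giving 5% + 5%.
    val_x, test_x, val_y, test_y = train_test_split(
        hold_x, hold_y, test_size=0.50, stratify=hold_y, random_state=seed)
    return (train_x, train_y), (val_x, val_y), (test_x, test_y)
```

On the 7,512-image balanced dataset, this procedure yields the 6,760/376/376 counts reported above.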

2.1.4. Image Transformation and Loading

Images were resized to 224×224 pixels, normalized based on ImageNet statistics, and converted to PyTorch tensors. The dataset structure was formatted for use with PyTorch’s ImageFolder to facilitate streamlined loading.

2.2. Model Set Up and Implementation

This subsection describes how the EfficientNet-B0 architecture, selected for its balance between accuracy and efficiency, was applied to classify skin lesion images. The final classification layers of the model were adjusted, and early stopping, data augmentation, and hyperparameter tuning were used to improve training and ensure reliable results.

2.2.1. Model Architecture and Transfer Learning

The EfficientNet-B0 model, pre-trained on the ImageNet dataset, was utilized in this study. To take advantage of its pre-learned features, the convolutional layers were initially kept frozen. The original classification head was replaced with a new fully connected layer configured to predict the seven skin cancer categories.

2.2.2. Model Training

Using the Adam optimizer and CrossEntropyLoss criterion, the model was trained for 15 epochs. During this phase, only the classifier layer’s weights were updated. Training and validation metrics were monitored to assess learning progress and mitigate overfitting.
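The per-epoch loop implied here can be written as a generic helper (our naming; the paper’s code is not published with this preprint):

```python
import torch

def train_one_epoch(model, loader, optimizer, criterion, device="cpu"):
    """Run one training epoch; return (mean loss, accuracy)."""
    model.train()
    total_loss, correct, seen = 0.0, 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()           # gradients flow only into trainable layers
        optimizer.step()
        total_loss += loss.item() * images.size(0)
        correct += (outputs.argmax(dim=1) == labels).sum().item()
        seen += images.size(0)
    return total_loss / seen, correct / seen

# Typical wiring for the frozen-backbone phase (15 epochs in the paper):
# optimizer = torch.optim.Adam(model.classifier.parameters())
# criterion = torch.nn.CrossEntropyLoss()
```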

2.2.3. Fine-Tuning

To further improve accuracy, all layers of EfficientNet-B0 were unfrozen for full model fine-tuning with a reduced learning rate (1e-4). This allowed the entire network to adapt to the specific dataset over another 15 epochs of training.

2.3. Model Evaluation

Final evaluation on the test set showed a substantial accuracy improvement after fine-tuning. Additional enhancements included Test-Time Augmentation (TTA) with horizontal flips, vertical flips, color jitter, and 20° rotations, as shown in Figure 4, where predictions on multiple augmented versions of each test image were averaged to reduce prediction variance. Furthermore, Monte Carlo Dropout was integrated at inference to capture uncertainty, combining with TTA for robust performance.
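The combined TTA + Monte Carlo Dropout inference can be sketched as the helper below (our naming). It keeps only the dropout layers in train mode and averages softmax outputs over all views and stochastic passes, as the workflow in Section 2 describes:

```python
import torch
import torch.nn.functional as F

def predict_tta_mc(model, image, tta_views, mc_passes=10):
    """Return (mean class probabilities, per-class MC variance) for one image."""
    model.eval()
    for module in model.modules():          # MC Dropout: keep dropout stochastic
        if isinstance(module, torch.nn.Dropout):
            module.train()
    probs = []
    with torch.no_grad():
        for view in tta_views:              # e.g. identity, flips, jitter, rotation
            x = view(image).unsqueeze(0)    # one augmented view, batch of 1
            for _ in range(mc_passes):      # 10 stochastic forward passes
                probs.append(F.softmax(model(x), dim=1))
    probs = torch.cat(probs, dim=0)
    return probs.mean(dim=0), probs.var(dim=0)
```

The predicted class is the argmax of the mean probabilities, and the per-class variance serves as the reported uncertainty.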

3. Results and Discussion

3.1. Training and Validation Performance

The EfficientNet-B0 model initially trained with frozen convolutional layers showed progressive improvement over 15 epochs. The training accuracy increased from 57.74% to 75.53%, while validation accuracy improved from 64.10% to 73.40%. Corresponding loss values steadily decreased, indicating effective learning without significant overfitting.
Based on the results presented in Table 2, EfficientNet-B0 was identified as the most suitable model for final deployment in this study. Prior to this decision, an extensive comparative analysis was conducted involving multiple deep learning architectures, whose performance was evaluated under various train–validation–test splits, as detailed in the comparison table. Among all the models tested, EfficientNet-B0 achieved the highest classification accuracy before fine-tuning, 77.39%, specifically with the 90:5:5 train–validation–test split, and was therefore chosen for further optimization. Subsequent fine-tuning of the model resulted in a notable improvement in accuracy, reaching 89.36%. To enhance generalization and robustness, Test-Time Augmentation (TTA) was applied, which further improved the accuracy beyond 90%. Finally, the incorporation of Monte Carlo Dropout during inference led to a peak accuracy of 92.29%, establishing EfficientNet-B0 as the most effective model in our experimental pipeline.
Table 1. Training and validation performance metrics across epochs using the EfficientNet-B0 model with frozen base layers.
Table 2. Model Accuracy Comparison Table (Before Fine tuning).
Train:Val:Test Ratio   EfficientNet-B0   ResNet50   DenseNet121   MobileNet   InceptionV3
60:20:20               74.18%            70.79%     74.38%        72.85%      65.67%
70:15:15               74.80%            70.19%     74.17%        73.20%      65.48%
80:10:10               74.60%            74.20%     74.20%        73.80%      67.42%
90:5:5                 77.39%            73.94%     73.94%        73.14%      67.29%

3.2. Fine-Tuning Performance

Full fine-tuning of the model with all layers trainable over 15 epochs yielded significant gains. Training accuracy reached 99.08%, and validation accuracy peaked at 89.36%. Loss values consistently decreased, demonstrating improved generalization.
Table 3. Training and validation performance metrics across epochs during full fine-tuning of the model.

3.3. Test Set Evaluation

The final test accuracy of the fine-tuned EfficientNet-B0 model was 89.36%. Applying Test-Time Augmentation (TTA) improved accuracy to 90.16%, while combining TTA with Monte Carlo Dropout further increased test accuracy to 92.29%.
The classification performance metrics for each class are summarized in Table 4. The model achieves perfect precision and recall for the classes df and vasc, indicating flawless classification on these categories. Classes such as akiec and bcc also show high precision and recall values above 0.90, reflecting strong model reliability. The bkl class has a slightly lower recall (0.85) but still maintains a respectable F1-score of 0.89. The mel category, corresponding to melanoma, exhibits the lowest performance, with precision and recall values of approximately 0.78, suggesting challenges in correctly identifying this high-risk class. In contrast, the nv class, having the highest number of samples, demonstrates strong performance with an F1-score of 0.90. Overall, the model attains an accuracy of 92% across all 376 samples. The macro average and weighted average metrics are consistent at approximately 0.93 and 0.92 respectively, demonstrating balanced performance across both common and less frequent classes.
Figure 5 shows the performance evaluation of the classification model. Subfigure (a) presents the confusion matrix analysis, illustrating how accurately the model classifies each lesion type. The model performs exceptionally well on classes such as akiec, df, and vasc, achieving near-perfect precision and recall. Classes like bcc and nv also exhibit strong classification performance with high scores across metrics. Classes bkl and mel, on the other hand, exhibit comparatively lower F1-scores, suggesting difficulties likely caused by class overlap or a lack of data. Subfigure (b) presents the ROC curve and AUC analysis, showing that the model can differentiate between classes with high AUC values, especially for akiec, df, and vasc. Consistent macro and weighted average scores support the model's 92% overall accuracy, which reflects balanced performance across common and rare classes. These findings support the model's robustness, particularly in identifying critical lesion types.
Figure 6 shows results from the web-based skin cancer detection system using EfficientNet-B0. Test Result 1 indicates a diagnosis of benign keratosis, a non-cancerous skin condition. Test Result 2 reveals the presence of melanoma, a serious form of skin cancer that requires immediate medical attention.

4. Conclusions

Skin cancer is still one of the most common and deadly cancers in the world, which highlights how crucial early and precise detection is to successful treatment. Because traditional diagnostic methods are frequently laborious and highly dependent on specialized knowledge, interest in automated diagnostic tools has grown. Medical image analysis has seen significant promise with deep learning, particularly with convolutional neural networks (CNNs). One contemporary CNN framework that stands out for its ideal balance between accuracy and computational efficiency is EfficientNet. The application of the EfficientNet-B0 architecture for reliable multi-class skin cancer classification using dermoscopic images is examined in this study. The study uses the HAM10000 dataset and applies transfer learning and data balancing strategies, then fine-tunes the entire network to improve model performance. With a classification accuracy of 92.29%, the suggested model showed a notable improvement. Monte Carlo Dropout and test-time augmentation were also used to improve the model’s dependability and generalization. EfficientNet-B0 is a good choice for implementation in real-time clinical settings with constrained computational resources because of its lightweight design. The results of this study highlight how deep CNN-based methods can help with accurate and timely skin lesion diagnosis, improving patient outcomes and treatment plans.
Future work may explore integrating this approach into clinical workflows to support dermatologists and reduce diagnostic workloads, thereby contributing to the advancement of AI-assisted medical imaging.

Author Contributions

Conceptualization, supervision, validation, writing—review and editing, methodology, software, visualization, Sima Das; Data curation, software, formal analysis, editing, investigation, and visualization, Rishav Kumar Addya. All authors have read and agreed to the published version of the manuscript.

Funding

There was no outside funding for this study. The journal waived the Article Processing Charge (APC).

Conflicts of Interest

No conflicts of interest are disclosed by the authors.

Abbreviations

This manuscript uses the following abbreviations:
DL Deep Learning
ML Machine Learning
CNN Convolutional Neural Network
HAM Human Against Machine
TTA Test Time Augmentation
AKIEC Actinic keratoses and intraepithelial carcinoma
BCC Basal Cell Carcinoma
DF Dermatofibroma
VASC Vascular Lesions
BKL Benign keratosis
MEL Melanoma

References

  1. Ashfaq, N., Suhail, Z., Khalid, A., et al. 2025. SkinSight: advancing deep learning for skin cancer diagnosis and classification. Discovery Computing 28: 63. [CrossRef]
  2. Kavitha, C., Priyanka, S., Praveen Kumar, M., Kusuma, V. 2024. Skin Cancer Detection and Classification using Deep Learning Techniques. Procedia Computer Science 235: 2793–2802. [CrossRef]
  3. Naeem, A., Farooq, M. S., Khelifi, A., & Abid, A. 2020. Malignant Melanoma Classification Using Deep Learning: Datasets, Performance Measurements, Challenges and Opportunities. IEEE Access 8: 110575–110597. [CrossRef]
  4. Balaha, H. M., & Hassan, A. E. S. 2023. Skin cancer diagnosis based on deep transfer learning and sparrow search algorithm. Neural Computing & Applications 35: 815–853. [CrossRef]
  5. Alotaibi, A., & AlSaeed, D. 2025. Skin Cancer Detection Using Transfer Learning and Deep Attention Mechanisms. Diagnostics 15: 99. [CrossRef]
  6. Djaroudib, K., Lorenz, P., Belkacem Bouzida, R., & Merzougui, H. 2024. Skin Cancer Diagnosis Using VGG16 and Transfer Learning: Analyzing the Effects of Data Quality over Quantity on Model Efficiency. Applied Sciences 14: 7447. [CrossRef]
  7. Nazari, S., & Garcia, R. 2023. Automatic Skin Cancer Detection Using Clinical Images: A Comprehensive Review. Life 13(11): 2123. [CrossRef]
  8. Naqvi, M., Gilani, S. Q., Syed, T., Marques, O., & Kim, H.-C. 2023. Skin Cancer Detection Using Deep Learning—A Review. Diagnostics 13: 1911. [CrossRef]
  9. Naseri, H., & Safaei, A. A. 2025. Diagnosis and prognosis of melanoma from dermoscopy images using machine learning and deep learning: a systematic literature review. BMC Cancer 25: 75. [CrossRef]
  10. Magalhaes, C., Mendes, J., & Vardasca, R. 2024. Systematic Review of Deep Learning Techniques in Skin Cancer Detection. BioMedInformatics 4: 2251–2270. [CrossRef]
  11. Imran, A., Nasir, A., Bilal, M., Sun, G., Alzahrani, A., & Almuhaimeed, A. 2022. Skin Cancer Detection Using Combined Decision of Deep Learners. IEEE Access 10: 118198–118212. [CrossRef]
  12. Moturi, D., Surapaneni, R. K., & Avanigadda, V. S. G. 2024. Developing an efficient method for melanoma detection using CNN techniques. Journal of the Egyptian National Cancer Institute 36: 6. [CrossRef]
  13. Kreouzi, M., Theodorakis, N., Feretzakis, G., Paxinou, E., Sakagianni, A., Kalles, D., Anastasiou, A., Verykios, V. S., & Nikolaou, M. 2025. Deep Learning for Melanoma Detection: A Deep Learning Approach to Differentiating Malignant Melanoma from Benign Melanocytic Nevi. Cancers 17: 28. [CrossRef]
  14. Tahir, M., Naeem, A., Malik, H., Tanveer, J., Naqvi, R. A., & Lee, S.-W. 2023. DSCC_Net: Multi-Classification Deep Learning Models for Diagnosing of Skin Cancer Using Dermoscopic Images. Cancers 15: 2179. [CrossRef]
  15. Naeem, A., Anees, T., Khalil, M., Zahra, K., Naqvi, R. A., & Lee, S.-W. 2024. SNC_Net: Skin Cancer Detection by Integrating Handcrafted and Deep Learning-Based Features Using Dermoscopy Images. Mathematics 12: 1030. [CrossRef]
  16. Zia Ur Rehman, M., Ahmed, F., Alsuhibany, S. A., Jamal, S. S., Zulfiqar Ali, M., & Ahmad, J. 2022. Classification of Skin Cancer Lesions Using Explainable Deep Learning. Sensors 22: 6915. [CrossRef]
  17. Karki, R., G C, S., Rezazadeh, J., & Khan, A. 2025. Deep Learning for Early Skin Cancer Detection: Combining Segmentation, Augmentation, and Transfer Learning. Big Data Cogn. Comput. 9: 97. [CrossRef]
  18. Gouda, W., Sama, N. U., Al-Waakid, G., Humayun, M., & Jhanjhi, N. Z. 2022. Detection of Skin Cancer Based on Skin Lesion Images Using Deep Learning. Healthcare 10: 1183. [CrossRef]
  19. Natha, P., & Rajeswari, P. R. 2023. Skin Cancer Detection using Machine Learning Classification Models. International Journal of Intelligent Systems and Applications in Engineering 12(6s): 139–145. https://ijisae.org/index.php/IJISAE/article/view/3966.
  20. Das, S., Kumar, V., & Cicceri, G. 2024. Chatbot Enable Brain Cancer Prediction Using Convolutional Neural Network for Smart Healthcare. In Healthcare-Driven Intelligent Computing Paradigms to Secure Futuristic Smart Cities (pp. 268–279). Chapman and Hall/CRC.
  21. Ashafuddula, N. I. M., & Islam, R. 2023. Melanoma skin cancer and nevus mole classification using intensity value estimation with convolutional neural network. Computer Science 24(3). [CrossRef]
  22. Rashad, N. M., Abdelnapi, N. M., Seddik, A. F., et al. 2025. Automating skin cancer screening: a deep learning. J. Eng. Appl. Sci. 72: 6. [CrossRef]
Figure 3. Sample images from the collected dataset.
Figure 4. Skin Lesion Image Preprocessing by Test Time Augmentation (TTA).
Figure 5. Confusion matrix and AUC-ROC curve analyses of the classification model.
Figure 6. Web-based Skin Cancer Detection System using EfficientNetB0.
Table 4. Classification Report Summary.
Class          Precision   Recall   F1-score   Support
akiec          1.00        0.96     0.98       50
bcc            0.92        0.98     0.95       50
bkl            0.92        0.85     0.89       55
df             1.00        1.00     1.00       50
mel            0.78        0.77     0.77       56
nv             0.87        0.92     0.90       65
vasc           1.00        1.00     1.00       50
Accuracy                            0.92       376
Macro avg      0.93        0.93     0.93       376
Weighted avg   0.92        0.92     0.92       376
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.