Preprint Article, Version 1 (Preserved in Portico). This version is not peer-reviewed.

Convolution Neural Network Based Multi-Label Disease Detection using Tongue Images

Version 1 : Received: 7 April 2024 / Approved: 8 April 2024 / Online: 8 April 2024 (13:02:06 CEST)

How to cite: Bhatnagar, V.; Bansod, P.P. Convolution Neural Network Based Multi-Label Disease Detection using Tongue Images. Preprints 2024, 2024040565. https://doi.org/10.20944/preprints202404.0565.v1

Abstract

Purpose: Tongue image analysis is an ancient, non-invasive diagnostic technique widely used by traditional medicine practitioners. Deep learning based multi-label disease detection models offer tremendous potential for clinical decision support systems by facilitating preliminary diagnosis. Methods: In this work, we propose a multi-label disease detection pipeline in which tongue images captured and submitted via smartphones are analysed to predict the health status of an individual. All images were voluntarily provided by subjects consulting the collaborating physicians. Each acquired image is first classified as diseased or normal by a binary convolutional neural network (MobileNetV2) model evaluated with 5-fold cross-validation. Once the diseased label is predicted, the image is passed to a DenseNet121-based prediction model that diagnoses multiple diseases. Results: The MobileNetV2 detection model achieved an average accuracy of 93% in separating diseased from normal healthy tongues. The multi-label disease classifier achieved more than 90% accuracy across the seven class labels considered. Conclusion: AI-based image analysis shows promising results, and a more extensive dataset could further improve this approach. Rather than relying on a high-cost, sophisticated image-capturing setup, working with smartphone images opens the opportunity to provide a preliminary health status to individuals on their smartphones before further diagnosis and treatment.
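The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustration in tf.keras, not the authors' implementation: the 224x224 input resolution, ImageNet weight initialization, Adam optimizer, sigmoid output heads, decision threshold of 0.5, batch size, and epoch count are assumptions; only the MobileNetV2/DenseNet121 backbones, the 5-fold cross-validation, and the seven-label output come from the abstract.

```python
# Sketch of the two-stage tongue-image pipeline (assumptions noted above).
# Image loading, resizing, and backbone-specific preprocessing are omitted.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedKFold

IMG_SHAPE = (224, 224, 3)   # assumed input resolution
NUM_DISEASE_LABELS = 7      # seven class labels, per the abstract


def build_binary_classifier():
    """Stage 1: MobileNetV2 backbone, diseased vs. normal."""
    base = tf.keras.applications.MobileNetV2(
        input_shape=IMG_SHAPE, include_top=False, weights="imagenet")
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam",
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model


def build_multilabel_classifier():
    """Stage 2: DenseNet121 backbone, multi-label disease prediction."""
    base = tf.keras.applications.DenseNet121(
        input_shape=IMG_SHAPE, include_top=False, weights="imagenet")
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(NUM_DISEASE_LABELS, activation="sigmoid")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam",
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model


def cross_validate_binary(images, labels, n_splits=5, epochs=10):
    """5-fold cross-validation of the diseased/normal classifier."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=42)
    accuracies = []
    for train_idx, val_idx in skf.split(images, labels):
        model = build_binary_classifier()
        model.fit(images[train_idx], labels[train_idx],
                  validation_data=(images[val_idx], labels[val_idx]),
                  epochs=epochs, batch_size=16, verbose=0)
        _, acc = model.evaluate(images[val_idx], labels[val_idx], verbose=0)
        accuracies.append(acc)
    return float(np.mean(accuracies))


def predict_health_status(image, binary_model, multilabel_model,
                          threshold=0.5):
    """Run stage 1; only if 'diseased' is predicted, run stage 2."""
    batch = np.expand_dims(image, axis=0)
    if binary_model.predict(batch, verbose=0)[0, 0] < threshold:
        return "normal", None
    disease_probs = multilabel_model.predict(batch, verbose=0)[0]
    return "diseased", disease_probs
```

In this sketch, an image is only routed to the DenseNet121 multi-label head when the MobileNetV2 gate predicts "diseased", mirroring the cascade described in the Methods.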

Keywords

Tongue Feature Extraction, Disease Diagnosis, Segmentation, MobileNetV2 Architecture, DenseNet121 Architecture, Convolutional Neural Network

Subject

Public Health and Healthcare, Public Health and Health Services
