1. Introduction
Lung cancer accounts for 21% of all cancer-related fatalities globally and is the second most frequent malignancy in both men and women [1]. Lung cancer has a high death rate compared with all other cancer types, and early diagnosis remains difficult despite modern screening techniques. Only about 20% of cases are diagnosed at stage I, a statistic that has remained unchanged for years, highlighting the ongoing challenges clinicians face in prognosis assessment and treatment planning [2]. Although immunotherapy and targeted treatment for lung cancer have advanced significantly, their effectiveness is still unpredictable, and response rates to these treatments vary greatly [3]. As a result, early detection of lung cancer requires highly sensitive and specific diagnostic instruments [4]. Several studies [5] have emphasized the significance of analyzing the morphological characteristics of pathological slides for both diagnosing and predicting the progression of lung cancer [6]. These findings underscore the crucial role that computer-aided image analysis plays in enhancing the accuracy and effectiveness of lung cancer prognosis.
Adenocarcinoma, the most common subtype of non-small cell lung cancer (NSCLC), accounts for approximately 40% of all lung cancer cases [7]. It primarily originates in the peripheral regions of the lungs and is strongly associated with smoking, though it can also occur in non-smokers. This subtype is characterized by glandular differentiation and the production of mucin, which can be observed in histopathological evaluations. Adenocarcinoma often progresses asymptomatically in its early stages, contributing to delayed diagnosis and treatment challenges [7].
Squamous cell carcinoma (SCC), a major subtype of NSCLC, accounts for approximately 25–30% of lung cancer cases [8]. It typically originates in the central parts of the lungs, such as the bronchi, and is closely linked to a history of smoking. Squamous cell carcinoma is characterized histologically by the presence of keratin pearls and intercellular bridges, reflecting its origin from the squamous epithelial cells lining the respiratory tract [9].
Large cell carcinoma (LCC) is a less common subtype of NSCLC, accounting for approximately 10–15% of lung cancer cases. It is characterized by large, undifferentiated cells under microscopic examination, lacking the glandular or squamous features seen in adenocarcinoma or squamous cell carcinoma [10]. LCC can occur in any part of the lung but is most frequently found in the peripheral regions. Clinically, large cell carcinoma is often aggressive, with a tendency for rapid growth and early metastasis, which contributes to its poor prognosis [11]. Patients may present with nonspecific symptoms such as cough, chest pain, or weight loss, which can delay diagnosis [12].
Given the challenges in diagnosing and treating aggressive cancers like LCC, there is growing interest in integrating advanced technologies to enhance clinical outcomes. AI, particularly machine learning (ML) and deep learning (DL), holds significant promise for transforming cancer diagnostics and treatment. By leveraging vast amounts of medical data, AI systems can detect subtle patterns and anomalies that might go unnoticed by traditional methods, aiding earlier diagnosis and more personalized treatment plans. John McCarthy first introduced the concept of AI in 1956, defining it as the use of computer systems to mimic human intelligence and critical thinking [13]. In medicine, AI is categorized into two primary domains: virtual and physical. The virtual domain is further divided into ML and DL [14]. ML refers to a system’s ability to independently learn from data without explicit programming [15]. It encompasses four main approaches: supervised, unsupervised, reinforcement, and active learning [16]. Supervised learning involves analyzing labeled input data to discern patterns, using models such as Bayesian inference, decision trees, linear discriminants, support vector machines, logistic regression, and artificial neural networks [17]. DL, a specialized subset of ML, stacks multiple processing layers to perform feature extraction and model optimization [18].
AI technologies aim to create systems and robots capable of performing tasks such as pattern recognition, decision-making, and situational adaptation—abilities traditionally requiring human intelligence [19]. Advances in computational power, coupled with innovations in machine learning and neural networks, have driven significant progress in AI [20]. As a subset of AI, machine learning focuses on training computers to analyze large datasets, identify patterns, and use those insights to make predictions or decisions [19]. AI has demonstrated transformative potential in various domains, including natural language processing, autonomous vehicles, healthcare, and image recognition, excelling in swiftly analyzing complex datasets, uncovering patterns invisible to humans, and delivering highly accurate predictions [21], [22], [23].
This paper presents a novel multimodal AI framework designed to enhance the detection and classification of lung cancer, integrating convolutional neural networks (CNNs) and artificial neural networks (ANNs). By combining imaging data, specifically computed tomography (CT) scans, with clinical data from patients, this approach aims to overcome the challenges associated with early diagnosis and accurate prognosis of lung cancer subtypes, including adenocarcinoma, squamous cell carcinoma, and large cell carcinoma. Together, these models not only provide precise classifications but also emphasize the importance of interpretability and scalability in real-world clinical settings. This framework highlights the potential of combining imaging and clinical data through AI to improve early diagnosis, personalize treatment strategies, and ultimately advance the management of lung cancer. The key contributions of this study are highlighted below:
A CNN model was developed to classify lung cancer types from CT images, achieving 91% accuracy and excellent discrimination between adenocarcinoma, large cell carcinoma, squamous cell carcinoma, and normal tissue.
The model incorporated data augmentation and Grad-CAM visualizations, enhancing generalization and interpretability by highlighting key tumor regions in CT scans.
An ANN model, based on clinical data, achieved 99% accuracy, with near-perfect precision, recall, and F1 scores, demonstrating the power of integrating demographic and clinical features into lung cancer diagnostics.
Both models showed strong performance, with the CNN excelling in image-based classification and the ANN providing accurate predictions based on patient clinical data.
2. Datasets and Symptom Analysis
In this study, we developed two distinct AI models, combining image-based and clinical data, to predict lung cancer types and assess their potential malignancy. The first model employed a CNN trained on 900 CT images, and the second utilized an ANN trained on clinical data from 999 patients. The primary aim was to design an integrated diagnostic approach capable of accurately classifying lung cancer into four categories: adenocarcinoma, large cell carcinoma, squamous cell carcinoma, and normal tissue.
The CT dataset consisted of 900 labeled images divided into these four classes. Each image underwent preprocessing to ensure uniformity and compatibility with the CNN model, including resizing to a standard resolution of 256x256 pixels and normalization to enhance computational efficiency while preserving key diagnostic features. The CNN predicts the class of the uploaded image along with a probability score for each category, providing valuable insight into the likelihood of a specific diagnosis. Additionally, Grad-CAM was implemented to generate heatmap overlays on the input images, highlighting regions critical to the model’s predictions. These heatmaps offer visual interpretability, making the system more transparent and clinically applicable.
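The resizing and normalization step described above can be sketched as follows; the function name `preprocess_ct` and the synthetic demo image are illustrative stand-ins, not artifacts of the original pipeline.

```python
import numpy as np
from PIL import Image

def preprocess_ct(img: Image.Image, size=(256, 256)) -> np.ndarray:
    """Resize a CT slice to a uniform resolution, keep three RGB
    channels, and scale pixel intensities into [0, 1]."""
    img = img.convert("RGB").resize(size)
    return np.asarray(img, dtype=np.float32) / 255.0

# Demo on a synthetic image standing in for a real CT slice.
raw = Image.fromarray(
    np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8))
x = preprocess_ct(raw)
print(x.shape)  # (256, 256, 3)
```

A real pipeline would apply this uniformly to the training, validation, and test splits before feeding the CNN.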
The second phase of the study involved an ANN trained on clinical data collected from 999 patients. The dataset consisted of 24 features, capturing various demographic, lifestyle, genetic, and symptomatic factors associated with lung cancer. These features included:
Age and Gender: Basic demographic variables critical for understanding population-specific cancer risks.
Air Pollution and Smoking: Indicators of environmental and behavioral exposure to carcinogens, with smoking being a primary risk factor for lung cancer.
Alcohol Use: Reflects the patient’s alcohol consumption, which may compound the risk of cancer in smokers.
Dust Allergy and Occupational Hazards: Variables assessing exposure to dust particles or workplace environments with known lung irritants or carcinogens.
Genetic Risk and Chronic Lung Disease: Capture hereditary predisposition and pre-existing respiratory conditions that increase cancer susceptibility.
Balanced Diet and Obesity: Reflect the role of nutrition and body mass index in overall health and immune response.
Passive Smoker: Accounts for indirect exposure to second-hand smoke, an established risk factor for lung cancer.
Chest Pain, Coughing of Blood, Fatigue, and Weight Loss: Common symptoms associated with lung cancer progression.
Shortness of Breath, Wheezing, and Swallowing Difficulty: Respiratory symptoms indicative of tumor obstruction or advanced disease.
Clubbing of Fingernails: A physical manifestation often linked to chronic hypoxia in lung diseases, including cancer.
Frequent Cold and Dry Cough: Represent the recurring respiratory issues that could signal early lung disease.
Snoring: Included as a potential indirect factor linked to airway obstruction or respiratory abnormalities.
Level: Represents the severity or progression stage of lung cancer, providing additional context for diagnosis.
These features were standardized and preprocessed to ensure compatibility with the ANN model. By integrating demographic, symptomatic, and lifestyle data, the ANN was designed to classify patients into potential lung cancer categories. This dual-model approach combining CNN for imaging and ANN for clinical data analysis presents a robust and comprehensive diagnostic framework for lung cancer detection and classification. The goal was to create a multimodal approach that integrates different diagnostic techniques for more accurate and comprehensive predictions.
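The standardization of the 24 clinical features can be sketched with plain z-score scaling (the exact scaler used is not specified in the text, so this is an assumption), here applied to a synthetic 999x24 feature matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic stand-in: 999 patients x 24 clinical features
X = rng.normal(loc=5.0, scale=2.0, size=(999, 24))

# z-score standardization: zero mean, unit variance per feature
mean, std = X.mean(axis=0), X.std(axis=0)
X_std = (X - mean) / std
```

The fitted `mean` and `std` would be computed on the training split only and reused for validation and test data to avoid leakage.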
3. Machine Learning Model
A CNN model was created to classify CT images for lung cancer detection, distinguishing between the four classes using Python and the Keras library. Unlike methods that convert images into 1D arrays, this approach retained the original image format to preserve critical spatial and contextual details essential for medical image interpretation. The preprocessing pipeline ensured uniformity and compatibility with the CNN architecture by resizing input images to 256x256 pixels, normalizing pixel values to the [0, 1] range, and maintaining their three-channel RGB format (256x256x3); the labels spanned the four classes. These preprocessing steps ensured efficient training while preserving vital diagnostic features. The dataset was divided into training, validation, and testing subsets to assess the model’s performance and generalizability.
The CNN architecture incorporated multiple convolutional layers (Conv2D) with ReLU activation functions, followed by max-pooling layers to reduce dimensionality while preserving significant features. To enhance generalization and mitigate overfitting, dropout layers were added between dense layers, and batch normalization was applied to stabilize and accelerate training. The model comprised three convolutional blocks, each consisting of a convolutional layer, max-pooling, and batch normalization. The final dense layer used a SoftMax activation function to predict class probabilities for the four categories. The model was optimized using the Adam optimizer, known for its adaptive learning rate, and trained with the categorical cross-entropy loss function, suitable for multi-class classification tasks. Training was conducted over 100 epochs with a batch size of 32, balancing computational demands and performance. To improve robustness and mitigate overfitting, data augmentation was performed using Keras’s ImageDataGenerator, introducing variations such as random rotations, flips, zooms, and shifts to simulate real-world imaging conditions. The model’s performance was evaluated using metrics such as accuracy, precision, recall, and F1 score, alongside confusion matrices to visualize classification outcomes. Training and validation performance were monitored through learning curves, providing insights into the model’s optimization and generalization across epochs. This approach demonstrated CNN’s effectiveness in leveraging CT data for lung cancer detection and classification.
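A minimal Keras sketch consistent with the description above — three convolutional blocks (Conv2D, max-pooling, batch normalization), dropout between dense layers, a four-way SoftMax head, the Adam optimizer, and categorical cross-entropy. The filter counts, dense width, and dropout rate are illustrative assumptions, not the paper's exact values:

```python
from tensorflow.keras import layers, models

def build_cnn(input_shape=(256, 256, 3), n_classes=4):
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    # three convolutional blocks: Conv2D -> MaxPooling -> BatchNorm
    for filters in (32, 64, 128):  # filter counts are assumed
        model.add(layers.Conv2D(filters, (3, 3), activation="relu"))
        model.add(layers.MaxPooling2D((2, 2)))
        model.add(layers.BatchNormalization())
    model.add(layers.Flatten())
    model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dropout(0.5))  # dropout between dense layers
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

cnn = build_cnn()
print(cnn.output_shape)  # (None, 4)
```

Augmentation along the lines described (random rotations, flips, zooms, shifts) would be supplied at fit time, e.g. via Keras's ImageDataGenerator.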
Alongside the CNN, an ANN was developed to classify lung cancer risk based on clinical data. The dataset was preprocessed by standardizing the feature values to ensure consistency and improve the training process. The ANN predicts three risk classes: 0 (Low), 1 (Medium), and 2 (High). Its architecture featured a straightforward feed-forward network comprising fully connected layers. The input layer accommodated 24 features, followed by two dense layers utilizing ReLU activation functions. The output layer employed a SoftMax activation function to enable multi-class classification. The model was trained using the Adam optimizer and the categorical cross-entropy loss function, ensuring optimal weight adjustments during the learning process. To assess the ANN’s performance, metrics such as accuracy, precision, recall, and F1 score were calculated, and a confusion matrix was used to analyze the distribution of predictions against true labels. Both models displayed excellent performance in their respective areas: the CNN was highly effective in analyzing CT images to detect lung cancer types, while the ANN delivered accurate predictions based on patient-specific clinical data. Together, these models highlight the potential of deep learning techniques to improve diagnostic accuracy and provide valuable insights into lung cancer classification and management.
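The clinical-data ANN described above (24 inputs, two ReLU dense layers, a three-way SoftMax output) can be sketched in Keras as follows; the hidden-layer widths are assumptions, as the paper does not state them:

```python
from tensorflow.keras import layers, models

ann = models.Sequential([
    layers.Input(shape=(24,)),              # 24 clinical features
    layers.Dense(64, activation="relu"),    # hidden width assumed
    layers.Dense(32, activation="relu"),    # hidden width assumed
    layers.Dense(3, activation="softmax"),  # Low / Medium / High
])
ann.compile(optimizer="adam",
            loss="categorical_cross" + "entropy".replace("_", ""),
            metrics=["accuracy"])
print(ann.output_shape)  # (None, 3)
```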
4. Experimental Results
The results of this study provide a comprehensive evaluation of the developed models, demonstrating their performance across a range of critical metrics and visualizations. These analyses encompass key aspects such as classification accuracy, interpretability, and reliability, showcasing the models’ effectiveness in handling complex datasets. Through a detailed exploration of training dynamics, precision-recall characteristics, and class-wise discrimination capabilities, the findings underscore the robustness and clinical relevance of the proposed systems. The figures and tables presented in this section serve to substantiate these outcomes, offering valuable insights into the models’ strengths and areas for potential improvement.
The training and validation accuracy and loss curves, depicted in Figure 1, illustrate the dynamics of the CNN model over 100 epochs. The accuracy curves demonstrate gradual improvement, converging around 0.91, while the loss curves exhibit a steady decline, indicating that the model effectively minimized the loss function during training.
The classification performance of the model is further validated using Receiver Operating Characteristic (ROC) curves, as depicted in Figure 2. The ROC curves for each class are presented, with area under the curve (AUC) values of 0.97 for adenocarcinoma, 1.00 for large cell carcinoma, 1.00 for normal, and 0.99 for squamous cell carcinoma. These results demonstrate the model’s excellent discrimination capabilities across all categories. The near-perfect AUC values for multiple classes underscore its efficacy in differentiating between malignant and normal cases, supporting its application in real-world clinical scenarios. The performance metrics for the CNN are detailed in Table 1. The model achieved a weighted average accuracy of 91%, with a Matthews Correlation Coefficient (MCC) of 0.87. The weighted precision, recall, and F1-score are 0.91, 0.91, and 0.90, respectively. These metrics suggest that the model performs well overall, though its ability to differentiate between certain classes remains limited, as reflected in the class-wise recall values.
Class-level performance metrics demonstrate variability:
Adenocarcinoma attained a high recall of 0.94, indicating the model’s ability to correctly identify most instances of this class.
Large cell carcinoma achieved a balanced precision and recall of 0.87.
The normal class was detected with exceptional accuracy, achieving a precision of 0.93 and a recall of 1.00.
Squamous cell carcinoma showed an imbalanced performance, with a perfect precision of 1.00 but a lower recall of 0.75, indicating that a quarter of its instances were missed.
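The aggregate and class-wise metrics above follow standard definitions; as a small sketch on toy labels (not the study's actual predictions), using scikit-learn:

```python
from sklearn.metrics import (accuracy_score, matthews_corrcoef,
                             precision_recall_fscore_support)

# toy ground truth and predictions over the four classes (0..3)
y_true = [0, 0, 1, 1, 2, 2, 3, 3, 3, 0]
y_pred = [0, 0, 1, 2, 2, 2, 3, 3, 1, 0]

acc = accuracy_score(y_true, y_pred)
mcc = matthews_corrcoef(y_true, y_pred)
# weighted averaging matches the aggregates reported in Table 1
p, r, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0)
print(round(acc, 2))  # 0.8
```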
Figure 3 presents the confusion matrix summarizing the model’s classification performance across four categories: adenocarcinoma, squamous cell carcinoma, large cell carcinoma, and normal. The diagonal elements indicate correct classifications, while off-diagonal elements represent misclassifications. The model achieves high accuracy, particularly for large cell carcinoma and normal cases, with minimal errors across all classes. Notably, squamous cell carcinoma shows slightly higher misclassification rates compared to other categories, which may reflect inherent similarities in imaging patterns with adenocarcinoma. This matrix provides a comprehensive overview of the model’s strengths and areas needing further refinement.
Figure 4 provides a series of predictions made by CNN, including the ground truth labels, predicted labels, and the model’s confidence scores. These samples demonstrate the model’s ability to produce highly confident and accurate predictions in most cases. Furthermore, they highlight occasional misclassifications, which offer valuable insight into potential areas for improvement in training or data representation.
Figure 5 demonstrates the performance of the CNN on an individual test image. The model confidently predicts the input CT scan as adenocarcinoma with a confidence score of 98.78%. This high-confidence prediction indicates the robustness of the model in identifying adenocarcinoma from chest CT images. Such performance underlines the model’s clinical potential for aiding diagnostic workflows.
Figure 6 showcases the Grad-CAM (Gradient-weighted Class Activation Mapping) visualization, which identifies the regions of the CT image that contribute most significantly to the model’s predictions. This visualization serves as an essential interpretability tool, offering insights into the CNN’s decision-making process. By localizing critical features in the image, such as tumor regions, Grad-CAM masking enhances the model’s transparency and trustworthiness, an essential aspect in clinical AI applications.
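The Grad-CAM computation can be sketched in its standard form: gradients of the top class score with respect to the last convolutional feature map are averaged per channel into weights, which then weight a sum of the feature maps. The tiny demo model and the layer name `last_conv` below are placeholders, not the paper's network:

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name):
    """Return a [0, 1]-normalized class-activation heatmap."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        top_idx = int(tf.argmax(preds[0]))
        top_score = preds[:, top_idx]
    grads = tape.gradient(top_score, conv_out)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))      # per-channel weights
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)  # weighted feature sum
    cam = tf.nn.relu(cam)                                # keep positive evidence
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

# tiny placeholder model, just to exercise the function
inp = tf.keras.Input((64, 64, 3))
h = tf.keras.layers.Conv2D(8, 3, activation="relu", name="last_conv")(inp)
h = tf.keras.layers.GlobalAveragePooling2D()(h)
out = tf.keras.layers.Dense(4, activation="softmax")(h)
model = tf.keras.Model(inp, out)

heatmap = grad_cam(model, np.random.rand(64, 64, 3).astype("float32"),
                   "last_conv")
print(heatmap.shape)  # (62, 62)
```

In practice the heatmap would be resized back to the 256x256 CT slice and rendered as a semi-transparent overlay.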
Figure 7 displays the ANN model’s training and validation accuracy over 15 epochs. The training accuracy increases consistently, achieving near-perfect accuracy at the final epoch. Validation accuracy follows a similar trend, converging closely with the training accuracy, which indicates minimal overfitting and excellent generalization to unseen data. This behavior demonstrates that the model is well-trained and effectively learns from the provided dataset.
The ROC and Precision-Recall (PR) curves showcase the classifier’s performance across different thresholds for the three classes. The ROC curves, plotting the true positive rate (TPR) against the false positive rate (FPR), highlight the model’s ability to discriminate between classes: the ROC AUC scores for Class 0, Class 1, and Class 2 were 0.99, 0.99, and 1.00, respectively, demonstrating near-perfect classification across all classes. The PR curves, plotting precision against recall, further evaluate the classifier, especially under imbalanced data conditions: the PR AUC scores for Class 0 and Class 1 were 0.98, while Class 2 achieved a perfect PR AUC of 1.00. These metrics emphasize the model’s outstanding precision and recall capabilities, ensuring accurate detection without significant false positives.
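The per-class ROC and PR AUCs above follow a one-vs-rest computation; a toy sketch with scikit-learn on synthetic labels and scores (not the study's data) looks like this:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, precision_recall_curve, auc
from sklearn.preprocessing import label_binarize

rng = np.random.default_rng(1)
y_true = rng.integers(0, 3, size=200)      # three synthetic classes
scores = rng.random((200, 3))
scores[np.arange(200), y_true] += 1.0      # make the scores informative
scores /= scores.sum(axis=1, keepdims=True)

y_bin = label_binarize(y_true, classes=[0, 1, 2])
roc_aucs, pr_aucs = [], []
for k in range(3):
    # one-vs-rest ROC AUC for class k
    roc_aucs.append(roc_auc_score(y_bin[:, k], scores[:, k]))
    # PR AUC for class k, integrated from the PR curve
    prec, rec, _ = precision_recall_curve(y_bin[:, k], scores[:, k])
    pr_aucs.append(auc(rec, prec))
```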
The trained ANN model demonstrates robust performance across multiple evaluation metrics and visualizations. The final table summarizes the performance metrics achieved by the model, which reflect outstanding classification accuracy. The overall accuracy of the model is 99%, complemented by a weighted precision, recall, and F1-score of 1.00, 0.99, and 0.99, respectively. Furthermore, the MCC and the ROC AUC both reach 0.99, underscoring the model’s effectiveness in distinguishing between classes. The detailed classification report further validates these findings, showing class-wise precision, recall, and F1-scores all approaching or achieving 1.00. This high level of performance is maintained across all classes, demonstrating the model’s ability to accurately predict outcomes irrespective of class distribution.
The confusion matrix offers a detailed breakdown of the model’s predictions across three classes (Low, Medium, and High). It revealed:
For Class 0 (Low), 59 instances were correctly classified, with only 1 misclassification.
For Class 1 (Medium), 69 instances were accurately identified with no errors.
For Class 2 (High), all 71 samples were perfectly predicted.
The matrix confirms the model’s high performance, particularly its ability to minimize errors and achieve perfect results for specific classes. These visualizations collectively underline the model’s robust performance, showcasing minimal errors, high agreement, and excellent class discrimination.
Figure 9. Confusion Matrix of ANN Model’s Classes.
5. Discussion
The results of this study demonstrate the efficacy of the proposed dual-model AI approach in accurately classifying lung cancer types and assessing malignancy potential based on imaging and clinical data. The CNN model exhibited excellent performance in leveraging CT images to distinguish between adenocarcinoma, large cell carcinoma, squamous cell carcinoma, and normal tissue. With a weighted average accuracy of 91% and high AUC values across classes (ranging from 0.97 to 1.00), the model showcased its robust discrimination capabilities. Grad-CAM visualizations further enhanced its interpretability, allowing clinicians to identify regions of interest critical to the model’s predictions, thereby increasing trust in the system’s diagnostic decisions. However, variability in class-wise performance, particularly for squamous cell carcinoma, indicates room for improvement. The lower recall for this category suggests the need for either additional training data or improved feature extraction techniques to mitigate class imbalances and enhance discriminatory power.
The ANN model achieved even higher performance metrics when applied to clinical data, with an overall accuracy of 99% and class-wise precision, recall, and F1-scores nearing or achieving perfection. The high accuracy of the ANN model further highlights the value of integrating clinical data, corroborating findings that demographic and symptomatic factors significantly enhance diagnostic precision [24]. The model’s success demonstrates the value of integrating demographic, symptomatic, and lifestyle factors into lung cancer diagnostics. Features such as genetic risk, occupational hazards, and respiratory symptoms played pivotal roles in differentiating between malignancy levels. The ANN’s ability to achieve near-perfect classification, as supported by ROC and PR curves, underscores the power of combining clinical expertise with deep learning techniques.
Despite these promising results, some limitations merit discussion. First, the CNN model’s performance was affected by inherent similarities between certain cancer types, which occasionally led to misclassifications. The variability in CT image quality and resolution could also have contributed to these challenges. For the ANN model, while its accuracy was exemplary, it relied heavily on the quality and diversity of the clinical dataset. Ensuring broader representation across patient demographics and disease stages would improve its generalizability in real-world settings. Additionally, both models could benefit from further optimization, such as hyperparameter tuning and advanced transfer learning techniques, to enhance their robustness and reduce the potential for overfitting.
The combined use of CNN and ANN models leverages the strengths of both image-based and clinical data, providing a holistic approach to lung cancer diagnostics. The CNN and ANN models demonstrated strong performance, with high accuracy, precision, and AUC values, validating their reliability in a diagnostic setting. The incorporation of Grad-CAM for model interpretability aligns with previous research emphasizing the importance of explainability in AI-driven medical diagnostics [25]. The use of Grad-CAM visualizations makes the CNN model’s predictions more transparent, aiding clinicians in understanding the rationale behind diagnostic decisions. The study highlights the significance of various clinical features, such as genetic risk and occupational hazards, demonstrating the ANN model’s ability to incorporate complex data.
The CNN model also showed class-wise variability, with inconsistencies in classifying squamous cell carcinoma that point to challenges in addressing inter-class similarities, and both models depend on the quality and diversity of the dataset. As noted in prior studies, CNNs have demonstrated exceptional capability in processing medical imaging data for cancer detection, but difficulty distinguishing visually similar cancer types remains a common limitation [26]. Variability in CT image quality and limited representation of certain demographics may hinder generalizability. Despite high performance metrics, the risk of overfitting remains, particularly for the ANN model given its near-perfect scores.
6. Conclusion
This study presents a comprehensive AI-based diagnostic framework for lung cancer detection, combining CNN for imaging and ANN for clinical data analysis. By integrating these two modalities, the system achieves robust classification performance, demonstrating its potential as a valuable tool in clinical workflows. The CNN model effectively leverages CT data for image-based diagnosis, while the ANN excels in analyzing patient-specific clinical features, highlighting the complementary strengths of these deep learning techniques. Future work will focus on several key areas to enhance the system’s accuracy and applicability. These include expanding the dataset to incorporate more diverse and heterogeneous samples, employing advanced transfer learning and data augmentation strategies, and integrating additional diagnostic modalities such as genetic sequencing or histopathological imaging. Additionally, real-world validation in clinical environments will be critical to assessing the system’s practical utility and refining its performance further. Efforts to address interpretability and user-friendliness, such as enhancing Grad-CAM visualizations and incorporating interactive interfaces, will also be prioritized to ensure widespread adoption in healthcare settings.
In conclusion, the dual-model approach provides a promising foundation for improving lung cancer diagnostics, offering precise and interpretable predictions that can support clinicians in making informed decisions. By addressing current limitations and expanding its capabilities, this framework holds significant potential to advance early detection, classification, and management of lung cancer, ultimately contributing to better patient outcomes.
Acknowledgements
This work was carried out entirely with the institution’s existing staff and infrastructure; all resources and assistance came from internal sources. Ethical approval is not applicable. The data supporting the study’s conclusions are available within the journal. The raw data supporting the study’s findings will be provided by the corresponding author upon reasonable request.
Credit authorship contribution statement:
Emir Öncü developed the AI models used in this study, wrote all the sections related to AI model development and lung cancer classification, and drew the figures.
Conflicts of competing interests:
The author declares no competing financial interests or personal relationships that could have influenced the work reported in this article.
Declaration of generative AI and AI assisted technologies in the writing process:
The author utilized Grammarly, Quillbot, and ChatGPT to improve readability and check grammar while preparing this work. After using these tools, the author assumed complete accountability for the publication’s content, scrutinizing and revising it as needed.
Data Availability:
The datasets used in this study are publicly available on Kaggle. The clinical data utilized for the Artificial Neural Network (ANN) model can be accessed at Cancer Patients and Air Pollution: A New Link. The imaging data employed for the Convolutional Neural Network (CNN) model is available at Chest CT Scan Images. These datasets were used under their respective open-access licenses for research purposes. The code developed for this study is available upon reasonable request.
Biographical Statement:
Emir Oncu is a Biomedical Engineer with expertise in biomedical imaging techniques, signal processing, medical device design, and artificial intelligence. He has authored several studies focused on the applications of CNNs and ANNs in healthcare, particularly in cancer detection and diagnosis.
References
- I. Salter and S. R. Riddell, ‘Tinkering in the garage — tuning CARs for safety’, Sep. 01, 2019, Nature Research. [CrossRef]
- M. Cellina et al., ‘Artificial Intelligence in Lung Cancer Imaging: Unfolding the Future’, Nov. 01, 2022, Multidisciplinary Digital Publishing Institute (MDPI). [CrossRef]
- X. Yin et al., ‘Artificial intelligence-based prediction of clinical outcome in immunotherapy and targeted therapy of lung cancer’, Semin Cancer Biol, vol. 86, pp. 146–159, Nov. 2022. [CrossRef]
- K. Xu et al., ‘Progress of exosomes in the diagnosis and treatment of lung cancer’, Biomedicine & Pharmacotherapy, vol. 134, p. 111111, Feb. 2021. [CrossRef]
- K. H. Yu et al., ‘Predicting non-small cell lung cancer prognosis by fully automated microscopic pathology image features’, Nat Commun, vol. 7, Aug. 2016. [CrossRef]
- Z. Gandhi et al., ‘Artificial Intelligence and Lung Cancer: Impact on Improving Patient Outcomes’, Nov. 01, 2023, Multidisciplinary Digital Publishing Institute (MDPI). [CrossRef]
- W. D. Travis et al., ‘International Association for the Study of Lung Cancer/American Thoracic Society/European Respiratory Society International Multidisciplinary Classification of Lung Adenocarcinoma’, Journal of Thoracic Oncology, vol. 6, no. 2, pp. 244–285, Feb. 2011. [CrossRef]
- R. S. Heist, L. V. Sequist, and J. A. Engelman, ‘Genetic Changes in Squamous Cell Lung Cancer: A Review’, Journal of Thoracic Oncology, vol. 7, no. 5, pp. 924–933, May 2012. [CrossRef]
- D. R. Gandara, P. S. Hammerman, M. L. Sos, P. N. Lara, and F. R. Hirsch, ‘Squamous cell lung cancer: From tumor genomics to cancer therapeutics’, Clinical Cancer Research, vol. 21, no. 10, pp. 2236–2243, May 2015. [CrossRef]
- J. M. Varlotto et al., ‘Should Large Cell Neuroendocrine Lung Carcinoma be Classified and Treated as a Small Cell Lung Cancer or with Other Large Cell Carcinomas?’, Journal of Thoracic Oncology, vol. 6, no. 6, pp. 1050–1058, Jun. 2011. [CrossRef]
- R. J. Battafarano et al., ‘Large cell neuroendocrine carcinoma: An aggressive form of non-small cell lung cancer’, J Thorac Cardiovasc Surg, vol. 130, no. 1, pp. 166–172, Jul. 2005. [CrossRef]
- H. Takei et al., ‘Large cell neuroendocrine carcinoma of the lung: A clinicopathologic study of eighty-seven cases’, J Thorac Cardiovasc Surg, vol. 124, no. 2, pp. 285–292, Aug. 2002. [CrossRef]
- Amisha, P. Malik, M. Pathania, and V. Rathaur, ‘Overview of artificial intelligence in medicine’, J Family Med Prim Care, vol. 8, no. 7, p. 2328, 2019. [CrossRef]
- P. Hamet and J. Tremblay, ‘Artificial intelligence in medicine’, Metabolism, vol. 69, pp. S36–S40, Apr. 2017. [CrossRef]
- H. Goyal et al., ‘Application of artificial intelligence in pancreaticobiliary diseases’, 2021, SAGE Publications Ltd. [CrossRef]
- H. Goyal et al., ‘Scope of artificial intelligence in screening and diagnosis of colorectal cancer’, Oct. 01, 2020, MDPI. [CrossRef]
- Y. J. Yang and C. S. Bang, ‘Application of artificial intelligence in gastroenterology’, 2019, Baishideng Publishing Group Co. [CrossRef]
- E. Lawson et al., ‘Machine learning for metabolic engineering: A review’, Metab Eng, vol. 63, pp. 34–60, Jan. 2021. [CrossRef]
- J. R. Burt et al., ‘Deep learning beyond cats and dogs: recent advances in diagnosing breast cancer with deep neural networks’, 2018.
- M. Medvedeva, M. Vols, and M. Wieling, ‘Using machine learning to predict decisions of the European Court of Human Rights’, Artif Intell Law (Dordr), vol. 28, no. 2, pp. 237–266, Jun. 2020. [CrossRef]
- J. Kietzmann, J. Paschen, and E. Treen, ‘Artificial intelligence in advertising: How marketers can leverage artificial intelligence along the consumer journey’, Sep. 01, 2018, World Advertising Research Center. [CrossRef]
- K. Seetharam, S. Shrestha, and P. P. Sengupta, ‘Cardiovascular Imaging and Intervention Through the Lens of Artificial Intelligence’, Interventional Cardiology: Reviews, Research, Resources, vol. 16, 2021. [CrossRef]
- E. J. Topol, ‘High-performance medicine: the convergence of human and artificial intelligence’, Jan. 01, 2019, Nature Publishing Group. [CrossRef]
- Y. C. Chen, W. C. Ke, and H. W. Chiu, ‘Risk classification of cancer survival using ANN with gene expression data from multiple laboratories’, Comput Biol Med, vol. 48, no. 1, pp. 1–7, May 2014. [CrossRef]
- H. Zhang and K. Ogasawara, ‘Grad-CAM-Based Explainable Artificial Intelligence Related to Medical Text Processing’, Bioengineering, vol. 10, no. 9, Sep. 2023. [CrossRef]
- W. Salehi et al., ‘A Study of CNN and Transfer Learning in Medical Imaging: Advantages, Challenges, Future Scope’, Apr. 01, 2023, MDPI. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).