Preprint
Review

This version is not peer-reviewed.

Artificial Intelligence in Primary Malignant Bone Tumors Imaging: A Narrative Review

A peer-reviewed article of this preprint also exists.

Submitted:

14 May 2025

Posted:

15 May 2025


Abstract
Artificial Intelligence (AI) is a transformative tool in orthopedic oncology, enabling advancements in the diagnosis, classification, and treatment response prediction of primary malignant bone tumors (PBT). AI, particularly machine learning and deep learning, harnesses computational algorithms and large datasets to improve imaging interpretation and clinical decision making. Radiomics, combined with AI, facilitates the extraction of quantitative features from medical images to precisely characterize tumors and assist in personalized therapeutic strategies. The integration of convolutional neural networks has demonstrated outstanding capabilities in imaging pattern recognition, advancing tumor detection, segmentation, and differentiation.

1. Introduction

The term artificial intelligence (AI) refers to a field of computer science that enables computers to perform tasks associated with human cognitive functions. It was first introduced by McCarthy in the 1950s, and the first AI algorithm was approved by the FDA in 2017. A subset of AI, machine learning (ML), employs computational algorithms that improve task performance through experience. Over the years, advances in computational algorithms, combined with the growth of big data, led to the development of deep learning (DL), a specialized subset of machine learning. DL utilizes artificial neural networks that simulate the architecture of biological nervous systems. Among these, convolutional neural networks (CNNs) have become particularly popular in radiology due to their exceptional imaging pattern recognition capabilities [1], [Figure 1]. Radiomics involves the extraction of mathematical features from medical images [2]. The combination of radiomics with machine learning enables the identification of complex patterns for diagnosis, prognosis prediction, classification, and treatment response assessment in orthopedic oncology [3]. Despite the low incidence of primary bone tumors (PBT), accurate diagnosis and classification are critical due to their variable biological behaviors and treatment requirements [4]. Radiomics-ML approaches have demonstrated promising results in analyzing lesion characteristics, standardizing image comparisons, and improving diagnostic accuracy. Studies have also demonstrated the potential of AI tools in distinguishing specific tumor types, such as enchondroma and chondrosarcoma, through magnetic resonance imaging (MRI)-based radiomic features [5].
Indeed, MRI is the most sensitive modality for evaluating primary bone tumors, as it allows assessment of bone marrow involvement, soft tissue invasion, and lesion fluid content. Radiologists face numerous diagnostic dilemmas; for example, distinguishing primary bone tumors from bone infections can be very challenging due to overlapping imaging features [4].
Radiomics texture feature extraction involves the quantitative analysis of medical images to capture the underlying tissue heterogeneity within a defined region of interest (ROI), such as a tumor or lesion [6]. The process typically begins with the acquisition of high-quality images, such as MRI or CT scans, followed by segmentation of the ROI, either manually by experts or using automated methods. Once the ROI is defined, the image data is preprocessed to ensure consistency and reliability in feature extraction. This may include steps such as intensity normalization, resampling to a standard voxel size, and discretization of grayscale values into fixed bins to reduce noise and enhance pattern detection [7].
Texture features are then extracted using statistical matrices that describe the spatial relationships between pixel or voxel intensities. One of the most commonly used is the Gray-Level Co-Occurrence Matrix (GLCM), which quantifies how often pairs of pixel intensities occur at a given distance and orientation. Other widely used matrices include the Gray-Level Run Length Matrix (GLRLM), which captures the length of consecutive pixels with the same intensity, and the Gray-Level Size Zone Matrix (GLSZM), which measures the size of homogeneous intensity zones within the ROI [8].
From these matrices, a variety of quantitative features are derived. These include contrast, which measures local intensity variation; entropy, which reflects the complexity or randomness of the texture; homogeneity, which indicates how uniform the intensity distribution is; energy, a measure of image uniformity; and correlation, which assesses the linear dependency of gray levels across pixels.
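As an illustration of these texture features, a minimal numpy-only GLCM computation for a single pixel offset might look as follows (a toy sketch on an already-discretized ROI; real pipelines typically rely on dedicated radiomics software such as PyRadiomics):

```python
import numpy as np

def glcm_features(img, levels=4, offset=(0, 1)):
    """Build a normalized GLCM for one offset and derive basic texture features."""
    glcm = np.zeros((levels, levels), dtype=float)
    dr, dc = offset
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                glcm[img[r, c], img[r2, c2]] += 1
    glcm /= glcm.sum()                      # normalize to joint probabilities
    i, j = np.indices((levels, levels))
    p = glcm
    return {
        "contrast": float(((i - j) ** 2 * p).sum()),        # local intensity variation
        "energy": float((p ** 2).sum()),                    # image uniformity
        "homogeneity": float((p / (1.0 + np.abs(i - j))).sum()),
        "entropy": float(-(p[p > 0] * np.log2(p[p > 0])).sum()),  # randomness
    }

# Toy "ROI" already discretized into 4 gray-level bins.
roi = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
feats = glcm_features(roi)
```

A perfectly uniform ROI yields energy 1 and contrast/entropy 0, while heterogeneous regions score higher contrast and entropy, which is exactly what these features are intended to capture.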
This narrative review discusses in detail the applications of AI in primary bone tumors, including tumor detection, image processing, therapy response prediction, and tumor classification. While ML and DL techniques continue to evolve, their application to PBTs is still in its early stages. The development of advanced DL models capable of simultaneous detection, segmentation, and classification represents a significant step forward in the field [4]. By integrating artificial intelligence into clinical practice, we can potentially improve clinical decision making, personalize treatment strategies, and ultimately improve patient outcomes. Through this review, we also aim to highlight the current advancements, practical applications, and future directions of artificial intelligence in managing primary bone tumors [2,5].

2. Materials and Methods

An extensive English-language literature search of the Medline/PubMed, Embase, and Scopus databases was conducted in December 2024. The search used the keywords “artificial intelligence”, “Neural Networks, Computer”, “Image Processing, Computer-Assisted”, “Deep Learning”, “Machine Learning”, “Artificial Neural Network”, “Convolutional Neural Network”, “image processing”, “automated detection”, “Diagnostic Imaging”, “imaging”, “Osteosarcoma”, “Osteosarcomas”, “Osteogenic Sarcoma”, “Osteogenic Sarcomas”, “Osteosarcoma Tumor”, “Osteosarcoma Tumors”, “Ewing Sarcoma”, “Ewing Tumor”, “Chondrosarcoma”, “Chondrosarcomas”, “Histiocytoma, Malignant Fibrous”, “Malignant Fibrous Histiocytoma”, “Malignant Fibrous Histiocytomas”, “Malignant Fibrohistiocytic Tumor”, “Malignant Fibrohistiocytic Tumors”, “Fibrosarcoma”, “Fibrosarcomas”, “Bone Lymphoma”, “primary”, “bone cancer”, “bone cancers”, “bone neoplasm”, “bone neoplasms”, “bone tumor”, “bone tumors”, “bone metastasis”. Studies were eligible if they addressed any type of primary bone tumor with any AI tool; all study types published from 2017 to 2024, except conference abstracts, were deemed eligible. A total of 601 records were identified, of which 102 fulfilled our inclusion criteria. The workflow is demonstrated in [Figure 2].

3. Results

A total of 104 articles published between 2017 and 2024 were included in this narrative review after screening more than 600 records. These articles were categorized into five main domains, as demonstrated in [Figure 3]. The majority of studies focused on osteosarcoma, followed by Ewing sarcoma and chondrosarcoma. Deep learning models, particularly convolutional neural networks (CNNs), were the most commonly applied AI tools, achieving high diagnostic accuracies, often surpassing radiologists in detection and classification tasks. In addition, radiomics-based machine learning models demonstrated strong predictive power in evaluating chemotherapy response, with area under the curve (AUC) values frequently exceeding 0.85. Furthermore, segmentation models using advanced architectures such as U-Net, Transformers, and hybrid CNN-transformer systems reported Dice similarity coefficients above 0.90, highlighting the efficacy of AI in enhancing diagnostic workflows and reducing physician workload.

4. Discussion

Treatment Response & Prediction

Treatment response assessment and prediction in PBT enhance precision medicine, improve patient outcomes, and reduce the burden of ineffective therapies. Radiomics, a method that extracts quantitative features from medical images, has shown great promise in chemotherapy response prediction. Studies have compared 2D and 3D MRI radiomics: 2D approaches extract features from a single or a few selected slices of a volumetric scan (e.g., the largest cross-sectional slice of a tumor), whereas 3D approaches extract features from the entire tumor volume (across all slices), capturing spatial heterogeneity in three dimensions [10]. In skeletal Ewing sarcoma, 3D features demonstrated superior reproducibility in predicting responses to neoadjuvant chemotherapy [11]. Additionally, delta-radiomics models, which evaluate temporal changes in imaging features, have allowed for preoperative assessments of chemotherapy response in high-grade osteosarcoma, achieving an area under the curve (AUC) of 0.871 in the training cohort and 0.843 in the validation cohort [12].
To standardize these predictions, scoring systems have been developed. A multicenter study introduced a revised scoring system built with four ML models, namely logistic regression (LR), decision tree (DT), support vector machine (SVM), and neural network (NN), that could accurately predict neoadjuvant chemotherapy responses in primary high-grade bone sarcomas, achieving an AUC of 0.893 [13]. Furthermore, a deep learning model coupled with an MRI-based radiomics nomogram automated the prediction of neoadjuvant chemotherapy (NAC) response in osteosarcoma patients, providing reliable and efficient evaluations with an AUC of 0.793 (95% CI, 0.610-0.975); decision curve analysis (DCA) supported the clinical utility of this nomogram [14].
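To make this kind of model comparison concrete, the sketch below trains the same four model families (LR, DT, SVM, NN) on a synthetic stand-in for a radiomics feature table and scores each with cross-validated AUC; the data, hyperparameters, and scikit-learn usage here are illustrative assumptions, not the cited study's actual setup:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a radiomics feature table:
# 200 "patients", 30 features, binary responder / non-responder label.
X, y = make_classification(n_samples=200, n_features=30, n_informative=8,
                           random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(max_depth=4, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "NN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}

aucs = {}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)   # scale features before fitting
    scores = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
    aucs[name] = scores.mean()
```

Cross-validated AUC, as used throughout the cited studies, guards against reporting optimistic performance from a single lucky train/test split.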
Preoperative imaging techniques have also benefited from AI integration. A CT-based deep learning radiomics model (DLRM) demonstrated the ability to predict the histological grade and prognosis of chondrosarcoma in comparison with a radiomics signature (RS) and a deep learning signature (DLS), scoring an AUC of 0.879 (95% CI, 0.802-0.956) [15]. Similarly, [K. Teo et al.] used a support vector machine (SVM) with a radial basis function (RBF) kernel for classification, combining histopathology data with multi-modal MRI, and found that conventional MRI predictions of chemotherapy response in childhood osteosarcoma, assessed by identifying histopathological tumor necrosis, improved to above 95% when dynamic contrast-enhanced (DCE)-MRI was taken into consideration [16].
Interactive deep learning tools have made annotation, a prerequisite for calculating texture features, and CNN training more efficient. A deep interactive learning approach, Deep Interactive Learning (DIaL), facilitated the rapid labeling of treatment response data for osteosarcoma, reducing the time required for CNN model training to 7 hours [17]. Moreover, a Siamese network (DS-Net) effectively differentiated viable from necrotic tumor regions, streamlining tumor segmentation tasks using hematoxylin and eosin (H&E)-stained osteosarcoma histology slides and achieving an average accuracy of 95.1% [18].
AI models trained on multimodal imaging data have demonstrated enhanced prediction capabilities. A deep learning model using fludeoxyglucose (18F-FDG) positron emission tomography (PET) images showed that a deep learning architecture with selected radiomics features provides higher prediction accuracy for chemotherapy response in patients with osteosarcoma [19]. A class-structured deep convolutional neural network (CSDCNN) applied to diffusion-weighted imaging (DWI), with peak signal-to-noise ratio (PSNR), mean square error (MSE), and edge preservation index (EPI) used to evaluate post-processing image quality, provided novel insights into osteosarcoma prognosis, achieving better denoising, accuracy, recall, precision, and F1 score in evaluating the effect of neoadjuvant chemotherapy; after chemotherapy, patients had a mean apparent diffusion coefficient (ADCmean) of 1.66 ± 0.17 and an ADCmin of 1.33 ± 0.15 [20]. Directionally sensitive fractal radiomics, applying least absolute shrinkage and selection operator (LASSO) machine learning, revealed associations with chemoresistance in osteosarcoma, with AUCs reaching 0.95 and the capability of handling irregularly shaped tumor regions, in contrast to most radiomic analytical methods that are compatible only with rectangular regions of interest (ROIs) [21].
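The role of LASSO in radiomics pipelines, shrinking the coefficients of uninformative features to exactly zero so that only a small predictive subset survives, can be illustrated with synthetic data (the feature counts, effect sizes, and `LassoCV` settings below are illustrative assumptions, not taken from the cited study):

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients, n_features = 120, 50
X = rng.normal(size=(n_patients, n_features))

# Outcome depends on only 5 of the 50 "radiomic" features.
true_coef = np.zeros(n_features)
true_coef[:5] = [2.0, -1.5, 1.0, 0.8, -0.6]
y = X @ true_coef + rng.normal(scale=0.5, size=n_patients)

X_std = StandardScaler().fit_transform(X)       # LASSO is scale-sensitive
lasso = LassoCV(cv=5, random_state=0).fit(X_std, y)
selected = np.flatnonzero(lasso.coef_)          # indices of retained features
```

With the regularization strength chosen by cross-validation, the fitted model typically retains the truly informative features and discards most of the noise features, which is why LASSO is a popular dimensionality-reduction step before building radiomics signatures.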
Machine learning-based MRI radiomics nomograms have shown significant promise in evaluating chemotherapy efficacy. A DWI-based radiomics model successfully assessed neoadjuvant chemotherapy responses in osteosarcoma patients, outperforming the standalone clinical and radiomics models and attaining an AUC of 0.848 [22]. Combining multi-parametric MRI data with machine learning further enhanced the evaluation of necrosis post-chemotherapy in patients with osteosarcoma, significantly improving discrimination of non-cartilaginous viable tumour from nonviable tumour (AUC from 0.93 to 0.97), overall viable tumour from nonviable tumour (AUC from 0.83 to 0.90), and cartilaginous viable tumour from nonviable tumour (AUC from 0.61 to 0.81) [23].
Fusion radiomics, which merges imaging features from multiple modalities, has also been applied to improve NAC response prediction in patients with osteosarcoma. Advances in MRI techniques, such as the DLRM developed by [Zheng et al.], which reviewed axial T2-weighted imaging (T2WI) and contrast-enhanced T1-weighted (T1CE) sequences, have refined chemotherapy assessment, with dynamic contrast-enhanced MRI models predicting treatment efficacy in osteosarcoma with an accuracy of 93.8% and an AUC of 0.961 [7]. Similarly, [L. Zhang et al.] utilized K-nearest neighbor (KNN), SVM, and LR models to evaluate the value of a machine learning-based DCE-MRI radiomics nomogram, attaining AUCs of 0.86, 0.92, and 0.93, respectively [24]. Another study, by [Y. Zhang et al.], constructed radiomic models based on pre- and post-NAC imaging to predict the histological response to NAC in patients with high-grade osteosarcoma using MRI-based radiomics, correlating with improved survival in localized high-grade osteosarcoma and scoring AUCs of 0.999 and 0.915, with the higher scores achieved post-NAC [25]. Texture analysis of intraosseous and extraosseous lesions further contributed to predicting patient outcomes, using T2-weighted images for extraosseous lesions (AUCs of 0.94 and 0.89) and T1-weighted images for intraosseous lesions (AUCs of 0.99 and 0.88) [26]. AI’s predictive application extends to pediatric sarcomas. For instance, machine learning algorithms trained on MRI-based radiomics effectively predicted neoadjuvant chemotherapy responses in Ewing sarcoma with an AUC of 0.9 [10]. Additionally, [Chen et al.] selected thirteen radiomics features with a LASSO-LR classifier to construct a CE FS T1WI radiomics signature that demonstrated potential for forecasting pathological response to NAC in young patients with osteosarcoma [27].
According to [Mieler et al.], radiomic features in pediatric Ewing sarcoma appear to have the potential to distinguish between children with good and poor response before and during NAC [28]. Innovative approaches, including the use of infrared spectroscopy combined with machine learning, add new dimensions to treatment outcome prediction for Ewing sarcoma, with an accuracy of 92% [29].
Machine learning applied to radiomics data and FDG-PET imaging has further enhanced the prediction of chemotherapy response in osteosarcoma, achieving an AUC-ROC of 0.98 and a sensitivity of 100% [30]. Baseline textural features from FDG-PET imaging, analyzed through principal component analysis (PCA) and a linear SVM, offer valuable insights into treatment outcomes and contribute to higher AUCs [31]. Pretherapeutic MRI radiomics has demonstrated predictive capability for histologic response in osteosarcoma, with the most predictive model achieving an AUC of 0.97 [32], while convolutional neural networks applied to tumor-center FDG-PET images enhance response prediction prior to chemotherapy in osteosarcoma patients [33].
Binary convolutional neural networks and machine learning techniques trained on PET data continue to improve prediction accuracy for osteosarcoma [31,32]. Prognostic logistic models utilizing metabolic imaging phenotypes further refine response prediction, identifying SUVmax and GLZLM_SZLGE (short-zone low gray-level emphasis derived from the gray-level zone length matrix) as independent predictors of metastasis risk in osteosarcoma patients at high risk for metastasis [36]. Lastly, T2-weighted MRI radiomics provides reliable predictive markers for assessing chemotherapy response, survival, and disease-free outcomes in high-grade intramedullary osteosarcoma, with an AUC of 0.708 ± 0.046 [37].

Tumor Detection

Tumor detection is the initial but critical step in improving patient outcomes through early diagnosis and timely intervention. Medical imaging modalities such as CT, MRI, and radiographs are generally preferred by radiologists due to their efficiency and ability to provide detailed structural information about tumors. Studies have shown that pre-processing steps to enhance image quality before segmentation are almost essential to make tumor detection easier. For segmentation of cancerous regions, various techniques such as K-means clustering, Canny edge detection, and threshold-based methods have been applied with promising results, helping to reduce noise, define boundaries, and segment tumors effectively. Notably, AlexNet outperformed models like ResNet50 in early detection of parosteal osteosarcoma, osteochondroma, and enchondroma using CT images, achieving a high testing accuracy of 100% [38].
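A threshold-free variant of this idea, 1-D K-means clustering on pixel intensities, can be sketched in a few lines of numpy (a toy illustration on a synthetic bright "lesion"; clinical segmentation tools use far more robust, multi-feature implementations):

```python
import numpy as np

def kmeans_intensity(img, k=2, iters=20):
    """Cluster pixel intensities with 1-D K-means; return a label map
    where higher labels correspond to brighter clusters."""
    vals = img.ravel().astype(float)
    centers = np.linspace(vals.min(), vals.max(), k)  # deterministic init
    for _ in range(iters):
        # Assign each pixel to its nearest center, then update the centers.
        labels = np.argmin(np.abs(vals[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = vals[labels == c].mean()
    order = np.argsort(centers)            # sort clusters by brightness
    remap = np.empty_like(order)
    remap[order] = np.arange(k)
    return remap[labels].reshape(img.shape)

# Toy 8x8 "slice": dark background with a bright 3x3 lesion.
img = np.full((8, 8), 20.0)
img[2:5, 2:5] = 200.0
mask = kmeans_intensity(img) == 1   # brightest of 2 clusters = candidate lesion
```

In practice such intensity clustering only yields a rough candidate mask; the CNN-based detectors discussed below learn far richer spatial features.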
CT radiomics plays a cornerstone role in differentiating benign from malignant tumors. By combining clinical features with radiomics, one such model achieved a validation AUC of 0.823; it is valuable not only for tumor detection but also for treatment planning [39].
Moreover, radiomics can be used in the pathology clinical workflow by assisting physicians in analyzing high-quality, low-noise images and reducing intraclass variation between them. Deep learning is leading advancements in tumor detection through approaches applied to histological analysis of osteosarcoma, addressing pathologists’ challenges such as noise and intraclass variation [40].
Since early detection of osteosarcoma remains critical, CNNs can streamline the process. For instance, a comparative evaluation of four CNN-based models, VGG16, VGG19, DenseNet201, and ResNet101, revealed that ResNet101 was the most effective, achieving 90.36% accuracy, 89.51% precision, and an AUC of 0.9461. The model’s superior performance, coupled with efficient training time, highlights its power for osteosarcoma detection, while advanced architectures like Xception, NASNetLarge, and EfficientNetV2L have the potential to further improve diagnostic accuracy and reliability, underscoring the transformative role of AI in early cancer detection [41]. Detection of osteosarcoma by a pathologist is a labor-intensive and difficult process that also requires substantial experience. Automatic detection systems like IF-FSM-C can detect osteosarcoma from whole-slide images with an accuracy of 96.08% [42]. Similarly, a convolutional neural network (CNN) model demonstrated exceptional performance, achieving 99.8% accuracy in distinguishing normal from tumor images and 71.2% accuracy with a positive predictive value of 91.9% in differentiating benign from malignant bone tumors, offering a promising tool for histopathological diagnosis [43]. Additionally, the Bone Cancer Detection Network (BCDNet), a novel CNN-based model, achieved 96.29% accuracy for binary classification and 94.69% for multi-class classification, further underscoring its utility in early and accurate osteosarcoma detection [44].
Deep learning models further enhance diagnostic precision, with one designed for osteolytic osteosarcoma and giant cell tumor (GCT) on knee radiographs achieving 93.1% accuracy, significantly surpassing junior radiologists, and performing comparably to senior radiologists [45]. Likewise, a deep learning model integrating biochemical markers such as alkaline phosphatase (ALP) and lactate dehydrogenase (LDH) with X-ray imaging features achieved 97.17% accuracy [46].
Moreover, a CNN model used in pediatric nuclear medicine showed promising results in differentiating benign from malignant bone disease on nuclear scintigraphy, with an impressive accuracy of 96.17% and specificity of 91.67% [47]. In addition, ChatGPT-4 demonstrated high specificity (100%) for identifying bone lesions but showed limited sensitivity and accuracy in differentiating malignant from non-malignant conditions [48]. Furthermore, a deep learning model for detecting primary bone tumors on knee radiographs achieved 96.4% accuracy internally and 92.0% externally, significantly outperforming junior radiologists while being much faster [49].
Primary malignant bone tumors significantly affect not only adults but also the pediatric population, making early tumor detection especially important. A U-Net-based AI model demonstrated remarkable sensitivity (95.52%) and specificity (96.21%) on annotated X-ray data, outperforming traditional models and enhancing early detection and patient outcomes [50]. Similarly, DUconViT, a hybrid transformer-CNN system, achieved a Dice similarity coefficient of 92.4%, excelling in osteosarcoma segmentation and aiding surgical planning through efficient tumor size estimation [51]. Additionally, a Mask R-CNN model demonstrated 92% precision in distinguishing osteosarcoma from osteochondroma, further highlighting AI’s growing role in clinical diagnostics [52].

AI and Classification of PBT

The classification of primary bone tumors remains a significant challenge due to their rarity and diverse histological subtypes. [Song et al.] employed a deep learning model to classify primary bone tumors using incomplete multimodal images from X-rays, CT, and MRI, demonstrating a significant enhancement in classification accuracy. By integrating features from various imaging modalities, the model addressed the limitations of single-modality analysis and offered diagnostic support, scoring a satisfactory micro-average AUC of 0.847 [53]. Radiograph-based deep learning models have also been shown to improve radiologists’ performance in classifying histological types of primary bone tumors. A multicenter study by [Xie et al.] highlighted that integrating AI tools with radiologist expertise significantly enhanced diagnostic precision and efficiency, with macro-average AUCs of 0.904/0.873 [54]. Similarly, a preliminary study using deep learning-based classification of primary bone tumors on radiographs validated the potential of these models in clinical workflows, distinguishing benign from non-benign lesions (AUCs of 0.894 and 0.877) and malignant from non-malignant lesions (AUCs of 0.907 and 0.916) [55].
Advanced algorithms, such as the Remora Optimization Algorithm, have been utilized to enhance deep learning models for automated detection and classification of osteosarcoma; these methods demonstrated high accuracy and efficiency, making them valuable for early diagnosis and management [56]. Optimization techniques, including DenseNet and Elephant Herd Optimization, have also been applied to classify osteosarcomas and giant cell tumors of the bone, with great success in handling complex imaging data [54,55]. Additionally, comprehensive diagnostic models for osteosarcoma classification using CT imaging features have been developed to address the specific challenges posed by these tumors, such as those by [Rahouma et al.] using XGBoost, support vector machine (SVM), and K-nearest neighbors, and by [Wang et al.], who showed that PCA-IPSO feature selection outperforms traditional feature selection methods in binary classification accuracy with a support vector machine (SVM) [56,57].
Studies focusing on CT radiomics-based machine learning have effectively differentiated atypical cartilaginous tumors from chondrosarcomas, highlighting the power of texture analysis in tumor grading [61]. MRI radiomics-based models have further advanced classification efforts, particularly in distinguishing low-grade from high-grade chondrosarcomas and other subtypes, through detailed texture and intensity mapping [59,60,61,62].
Several innovative optimization algorithms have been integrated into AI models for osteosarcoma classification. For instance, the Honey Badger Optimization Algorithm combined with deep transfer learning has been designed to achieve high diagnostic accuracy [66]. Similarly, a Bald Eagle Search Optimization integrated with an Artificial Neural Network demonstrated promising results in osteosarcoma classification [67]. These methods underscore the importance of optimization in enhancing AI model performance.
Machine learning approaches have also been applied to classify and predict osteosarcoma grade. By leveraging metabolomic data alongside imaging, these models provided comprehensive diagnostic insights, further solidifying the utility of multimodal data integration [68]. A deep learning model called You Only Look Once (YOLO) for primary bone tumor detection and classification in full-field radiographs has also proven effective in handling large datasets, demonstrating scalability and real-world applicability by detecting bone neoplasms from full-field radiographs in one shot and simultaneously classifying radiographs as normal, benign, intermediate, or malignant [69].
Multitask deep learning models have showcased their potential to simultaneously segment and classify primary bone tumors in radiographs, streamlining workflows and expediting diagnosis [70]. The application of AlexNet and ResNet architectures to spinal bone tumor classification has highlighted the versatility of AI in diverse clinical scenarios [71]. Systematic evaluations and meta-analyses have reinforced the diagnostic value of machine learning for malignant bone tumors, providing insights into its capabilities and limitations while guiding future research directions [72]. Advanced algorithms have also been developed for the segmentation and differentiation of pelvic and sacral osteosarcomas from Ewing’s sarcoma using CT-based machine learning networks [70,71].
X-ray radiomics-based models have shown promise in classifying atypical cartilaginous tumors and high-grade chondrosarcomas of long bones, further expanding the role of radiomics in bone tumor analysis [72,73]. Lastly, [von Schacky et al.] analyzed radiographs from 934 patients collected over a period of 20 years and created a multitask DL model with an accuracy of 80.2%, which was higher than that of two radiology residents and comparable to two fellowship-trained radiologists, demonstrating its high potential [77].

Tumor Segmentation

AI has shown significant potential to improve segmentation accuracy and efficiency in the management of primary bone tumors. A systematic review of radiomics studies on chondrosarcoma reported strong diagnostic performance, with pooled diagnostic odds ratios (DORs) of 43.90 and AUCs between 0.90 and 0.94; however, segmentation remains largely manual, underscoring the need for AI integration in the radiology workflow to increase efficiency [78].
To address segmentation challenges in osteosarcoma, the ETUNet model achieves a Dice similarity coefficient (DSC) consistently above 90%, improving metrics like intersection over union (IoU) and DSC. Pre-screening with the Slide Block Filter (SBF) demonstrated a robust accuracy of 95.67%, while noise reduction with the Non-Local Means (NLM) algorithm and conditional random field (CRF) optimization further enhanced segmentation precision, proving a highly effective image-processing strategy [79].
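The DSC and IoU figures quoted throughout this section are simple overlap ratios between a predicted mask and the ground-truth mask; a minimal sketch:

```python
import numpy as np

def dice(pred, target):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

def iou(pred, target):
    """Intersection over union (Jaccard index) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

# Ground truth: a 2x2 lesion; prediction: a 2x3 region fully covering it.
gt = np.zeros((4, 4), dtype=int); gt[1:3, 1:3] = 1
pr = np.zeros((4, 4), dtype=int); pr[1:3, 1:4] = 1
```

Both metrics range from 0 (no overlap) to 1 (perfect overlap); for the same pair of masks, DSC is always at least as large as IoU, which is worth remembering when comparing scores across studies.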
Likewise, SEAGNET uses supervised edge-attention guidance to address blurred tumor boundaries, achieving outstanding metrics such as a DSC of 0.967, precision of 0.968, and accuracy of 0.996. Its ability to precisely localize malignant tumors significantly enhances diagnostic accuracy and clinical efficiency, making it a valuable tool, especially for high-grade primary bone tumors [80].
Additionally, the NSRDN framework, which integrates noise reduction through Differential Activation Filters (DAF) and super-resolution reconstruction, achieved 96.4% DSC, 92.8% IoU, and 95.5% accuracy using HRNet [81].
Recently, TBNet, a transformer-enhanced U-Net model incorporating edge-enhanced modules and multi-head cross-fusion transformers, achieved a DSC of 0.949 and accuracy of 0.997 in osteosarcoma MRI segmentation. Pre-screening with a Threshold Screening Filter (TSF) and noise reduction via fast NLM and Fourier transforms further supported this approach, optimizing segmentation accuracy while maintaining the computational efficiency needed for early detection, which can substantially improve patient outcomes [82].
Furthermore, the Eformer model combined with the DFANet segmentation network effectively addresses challenges like noise and blurred edges in osteosarcoma MRI images, achieving an accuracy of 0.995. This auxiliary segmentation method enhances tumor localization, precision, and automation, making it a cutting-edge tool for radiologists [83].
OSTransnet, which integrates U-Net and Transformer-based approaches with innovations like Channel-based Transformers (CTrans) and Boundary Augmentation Blocks (BAB), achieved strong metrics: a DSC of 0.949, IoU of 0.904, precision of 0.924, and recall of 0.981. These advancements enable faster, more accurate diagnoses while reducing physician workload, positioning OSTransnet as a promising tool for clinical applications [84].
BA-GCA Net incorporates modules like Grid Contextual Attention (GCA), Statistical Texture Learning Block (STLB), and Spatial Transformer Block (STB), achieving DSC of 0.927, IoU of 0.880, while maintaining low computational costs. These features make it effective for handling low-contrast, complex boundaries, improving diagnostic accuracy [85].
The 3D U-Net model, trained using the MONAI framework, achieved mean DSC scores of 83.75% (T1-weighted), 85.45% (T2-weighted), and 87.62% (T1-gd). After preprocessing MRI images with techniques like contrast-limited adaptive histogram equalization (CLAHE) and denoising filters, this approach demonstrated notable segmentation performance, effectively addressing blurred tumor edges and overfitting [86].
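CLAHE applies histogram equalization locally, in tiles with a clip limit; the global form of the underlying transform is easy to sketch in numpy (a simplified illustration of the idea, not a CLAHE implementation; real pipelines would use a library such as scikit-image):

```python
import numpy as np

def hist_equalize(img, levels=256):
    """Global histogram equalization for a 2-D uint8 image."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum().astype(float)
    cdf_min = cdf[cdf > 0].min()                       # first occupied bin
    scaled = (cdf - cdf_min) / max(cdf[-1] - cdf_min, 1.0)
    lut = np.clip(np.round(scaled * (levels - 1)), 0, levels - 1).astype(np.uint8)
    return lut[img]                                    # apply lookup table

# Toy low-contrast slice: intensities squeezed into [100, 120].
img = (np.arange(64).reshape(8, 8) % 21 + 100).astype(np.uint8)
eq = hist_equalize(img)   # intensities stretched across the full 0-255 range
```

CLAHE improves on this by equalizing each tile separately and clipping the histogram, which boosts local lesion contrast without over-amplifying noise in homogeneous regions.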
The DECIDE model leverages Multi-modality Feature Fusion and Recalibration (MFR), Lesion Attention Enhancement (LAE), and Boundary Context Aggregation (BCA) modules to improve segmentation performance, achieving precision of 74.85%, recall of 71.52%, DSC of 70.40%, IoU of 54.50% [87].
The OSDCN framework, combining data preprocessing, segmentation with SepUNet, and conditional random fields (CRF), demonstrated a DSC of 0.914, F1-score of 0.937, and IoU of 0.883. It incorporates Mean Teacher optimization for noise reduction and multi-scale segmentation, enabling accurate tumor boundary delineation and area calculation on a dataset of over 80,000 MRI images, which supports reliable internal validation [88].
Manual and semiautomatic segmentation using the GrowCut tool within the 3D Slicer software achieved DSCs ranging from 0.83 to 0.97 for manual segmentation and 0.71 to 0.96 for semiautomatic methods, with semiautomatic segmentation requiring significantly less time. These findings suggest that semiautomatic approaches are more efficient but underscore the need for AI to ensure reliability and reproducibility [89].
The Multiple Supervised Residual Network (MSRN) further advanced CT-based segmentation, with a DSC of 89.22%, sensitivity of 88.74%, and F1-measure of 0.9305, demonstrating robust performance in mixed bone and soft-tissue regions [90].
The Osteosarcoma Segmentation Guided Aggregated Bilateral Network (OSGABN) employs FaBiNet to integrate low-level and high-level contextual features, achieving 95% accuracy, a DSC of 0.915, and IoU of 0.853 on a dataset of over 80,000 MRI images, making it highly applicable to resource-limited healthcare settings [91]. A U-Net model for pediatric sarcoma segmentation in PET/CT scans achieved voxel-wise precision/sensitivity of 0.71/0.54 (thorax), 0.71/0.39 (extremities), and 0.52/0.38 (abdomen); despite challenges posed by high physiological FDG uptake and limited training data, these results suggest the model can handle tumor variability and complex metabolic activity [92].
Additionally, a framework for bone cancer detection on MRI integrated preprocessing techniques such as Alternate Sequential Filtering (ASF) and Decision-Based Median Filters (DBME-F), enhancing edge and texture retention while suppressing noise. Its Modified DeepLabV3+ model with Atrous Spatial Pyramid Pooling (ASPP) enabled multi-scale feature analysis, achieving a DSC of 70.40% and IoU of 54.50% [93].
The UATransNet framework, leveraging a modified U-Net with self-attentive mechanisms and dense residual learning, achieved an IoU of 0.922 ± 0.03, DSC of 0.921 ± 0.04, and 96.2% accuracy, validated on 80,000 MRI images. The model effectively mitigates noise and supports precise tumor-edge detection, making it a strong candidate for osteosarcoma diagnosis [94].
RTUNet++, a hybrid architecture integrating ResNet, Transformer attention mechanisms, and dense skip connections, addressed challenges such as spatial information loss and grayscale heterogeneity, achieving a DSC of 0.82. Ablation studies confirmed the critical role of the Transformer blocks in segmentation performance, demonstrating RTUNet++'s potential for accurate segmentation across diverse tumor morphologies [95].
Among automated and semi-automated segmentation methods for osteosarcoma using diffusion-weighted MRI (DWI), SLIC-Superpixels (SLIC-S) and Fuzzy C-means clustering (FCM) achieved Dice Coefficients (DC) of approximately 82% and 79%, respectively. These methods demonstrated rapid execution times and precision in delineating tumor regions, emphasizing their potential for advancing computer-aided diagnosis and treatment planning [96].
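Fuzzy C-means, one of the two clustering approaches evaluated in [96], assigns each voxel a soft membership in every cluster rather than a hard label, alternating between membership updates (from distances to the current centers) and center updates (membership-weighted means). A two-cluster sketch on scalar intensities, with illustrative choices of fuzzifier and iteration budget, might look like:

```python
# Hedged sketch of two-cluster Fuzzy C-means (FCM) on 1-D voxel
# intensities; the fuzzifier m and iteration count are illustrative.

def fcm(values, m=2.0, iters=50):
    """Return (centers, memberships) for a two-cluster FCM run."""
    centers = [min(values), max(values)]  # simple two-cluster init
    u = [[0.0] * len(values) for _ in range(2)]
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        for k, x in enumerate(values):
            dists = [abs(x - v) or 1e-12 for v in centers]  # avoid /0
            for i in range(2):
                u[i][k] = 1.0 / sum((dists[i] / d) ** (2 / (m - 1))
                                    for d in dists)
        # center update: membership-weighted mean of the data
        for i in range(2):
            w = [u[i][k] ** m for k in range(len(values))]
            centers[i] = sum(wk * x for wk, x in zip(w, values)) / sum(w)
    return centers, u
```

On well-separated intensities such as `[0, 1, 10, 11]`, the centers converge near the two group means, and each voxel's memberships sum to one by construction.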
An integrated pipeline, incorporating the MPFNet model for segmentation, achieved a mean DSC of 84.19% and a High-Quality Segmentation Rate (HQSR) of 94.38%, while its fusion nomogram predicted survival probabilities with a C-index of 0.806, surpassing traditional radiomics and clinical nomograms [97].
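The C-index reported for the fusion nomogram is Harrell's concordance index: the fraction of comparable patient pairs in which the model ranks the patient with the earlier event as higher-risk. A minimal sketch of the standard computation (an illustration, not the implementation used in [97]) is:

```python
# Hedged sketch: Harrell's concordance index (C-index) for a survival
# model. A pair (i, j) is comparable when subject i had the earlier
# time and an observed event; it is concordant when the model assigns
# subject i the higher risk score.

def c_index(times, events, risks):
    """times: follow-up times; events: 1 if event observed, 0 if
    censored; risks: predicted risk scores (higher = worse prognosis)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5  # ties in risk count as half
    return concordant / comparable
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so the 0.806 reported above indicates that roughly four out of five comparable pairs are ordered correctly.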

Insights into Tumor Discrimination and Future Directions for AI

Another area in which AI is demonstrating its potential is the discrimination between primary bone tumors, leveraging advanced imaging techniques and machine learning models to enhance differentiation accuracy. MRI-based texture analysis has shown significant diagnostic value in distinguishing enchondroma from chondrosarcoma: Cilengir et al. found that Naive Bayes, k-nearest neighbors, and logistic regression models offered a non-invasive method for early and precise detection, achieving high accuracy and AUC on T1-weighted images, fat-suppressed proton-density (FS-PD) images, and their combination [98]. Radiomics combined with machine learning has further refined the distinction between chondrosarcoma and enchondroma, as shown by Erdem et al., whose advanced neural network achieved a high diagnostic performance with an AUC of 0.979–0.984, emphasizing the potential of quantitative imaging features in tumor characterization [2]. Similarly, computed tomography (CT)-based machine learning networks have shown promise in automatically segmenting and differentiating pelvic and sacral osteosarcoma from Ewing's sarcoma, enabling faster clinical decision-making [99]. Deep learning algorithms, such as two-phase models, distinguish Ewing sarcoma from acute osteomyelitis in pediatric radiographs, achieving test accuracies of 90.6% for detecting pathological cases and 86.7% for differentiating Ewing sarcoma from osteomyelitis; Gradient-Weighted Class Activation Mapping (Grad-CAM) visualizations further validated these models by confirming their focus on clinically relevant regions [100]. Support vector machine (SVM) and convolutional neural network models achieved accuracies of 89.9% and 93.3%, respectively, making them useful tools for assessing chemotherapy response and advancing personalized care [101].
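The AUC values quoted for these discrimination models have a direct probabilistic reading: the chance that a randomly chosen malignant case receives a higher model score than a randomly chosen benign one (the Mann-Whitney formulation). As an illustration not drawn from the cited studies, it can be computed directly from scores and labels:

```python
# Illustrative sketch: AUC via the Mann-Whitney formulation — the
# probability that a random positive case scores above a random
# negative case, with ties counted as half.

def auc(labels, scores):
    """labels: 1 = positive (e.g., chondrosarcoma), 0 = negative
    (e.g., enchondroma); scores: model outputs, higher = more positive."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.98, as in [2], therefore means the model ranks a malignant case above a benign one in about 98 of 100 random case pairs.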
Radiogenomics, functional imaging, and advanced surgical technologies are transforming the diagnosis and treatment of bone sarcomas such as Ewing sarcoma. The integration of functional imaging with transcriptomics has revealed insights into tumor biology, such as glucose uptake patterns, aiding personalized treatment approaches [102]. Innovations like single-shot multispectral quantitative phase imaging, enhanced by deep learning and trained and validated on two different samples (an optical waveguide and MG63 osteosarcoma cells), allow rapid, label-free visualization of biological samples, offering precise tumor characterization [103]. Lastly, computer-assisted tumour surgery (CATS) and 3D printing provide improved preoperative planning and intraoperative accuracy, leading to better outcomes for bone sarcoma patients when patient-specific instrumentation is implemented with custom-made implants [104].

5. Conclusions

AI has shown significant promise in advancing the diagnosis, classification, and treatment response prediction of primary malignant bone tumors (PBTs). From enhancing radiological interpretation to improving treatment outcomes through predictive modeling, AI represents a transformative tool in orthopedic oncology. The integration of advanced machine learning (ML) and deep learning (DL) techniques into clinical workflows has not only increased diagnostic accuracy but also enabled more personalized therapeutic approaches, potentially improving patient outcomes. However, achieving widespread clinical adoption will require further robust validation studies.
Despite its potential, our study highlights several limitations. Firstly, the application of AI to PBTs remains constrained by the rarity of these tumors, which limits the availability of high-quality, diverse datasets for model training and validation. Additionally, the lack of standardization in imaging protocols and radiomics feature extraction poses challenges to reproducibility and generalizability. Ethical considerations, including data privacy and the interpretability of complex AI models, also warrant careful attention. Future research should focus on addressing these limitations by fostering multicenter collaborations, developing explainable AI models, and integrating AI systems into clinical practice with robust regulatory oversight to ensure safety and efficacy.

Author Contributions

“Conceptualization, P.P*. and R.C.; methodology, P.P*. and R.C.; writing—original draft preparation, P.P*., R.C. and D.P.; writing—review and editing, P.P*. and R.C.; visualization, P.P*. and R.C.; supervision, P.P., N.P., A.W., P.K. and O.P. All authors have read and agreed to the published version of the manuscript.”

Funding

“This research received no external funding”


Acknowledgments

Platon Papageorgiou and Rafail Christodoulou contributed equally to this research and to the preparation of the manuscript. All of the authors listed above contributed significantly to this work.

Conflicts of Interest

“The authors declare no conflicts of interest.”

Abbreviations

The following abbreviations are used in this manuscript:
AI Artificial Intelligence
PBT Primary Bone Tumors
NAC Neoadjuvant Chemotherapy
ML Machine Learning
DL Deep Learning
RS Radiomics Signature
RBF Radial Basis Function
CNN Convolutional Neural Networks
MRI Magnetic Resonance Imaging
AUC Area Under the Curve
DT Decision Tree
LR Logistic Regression
SVM Support Vector Machine
DCA Decision Curve Analysis
DLRM Deep Learning Radiomics Model
DIaL Deep Interactive Learning
DS-Net Deep Supervision Network
H&E Hematoxylin and Eosin
18F-FDG Fluorine 18 Fluorodeoxyglucose
PET Positron Emission Tomography
DWI Diffusion-Weighted Imaging
PSNR Peak Signal-to-Noise Ratio
MSE Mean Squared Error
EPI Edge Presence Index
LASSO Least Absolute Shrinkage and Selection Operator
ROI Region Of Interest
T2WI T2 Weighted Imaging
T1CE T1 Weighted Contrast-Enhanced Imaging
KNN K Nearest Neighbor
DCE-MRI Dynamic Contrast-Enhanced Magnetic Resonance Imaging
SUVmax Maximum Standardized Uptake Value
CT Computed Tomography
VGG16 Visual Geometry Group 16 layer Network
VGG19 Visual Geometry Group 19 layer Network
DenseNet201 Densely Connected Convolutional Network 201 Layers
ResNet101 Residual Network 101 Layers
NASNetLarge Neural Architecture Search Network Large
EfficientNetV2L Efficient Network Version 2 Large
IF-FSM-C Inception Framework with Feature Selection Mechanism for Classification
BCDNet Bone Cancer Detection Network
GCT Giant Cell Tumor
ALP Alkaline Phosphatase
LDH Lactate Dehydrogenase
ChatGPT-4 Chat Generative Pre-trained Transformer 4
U-net U-shaped Convolutional Network
DUconViT Dual Convolutional Vision Transformer
Mask R-CNN Mask Region-Based Convolutional Neural Network
PCA-IPSO Principal Component Analysis Improved Particle Swarm Optimization
DECIDE Decoupled Semantic and Boundary Learning Network
Grad-CAM Gradient-weighted Class Activation Mapping
CATS Computer-Assisted Tumor Surgery

References

  1. Vogrin, M.; Trojner, T.; Kelc, R. Artificial Intelligence in Musculoskeletal Oncological Radiology. Radiol Oncol 2020, 55, 1–6. [Google Scholar] [CrossRef] [PubMed]
  2. Erdem, F.; Tamsel, İ.; Demirpolat, G. The Use of Radiomics and Machine Learning for the Differentiation of Chondrosarcoma from Enchondroma. J Clin Ultrasound 2023, 51, 1027–1035. [Google Scholar] [CrossRef] [PubMed]
  3. Meng, Y.; Yang, Y.; Hu, M.; Zhang, Z.; Zhou, X. Artificial Intelligence-Based Radiomics in Bone Tumors: Technical Advances and Clinical Application. Semin Cancer Biol 2023, 95, 75–87. [Google Scholar] [CrossRef]
  4. Ye, Q.; Yang, H.; Lin, B.; Wang, M.; Song, L.; Xie, Z.; Lu, Z.; Feng, Q.; Zhao, Y. Automatic Detection, Segmentation, and Classification of Primary Bone Tumors and Bone Infections Using an Ensemble Multi-Task Deep Learning Framework on Multi-Parametric MRIs: A Multi-Center Study. Eur Radiol 2024, 34, 4287–4299. [Google Scholar] [CrossRef]
  5. Emil, N.S.; Sibbitt, R.R.; Sibbitt, W.L., Jr. Machine Learning and Magnetic Resonance Imaging: Differentiating Benign from Malignant Osseous Tumors. J Clin Ultrasound 2023, 51, 1036–1038. [Google Scholar] [CrossRef]
  6. Yildirim, M.; Yildirim, H. CT Radiomics-Based Machine Learning Model for Differentiating between Enchondroma and Low-Grade Chondrosarcoma. Med. Baltim. 2024, 103, e39311. [Google Scholar] [CrossRef]
  7. Zheng, F.; Yin, P.; Liang, K.; Wang, Y.; Hao, W.; Hao, Q.; Hong, N. Fusion Radiomics-Based Prediction of Response to Neoadjuvant Chemotherapy for Osteosarcoma. Acad Radiol 2024, 31, 2444–2455. [Google Scholar] [CrossRef]
  8. Avery, E.; Sanelli, P.C.; Aboian, M.; Payabvash, S. Radiomics: A Primer on Processing Workflow and Analysis. Semin. Ultrasound CT MRI 2022, 43, 142–146. [Google Scholar] [CrossRef]
  9. Li, M.D.; Ahmed, S.R.; Choy, E.; Lozano-Calderon, S.A.; Kalpathy-Cramer, J.; Chang, C.Y. Artificial Intelligence Applied to Musculoskeletal Oncology: A Systematic Review. Skelet. Radiol 2022, 51, 245–256. [Google Scholar] [CrossRef]
  10. Gitto, S.; Corino, V.; Bologna, M.; Marzorati, L.; Milazzo Machado, E.; Albano, D.; Messina, C.; Mainardi, L.; Sconfienza, L.M. MRI Radiomics-Based Machine Learning to Predict Neoadjuvant Chemotherapy Response in Ewing Sarcoma. Insights Imaging 2022, 14, 77–78. [Google Scholar] [CrossRef]
  11. Gitto, S.; Corino, V.D.A.; Annovazzi, A.; Milazzo Machado, E.; Bologna, M.; Marzorati, L.; Albano, D.; Messina, C.; Serpi, F.; Anelli, V.; et al. 3D vs. 2D MRI Radiomics in Skeletal Ewing Sarcoma: Feature Reproducibility and Preliminary Machine Learning Analysis on Neoadjuvant Chemotherapy Response Prediction. Front Oncol 2022, 12, 1016123. [Google Scholar] [CrossRef]
  12. Lin, P.; Yang, P.F.; Chen, S.; Shao, Y.Y.; Xu, L.; Wu, Y.; Teng, W.; Zhou, X.Z.; Li, B.H.; Luo, C.; et al. A Delta-Radiomics Model for Preoperative Evaluation of Neoadjuvant Chemotherapy Response in High-Grade Osteosarcoma. Cancer Imaging 2020, 20, 7. [Google Scholar] [CrossRef]
  13. He, F.; Xie, L.; Sun, X.; Xu, J.; Li, Y.; Liu, R.; Sun, K.; Shen, D.; Gu, J.; Ji, T.; et al. A Scoring System for Predicting Neoadjuvant Chemotherapy Response in Primary High-Grade Bone Sarcomas: A Multicenter Study. Orthop. Surg. 2022, 14, 2499–2509. [Google Scholar] [CrossRef]
  14. Zhong, J.; Zhang, C.; Hu, Y.; Zhang, J.; Liu, Y.; Si, L.; Xing, Y.; Ding, D.; Geng, J.; Jiao, Q.; et al. Automated Prediction of the Neoadjuvant Chemotherapy Response in Osteosarcoma with Deep Learning and an MRI-Based Radiomics Nomogram. Eur Radiol 2022, 32, 6196–6206. [Google Scholar] [CrossRef]
  15. Nie, P.; Zhao, X.; Ma, J.; Wang, Y.; Li, B.; Li, X.; Li, Q.; Xu, Y.; Dai, Z.; Wu, J.; et al. Can the Preoperative CT-Based Deep Learning Radiomics Model Predict Histologic Grade and Prognosis of Chondrosarcoma? Eur J Radiol 2024, 181, 111719. [Google Scholar] [CrossRef]
  16. Teo, K.Y.; Daescu, O.; Cederberg, K.; Sengupta, A.; Leavey, P.J. Correlation of Histopathology and Multi-Modal Magnetic Resonance Imaging in Childhood Osteosarcoma: Predicting Tumor Response to Chemotherapy. PLoS One 2022, 17, e0259564. [Google Scholar] [CrossRef]
  17. Ho, D.J.; Agaram, N.P.; Schüffler, P.J.; Vanderbilt, C.M.; Jean, M.-H.; Hameed, M.R.; Fuchs, T.J. Deep Interactive Learning: An Efficient Labeling Approach for Deep Learning-Based Osteosarcoma Treatment Response Assessment.; Springer Science and Business Media Deutschland GmbH, 2020; Vol. 12265 LNCS, pp. 540–549.
  18. Fu, Y.; Xue, P.; Ji, H.; Cui, W.; Dong, E. Deep Model with Siamese Network for Viable and Necrotic Tumor Regions Assessment in Osteosarcoma. Med Phys 2020, 47, 4895–4905. [Google Scholar] [CrossRef]
  19. Kim, W.; Park, J.; Sheen, H.; Byun, B.H.; Lim, I.; Kong, C.-B.; Lim, S.M.; Woo, S.-K. Development of Deep Learning Model for Prediction of Chemotherapy Response Using PET Images and Radiomics Features.; Institute of Electrical and Electronics Engineers Inc., 2018.
  20. Hu, Y.; Tang, J.; Zhao, S.; Li, Y. Diffusion-Weighted Imaging-Magnetic Resonance Imaging Information under Class-Structured Deep Convolutional Neural Network Algorithm in the Prognostic Chemotherapy of Osteosarcoma. Sci Program 2021, 2021. [Google Scholar] [CrossRef]
  21. Djuričić, G.J.; Ahammer, H.; Rajković, S.; Kovač, J.D.; Milošević, Z.; Sopta, J.P.; Radulovic, M. Directionally Sensitive Fractal Radiomics Compatible With Irregularly Shaped Magnetic Resonance Tumor Regions of Interest: Association With Osteosarcoma Chemoresistance. J Magn Reson Imaging 2023, 57, 248–258. [Google Scholar] [CrossRef]
  22. Zhang, L.; Gao, Q.; Dou, Y.; Cheng, T.; Xia, Y.; Li, H.; Gao, S. Evaluation of the Neoadjuvant Chemotherapy Response in Osteosarcoma Using the MRI DWI-Based Machine Learning Radiomics Nomogram. Front Oncol 2024, 14, 1345576. [Google Scholar] [CrossRef]
  23. Huang, B.; Wang, J.; Sun, M.; Chen, X.; Xu, D.; Li, Z.P.; Ma, J.; Feng, S.T.; Gao, Z. Feasibility of Multi-Parametric Magnetic Resonance Imaging Combined with Machine Learning in the Assessment of Necrosis of Osteosarcoma after Neoadjuvant Chemotherapy: A Preliminary Study. BMC Cancer 2020, 20, 322. [Google Scholar] [CrossRef] [PubMed]
  24. Zhang, L.; Ge, Y.; Gao, Q.; Zhao, F.; Cheng, T.; Li, H.; Xia, Y. Machine Learning-Based Radiomics Nomogram With Dynamic Contrast-Enhanced MRI of the Osteosarcoma for Evaluation of Efficacy of Neoadjuvant Chemotherapy. Front Oncol 2021, 11, 758921. [Google Scholar] [CrossRef]
  25. Zhang, Y.; Zhi, L.; Li, J.; Wang, M.; Chen, G.; Yin, S. Magnetic Resonance Imaging Radiomics Predicts Histological Response to Neoadjuvant Chemotherapy in Localized High-Grade Osteosarcoma of the Extremities. Acad. Radiol. 2024, 31, 5100–5107. [Google Scholar] [CrossRef]
  26. Mori, Y.; Ren, H.; Mori, N.; Watanuki, M.; Hitachi, S.; Watanabe, M.; Mugikura, S.; Takase, K. Magnetic Resonance Imaging Texture Analysis Based on Intraosseous and Extraosseous Lesions to Predict Prognosis in Patients with Osteosarcoma. Diagnostics 2024, 14. [Google Scholar] [CrossRef] [PubMed]
  27. Chen, H.; Zhang, X.; Wang, X.; Quan, X.; Deng, Y.; Lu, M.; Wei, Q.; Ye, Q.; Zhou, Q.; Xiang, Z.; et al. MRI-Based Radiomics Signature for Pretreatment Prediction of Pathological Response to Neoadjuvant Chemotherapy in Osteosarcoma: A Multicenter Study. Eur. Radiol. 2021, 31, 7913–7924. [Google Scholar] [CrossRef]
  28. Miedler, J.; Schaal, M.; Götz, M.; Cario, H.; Beer, M. Potential Role of MRI-Based Radiomics in Prediction of Chemotherapy Response in Pediatric Patients with Ewing-Sarcoma. Pediatr. Radiol. 2023, 53, S163–S164. [Google Scholar] [CrossRef]
  29. Chaber, R.; Arthur, C.J.; Łach, K.; Raciborska, A.; Michalak, E.; Bilska, K.; Drabko, K.; Depciuch, J.; Kaznowska, E.; Cebulski, J. Predicting Ewing Sarcoma Treatment Outcome Using Infrared Spectroscopy and Machine Learning. Molecules 2019, 24. [Google Scholar] [CrossRef]
  30. Dufau, J.; Bouhamama, A.; Leporq, B.; Malaureille, L.; Beuf, O.; Gouin, F.; Pilleul, F.; Marec-Berard, P. Prediction of Chemotherapy Response in Primary Osteosarcoma Using the Machine Learning Technique on Radiomic Data. Bull. Cancer (Paris) 2019, 106, 983–999. [Google Scholar] [CrossRef]
  31. Jeong, S.Y.; Kim, W.; Byun, B.H.; Kong, C.B.; Song, W.S.; Lim, I.; Lim, S.M.; Woo, S.K. Prediction of Chemotherapy Response of Osteosarcoma Using Baseline (18)F-FDG Textural Features Machine Learning Approaches with PCA. Contrast Media Mol Imaging 2019, 2019, 3515080. [Google Scholar] [CrossRef]
  32. Bouhamama, A.; Leporq, B.; Khaled, W.; Nemeth, A.; Brahmi, M.; Dufau, J.; Marec-Bérard, P.; Drapé, J.L.; Gouin, F.; Bertrand-Vasseur, A.; et al. Prediction of Histologic Neoadjuvant Chemotherapy Response in Osteosarcoma Using Pretherapeutic MRI Radiomics. Radiol Imaging Cancer 2022, 4, e210107. [Google Scholar] [CrossRef]
  33. Kim, J.; Jeong, S.Y.; Kim, B.C.; Byun, B.H.; Lim, I.; Kong, C.B.; Song, W.S.; Lim, S.M.; Woo, S.K. Prediction of Neoadjuvant Chemotherapy Response in Osteosarcoma Using Convolutional Neural Network of Tumor Center (18)F-FDG PET Images. Diagn. Basel 2021, 11. [Google Scholar] [CrossRef]
  34. Helen, R.; Gurumoorthy, G.; Thennarasu, S.R.; Sakthivel, P.R. Prediction of Osteosarcoma Using Binary Convolutional Neural Network: A Machine Learning Approach.; Institute of Electrical and Electronics Engineers Inc., 2024.
  35. Im, H.-J.; McIlwain, S.; Ong, I.; Lee, I.; Song, C.; Shulkin, B.; Cho, S. Prediction of Response to Neoadjuvant Chemotherapy Using Machine Learning Algorithm Trained by Baseline FDG-PET Textural Parameters in Osteosarcoma. J. Nucl. Med. 2017, 58. [Google Scholar]
  36. Sheen, H.; Kim, W.; Byun, B.H.; Kong, C.-B.; Lim, I.; Lim, S.M.; Woo, S.-K. Prognostic and Predictive Logistic Model for Osteosarcoma Using Metabolic Imaging Phenotypes. J. Nucl. Med. 2019, 60. [Google Scholar]
  37. White, L.M.; Atinga, A.; Naraghi, A.M.; Lajkosz, K.; Wunder, J.S.; Ferguson, P.; Tsoi, K.; Griffin, A.; Haider, M. T2-Weighted MRI Radiomics in High-Grade Intramedullary Osteosarcoma: Predictive Accuracy in Assessing Histologic Response to Chemotherapy, Overall Survival, and Disease-Free Survival. Skelet. Radiol 2023, 52, 553–564. [Google Scholar] [CrossRef]
  38. Sampath, K.; Rajagopal, S.; Chintanpalli, A. A Comparative Analysis of CNN-Based Deep Learning Architectures for Early Diagnosis of Bone Cancer Using CT Images. Sci Rep 2024, 14, 2144. [Google Scholar] [CrossRef]
  39. Sun, W.; Liu, S.; Guo, J.; Hao, D.; Hou, F.; Wang, H.; Xu, W. A CT-Based Radiomics Nomogram for Distinguishing between Benign and Malignant Bone Tumours. Cancer Imaging 2021, 21. [Google Scholar] [CrossRef]
  40. Sanmartín, J.; Azuero, P.; Hurtado, R. A Modern Approach to Osteosarcoma Tumor Identification Through Integration of FP-Growth, Transfer Learning and Stacking Model.; Springer Science and Business Media Deutschland GmbH, 2024; Vol. 932 LNNS, pp. 298–307.
  41. Gawade, S.; Bhansali, A.; Patil, K.; Shaikh, D. Application of the Convolutional Neural Networks and Supervised Deep-Learning Methods for Osteosarcoma Bone Cancer Detection. Healthc. Anal. 2023, 3. [Google Scholar] [CrossRef]
  42. Bansal, P.; Gehlot, K.; Singhal, A.; Gupta, A. Automatic Detection of Osteosarcoma Based on Integrated Features and Feature Selection Using Binary Arithmetic Optimization Algorithm. Multimed. Tools Appl 2022, 81, 8807–8834. [Google Scholar] [CrossRef]
  43. Deng, S.; Huang, Y.; Li, C.; Qian, J.; Wang, X. Auxiliary Diagnosis of Primary Bone Tumors Based on Machine Learning Model. J. Bone Oncol. 2024, 49. [Google Scholar] [CrossRef]
  44. Rao, B.D.; Madhavi, K. BCDNet: A Deep Learning Model with Improved Convolutional Neural Network for Efficient Detection of Bone Cancer Using Histology Images. Int J Comput Exp Sci Eng 2024, 10, 988–998. [Google Scholar] [CrossRef]
  45. Shao, J.; Lin, H.; Ding, L.; Li, B.; Xu, D.; Sun, Y.; Guan, T.; Dai, H.; Liu, R.; Deng, D.; et al. Deep Learning for Differentiation of Osteolytic Osteosarcoma and Giant Cell Tumor around the Knee Joint on Radiographs: A Multicenter Study. Insights Imaging 2024, 15, 35. [Google Scholar] [CrossRef] [PubMed]
  46. Wang, S.; Shen, Y.; Zeng, F.; Wang, M.; Li, B.; Shen, D.; Tang, X.; Wang, B. Exploiting Biochemical Data to Improve Osteosarcoma Diagnosis with Deep Learning. Health Inf Sci Syst 2024, 12, 31. [Google Scholar] [CrossRef]
  47. Yang, P.; Jiang, L.; Xiang, Y.; Wei, J.; Zhao, Z.; Cai, H.; Yi, Z.; Li, L. Deep-Learning Model for Differentiation of Pediatric Bone Diseases by Bone Scintigraphy: A Feasibility Study. Eur. J. Nucl. Med. Mol. Imaging 2023, 50, S727. [Google Scholar] [CrossRef]
  48. Ren, Y.; Guo, Y.; He, Q.; Cheng, Z.; Huang, Q.; Yang, L. Exploring Whether ChatGPT-4 with Image Analysis Capabilities Can Diagnose Osteosarcoma from X-Ray Images. Exp. Hematol. Oncol. 2024, 13. [Google Scholar] [CrossRef]
  49. Loraksa, C.; Mongkolsomlit, S.; Nimsuk, N.; Uscharapong, M.; Kiatisevi, P. Effectiveness of Learning Systems from Common Image File Types to Detect Osteosarcoma Based on Convolutional Neural Networks (CNNs) Models. J Imaging 2022, 8. [Google Scholar] [CrossRef]
  50. Hasei, J.; Nakahara, R.; Otsuka, Y.; Nakamura, Y.; Hironari, T.; Kahara, N.; Miwa, S.; Ohshika, S.; Nishimura, S.; Ikuta, K.; et al. High-Quality Expert Annotations Enhance Artificial Intelligence Model Accuracy for Osteosarcoma X-Ray Diagnosis. Cancer Sci 2024, 115, 3695–3704. [Google Scholar] [CrossRef]
  51. Ling, Z.; Yang, S.; Gou, F.; Dai, Z.; Wu, J. Intelligent Assistant Diagnosis System of Osteosarcoma MRI Image Based on Transformer and Convolution in Developing Countries. IEEE J Biomed Health Inf. 2022, 26, 5563–5574. [Google Scholar] [CrossRef]
  52. Xia, G.; Ran, T.; Wu, H.; Wang, M.; Pan, J. The Development of Mask R-CNN to Detect Osteosarcoma and Oste-Ochondroma in X-Ray Radiographs. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2023, 11, 1869–1875. [Google Scholar] [CrossRef]
  53. Song, L.; Li, C.; Tan, L.; Wang, M.; Chen, X.; Ye, Q.; Li, S.; Zhang, R.; Zeng, Q.; Xie, Z.; et al. A Deep Learning Model to Enhance the Classification of Primary Bone Tumors Based on Incomplete Multimodal Images in X-Ray, CT, and MRI. Cancer Imaging 2024, 24, 135. [Google Scholar] [CrossRef]
  54. Xie, Z.; Zhao, H.; Song, L.; Ye, Q.; Zhong, L.; Li, S.; Zhang, R.; Wang, M.; Chen, X.; Lu, Z.; et al. A Radiograph-Based Deep Learning Model Improves Radiologists’ Performance for Classification of Histological Types of Primary Bone Tumors: A Multicenter Study. Eur J Radiol 2024, 176, 111496. [Google Scholar] [CrossRef]
  55. He, Y.; Pan, I.; Bao, B.; Halsey, K.; Chang, M.; Liu, H.; Peng, S.; Sebro, R.A.; Guan, J.; Yi, T.; et al. Deep Learning-Based Classification of Primary Bone Tumors on Radiographs: A Preliminary Study. EBioMedicine 2020, 62, 103121. [Google Scholar] [CrossRef] [PubMed]
  56. Obaid, M.K.; Abed, H.A.; Abdullah, S.B.; Al-Jawahry, H.M.; Majed, S.; Hassan, A.R. Automated Osteosarcoma Detection and Classification Using Advanced Deep Learning with Remora Optimization Algorithm.; Institute of Electrical and Electronics Engineers Inc., 2023; pp. 122–128.
  57. He, J.; Bi, X. Automatic Classification of Spinal Osteosarcoma and Giant Cell Tumor of Bone Using Optimized DenseNet. J Bone Oncol 2024, 46, 100606. [Google Scholar] [CrossRef]
  58. Malibari, A.A.; Alzahrani, J.S.; Obayya, M.; Negm, N.; Al-Hagery, M.A.; Salama, A.S.; Hilal, A.M. Biomedical Osteosarcoma Image Classification Using Elephant Herd Optimization and Deep Learning. Comput Mater Contin. 2022, 73, 6443–6459. [Google Scholar] [CrossRef]
  59. Rahouma, K.H.; Abdellatif, A.S. Bone Osteosarcoma Tumor Classification. Indones J Electr. Eng Comput Sci 2023, 31, 582–587. [Google Scholar] [CrossRef]
  60. Wang, Y.; Wang, Z.; Zhang, B.; Yang, F. Comprehensive Diagnostic Model for Osteosarcoma Classification Using CT Imaging Features. J Bone Oncol 2024, 47, 100622. [Google Scholar] [CrossRef]
  61. Georgeanu, V.; Mamuleanu, M.-L.; Selisteanu, D. Convolutional Neural Networks for Automated Detection and Classification of Bone Tumors in Magnetic Resonance Imaging.; Institute of Electrical and Electronics Engineers Inc., 2021; pp. 5–7.
  62. Sagar, C.V.; Bhan, A. Machine Learning Approach to Classify and Predict Osteosarcoma Grading.; Institute of Electrical and Electronics Engineers Inc., 2024; pp. 470–474.
  63. Gitto, S.; Albano, D.; Chianca, V.; Cuocolo, R.; Ugga, L.; Messina, C.; Sconfienza, L.M. Machine Learning Classification of Low-Grade and High-Grade Chondrosarcomas Based on MRI-Based Texture Analysis. Semin. Musculoskelet. Radiol. 2019, 23. [Google Scholar] [CrossRef]
  64. Gitto, S.; Cuocolo, R.; van Langevelde, K.; van de Sande, M.A.J.; Parafioriti, A.; Luzzati, A.; Imbriaco, M.; Sconfienza, L.M.; Bloem, J.L. MRI Radiomics-Based Machine Learning Classification of Atypical Cartilaginous Tumour and Grade II Chondrosarcoma of Long Bones. EBioMedicine 2022, 75, 103757. [Google Scholar] [CrossRef]
  65. Gitto, S.; Cuocolo, R.; Albano, D.; Chianca, V.; Messina, C.; Gambino, A.; Ugga, L.; Cortese, M.C.; Lazzara, A.; Ricci, D.; et al. MRI Radiomics-Based Machine-Learning Classification of Bone Chondrosarcoma. Eur J Radiol 2020, 128, 109043. [Google Scholar] [CrossRef]
  66. Vaiyapuri, T.; Jothi, A.; Narayanasamy, K.; Kamatchi, K.; Kadry, S.; Kim, J. Design of a Honey Badger Optimization Algorithm with a Deep Transfer Learning-Based Osteosarcoma Classification Model. Cancers Basel 2022, 14. [Google Scholar] [CrossRef]
  67. Jha, A.K.; Nayak, P.; Mithun, S.; Sherkhane, U.; Jaiswar, V.; Nath, B.; Tripathi, A.; Mehta, G.M.; Panchal, S.; Purandare, N.; et al. Development and Validation of Radiomic Signature for Classification of High and Low-Grade Chondrosarcoma: A Pilot Study. Mol. Imaging Biol. 2022, 24, S218. [Google Scholar] [CrossRef]
  68. Shen, R.; Li, Z.; Zhang, L.; Hua, Y.; Mao, M.; Cai, Z.; Qiu, Y.; Gryak, J.; Najarian, K. Osteosarcoma Patients Classification Using Plain X-Rays and Metabolomic Data. Annu Int Conf IEEE Eng Med Biol Soc 2018, 2018, 690–693. [Google Scholar] [CrossRef] [PubMed]
  69. Li, J.; Li, S.; Li, X.; Miao, S.; Dong, C.; Gao, C.; Liu, X.; Hao, D.; Xu, W.; Huang, M.; et al. Primary Bone Tumor Detection and Classification in Full-Field Bone Radiographs via YOLO Deep Learning Model. Eur Radiol 2023, 33, 4237–4248. [Google Scholar] [CrossRef] [PubMed]
  70. Hadi, M.R.; Hassan, A.R.; Mohammed, I.H.; Alazzai, W.K.; Alzubaidi, L.H.; Ai Sadi, H.I. Integrated Design of Artificial Neural Network with Bald Eagle Search Optimization for Osteosarcoma Classification.; Institute of Electrical and Electronics Engineers Inc., 2023; pp. 552–558.
  71. Guo, C.; Chen, Y.; Li, J. Radiographic Imaging and Diagnosis of Spinal Bone Tumors: AlexNet and ResNet for the Classification of Tumor Malignancy. J Bone Oncol 2024, 48, 100629. [Google Scholar] [CrossRef]
  72. Li, Y.; Dong, B.; Yuan, P. The Diagnostic Value of Machine Learning for the Classification of Malignant Bone Tumor: A Systematic Evaluation and Meta-Analysis. Front. Oncol. 2023, 13. [Google Scholar] [CrossRef]
  73. Gitto, S.; Cuocolo, R.; Annovazzi, A.; Anelli, V.; Acquasanta, M.; Cincotta, A.; Albano, D.; Chianca, V.; Ferraresi, V.; Messina, C.; et al. CT Radiomics-Based Machine Learning Classification of Atypical Cartilaginous Tumours and Appendicular Chondrosarcomas. EBioMedicine 2021, 68, 103407. [Google Scholar] [CrossRef]
74. Pan, D.; Liu, R.; Zheng, B.; Yuan, J.; Zeng, H.; He, Z.; Luo, Z.; Qin, G.; Chen, W. Using Machine Learning to Unravel the Value of Radiographic Features for the Classification of Bone Tumors. BioMed Res. Int. 2021, 2021.
75. von Schacky, C.E.; Wilhelm, N.J.; Schäfer, V.S.; Leonhardt, Y.; Jung, M.; Jungmann, P.M.; Russe, M.F.; Foreman, S.C.; Gassert, F.G.; Gassert, F.T.; et al. Development and Evaluation of Machine Learning Models Based on X-Ray Radiomics for the Classification and Differentiation of Malignant and Benign Bone Tumors. Eur. Radiol. 2022, 32, 6247–6257.
76. Gitto, S.; Annovazzi, A.; Nulle, K.; Interlenghi, M.; Salvatore, C.; Anelli, V.; Baldi, J.; Messina, C.; Albano, D.; Di Luca, F.; et al. X-Rays Radiomics-Based Machine Learning Classification of Atypical Cartilaginous Tumour and High-Grade Chondrosarcoma of Long Bones. EBioMedicine 2024, 101, 105018.
77. von Schacky, C.E.; Wilhelm, N.J.; Schäfer, V.S.; Leonhardt, Y.; Gassert, F.G.; Foreman, S.C.; Gassert, F.T.; Jung, M.; Jungmann, P.M.; Russe, M.F.; et al. Multitask Deep Learning for Segmentation and Classification of Primary Bone Tumors on Radiographs. Radiology 2021, 301, 398–406.
78. Zhong, J.; Hu, Y.; Ge, X.; Xing, Y.; Ding, D.; Zhang, G.; Zhang, H.; Yang, Q.; Yao, W. A Systematic Review of Radiomics in Chondrosarcoma: Assessment of Study Quality and Clinical Value Needs Handy Tools. Eur. Radiol. 2023, 33, 1433–1444.
79. Wu, J.; Xiao, P.; Huang, H.; Gou, F.; Zhou, Z.; Dai, Z. An Artificial Intelligence Multiprocessing Scheme for the Diagnosis of Osteosarcoma MRI Images. IEEE J. Biomed. Health Inform. 2022, 26, 4656–4667.
80. Zhan, X.; Liu, J.; Long, H.; Zhu, J.; Tang, H.; Gou, F.; Wu, J. An Intelligent Auxiliary Framework for Bone Malignant Tumor Lesion Segmentation in Medical Image Analysis. Diagnostics 2023, 13.
81. Zhong, X.; Gou, F.; Wu, J. An Intelligent MRI Assisted Diagnosis and Treatment System for Osteosarcoma Based on Super-Resolution. Complex Intell. Syst. 2024, 10, 6031–6050.
82. Lv, B.; Liu, F.; Li, Y.; Nie, J.; Gou, F.; Wu, J. Artificial Intelligence-Aided Diagnosis Solution by Enhancing the Edge Features of Medical Images. Diagnostics 2023, 13.
83. Wang, L.; Yu, L.; Zhu, J.; Tang, H.; Gou, F.; Wu, J. Auxiliary Segmentation Method of Osteosarcoma in MRI Images Based on Denoising and Local Enhancement. Healthcare 2022, 10.
84. Liu, F.; Zhu, J.; Lv, B.; Yang, L.; Sun, W.; Dai, Z.; Gou, F.; Wu, J. Auxiliary Segmentation Method of Osteosarcoma MRI Image Based on Transformer and U-Net. Comput. Intell. Neurosci. 2022, 2022, 9990092.
85. Wu, J.; Liu, Z.; Gou, F.; Zhu, J.; Tang, H.; Zhou, X.; Xiong, W. BA-GCA Net: Boundary-Aware Grid Contextual Attention Net in Osteosarcoma MRI Image Segmentation. Comput. Intell. Neurosci. 2022, 2022, 3881833.
86. Lim, C.C.; Ling, A.H.W.; Chong, Y.F.; Mashor, M.Y.; Alshantti, K.; Aziz, M.E. Comparative Analysis of Image Processing Techniques for Enhanced MRI Image Quality: 3D Reconstruction and Segmentation Using 3D U-Net Architecture. Diagnostics 2023, 13.
87. Wu, Y.; Li, J.; Wang, X.; Zhang, Z.; Zhao, S. DECIDE: A Decoupled Semantic and Boundary Learning Network for Precise Osteosarcoma Segmentation by Integrating Multi-Modality MRI. Comput. Biol. Med. 2024, 174.
88. Wu, J.; Yang, S.; Gou, F.; Zhou, Z.; Xie, P.; Xu, N.; Dai, Z. Intelligent Segmentation Medical Assistance System for MRI Images of Osteosarcoma in Developing Countries. Comput. Math. Methods Med. 2022, 2022, 7703583.
89. Dionísio, F.C.F.; Oliveira, L.S.; Hernandes, M.A.; Engel, E.E.; Rangayyan, R.M.; Azevedo-Marques, P.M.; Nogueira-Barbosa, M.H. Manual and Semiautomatic Segmentation of Bone Sarcomas on MRI Have High Similarity. Braz. J. Med. Biol. Res. 2020, 53, e8962.
90. Zhang, R.; Huang, L.; Xia, W.; Zhang, B.; Qiu, B.; Gao, X. Multiple Supervised Residual Network for Osteosarcoma Segmentation in CT Images. Comput. Med. Imaging Graph. 2018, 63, 1–8.
91. Shen, Y.; Gou, F.; Dai, Z. Osteosarcoma MRI Image-Assisted Segmentation System Base on Guided Aggregated Bilateral Network. Mathematics 2022, 10.
92. Ørum, L.; Banke, K.; Borgwardt, L.; Hansen, A.; Højgaard, L.; Andersen, F.; Ladefoged, C. Pediatric Sarcoma Segmentation Using Deep Learning. J. Nucl. Med. 2019, 60.
93. Kaur, C.; Grag, U. Preprocessing and Segmentation of MRI Images for Bone Cancer Detection Using Aurous Spatial Pooling With Deeplabv3; Grenze Scientific Society, 2024; Vol. 2, pp. 2374–2383.
94. Ouyang, T.; Yang, S.; Gou, F.; Dai, Z.; Wu, J. Rethinking U-Net from an Attention Perspective with Transformers for Osteosarcoma MRI Image Segmentation. Comput. Intell. Neurosci. 2022, 2022, 7973404.
95. Zou, B.; Chen, Y.; Chen, Z.; Sun, Y.; Huang, Y.; Qin, F.; Wang, C. RTUNet++: Assessment of Osteosarcoma MRI Image Segmentation Leveraging Hybrid CNN-Transformer Approach with Dense Skip Connection; Institute of Electrical and Electronics Engineers Inc., 2023; pp. 217–223.
96. Baidya Kayal, E.; Kandasamy, D.; Sharma, R.; Bakhshi, S.; Mehndiratta, A. Segmentation of Osteosarcoma Tumor Using Diffusion Weighted MRI: A Comparative Study Using Nine Segmentation Algorithms. Signal Image Video Process. 2020, 14, 727–735.
97. Zhou, Z.; Xie, P.; Dai, Z.; Wu, J. Self-Supervised Tumor Segmentation and Prognosis Prediction in Osteosarcoma Using Multiparametric MRI and Clinical Characteristics. Comput. Methods Programs Biomed. 2024, 244.
98. Cilengir, A.H.; Evrimler, S.; Serel, T.A.; Uluc, E.; Tosun, O. The Diagnostic Value of Magnetic Resonance Imaging-Based Texture Analysis in Differentiating Enchondroma and Chondrosarcoma. Skelet. Radiol. 2023, 52, 1039–1049.
99. Yin, P.; Wang, W.; Wang, S.; Liu, T.; Sun, C.; Liu, X.; Chen, L.; Hong, N. The Potential for Different Computed Tomography-Based Machine Learning Networks to Automatically Segment and Differentiate Pelvic and Sacral Osteosarcoma from Ewing’s Sarcoma. Quant. Imaging Med. Surg. 2023, 13, 3174–3184.
100. Consalvo, S.; Hinterwimmer, F.; Neumann, J.; Steinborn, M.; Salzmann, M.; Seidl, F.; Lenze, U.; Knebel, C.; Rueckert, D.; Burgkart, R.H.H. Two-Phase Deep Learning Algorithm for Detection and Differentiation of Ewing Sarcoma and Acute Osteomyelitis in Paediatric Radiographs. Anticancer Res. 2022, 42, 4371–4380.
101. Arunachalam, H.B.; Mishra, R.; Daescu, O.; Cederberg, K.; Rakheja, D.; Sengupta, A.; Leonard, D.; Hallac, R.; Leavey, P. Viable and Necrotic Tumor Assessment from Whole Slide Images of Osteosarcoma Using Machine-Learning and Deep-Learning Models. PLoS One 2019, 14, e0210706.
102. Prexler, C.; Kesper, M.S.; Mustafa, M.; Seemann, W.; Schmidt, O.; Gall, K.; Specht, K.; Rechl, H.; Knebel, C.; Woertler, K.; et al. Radiogenomics in Ewing Sarcoma: Integration of Functional Imaging and Transcriptomics Characterizes Tumor Glucose Uptake. Eur. J. Nucl. Med. Mol. Imaging 2019, 46, S694.
103. Bhatt, S.; Butola, A.; Kumar, A.; Thapa, P.; Joshi, A.; Jadhav, S.; Singh, N.; Prasad, D.K.; Agarwal, K.; Mehta, D.S. Single-Shot Multispectral Quantitative Phase Imaging of Biological Samples Using Deep Learning. Appl. Opt. 2023, 62, 3989–3999.
104. McCulloch, R.A.; Frisoni, T.; Kurunskal, V.; Donati, D.M.; Jeys, L. Computer Navigation and 3d Printing in the Surgical Management of Bone Sarcoma. Cells 2021, 10, 1–15.
Figure 1. The relationship between Artificial Intelligence, Machine Learning, and Deep Learning.
Figure 2. The workflow of search and selection. The search was conducted in December 2024.
Figure 3. Clustering of the studies according to their main area of focus.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.