Preprint
Review

This version is not peer-reviewed.

Advances in Artificial Intelligence for Glioblastoma Radiotherapy Planning and Treatment

A peer-reviewed article of this preprint also exists.

Submitted: 14 October 2025
Posted: 15 October 2025


Abstract
Glioblastoma is an aggressive central nervous system tumor characterized by diffuse infiltration. Despite substantial advances in oncology, survival outcomes have shown little improvement over the past three decades. Radiotherapy remains a cornerstone of treatment; however, it faces several challenges, including considerable inter-observer variability in clinical target volume delineation, dose constraints associated with adjacent organs at risk, and the persistently poor prognosis of affected patients. Recent advances in artificial intelligence, particularly deep learning, have shown promise in automating radiation therapy mapping to improve consistency, accuracy, and efficiency. This narrative review explores current auto segmentation frameworks, dose mapping, and biologically informed radiotherapy planning guided by multimodal imaging and mathematical modeling. Studies have demonstrated reproducible tumor segmentations with Dice similarity coefficients (DSCs) exceeding 0.90, planning times reduced to minutes, and emerging predictive capabilities for treatment response. Radiogenomic integration has enabled imaging-based classification of critical biomarkers with high accuracy, reinforcing the potential of deep learning models in personalized radiotherapy. Despite these innovations, deployment into clinical practice remains limited, primarily due to insufficient external validation and reliance on single-institution training datasets. This review emphasizes the importance of large, annotated imaging datasets, multi-institutional collaboration, and biologically explainable modeling to successfully translate deep learning into glioblastoma radiation planning and longitudinal monitoring.

Essentials

  • Deep learning-based auto segmentation models achieve high accuracy and substantially reduce inter-observer variability in glioblastoma radiotherapy planning
  • Biologically informed mathematical modeling integrates tumor growth dynamics with imaging, enabling personalized radiotherapy dose mapping strategies
  • Radiogenomic models integrating imaging and molecular data predict the status of key biomarkers, supporting non-invasive tumor subtyping and personalized therapy
  • Multi-institutional datasets, model interpretability, and standardized validation protocols remain critical barriers to clinical adoption of artificial intelligence-guided radiotherapy
  • Recent advances including adaptive radiotherapy, multimodal integration, and foundation models enable personalization and real-time adaptability in glioblastoma radiotherapy

Summary Statement

Artificial intelligence can enhance the glioblastoma radiotherapy clinical workflow by improving segmentation accuracy, integrating biological modeling, and enabling personalized treatment. Clinical translation hinges upon large datasets, interpretability, and multi-institutional collaboration.

Introduction

Glioblastoma is a highly invasive brain tumor whose poor prognosis has remained largely static over the past three decades, with a median survival of 14 months despite aggressive treatment. Its mortality is further emphasized by 5-year survival rates near 5%. Standard first-line treatment for high-grade tumors entails maximal safe resection followed by concurrent temozolomide and radiation therapy (RT) for 3-6 weeks, followed by temozolomide for an additional 6 months. Treatment challenges include tumor heterogeneity and diffuse infiltration, difficulty in defining precise tumor margins, and resistance to standard therapies [1].
The current clinical workflow for glioblastoma management typically begins with a neurological exam and imaging such as MRI or PET/CT scans for initial diagnosis [2]. This is followed by surgical biopsy or maximal safe resection of the tumor and subsequent histopathological and molecular characterization of the tumor and surrounding tissue [3]. Aiming to maximize tumor resection volume while preserving healthy brain function, clinicians utilize tumor visualization and cortical mapping methods during surgery such as ultrasound, fluorescent dyes, and intraoperative neuroanatomical navigation systems [4]. According to patient age, Karnofsky performance status, and tumor classification, clinicians develop a treatment plan utilizing a combination of temozolomide chemotherapy, palliative care such as corticosteroids, and RT, where RT planning plays a major role.
The current standard of care for RT in glioblastoma management follows European Organisation for Research and Treatment of Cancer and Radiation Therapy Oncology Group guidelines. European standards require delineation of the T1-weighted contrast-enhancing lesion plus a 2 cm margin, while Radiation Therapy Oncology Group includes FLAIR or T2-weighted abnormalities with a 2 cm margin [5]. An example of a standardized, manually segmented RT dose map for a patient with glioblastoma is shown in Figure 1.
Radiotherapy planning can be conceptually divided into three principal stages: treatment preparation, treatment delivery, and, when required, treatment adaptation.
  • Treatment preparation encompasses the delineation of target volumes and organs at risk, the adoption of dose prescriptions, and the determination of the treatment plan through simulation.
  • Treatment delivery involves the fractionation of the simulated plan into multiple sessions and the systematic administration of radiation according to the established plan.
  • Treatment adaptation entails the continuous monitoring of treatment execution and the modification of the plan when anatomical or physiological changes compromise the ability of the initial plan to satisfy predefined dosimetric and clinical constraints.
Because glioblastomas grow aggressively and often infiltrate parenchyma near critical brain structures, accurate tumor delineation is essential for effective RT. Unfortunately, conventional clinical target volume (CTV) segmentation remains time-consuming and prone to user variability. Artificial intelligence (AI), particularly deep learning (DL) methods such as convolutional neural networks (CNNs), offers an opportunity to improve the accuracy, efficiency, and reproducibility of this process [6].
Numerous recent studies have attempted to optimize and automate the radiotherapy planning process, aiming to enable patient-specific segmentation and dose escalation by leveraging MRI/CT imaging, and eventually to advance dose prescription principles through boosting and local dose control on the basis of radiomics and genetic biomarkers.
This review functions as a survey of research publications, exploring recent methodological advances, their progress over the last ten years, current implementations, and future directions. While there are presently several commercial products for tumor auto-delineation ready for clinical deployment, a majority of the approaches discussed remain in early adoption stages. For predictive modeling in particular, regulatory certification will prove to be a significant challenge; its early adoption hinges upon extensive clinical evidence with cross-institutional validation. Where possible, the authors highlight whether each study was retrospective or prospective and the size of the training and testing cohorts. In parallel, we discuss emerging trends for AI in glioblastoma RT planning, including the rise of large-scale foundation models, which promise to reduce inter-observer variability and streamline clinical workflow.
During the last decade, DL models have emerged as a viable solution for glioblastoma diagnosis, patient risk stratification, treatment dose escalation, and tumor and organs at risk auto segmentation. Deep learning architectures offer the ability to extract local and global trends from robust datasets without the hand-engineered quantitative features required by many supervised machine learning (ML) models such as traditional classifiers or regressors. This review aims to critically evaluate the applications of deep learning in the development of innovative and personalized radiotherapy treatment strategies for glioblastoma, with a particular emphasis on their potential to enhance precision, adaptability, and clinical outcomes.

Current Challenges in Glioblastoma Management and Treatment

While CNN architectures have dominated the field of RT planning and auto segmentation, advancing these algorithms for broad, clinically meaningful use requires addressing limitations in this workflow. One of the main limitations is due to the diffuse infiltration of glioblastoma, which leads to ambiguous tumor margins that make it difficult to distinguish between diseased and healthy tissue [7]. Standard RT planning relies heavily on expert manual or semi-automated segmentation of tumor and edema margins using MRI, a process that is subjective due to the heterogeneity of glioblastoma [8]. This subjectivity is worsened by inconsistent imaging techniques between institutions and interpretational differences among clinicians, leading to significant variability in tumor delineation and RT field definition. This complicates accurate tumor segmentation and classification, hampering both clinical decision-making and the development of reliable AI models. Therefore, the quality and consistency of ground truth used for training DL models is limited, impacting model generalizability and performance.
In an effort to create consensus contouring guidelines for glioblastoma, a panel of 10 academic radiation oncologists specializing in brain tumor treatment contoured CTVs on four glioblastoma cases independently before convening to review their contours [9]. Variations across these experts spanned from the definition of T1C and T2-FLAIR signals (with kappa statistics of just 0.69 and 0.74, respectively) to expansions based on delineation of barriers to spread and preferred anatomic pathways of spread. Similarly, in re-irradiation settings, it is extremely challenging to define the extent of disease when tumor recurrences arise in a milieu of radiation necrosis (or treatment effect). Despite careful clinical annotation, training of models can easily be distorted by outliers in datasets [10], imposing costs on target coverage and/or normal tissue sparing. Closely related to the challenge of standardizing RT treatment volumes is the variability in imaging techniques. Not only do hardware (vendor, magnetic field strength and gradients, receiver coil geometry) and software (sequence acquisition and reconstruction algorithms) matter, but deformation correction, intensity normalization, and validation of the robustness, saliency, and sensitivity of models generated from such imaging datasets also require close attention and careful benchmarking.
Furthermore, conventional RT approaches are largely static, utilizing only pre-treatment imaging as the basis for planning and failing to account for the dynamic changes that can occur during the course of and in response to therapy. Tumor volumes, peritumoral edema, and treatment-induced effects such as necrosis or pseudo progression can alter the landscape of the tumor significantly, yet standard RT protocols are not designed to adapt to these temporal variations [11]. This can result in either under-treatment, where areas of true tumor progression fall outside of the planned target volume, or over-treatment, where unnecessary radiation is delivered to normal tissue. This can ultimately lead to post-treatment cognitive dysfunction in 30-50% of patients 6 months after RT [12]. Lastly, the growing availability of molecular data, such as transcriptomics and epigenomics, has not yet been fully incorporated into clinical workflows for RT planning. This data would provide valuable insights into tumor heterogeneity and treatment response, but the practical constraints of integrating large quantities of genomics data into RT planning still represents a critical gap in the field.

Tumor Delineation and Auto Segmentation

CNNs form the backbone of the most popular methods for auto segmentation. This architecture and its variants have largely been considered state-of-the-art in computer vision and medical image processing for the last decade [13]. A variety of optimized CNN models (i.e., DeepMedic, ResNet, Seg-Net, etc.) are publicly available and serve as the initial building blocks in many of the studies outlined in this paper [14,15]. The most popular CNN architecture for auto segmentation is U-Net, a model which leverages an encoder-decoder framework for both down sampling and up sampling of data to preserve local and global image characteristics, thereby reducing noise while enhancing pertinent structural features [13]. Performance of models is standardized and compared primarily with the Dice similarity coefficient (DSC), which is calculated as:
DSC = 2|A ∩ B| / (|A| + |B|),
where |A ∩ B| is the number of elements in the intersection of sets A and B, and |A| and |B| are the numbers of elements in sets A and B, respectively. A score of 1 designates perfect overlap between the predicted and ground truth volumes, which are typically manual segmentations or clinically approved auto segmentations, and 0 indicates no overlap [16]. Several commercial solutions for organs at risk segmentation also rely on similar technologies [17].
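As a concrete illustration (not drawn from any of the reviewed studies), the DSC can be computed directly from two sets of voxel indices:

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two collections of voxel indices.

    Returns 1.0 for perfect overlap and 0.0 for no overlap; two empty
    masks are treated as perfect agreement.
    """
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0
    return 2 * len(pred & truth) / (len(pred) + len(truth))


# Toy example: predicted vs. manually contoured tumor voxels on a 2-D grid
predicted = [(1, 1), (1, 2), (2, 1), (2, 2)]
manual = [(1, 2), (2, 1), (2, 2), (3, 2)]
print(round(dice_coefficient(predicted, manual), 2))  # → 0.75
```

In practice the same formula is applied voxel-wise to 3-D binary masks, but the set formulation above is term-by-term identical to the equation in the text.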
The most common challenges for training and deploying CNNs include the lack of large, annotated datasets of high quality and reproducibility. This has been addressed in recent years with large consortia of imaging data, such as the annual Medical Image Computing and Computer Assisted Intervention Society Brain Tumor Segmentation (BRATS) challenge. This international dataset consists of shared multimodal MRIs, annotations, clinical outcomes, and expert-generated segmentations for three subregions: complete tumor, core tumor, and enhancing tumor. The BRATS challenge has played a critical role in advancing glioblastoma auto-segmentation models over the past decade [18]. Early iterations of the BRATS challenge, in 2012 and 2013, involved 20 algorithms trained on 65 multi-contrast MRI scans from both low- and high-grade glioma patients, with DSC performance ranging from 0.74 to 0.85 [18,19]. At that time, deep learning methods had not yet been established as state-of-the-art technology for segmentation and were not in use. As of 2018, clinicians still outperformed auto-segmentation models, largely due to the limited size of annotated datasets available for training [20]. Despite this, CNNs were still considered the leading architecture for segmentation and treatment planning by BRATS organizers in both 2017 and 2018 [21,22]. Since then, BRATS has continued to advance state-of-the-art computer vision for auto segmentation of gliomas. The 2021 benchmark pooled preoperative MRI data stacks from 2,040 patients across multiple institutions. It also introduced the methylated-DNA-protein-cysteine methyltransferase methylation status challenge, inviting participants to train and validate radiogenomic predictions across a diverse clinical dataset [23].
In 2022, the winning ensemble primarily leveraged existing frameworks such as DeepSeg, DeepSCAN, and nnU-Net, a self-configuring method which can reportedly be trained and deployed on a variety of segmentation tasks [24]. For whole tumor, enhancing tumor, and tumor core, the DSCs were 0.93, 0.88, and 0.88, respectively, on the BRATS testing dataset [25]. A similar ensemble was developed in 2023, again using the popular nnU-Net framework; notably, the model also implemented data augmentation by generating synthetic MRI training data using generative adversarial networks. This approach was found to mitigate class imbalances by adding numerous unique tumor locations and compositions, allowing for high generalizability and DSCs of 0.90, 0.85, and 0.87 for whole tumor, enhancing tumor, and tumor core [26]. Most recently, the BRATS 2024 challenge placed increasing emphasis on post-treatment imaging, including annotated resection cavities, non-enhancing tumor core, and non-enhancing T2/FLAIR hyperintensity. For this task, Ferreira, Moradi, and colleagues developed the top-performing model, once again using their nnU-Net synthetic data ensemble [27].
In recent years, DL models have continued to advance, offering improved precision, speed, and reproducibility in tumor-delineating auto-segmentation. Numerous studies have since evaluated various CNN-based architectures (Table 1), including cascaded 3D Fully CNNs [28], hybrid ensemble models such as Incremental XCNet [29], and artificial neural networks validated on large institutional and public datasets [13]. Tools like AutoRANO demonstrated near-perfect intraclass correlation for volumetric tumor metrics using U-Net-based architectures [30]. One retrospective study trained a CNN model using diffusion tensor imaging to predict microscopic tumor infiltration margins, aligning with standardized guidelines for CTV delineation [5]. A recent multi-reader study found that deep neural networks reduced inter-reader variability and segmentation time in stereotactic radiosurgery planning compared to manual expert contours [16]. Architectural innovations, including attention-enhanced CNNs [31], densely connected micro-block Fully CNNs [32], holistically nested neural networks [33], and multiple U-Net variants [34], have achieved high DSCs while reducing processing time to mere seconds in some cases. Earlier studies using BRATS datasets also explored strategies like kernel optimization [35], cascaded inputs [36], multimodal MRI integration [37], and dual-patch batch normalization techniques [38], all contributing to improvements in segmentation accuracy and model efficiency. A recent top performer called PKMI-Net was developed to automate segmentation of gross tumor volume (GTV), high-dose CTV, and low-dose CTV using non-contrast CT, multisequence MRI, and medical record inputs. The model was trained on 148 patients across four institutions and tested on 11 cases with histologically suspected glioblastomas. PKMI-Net achieved DSCs of 0.94, 0.95, and 0.92 for GTV, high-dose CTV, and low-dose CTV respectively, resulting in an overall DSC of 0.95. 
All outputs were deemed clinically acceptable without requiring revision. The architecture used a two-stage U-Net framework where the initial GTV segmentation informed subsequent high-dose CTV and low-dose CTV predictions, improving contextual accuracy across planning volumes [39].
In conclusion, DL continues to show strong potential for automating segmentation, margin detection, and RT planning. Advances in autoencoders and multi-layer CNNs have largely supplanted fully connected architectures. Many recent models are capable of automatically segmenting tumors with processing times as short as 20 seconds per patient. More recently, novel architectures, particularly diffusion models, are being investigated and show promise by coupling segmentation maps with uncertainty quantification. Nonetheless, high-quality labeling and collaborative data sharing remain essential to advance these technologies toward clinical implementation [40].

Personalized and Biologically Informed Tumor-Progression Radiotherapy

In conventional glioblastoma RT, the planning target volume is generally defined by applying an isotropic expansion to the CTV. This expansion is intended to account for potential errors in target delineation, setup uncertainties, and patient motion, thereby ensuring adequate dose coverage of the tumor. Such CTV approximation can result in centimeter-level errors in planning target volume definition, limiting treatment accuracy and increasing radiation exposure to healthy tissues [41]. Because tumor boundaries are difficult to define, clinicians often apply binary dose escalation protocols, maximizing dose to the core while minimizing dose to surrounding areas, even though microscopic infiltration frequently extends beyond visible margins [42].
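The isotropic expansion described above amounts to a morphological dilation of the binary target mask. The following is a didactic 2-D sketch with a toy single-voxel target; clinical systems expand in millimeters on anisotropic 3-D voxel grids, so this is illustrative only:

```python
import numpy as np


def isotropic_expand(mask, margin):
    """Expand a binary mask by `margin` voxels in every direction
    (box/Chebyshev expansion), mimicking an isotropic CTV-to-PTV margin.
    Implemented by OR-ing shifted copies of the mask."""
    out = np.zeros_like(mask)
    for dy in range(-margin, margin + 1):
        for dx in range(-margin, margin + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out


ctv = np.zeros((7, 7), dtype=bool)
ctv[3, 3] = True                  # toy single-voxel "CTV"
ptv = isotropic_expand(ctv, 1)    # 1-voxel margin -> 3x3 block
print(int(ptv.sum()))             # → 9
```

Note that `np.roll` wraps around the array edges, which is harmless here because the toy target sits in the interior; a production implementation would use a proper morphological dilation with physical spacing.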
Tumor infiltration beyond the visible imaging margins could be more effectively accounted for through the integration of tumor growth models, which not only capture the spatial-temporal dynamics of glioblastoma progression but also enhance the biological interpretability of treatment planning. The development of such informed and personalized RT frameworks has the potential to derive these insights by integrating patient-specific tumor dynamics and multimodal imaging. A Bayesian ML model was developed to infer tumor cell density using a reaction–diffusion model based on the Fisher-Kolmogorov equation. This model incorporated preoperative MRI and FET-PET imaging to estimate microscopic infiltration beyond MRI-visible regions. In a clinical population study, the personalized RT plans derived from these inferred tumor densities showed comparable tumor coverage to standard RT while sparing more healthy tissue. Furthermore, the regions of high tumor cell density aligned with known radioresistant areas, suggesting that such biologically guided maps could inform dose escalation strategies. This approach demonstrates the feasibility of individualized treatment design with clinically available imaging modalities; however, external validity and generalizability are questionable given a small testing cohort of just 8 patients [43].
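For intuition, the Fisher-Kolmogorov reaction-diffusion model referenced above, ∂u/∂t = D∇²u + ρu(1 − u), can be stepped forward with a simple explicit finite-difference scheme. This 1-D sketch with arbitrary, didactic parameters is not the Bayesian inference pipeline of the cited study:

```python
import numpy as np

# Fisher-Kolmogorov: du/dt = D * d2u/dx2 + rho * u * (1 - u)
# u is normalized tumor cell density; D models infiltration (diffusion),
# rho models proliferation. Parameters here are arbitrary/didactic.
D, rho = 0.5, 0.1
dx, dt = 1.0, 0.1          # dt satisfies the stability bound dt <= dx^2 / (2D)
n_steps, n_cells = 200, 100

u = np.zeros(n_cells)
u[n_cells // 2] = 1.0       # seed a tumor focus at the domain center

for _ in range(n_steps):
    lap = np.roll(u, 1) + np.roll(u, -1) - 2 * u   # discrete Laplacian
    u = u + dt * (D * lap / dx**2 + rho * u * (1 - u))

# The density decays away from the seed, so an imaging-visible threshold
# (e.g. u > 0.1) encloses fewer cells than an infiltrative one (u > 0.01).
visible = int((u > 0.1).sum())
infiltrative = int((u > 0.01).sum())
print(visible, infiltrative)
```

The resulting profile illustrates the clinical point made in the text: any threshold corresponding to a visible imaging margin underestimates the microscopic infiltrative extent predicted by the model.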
Similarly, a large-scale modeling study used data from 124 glioblastoma patients in The Cancer Genome Atlas alongside 397 from the UCSF Glioma Dataset to identify relationships between tumor proliferation, infiltration, and molecular pathway activation. A patient-specific growth model was created using contrast-enhanced T1 and T2/FLAIR MRI inputs, outputting tumor growth predictions within 4-7 minutes. The model was validated on 30 patients by comparing predicted recurrence volumes with those defined by standard radiation oncology practices. Findings reinforced that many recurrences occur beyond standard CTV margins, emphasizing the need for biologically grounded treatment planning. Deployable models must be time-efficient, highly validated, and compatible with clinical computational infrastructure to support practical integration [42]. As an extension of this concept, imaging-derived tumor sub-regions or spatial habitats can be computationally extracted from the relative intensities of pixels within multi-parametric datasets (e.g., T1, T1C, T2, and FLAIR) and correlated with genomic and molecular features [44]. In principle, these imaging correlates derived from larger MR sequence datasets could populate an assortment of signatures that map to each of the hallmarks of cancer [45], such that personalized interventions can be tailored to specific pathways and/or cellular processes. From an RT standpoint, however, the identification of tumor sub-regions harboring inherently aggressive phenotypes (biological target volumes) may enable a degree of rational personalization of RT treatment volumes, expansions, and doses that improves upon what is currently available.

Modification of Treatment, Patient Response Prediction, and Triage During Therapy

Despite advances in RT techniques, clinical outcomes for many patients with glioblastoma remain poor, underscoring the limitations of current treatment strategies. Timely identification of patients at heightened risk for unfavorable outcomes during the course of RT is essential for improving therapeutic efficacy, reducing the likelihood of treatment interruptions, and mitigating the associated burden on healthcare systems. ML offers a transformative opportunity in this regard, as it enables the systematic integration and analysis of large-scale, multimodal clinical datasets. By uncovering complex patterns that are often indiscernible to conventional statistical approaches, ML-based models have the potential to support early risk stratification, guide adaptive treatment strategies, and ultimately contribute to more personalized and effective patient care. Efforts to characterize local tumor infiltration, predict dose distributions based on patient-specific anatomy and prescription, and ultimately forecast overall clinical outcomes have gained increasing attention in recent years.
A recent multi-institutional study utilized leave-one-out cross validation to train and evaluate a patch-based CNN using multi-parameter MRI data stacks from 229 glioblastoma patients to predict regions of interest as either high- or low-infiltration. According to their findings, patients were 8.13 to 19.48 times more likely to experience tumor recurrence in high-infiltration regions compared to low-infiltration regions [46]. This highlights the need for datasets annotated at the level of specific regions. DL is only as good as the data it is trained on, and its continued improvement and novel insights could be catalyzed by further stratification of tumor-infiltrating regions. DL has also been used in research to rapidly segment patient images. DeepMedic was applied retrospectively to assess high-grade glioma recurrence, offering segmentation-derived insights [14]. The study suggested that reirradiation is safe and effective in glioblastoma treatment, showing the utility of auto segmentation models in treatment planning and optimization.
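Leave-one-out cross validation, as used in the study above, can be sketched in a few lines. The nearest-centroid classifier here is a deliberately trivial, hypothetical stand-in for the patch-based CNN; only the hold-one-out evaluation loop is the point:

```python
from statistics import mean


def nearest_centroid_predict(train, test_x):
    """Assign test_x to the class whose mean training feature is closest.
    A toy stand-in for a real model such as a patch-based CNN."""
    centroids = {label: mean(x for x, y in train if y == label)
                 for label in {y for _, y in train}}
    return min(centroids, key=lambda c: abs(centroids[c] - test_x))


def leave_one_out_accuracy(data):
    """Leave-one-out cross validation: each case is held out once and
    predicted by a model fit on all remaining cases."""
    hits = 0
    for i, (x, y) in enumerate(data):
        train = data[:i] + data[i + 1:]
        hits += nearest_centroid_predict(train, x) == y
    return hits / len(data)


# Toy 1-D "infiltration feature": high values labeled high-infiltration (1)
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.8, 1), (0.9, 1), (1.0, 1)]
print(leave_one_out_accuracy(data))  # → 1.0
```

The appeal of leave-one-out in small clinical cohorts is that every patient contributes to both training and evaluation without the held-out case ever leaking into its own fit.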
A large clinical study evaluated the effectiveness of an ML-based triage system utilizing electronic medical record data for predicting acute care needs during RT and chemoradiation. The algorithm assessed 963 outpatient adult RT or chemoradiation treatment courses to identify patients with a ≥10% risk of requiring acute care, defined as emergency department visits or hospital admissions during treatment. Of these, 311 courses were randomized to either standardized weekly or required biweekly clinical evaluations. Patients identified as high-risk by the ML model and assigned to the intensified clinical follow-up experienced a significant reduction in acute care visits, dropping from 22.3% to 12.3% compared to those receiving standard care (P = .02). The model demonstrated strong predictive value, with a receiver operating characteristic area under the curve (AUC) of 0.851, supporting its potential as a tool for real-time patient management during therapy [47].
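The AUC reported in such studies has a simple probabilistic reading: it is the chance that a randomly chosen positive case receives a higher risk score than a randomly chosen negative case. A minimal sketch with invented toy numbers (not the study's data):

```python
def roc_auc(labels, scores):
    """AUC as the probability that a random positive case outscores a
    random negative case, with ties counting 1/2. Equivalent to the
    trapezoidal area under the ROC curve."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


# Toy cohort: 1 = needed acute care during treatment, score = predicted risk
labels = [1, 0, 1, 0, 1, 0]
scores = [0.9, 0.2, 0.6, 0.4, 0.3, 0.1]
print(round(roc_auc(labels, scores), 3))  # → 0.889
```

An AUC of 0.851, as in the triage study, therefore means the model ranks a patient who goes on to need acute care above one who does not about 85% of the time.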
In a separate study focused on treatment response prediction, ML was used to stratify patients undergoing RT as likely responders or non-responders from radiomic features. Response probability was output by the model and compared to clinician assessments. A decision threshold of 67% was set by the model to classify patients as responders versus non-responders. The model achieved an accuracy of 75% with an AUC of 0.74, outperforming clinician assessments, which achieved an accuracy of 54% and an AUC of 0.56. These findings underscore the ability of ML frameworks to integrate complex imaging and clinical data to outperform physician predicted therapeutic response [48]. Integrated end-to-end workflows have also been developed. One example combined automated glioblastoma segmentation and ensemble-based survival prediction into a single pipeline trained on BRATS-2020 data. The model classified patients as long (> 12 months) or short (< 12 months) term survivors with AUCs of 0.86 and 0.72 on BRATS-2020 and institutional datasets, respectively. The auto segmentation component achieved a DSC of 0.91, supporting the model’s utility in streamlining the entire planning process from segmentation through prognosis [49]. One study further demonstrated the potential for mapping dose escalation and demonstrated planning target volume coverage comparable to manual segmentation, though training and validation were greatly limited due to small sample sizes of 95 and 15 patients, respectively [50].
Together, these studies highlight how AI-guided tools can assist not only in pre-treatment planning but also in monitoring and adjusting care during the treatment course [51]. By identifying patients who may benefit from closer follow-up or alternative treatment strategies, these models offer the potential to improve clinical outcomes, reduce treatment-related complications, and optimize healthcare resource utilization.

Radiogenomics and Non-Invasive Biomarker Integration

Radiogenomics represents a growing intersection between imaging, ML, and genomic data, offering the potential to guide personalized treatment strategies in glioblastoma. This approach is particularly promising given the heterogeneity of glioblastoma, which limits the predictive utility of traditional histopathology and challenges the generalizability of fixed RT protocols.
Multiple ML frameworks have been employed to predict genetic mutations and classify glioma subtypes based on imaging features. One study trained a residual CNN model using 406 preoperative brain MRIs to predict isocitrate dehydrogenase mutation status, achieving a testing accuracy of 85.7%. The ability to infer genotype from imaging suggests a reciprocal potential, where known mutation status could be integrated into AI models to inform RT planning [52]. This is especially relevant given that patients with isocitrate dehydrogenase-mutated glioblastoma have demonstrated significantly longer overall survival and progression free survival compared to isocitrate dehydrogenase-wildtype cases (overall survival of 39 months vs 14 months), independent of treatment status [53]. Similarly, methylated-DNA-protein-cysteine methyltransferase promoter methylation has been associated with increased responsiveness to alkylating agents including temozolomide, highlighting the importance of genomic markers in therapeutic decision-making [15].
A random forest-based radiomics model was developed for glioma grading using contrast-enhanced T1-weighted MRI from a training cohort of 101 patients. Testing on an independent cohort of 50 patients from two external institutions yielded an AUC of 0.898, with 84% sensitivity, 76% specificity, and 80% accuracy. The highest-performing model combined DL features from a simple deep CNN architecture, VGG16, with traditional radiomic features, outperforming either input modality alone [54]. In another study, support vector machine classification combined with synthetic minority over-sampling was able to differentiate high (grades III and IV) and low (I and II) grade gliomas with 94-96% accuracy, further supporting the utility of hybrid AI approaches [55]. Recently, a DL model was cross validated on 357 patients with isocitrate dehydrogenase-wildtype glioblastoma using pre-operative multiparametric MRI. Notably, the model also incorporated radiogenomic features using genetic sequencing data, enabling spatial mapping of critical gene mutations including NF1, TP53, PTEN, and EGFR. The multimodal framework was compared with an MRI-only CNN model as well as a radiogenomic-only support vector machine, outperforming both with AUCs of 0.70-0.92 across 13 different biomarkers [56]. This, along with the aforementioned studies, further bolsters the case for combined models that can leverage both imaging and heterogeneous, molecular-level data.
Beyond imaging, non-invasive biomarker techniques such as liquid biopsy are being explored for diagnostic and therapeutic monitoring purposes. Circulating tumor DNA and microRNA can provide insights into tumor status, although their reliability remains limited. Transport is impeded by the blood-brain barrier, and intratumoral phenotypic heterogeneity is masked by assessment of a cumulative metric diluted in the systemic circulation. Brain biopsy remains the gold standard but is invasive and carries sampling-related risks. Novel systemic immune-inflammation indices offer non-invasive alternative biomarkers to benchmark clinical grade, glioma subtype, and patient prognosis [57]. Additionally, serum levels of exosome microRNA, along with specific microRNA expression profiles (e.g., miR-21, miR-181c, miR-195, miR-196b), may serve as prognostic biomarkers to accurately predict glioma status and treatment outcomes [58,59,60,61,62]. Furthermore, the integration of radiogenomics with precision population cancer medicine offers a novel approach for comprehensive and longitudinal monitoring of patients, enhancing individualized care and treatment stratification (Figure 2). Initiatives like the Children’s Brain Tumor Tissue Consortium are advancing this goal by curating large-scale shared data repositories. Complementary bioinformatics tools, such as NetworkAnalyst, OmicsNet, Cytoscape, and AlphaFold, facilitate exploration of protein-protein interactions, while multimodal approaches combining MRI, genomics, metabolomics, and AI imaging offer a multidimensional view of tumor biology [63].
Radiogenomics and non-invasive biomarker integration provide a promising framework for future glioblastoma treatment personalization. Continued expansion of multi-institutional datasets, model validation across diverse patient populations, and incorporation of comprehensive molecular profiles into imaging-based models will be essential for translating these technologies into clinical care. Modernized data stewardship practices and incentives for pooling population-scale datasets are a crucial step toward building a balanced data ecosystem and models that are representative of the populations they serve [64,65]. The scarcity of validated outcomes data and high-quality ground truth also poses a significant hurdle to model validation, particularly for more nuanced outcomes beyond progression or survival [66].
A systematic review of 14 radiogenomics studies reported AUC values ranging from 0.74 to 0.91 but found no consistent patterns based on imaging modality. Modalities used across the studies included T1, T1C, T2, FLAIR, DTI, DWI, spectroscopy, Dynamic Susceptibility Contrast, and Dynamic Contrast Enhanced-MRI. AI techniques included support vector machine, diagonal linear and quadratic discriminant analysis, semi-supervised learning, and CNNs. All studies included MRI as a required input, and all models were trained on single-institution datasets with limited sample sizes (8–37 patients), increasing the risk for protocol-specific biases and overfitting [67]. It is worth noting that the real-world impact of these models on patient survival is unknown as they have yet to be implemented into clinical practice.

Interpretability and Explainability in Deep Learning Models

DL models, often referred to as “black-box” systems, have demonstrated impressive performance in RT planning for glioblastoma; however, a key barrier to clinical adoption remains the lack of consistent performance and generalization. This is further complicated by the absence of interpretability and explainability in these models, particularly when determining treatment plans and predicting response [19,68]. For example, the challenge of inter-institutional variability was demonstrated in a CNN model trained on 44 glioblastoma patients across two institutions. When tested within the same institution, DSCs reached 0.72 and 0.76; when validated across institutions, performance dropped to 0.68 and 0.59, highlighting the need for large, diverse datasets to develop models capable of generalizing across clinical settings [69]. Beyond dataset size, sources of bias such as heterogeneity in imaging protocols, scanner physics, operator technique, patient demographics, and post-processing software must also be considered. Incorporating harmonization strategies and bias-aware model training will therefore be as critical as expanding dataset diversity in addressing systematic variability.
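The DSC values quoted above measure the volumetric overlap between an auto-segmented contour and a reference contour. As a minimal illustration (the masks below are toy data, not from any cited study), the metric can be computed directly from two binary masks:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary segmentation masks.

    DSC = 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (perfect).
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy example: two partially overlapping "tumor" regions on an 8x8 slice
auto = np.zeros((8, 8)); auto[2:6, 2:6] = 1      # auto-segmented contour
manual = np.zeros((8, 8)); manual[3:7, 3:7] = 1  # clinician-drawn ground truth
dsc = dice_coefficient(auto, manual)
```

In practice the same computation is applied slice-by-slice or over full 3D volumes, and cross-institutional drops such as 0.76 to 0.59 correspond to substantially reduced contour overlap.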
Typically, the most accurate methods, such as DL, are the least transparent, while methods that favor transparency, such as decision trees, achieve weaker performance [70]. The explainability of AI systems is essential for fostering trust among medical professionals and could play a vital role in facilitating AI integration into clinical practice. Clinicians need a clear understanding of how automated algorithms arrive at specific segmentation, dose planning, or treatment response predictions in order to trust and effectively utilize these outputs in patient care [71]. Fortunately, the risks of unsupervised, black-box predictions are inherently mitigated by rigorous assessment of dose maps by the RT team, which adheres to strict Radiation Therapy Oncology Group guidelines for glioblastoma and can simply adjust auto-generated plans as necessary. However, the lack of transparency poses a significant challenge for predictions, such as treatment response, that cannot be readily verified by physicians. While interpretability is important, precedent from genomic clinical decision support shows that black-box algorithms can still clear regulatory hurdles and achieve widespread use when backed by strong evidence that they accomplish the intended purpose. Thus, emphasis should be placed on rigorous prospective validation, unbiased benchmarking, and tools such as Shapley value plots that provide insight into model reasoning in the absence of full explainability.
Saliency mapping approaches, including class activation mapping, gradient-weighted class activation mapping, and integrated gradients, have been introduced to generate post hoc explanations of model predictions [62,72,73]. For example, gradient-weighted class activation mapping uses the gradients of a target concept, such as tumor segmentation, flowing into the final convolutional layer to produce a spatial heatmap that, overlaid on MRI or CT images, visually indicates which localized regions most influenced the prediction. These visualizations not only enable clinicians to verify that the AI is focusing on clinically plausible anatomic patterns but also flag instances where the model may be inappropriately influenced by artifacts or non-tumor structures. Furthermore, they lend insight when models fail and help improve generalization by exposing dataset bias.
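The core of these methods is a weighted combination of final-layer feature maps. The sketch below shows plain class activation mapping with numpy on synthetic activations; the feature maps and weights are hypothetical, and in gradient-weighted class activation mapping the weights would instead come from global-average-pooled gradients:

```python
import numpy as np

def class_activation_map(feature_maps: np.ndarray, class_weights: np.ndarray) -> np.ndarray:
    """Plain CAM: weight each final-layer feature map by its importance for
    the target class, sum, rectify, and normalize to [0, 1].

    feature_maps: (K, H, W) activations from the last convolutional layer.
    class_weights: (K,) weights linking each map to the predicted class
    (Grad-CAM derives these from global-average-pooled gradients instead).
    """
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0.0)                               # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                                # normalize for overlay
    return cam

rng = np.random.default_rng(0)
maps = rng.random((4, 16, 16))            # hypothetical activations for one MRI slice
weights = np.array([0.9, 0.1, -0.3, 0.5])  # hypothetical class-importance weights
heatmap = class_activation_map(maps, weights)  # upsample and overlay on the MRI for review
```

The resulting heatmap is then upsampled to the input resolution and blended with the original image so that clinicians can inspect where the model's evidence is concentrated.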
Other methods, namely Shapley additive explanations and Local Interpretable Model-Agnostic Explanations, are perturbation-based and model-agnostic, requiring only model inputs and outputs [74]. For example, if a model predicts that a patient has a high risk of recurrence, Shapley additive explanations assign each feature of the dataset (tumor size, patient age, treatment history, etc.) an importance value for that particular prediction, showing how much each factor contributed to it. The method works by analyzing many combinations of these features and fairly distributing the influence each one has on the final output. While these methods have been widely used for tabular or radiomics data, emerging studies are adapting Shapley additive explanation values to highlight influential image features or radiomic descriptors relevant to segmentation boundaries or radiogenomic predictions.
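For a small number of features, the coalition analysis described above can be carried out exactly. The sketch below computes exact Shapley values for a deliberately simple, hypothetical recurrence-risk model (the risk function, feature values, and baseline are all illustrative assumptions); SHAP libraries approximate this same quantity efficiently for real models:

```python
import math
from itertools import combinations

import numpy as np

def shapley_values(model, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions.

    Features outside a coalition S are replaced by baseline values, so
    v(S) is the model output with only the features in S "switched on".
    Exhaustive enumeration is feasible only for a handful of features.
    """
    n = len(x)

    def v(S):
        z = baseline.copy()
        for i in S:
            z[i] = x[i]
        return model(z)

    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight: |S|! (n-|S|-1)! / n!
                w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                phi[i] += w * (v(S + (i,)) - v(S))
    return phi

# Hypothetical linear recurrence-risk model over (tumor volume, age, prior-treatment flag)
risk = lambda z: 0.05 * z[0] + 0.01 * z[1] + 0.3 * z[2]
patient = np.array([40.0, 65.0, 1.0])
cohort_mean = np.array([25.0, 60.0, 0.0])   # baseline: cohort-average feature values
phi = shapley_values(risk, patient, cohort_mean)
# Efficiency property: contributions sum to prediction minus baseline prediction
assert abs(phi.sum() - (risk(patient) - risk(cohort_mean))) < 1e-9
```

For a linear model, each Shapley value reduces to the coefficient times the feature's deviation from baseline, which makes the toy example easy to verify by hand.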
Recent research also explores uncertainty quantification in model outputs, either through Bayesian DL or using Monte Carlo dropout during inference. This produces probabilistic segmentation maps or confidence intervals for dose predictions, supporting clinicians in assessing which automated outputs warrant further scrutiny or consensus review. In a recent study, two Bayesian DL models were assessed alongside eight uncertainty measures, utilizing 292 PET/CT scans comprising a sizeable cross-institutional dataset to investigate their RT approach for oropharyngeal cancer treatment. The study accurately estimated the quality of their novel DL segmentation in 86.6% of cases; more importantly, however, it successfully recognized areas of interest and cases where the DL framework generated low certainty and therefore increased probability of poor performance [75].
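The Monte Carlo dropout idea is simple to sketch: keep dropout active at inference, run many stochastic forward passes, and summarize the spread of the predictions. The toy model below (a sigmoid of a weighted sum, with dropout applied to the inputs for simplicity) is an illustrative assumption, not any cited architecture:

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_dropout_predict(predict_fn, x, n_samples=100, p_drop=0.2):
    """Monte Carlo dropout at inference: keep dropout active, run many
    stochastic forward passes, and report mean prediction plus uncertainty."""
    preds = []
    for _ in range(n_samples):
        mask = rng.random(x.shape) > p_drop                 # random dropout mask
        preds.append(predict_fn(x * mask / (1 - p_drop)))   # inverted-dropout scaling
    preds = np.array(preds)
    mean = preds.mean(axis=0)
    # Binary predictive entropy of the averaged probability: high = low confidence
    eps = 1e-12
    entropy = -(mean * np.log(mean + eps) + (1 - mean) * np.log(1 - mean + eps))
    return mean, preds.std(axis=0), entropy

# Hypothetical voxel-wise "tumor probability" model (sigmoid of a weighted sum)
w = np.array([0.8, -0.5, 0.3])
prob = lambda v: 1.0 / (1.0 + np.exp(-v @ w))
voxel_features = np.array([1.2, 0.4, 2.0])
mean, std, ent = mc_dropout_predict(prob, voxel_features)
```

Applied voxel-wise across a segmentation volume, the standard deviation or entropy map flags regions where the automated contour warrants closer review, mirroring the quality-flagging behavior reported in the Bayesian DL study above.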
Overall, there is a push to integrate these explainability tools into AI-based RT planning software as standard features. Transparent outputs allow for cross-checking, identify hidden model biases, and reveal avenues for model optimization and re-training, thereby increasing the likelihood of clinician uptake.

AI-Driven Solutions and Current Trends in Technology

While CNNs remain the standard for RT planning in glioblastoma, alternative AI-driven solutions are also being developed to address the challenges in the current workflow. For example, intraoperative imaging combined with AI-driven segmentation could enhance tumor boundary detection in real time. In RT planning, DL models trained on consensus-derived contours could standardize target definition and reduce inter-observer variability. Real-time multimodal imaging can be combined with DL models to predict tumor trajectory and adjust treatment dynamically. ML applied to multi-omics data could help further characterize tumors, helping guide RT planning and treatment personalization. Such integrative approaches could pave the way for AI-driven, personalized RT planning that addresses the current challenges that physicians face in glioblastoma management and treatment.
CNNs are limited by their reliance on local receptive fields to segment tumors with poorly defined or infiltrative margins, but recent advances have incorporated self-attention mechanisms and transformer-based architectures to capture long-range dependencies and contextual information, improving boundary delineation [76]. Additionally, consensus learning frameworks can integrate outputs from multiple models or annotators to reduce interobserver variability and enhance segmentation reliability [77]. DL-based harmonization and normalization techniques can also minimize the impact of scanner- and protocol-related variability, enhancing reproducibility across institutions [78]. Collectively, these approaches address one of the key limitations of traditional CNNs and represent an important step toward AI-driven tumor segmentation.
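The long-range dependency modeling mentioned above comes from scaled dot-product self-attention, in which every token (e.g., an image-patch embedding) attends to every other token regardless of spatial distance. A minimal single-head numpy sketch, with hypothetical patch embeddings and randomly initialized projection matrices:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    Each row of X is a token (e.g., an image-patch embedding in a vision
    transformer). Every token attends to every other token, so features at
    one tumor margin can inform predictions at a spatially distant margin.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                           # pairwise patch affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                                        # context-mixed features

rng = np.random.default_rng(0)
patches = rng.random((6, 8))                   # 6 hypothetical patch embeddings, dim 8
Wq, Wk, Wv = (rng.random((8, 8)) for _ in range(3))
out = self_attention(patches, Wq, Wk, Wv)
```

This contrasts with a convolution, whose receptive field grows only gradually with depth; the attention weights couple all patch pairs in a single layer, which is what improves delineation of diffuse, infiltrative margins.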
Another promising solution involves integrating multi-omics data into ML frameworks to guide RT planning and improve tumor classification. For example, the integrative glioblastoma subtype classifier leveraged both gene expression (transcriptomic) and DNA methylation (epigenomic) data in a multi-omics model [79]. Using Random Forest for feature selection and Nearest Shrunken Centroid for classification, integrative glioblastoma subtype classifier achieved a high mean AUC of 0.96, outperforming classifiers built on either data modality alone. The authors utilized only five features per subtype and were able to produce a highly accurate, cost-effective model from large-scale genomics data. This approach provides a template for merging multi-omics data with imaging-driven predictions to enhance tumor classification and segmentation accuracy. This can allow for patient-specific risk stratification and dose personalization, especially as the accessibility of high-throughput molecular testing improves.
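The two-stage design of the subtype classifier, feature selection followed by centroid-based classification, can be sketched on synthetic data. Here a simple class-separation score stands in for Random Forest importance, and plain nearest-centroid classification stands in for the shrunken-centroid variant; the "omics" matrix is entirely synthetic:

```python
import numpy as np

def select_top_features(X, y, k):
    """Rank features by a simple class-separation score (|mean difference| /
    pooled std), a stand-in for Random Forest importance, and keep the top k."""
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    s = X[y == 0].std(0) + X[y == 1].std(0) + 1e-8
    return np.argsort(-np.abs(m0 - m1) / s)[:k]

def nearest_centroid_fit_predict(Xtr, ytr, Xte):
    """Assign each test sample to the class with the closest feature centroid."""
    centroids = np.array([Xtr[ytr == c].mean(0) for c in np.unique(ytr)])
    d = ((Xte[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d.argmin(1)

# Synthetic two-subtype "omics" matrix: 40 samples x 20 features,
# where only the first 3 features actually separate the subtypes
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 20)
X = rng.normal(size=(40, 20))
X[y == 1, :3] += 3.0
idx = select_top_features(X, y, k=3)
pred = nearest_centroid_fit_predict(X[:, idx], y, X[:, idx])
accuracy = (pred == y).mean()
```

Restricting the classifier to a handful of discriminative features, as the published model did with five features per subtype, is what makes the approach cost-effective when scaled to real transcriptomic and methylation panels.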
Next, adaptive radiotherapy accounts for temporal changes in tumor position, volume, and response over the course of treatment. Through multimodal imaging with PET or MRI, adaptive radiotherapy captures high-resolution datasets to evaluate patient-specific, anatomical changes in tumor shapes, borders, and locations throughout treatment [80,81]. State-of-the-art systems can quickly process real-time imaging data and even optimize radiation beam placement and intensity, making in-session adjustments to the treatment plan and accommodating daily variations in the patient’s anatomy [82]. Continuous tracking offers real-time feedback, enabling rapid correction if there are any unexpected factors or significant deviations from the original treatment plan. Guevara et al. investigated whether adaptive radiotherapy could reduce the RT dose with the aim of improving post-RT cognitive function [83]. They evaluated 10 glioblastoma patients who previously received RT treatment over six weeks without adaptation and simulated weekly plans that adjusted the dosage according to the shrinking tumor. While still targeting the cancerous tissue, the mean and maximum doses administered to the hippocampus and brain were significantly reduced for the adjusted plan. Therefore, incorporating adaptive radiotherapy into pre-existing CNN architectures can address the limitations of static treatment, possibly mitigating the neurocognitive side effects of RT for patients.
Lastly, the emergence of foundation models and latent diffusion architectures represents the latest trend in AI for glioblastoma treatment. Foundation models are large-scale models pre-trained on millions of images; they require fewer labeled examples and demonstrate improved standardization and data efficiency [84]. For example, the Segment Anything foundation model, trained only on object segmentation in 2D photographs, achieved high accuracy for interactive glioma MRI segmentation on the BRATS segmentation challenge [85]. In parallel, latent diffusion models generate 3D multi-modal brain MRIs and their corresponding masks to augment scarce datasets. Diffusion models are trained by adding noise to an image in a series of iterative steps and learning to reverse the process, gradually denoising a noise vector back into an image. This methodology allows them to capture complex, high-dimensional structures and generate synthetic images from the underlying data distribution, boosting both the quantity and quality of training data for complex tasks like tumor segmentation [86]. These generative models underpin newer pipelines for auto segmentation and synthetic-CT/field optimization, complementing foundation models and feeding downstream dose-prediction networks. Together, these solutions highlight the growing role of DL in advancing and redefining the glioblastoma treatment workflow.
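The noising half of the diffusion training process has a simple closed form. The sketch below applies the standard DDPM forward process to a toy image-plus-mask pair (the data, noise schedule parameters, and time step are illustrative assumptions); training a denoising network to invert this process is what enables synthetic data generation:

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Closed-form DDPM forward process: sample from q(x_t | x_0).

    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,
    where alpha_bar_t is the cumulative product of (1 - beta). A trained
    network learns to predict eps and invert this, step by step, turning
    pure noise into a synthetic image (and, here, its paired mask).
    """
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps, eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)       # common linear noise schedule
slice_and_mask = rng.random((2, 32, 32))    # toy MRI slice plus paired mask
xt, eps = forward_diffuse(slice_and_mask, t=999, betas=betas, rng=rng)
# At the final time step the sample is almost pure Gaussian noise
```

Latent diffusion models apply the same mechanism in a compressed latent space rather than pixel space, which is what makes 3D multi-modal generation computationally tractable.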

Concluding Remarks

Deep learning models have the power to dramatically streamline clinical workflow and are already being deployed to do so. Manual segmentation is redundant, tedious, and time consuming for physicians, and convolutional neural networks offer the perfect framework to rapidly automate this task with minimal risk due to standardized treatment guidelines and strict interprofessional review boards. In contrast, the outlook for treatment response prediction and personalized artificial intelligence generated therapy regimens remains uncertain due to the necessity for substantial clinical data, likely requiring randomized clinical trials to ensure patient safety and efficacy if any changes are to be made to treatment guidelines. Still, deep learning can dramatically optimize clinical workflow with auto segmentation, offering physicians the ability to spend more time with their patients where their time matters most.
Cross-institutional data annotation, sharing, and validation is a crucial step toward improving the performance and generalizability of deep learning models for glioblastoma radiotherapy planning. These collaborative efforts account for inter-institutional variability in imaging acquisition protocols, hardware, and preprocessing methods, which otherwise increase the risk of model overfitting. By incorporating diverse datasets, models can be better adapted to highly variable clinical scenarios, including those in rural or resource-limited settings where imaging quality and technique may differ significantly. While larger academic hospitals have begun deploying deep learning models trained on proprietary datasets, broader collaboration is essential to ensure equitable access to high-quality, artificial intelligence assisted care. Further, bifurcation within single institution practices and fragmentation between multiple institutions carry risk of lowering generalizability and external validity.
It is important to emphasize that, in the absence of prospective data validated by clinical trials, standard radiotherapy protocols established by the Radiation Therapy Oncology Group must remain the foundation of treatment planning. Such guidelines require 2 cm margins around visible tumor boundaries, and any artificially generated segmentation that reduces this margin would be subject to rigorous clinical review. As such, all segmentation outputs, regardless of whether they are clinician-derived or model-assisted, undergo final approval by the radiation oncology team, including the attending physician, dosimetrists, and medical physicists. Outputs must comply with established standards of care in the absence of compelling evidence substantiated by randomized controlled trials.
In addition to technical concerns, there are also critical workflow and human-factor limitations that arise once artificial intelligence systems are introduced into clinical practice. One key risk is automation bias, where clinicians may place excessive trust in artificially generated contours or treatment plans, potentially accepting results without sufficient review [87]. This becomes especially problematic when the system’s output appears precise but lacks contextual nuance or clinical appropriateness. Poorly designed user interfaces can further amplify this issue by making artificial intelligence tools difficult to navigate or inefficient to use, which can lead to passive acceptance rather than informed engagement. Even when models perform well in controlled environments, they may still fall short of being clinically acceptable if they are not designed to integrate smoothly into real-world workflows, support clinician autonomy, and promote active decision-making. A concise overview of the pros and cons of artificial intelligence-based methods vs. manual methods is provided in Table 2.
Deep learning methods have served as state-of-the-art architecture in medical computer vision for over a decade. Numerous studies have demonstrated their effectiveness in tumor segmentation, radiogenomic prediction, and dose planning. Vision transformers are now emerging as a disruptive alternative to auto segmentation models, with growing evidence suggesting their ability to outperform convolutional neural networks due to their improved contextual understanding across spatially distant regions of an image. A significant limitation of vision transformers is their high data demand, requiring expansive, high-quality annotated datasets. As larger annotated imaging datasets become available through shared institutional repositories and as data augmentation techniques continue to evolve, vision transformers are likely to emerge as a viable competitor for auto segmentation in the coming years. This transition will hinge on continued investment in data infrastructure and collaborative research networks.
Ultimately, the integration of biologically informed mathematical modeling, multimodal imaging, and genomic data represents a significant evolution in glioblastoma radiotherapy. By capturing the heterogeneity of tumor infiltration and biological behavior, these frameworks can enhance tumor targeting, minimize toxicity, and advance traditional approaches that have failed to improve patient survival for decades [41]. In parallel, coordinating standards for data curation and artificial intelligence validation within radiation oncology and neuro-oncology communities will be essential for clinical translation. Organizations such as the American Society of Clinical Oncology’s information technology and artificial intelligence initiatives, the National Cancer Institute’s Imaging Data Commons, and the American Society for Radiation Oncology are well-positioned to serve as connective tissue between academic medical centers and large practice groups. Establishing such a formalized community of practice could accelerate adoption, minimize redundancy, and ensure that advances in artificial intelligence for glioblastoma radiotherapy are both reproducible and clinically actionable.

Conflict of Interest

Nikos Paragios is a stockholder and employee of TheraPanacea.

Abbreviations

Artificial Intelligence (AI), Radiotherapy (RT), Clinical Target Volume (CTV), Gross Target Volume (GTV), Deep Learning (DL), Convolutional Neural Network (CNN), Machine Learning (ML), Dice Similarity Coefficient (DSC), Brain Tumor Segmentation (BRATS), Area Under the Curve (AUC)

References

  1. Kanderi T, Munakomi S, Gupta V. Glioblastoma Multiforme. StatPearls. Treasure Island (FL): StatPearls Publishing; 2025.
  2. McKinnon C, Nandhabalan M, Murray SA, Plaha P. Glioblastoma: clinical presentation, diagnosis, and management. BMJ. 2021;374:n1560. Epub 20210714. PubMed PMID: 34261630. [CrossRef]
  3. Brown TJ, Brennan MC, Li M, Church EW, Brandmeir NJ, Rakszawski KL, Patel AS, Rizk EB, Suki D, Sawaya R, Glantz M. Association of the Extent of Resection With Survival in Glioblastoma: A Systematic Review and Meta-analysis. JAMA Oncol. 2016;2(11):1460–9. PubMed PMID: 27310651; PMCID: PMC6438173. [CrossRef]
  4. Weller M, van den Bent M, Preusser M, Le Rhun E, Tonn JC, Minniti G, Bendszus M, Balana C, Chinot O, Dirven L, French P, Hegi ME, Jakola AS, Platten M, Roth P, Rudà R, Short S, Smits M, Taphoorn MJB, von Deimling A, Westphal M, Soffietti R, Reifenberger G, Wick W. EANO guidelines on the diagnosis and treatment of diffuse gliomas of adulthood. Nat Rev Clin Oncol. 2021;18(3):170–86. Epub 20201208. PubMed PMID: 33293629; PMCID: PMC7904519. [CrossRef]
  5. Peeken JC, Molina-Romero M, Diehl C, Menze BH, Straube C, Meyer B, Zimmer C, Wiestler B, Combs SE. Deep learning derived tumor infiltration maps for personalized target definition in Glioblastoma radiotherapy. Radiotherapy and Oncology. 2019;138:166–72. [CrossRef]
  6. Rončević A, Koruga N, Soldo Koruga A, Rončević R, Rotim T, Šimundić T, Kretić D, Perić M, Turk T, Štimac D. Personalized Treatment of Glioblastoma: Current State and Future Perspective. Biomedicines. 2023;11(6):1579. [CrossRef]
  7. Erices JI, Bizama C, Niechi I, Uribe D, Rosales A, Fabres K, Navarro-Martínez G, Torres Á, San Martín R, Roa JC, Quezada-Monrás C. Glioblastoma Microenvironment and Invasiveness: New Insights and Therapeutic Targets. Int J Mol Sci. 2023;24(8). Epub 20230411. PubMed PMID: 37108208; PMCID: PMC10139189. [CrossRef]
  8. Fathi Kazerooni A, Nabil M, Zeinali Zadeh M, Firouznia K, Azmoudeh-Ardalan F, Frangi AF, Davatzikos C, Saligheh Rad H. Characterization of active and infiltrative tumorous subregions from normal tissue in brain gliomas using multiparametric MRI. J Magn Reson Imaging. 2018;48(4):938–50. PubMed PMID: 29412496; PMCID: PMC6081259. Epub 20180207. [CrossRef]
  9. Kruser TJ, Bosch WR, Badiyan SN, Bovi JA, Ghia AJ, Kim MM, Solanki AA, Sachdev S, Tsien C, Wang TJC, Mehta MP, McMullen KP. NRG brain tumor specialists consensus guidelines for glioblastoma contouring. J Neurooncol. 2019;143(1):157–66. [CrossRef]
  10. Poel R, Rüfenacht E, Ermis E, Müller M, Fix MK, Aebersold DM, Manser P, Reyes M. Impact of random outliers in auto-segmented targets on radiotherapy treatment plans for glioblastoma. Radiat Oncol. 2022;17(1):170. [CrossRef]
  11. Sidibe I, Tensaouti F, Gilhodes J, Cabarrou B, Filleron T, Desmoulin F, Ken S, Noël G, Truc G, Sunyach MP, Charissoux M, Magné N, Lotterie JA, Roques M, Péran P, Cohen-Jonathan Moyal E, Laprie A. Pseudoprogression in GBM versus true progression in patients with glioblastoma: A multiapproach analysis. Radiother Oncol. 2023;181:109486. Epub 20230124. PubMed PMID: 36706959. [CrossRef]
  12. Cramer CK, Cummings TL, Andrews RN, Strowd R, Rapp SR, Shaw EG, Chan MD, Lesser GJ. Treatment of Radiation-Induced Cognitive Decline in Adult Brain Tumor Patients. Curr Treat Options Oncol. 2019;20(5):42. Epub 20190408. PubMed PMID: 30963289; PMCID: PMC6594685. [CrossRef]
  13. Kickingereder P, Isensee F, Tursunova I, Petersen J, Neuberger U, Bonekamp D, Brugnara G, Schell M, Kessler T, Foltyn M, Harting I, Sahm F, Prager M, Nowosielski M, Wick A, Nolden M, Radbruch A, Debus J, Schlemmer H-P, Heiland S, Platten M, Von Deimling A, Van Den Bent MJ, Gorlia T, Wick W, Bendszus M, Maier-Hein KH. Automated quantitative tumour response assessment of MRI in neuro-oncology with artificial neural networks: a multicentre, retrospective study. The Lancet Oncology. 2019;20(5):728–40. [CrossRef]
  14. Mansoorian S, Schmidt M, Weissmann T, Delev D, Heiland DH, Coras R, Stritzelberger J, Saake M, Höfler D, Schubert P, Schmitter C, Lettmaier S, Filimonova I, Frey B, Gaipl US, Distel LV, Semrau S, Bert C, Eze C, Schönecker S, Belka C, Blümcke I, Uder M, Schnell O, Dörfler A, Fietkau R, Putz F. Reirradiation for recurrent glioblastoma: the significance of the residual tumor volume. J Neurooncol. 2025. [CrossRef]
  15. Shaver M, Kohanteb P, Chiou C, Bardis M, Chantaduly C, Bota D, Filippi C, Weinberg B, Grinband J, Chow D, Chang P. Optimizing Neuro-Oncology Imaging: A Review of Deep Learning Approaches for Glioma Imaging. Cancers. 2019;11(6):829. [CrossRef]
  16. Lu S-L, Xiao F-R, Cheng JC-H, Yang W-C, Cheng Y-H, Chang Y-C, Lin J-Y, Liang C-H, Lu J-T, Chen Y-F, Hsu F-M. Randomized multi-reader evaluation of automated detection and segmentation of brain tumors in stereotactic radiosurgery with deep neural networks. Neuro-Oncology. 2021;23(9):1560–8. [CrossRef]
  17. Doolan PJ, Charalambous S, Roussakis Y, Leczynski A, Peratikou M, Benjamin M, Ferentinos K, Strouthos I, Zamboglou C, Karagiannis E. A clinical evaluation of the performance of five commercial artificial intelligence contouring systems for radiotherapy. Front Oncol. 2023;13:1213068. Epub 20230804. PubMed PMID: 37601695; PMCID: PMC10436522. [CrossRef]
  18. Menze BH, Jakab A, Bauer S, Kalpathy-Cramer J, Farahani K, Kirby J, Burren Y, Porz N, Slotboom J, Wiest R, Lanczi L, Gerstner E, Weber M-A, Arbel T, Avants BB, Ayache N, Buendia P, Collins DL, Cordier N, Corso JJ, Criminisi A, Das T, Delingette H, Demiralp C, Durst CR, Dojat M, Doyle S, Festa J, Forbes F, Geremia E, Glocker B, Golland P, Guo X, Hamamci A, Iftekharuddin KM, Jena R, John NM, Konukoglu E, Lashkari D, Mariz JA, Meier R, Pereira S, Precup D, Price SJ, Raviv TR, Reza SMS, Ryan M, Sarikaya D, Schwartz L, Shin H-C, Shotton J, Silva CA, Sousa N, Subbanna NK, Szekely G, Taylor TJ, Thomas OM, Tustison NJ, Unal G, Vasseur F, Wintermark M, Ye DH, Zhao L, Zhao B, Zikic D, Prastawa M, Reyes M, Van Leemput K. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS). IEEE Trans Med Imaging. 2015;34(10):1993–2024. [CrossRef]
  19. Marey A, Arjmand P, Alerab ADS, Eslami MJ, Saad AM, Sanchez N, Umair M. Explainability, transparency and black box challenges of AI in radiology: impact on patient care in cardiovascular radiology. Egypt J Radiol Nucl Med. 2024;55(1):183. [CrossRef]
  20. Bakas S, Reyes M, Jakab A, Bauer S, Rempfler M, Crimi A, Shinohara RT, Berger C, Ha SM, Rozycki M, Prastawa M, Alberts E, Lipkova J, Freymann J, Kirby J, Bilello M, Fathallah-Shaykh H, Wiest R, Kirschke J, Wiestler B, Colen R, Kotrotsou A, Lamontagne P, Marcus D, Milchenko M, Nazeri A, Weber M-A, Mahajan A, Baid U, Gerstner E, Kwon D, Acharya G, Agarwal M, Alam M, Albiol A, Albiol A, Albiol FJ, Alex V, Allinson N, Amorim PHA, Amrutkar A, Anand G, Andermatt S, Arbel T, Arbelaez P, Avery A, Azmat M, Pranjal B, Bai W, Banerjee S, Barth B, Batchelder T, Batmanghelich K, Battistella E, Beers A, Belyaev M, Bendszus M, Benson E, Bernal J, Bharath HN, Biros G, Bisdas S, Brown J, Cabezas M, Cao S, Cardoso JM, Carver EN, Casamitjana A, Castillo LS, Catà M, Cattin P, Cerigues A, Chagas VS, Chandra S, Chang Y-J, Chang S, Chang K, Chazalon J, Chen S, Chen W, Chen JW, Chen Z, Cheng K, Choudhury AR, Chylla R, Clérigues A, Colleman S, Colmeiro RGR, Combalia M, Costa A, Cui X, Dai Z, Dai L, Daza LA, Deutsch E, Ding C, Dong C, Dong S, Dudzik W, Eaton-Rosen Z, Egan G, Escudero G, Estienne T, Everson R, Fabrizio J, Fan Y, Fang L, Feng X, Ferrante E, Fidon L, Fischer M, French AP, Fridman N, Fu H, Fuentes D, Gao Y, Gates E, Gering D, Gholami A, Gierke W, Glocker B, Gong M, González-Villá S, Grosges T, Guan Y, Guo S, Gupta S, Han W-S, Han IS, Harmuth K, He H, Hernández-Sabaté A, Herrmann E, Himthani N, Hsu W, Hsu C, Hu X, Hu X, Hu Y, Hu Y, Hua R, Huang T-Y, Huang W, Huffel SV, Huo Q, Vivek HV, Iftekharuddin KM, Isensee F, Islam M, Jackson AS, Jambawalikar SR, Jesson A, Jian W, Jin P, Jose VJM, Jungo A, Kainz B, Kamnitsas K, Kao P-Y, Karnawat A, Kellermeier T, Kermi A, Keutzer K, Khadir MT, Khened M, Kickingereder P, Kim G, King N, Knapp H, Knecht U, Kohli L, Kong D, Kong X, Koppers S, Kori A, Krishnamurthi G, Krivov E, Kumar P, Kushibar K, Lachinov D, Lambrou T, Lee J, Lee C, Lee Y, Lee M, Lefkovits S, Lefkovits L, Levitt J, Li T, Li H, Li W, Li H, Li X, Li Y, Li H, Li Z, Li X, Li 
Z, Li X, Lin Z-S, Lin F, Lio P, Liu C, Liu B, Liu X, Liu M, Liu J, Liu L, Llado X, Lopez MM, Lorenzo PR, Lu Z, Luo L, Luo Z, Ma J, Ma K, Mackie T, Madabushi A, Mahmoudi I, Maier-Hein KH, Maji P, Mammen CP, Mang A, Manjunath BS, Marcinkiewicz M, McDonagh S, McKenna S, McKinley R, Mehl M, Mehta S, Mehta R, Meier R, Meinel C, Merhof D, Meyer C, Miller R, Mitra S, Moiyadi A, Molina-Garcia D, Monteiro MAB, Mrukwa G, Myronenko A, Nalepa J, Ngo T, Nie D, Ning H, Niu C, Nuechterlein NK, Oermann E, Oliveira A, Oliveira DDC, Oliver A, Osman AFI, Ou Y-N, Ourselin S, Paragios N, Park MS, Paschke B, Pauloski JG, Pawar K, Pawlowski N, Pei L, Peng S, Pereira SM, Perez-Beteta J, Perez-Garcia VM, Pezold S, Pham B, Phophalia A, Piella G, Pillai GN, Piraud M, Pisov M, Popli A, Pound MP, Pourreza R, Prasanna P, Prkovska V, Pridmore TP, Puch S, Puybareau É, Qian B, Qiao X, Rajchl M, Rane S, Rebsamen M, Ren H, Ren X, Revanuru K, Rezaei M, Rippel O, Rivera LC, Robert C, Rosen B, Rueckert D, Safwan M, Salem M, Salvi J, Sanchez I, Sánchez I, Santos HM, Sartor E, Schellingerhout D, Scheufele K, Scott MR, Scussel AA, Sedlar S, Serrano-Rubio JP, Shah NJ, Shah N, Shaikh M, Shankar BU, Shboul Z, Shen H, Shen D, Shen L, Shen H, Shenoy V, Shi F, Shin HE, Shu H, Sima D, Sinclair M, Smedby O, Snyder JM, Soltaninejad M, Song G, Soni M, Stawiaski J, Subramanian S, Sun L, Sun R, Sun J, Sun K, Sun Y, Sun G, Sun S, Suter YR, Szilagyi L, Talbar S, Tao D, Teng Z, Thakur S, Thakur MH, Tharakan S, Tiwari P, Tochon G, Tran T, Tsai YM, Tseng K-L, Tuan TA, Turlapov V, Tustison N, Vakalopoulou M, Valverde S, Vanguri R, Vasiliev E, Ventura J, Vera L, Vercauteren T, Verrastro CA, Vidyaratne L, Vilaplana V, Vivekanandan A, Wang G, Wang Q, Wang CJ, Wang W, Wang D, Wang R, Wang Y, Wang C, Wen N, Wen X, Weninger L, Wick W, Wu S, Wu Q, Wu Y, Xia Y, Xu Y, Xu X, Xu P, Yang T-L, Yang X, Yang H-Y, Yang J, Yang H, Yang G, Yao H, Ye X, Yin C, Young-Moxon B, Yu J, Yue X, Zhang S, Zhang A, Zhang K, Zhang X, Zhang L, Zhang X, 
Zhang Y, Zhang L, Zhang J, Zhang X, Zhang T, Zhao S, Zhao Y, Zhao X, Zhao L, Zheng Y, Zhong L, Zhou C, Zhou X, Zhou F, Zhu H, Zhu J, Zhuge Y, Zong W, Kalpathy-Cramer J, Farahani K, Davatzikos C, Leemput KV, Menze B. Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge. Apollo—University of Cambridge Repository; 2018.
  21. Dora L, Agrawal S, Panda R, Abraham A. State-of-the-Art Methods for Brain Tissue Segmentation: A Review. IEEE Rev Biomed Eng. 2017;10:235–49. [CrossRef]
  22. Işın A, Direkoğlu C, Şah M. Review of MRI-based Brain Tumor Image Segmentation Using Deep Learning Methods. Procedia Computer Science. 2016;102:317–24. [CrossRef]
  23. Baid U, Ghodasara S, Mohan S, Bilello M, Calabrese E, Colak E, Farahani K, Kalpathy-Cramer J, Kitamura FC, Pati S, Prevedello L, Rudie J, Sako C, Shinohara R, Bergquist T, Chai R, Eddy J, Elliott J, Reade W, Schaffter T, Yu T, Zheng J, Davatzikos C, Mongan J, Hess C, Cha S, Villanueva-Meyer J, Freymann JB, Kirby JS, Wiestler B, Crivellaro P, Colen RR, Kotrotsou A, Marcus D, Milchenko M, Nazeri A, Fathallah-Shaykh H, Wiest R, Jakab A, Weber M-A, Mahajan A, Menze B, Flanders AE, Bakas S. RSNA-ASNR-MICCAI-BraTS-2021. The Cancer Imaging Archive; 2023.
  24. Isensee F, Jaeger PF, Kohl SAA, Petersen J, Maier-Hein KH. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods. 2021;18(2):203–11. [CrossRef]
  25. Zeineldin RA, Karar ME, Burgert O, Mathis-Ullrich F. Multimodal CNN Networks for Brain Tumor Segmentation in MRI: A BraTS 2022 Challenge Solution. In: Bakas S, Crimi A, Baid U, Malec S, Pytlarz M, Baheti B, Zenk M, Dorent R, editors. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. Cham: Springer Nature Switzerland; 2023. p. 127–37.
  26. Ferreira A, Solak N, Li J, Dammann P, Kleesiek J, Alves V, Egger J. Enhanced Data Augmentation Using Synthetic Data for Brain Tumour Segmentation. In: Baid U, Dorent R, Malec S, Pytlarz M, Su R, Wijethilake N, Bakas S, Crimi A, editors. Brain Tumor Segmentation, and Cross-Modality Domain Adaptation for Medical Image Segmentation. Cham: Springer Nature Switzerland; 2024. p. 79–93.
  27. Moradi N, Ferreira A, Puladi B, Kleesiek J, Fatemizadeh E, Luijten G, Alves V, Egger J. Comparative Analysis of nnUNet and MedNeXt for Head and Neck Tumor Segmentation in MRI-Guided Radiotherapy. In: Wahid KA, Dede C, Naser MA, Fuller CD, editors. Head and Neck Tumor Segmentation for MR-Guided Applications. Cham: Springer Nature Switzerland; 2025. p. 136–53.
  28. Xue J, Wang B, Ming Y, Liu X, Jiang Z, Wang C, Liu X, Chen L, Qu J, Xu S, Tang X, Mao Y, Liu Y, Li D. Deep learning–based detection and segmentation-assisted management of brain metastases. Neuro-Oncology. 2020;22(4):505–14. [CrossRef]
  29. Naceur MB, Saouli R, Akil M, Kachouri R. Fully Automatic Brain Tumor Segmentation using End-To-End Incremental Deep Neural Networks in MRI images. Computer Methods and Programs in Biomedicine. 2018;166:39–49. [CrossRef]
  30. Chang K, Beers AL, Bai HX, Brown JM, Ly KI, Li X, Senders JT, Kavouridis VK, Boaro A, Su C, Bi WL, Rapalino O, Liao W, Shen Q, Zhou H, Xiao B, Wang Y, Zhang PJ, Pinho MC, Wen PY, Batchelor TT, Boxerman JL, Arnaout O, Rosen BR, Gerstner ER, Yang L, Huang RY, Kalpathy-Cramer J. Automatic assessment of glioma burden: a deep learning algorithm for fully automated volumetric and bidimensional measurement. Neuro-Oncology. 2019;21(11):1412–22. [CrossRef]
  31. Ranjbarzadeh R, Bagherian Kasgari A, Jafarzadeh Ghoushchi S, Anari S, Naseri M, Bendechache M. Brain tumor segmentation based on deep learning and an attention mechanism using MRI multi-modalities brain images. Sci Rep. 2021;11(1):10930. [CrossRef]
  32. Deng W, Shi Q, Luo K, Yang Y, Ning N. Brain Tumor Segmentation Based on Improved Convolutional Neural Network in Combination with Non-quantifiable Local Texture Feature. J Med Syst. 2019;43(6):152. [CrossRef]
  33. Zhuge Y, Krauze AV, Ning H, Cheng JY, Arora BC, Camphausen K, Miller RW. Brain tumor segmentation using holistically nested neural networks in MRI images. Medical Physics. 2017;44(10):5234–43. [CrossRef]
  34. Isensee F, Kickingereder P, Wick W, Bendszus M, Maier-Hein KH. Brain Tumor Segmentation and Radiomics Survival Prediction: Contribution to the BRATS 2017 Challenge. In: Crimi A, Bakas S, Kuijf H, Menze B, Reyes M, editors. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. Cham: Springer International Publishing; 2018. p. 287–97.
  35. Pereira S, Pinto A, Alves V, Silva CA. Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images. IEEE Trans Med Imaging. 2016;35(5):1240–51. [CrossRef]
  36. Havaei M, Davy A, Warde-Farley D, Biard A, Courville A, Bengio Y, Pal C, Jodoin P-M, Larochelle H. Brain tumor segmentation with Deep Neural Networks. Medical Image Analysis. 2017;35:18–31. [CrossRef]
  37. Soltaninejad M, Zhang L, Lambrou T, Allinson N, Ye X. Multimodal MRI brain tumor segmentation using random forests with features learned from fully convolutional neural network. arXiv; 2017.
  38. Hussain S, Anwar SM, Majid M, editors. Brain tumor segmentation using cascaded deep convolutional neural network. 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); 2017 Jul; Seogwipo: IEEE.
  39. Tian S, Liu Y, Mao X, Xu X, He S, Jia L, Zhang W, Peng P, Wang J. A multicenter study on deep learning for glioblastoma auto-segmentation with prior knowledge in multimodal imaging. Cancer Science. 2024;115(10):3415–25. [CrossRef]
  40. Bibault J-E, Giraud P. Deep learning for automated segmentation in radiotherapy: a narrative review. British Journal of Radiology. 2024;97(1153):13–20. [CrossRef]
  41. Unkelbach J, Bortfeld T, Cardenas CE, Gregoire V, Hager W, Heijmen B, Jeraj R, Korreman SS, Ludwig R, Pouymayou B, Shusharina N, Söderberg J, Toma-Dasu I, Troost EGC, Vasquez Osorio E. The role of computational methods for automating and improving clinical target volume definition. Radiotherapy and Oncology. 2020;153:15–25. [CrossRef]
  42. Metz M-C, Ezhov I, Peeken JC, Buchner JA, Lipkova J, Kofler F, Waldmannstetter D, Delbridge C, Diehl C, Bernhardt D, Schmidt-Graf F, Gempt J, Combs SE, Zimmer C, Menze B, Wiestler B. Toward image-based personalization of glioblastoma therapy: A clinical and biological validation study of a novel, deep learning-driven tumor growth model. Neurooncol Adv. 2024;6(1):vdad171. [CrossRef]
  43. Lipkova J, Angelikopoulos P, Wu S, Alberts E, Wiestler B, Diehl C, Preibisch C, Pyka T, Combs SE, Hadjidoukas P, Van Leemput K, Koumoutsakos P, Lowengrub J, Menze B. Personalized Radiotherapy Design for Glioblastoma: Integrating Mathematical Tumor Models, Multimodal Scans, and Bayesian Inference. IEEE Trans Med Imaging. 2019;38(8):1875–84. [CrossRef]
  44. Dextraze K, Saha A, Kim D, Narang S, Lehrer M, Rao A, Narang S, Rao D, Ahmed S, Madhugiri V, Fuller CD, Kim MM, Krishnan S, Rao G, Rao A. Spatial habitats from multiparametric MR imaging are associated with signaling pathway activities and survival in glioblastoma. Oncotarget. 2017;8(68):112992–3001. [CrossRef]
  45. Hanahan D. Hallmarks of Cancer: New Dimensions. Cancer Discovery. 2022;12(1):31–46. [CrossRef]
  46. Kwak S, Akbari H, Garcia JA, Mohan S, Dicker Y, Sako C, Matsumoto Y, Nasrallah MP, Shalaby M, O’Rourke DM, Shinohara RT, Liu F, Badve C, Barnholtz-Sloan JS, Sloan AE, Lee M, Jain R, Cepeda S, Chakravarti A, Palmer JD, Dicker AP, Shukla G, Flanders AE, Shi W, Woodworth GF, Davatzikos C. Predicting peritumoral glioblastoma infiltration and subsequent recurrence using deep-learning–based analysis of multi-parametric magnetic resonance imaging. J Med Imag. 2024;11(05). [CrossRef]
  47. Hong JC, Eclov NCW, Dalal NH, Thomas SM, Stephens SJ, Malicki M, Shields S, Cobb A, Mowery YM, Niedzwiecki D, Tenenbaum JD, Palta M. System for High-Intensity Evaluation During Radiation Therapy (SHIELD-RT): A Prospective Randomized Study of Machine Learning–Directed Clinical Evaluations During Radiation and Chemoradiation. JCO. 2020;38(31):3652–61. [CrossRef]
  48. Gutsche R, Lohmann P, Hoevels M, Ruess D, Galldiks N, Visser-Vandewalle V, Treuer H, Ruge M, Kocher M. Radiomics outperforms semantic features for prediction of response to stereotactic radiosurgery in brain metastases. Radiotherapy and Oncology. 2022;166:37–43. [CrossRef]
  49. Yang Z, Zamarud A, Marianayagam NJ, Park DJ, Yener U, Soltys SG, Chang SD, Meola A, Jiang H, Lu W, Gu X. Deep learning-based overall survival prediction in patients with glioblastoma: An automatic end-to-end workflow using pre-resection basic structural multiparametric MRIs. Computers in Biology and Medicine. 2025;185:109436. [CrossRef]
  50. Tsang DS, Tsui G, McIntosh C, Purdie T, Bauman G, Dama H, Laperriere N, Millar B-A, Shultz DB, Ahmed S, Khandwala M, Hodgson DC. A pilot study of machine-learning based automated planning for primary brain tumours. Radiat Oncol. 2022;17(1):3. [CrossRef]
  51. Di Nunno V, Fordellone M, Minniti G, Asioli S, Conti A, Mazzatenta D, Balestrini D, Chiodini P, Agati R, Tonon C, Tosoni A, Gatto L, Bartolini S, Lodi R, Franceschi E. Machine learning in neuro-oncology: toward novel development fields. J Neurooncol. 2022;159(2):333–46. [CrossRef]
  52. Chang K, Bai HX, Zhou H, Su C, Bi WL, Agbodza E, Kavouridis VK, Senders JT, Boaro A, Beers A, Zhang B, Capellini A, Liao W, Shen Q, Li X, Xiao B, Cryan J, Ramkissoon S, Ramkissoon L, Ligon K, Wen PY, Bindra RS, Woo J, Arnaout O, Gerstner ER, Zhang PJ, Rosen BR, Yang L, Huang RY, Kalpathy-Cramer J. Residual Convolutional Neural Network for the Determination of IDH Status in Low- and High-Grade Gliomas from MR Imaging. Clinical Cancer Research. 2018;24(5):1073–81. [CrossRef]
  53. Wong QH-W, Li KK-W, Wang W-W, Malta TM, Noushmehr H, Grabovska Y, Jones C, Chan AK-Y, Kwan JS-H, Huang QJ-Q, Wong GC-H, Li W-C, Liu X-Z, Chen H, Chan DT-M, Mao Y, Zhang Z-Y, Shi Z-F, Ng H-K. Molecular landscape of IDH-mutant primary astrocytoma Grade IV/glioblastomas. Modern Pathology. 2021;34(7):1245–60. [CrossRef]
  54. Ding J, Zhao R, Qiu Q, Chen J, Duan J, Cao X, Yin Y. Developing and validating a deep learning and radiomic model for glioma grading using multiplanar reconstructed magnetic resonance contrast-enhanced T1-weighted imaging: a robust, multi-institutional study. Quant Imaging Med Surg. 2022;12(2):1517–28. [CrossRef]
  55. Zhang X, Yan L-F, Hu Y-C, Li G, Yang Y, Han Y, Sun Y-Z, Liu Z-C, Tian Q, Han Z-Y, Liu L-D, Hu B-Q, Qiu Z-Y, Wang W, Cui G-B. Optimizing a machine learning based glioma grading system using multi-parametric MRI histogram and texture features. Oncotarget. 2017;8(29):47816–30. [CrossRef]
  56. Fathi Kazerooni A, Akbari H, Hu X, Bommineni V, Grigoriadis D, Toorens E, Sako C, Mamourian E, Ballinger D, Sussman R, Singh A, Verginadis II, Dahmane N, Koumenis C, Binder ZA, Bagley SJ, Mohan S, Hatzigeorgiou A, O’Rourke DM, Ganguly T, De S, Bakas S, Nasrallah MP, Davatzikos C. The radiogenomic and spatiogenomic landscapes of glioblastoma and their relationship to oncogenic drivers. Commun Med. 2025;5(1):55. [CrossRef]
  57. Lu J, Zhang Z-Y, Zhong S, Deng D, Yang W-Z, Wu S-W, Cheng Y, Bai Y, Mou Y-G. Evaluating the Diagnostic and Prognostic Value of Peripheral Immune Markers in Glioma Patients: A Prospective Multi-Institutional Cohort Study of 1282 Patients. JIR. 2025;18:7477–92. [CrossRef]
  58. Aman RA, Pratama MG, Satriawan RR, Ardiansyah IR, Suanjaya IKA. Diagnostic and Prognostic Values of miRNAs in High-Grade Gliomas: A Systematic Review. F1000Res. 2025;13:796. [CrossRef]
  59. Hasani F, Masrour M, Jazi K, Ahmadi P, Hosseini SS, Lu VM, Alborzi A. MicroRNA as a potential diagnostic and prognostic biomarker in brain gliomas: a systematic review and meta-analysis. Front Neurol. 2024;15:1357321. [CrossRef]
  60. Lakomy R, Sana J, Hankeova S, Fadrus P, Kren L, Lzicarova E, Svoboda M, Dolezelova H, Smrcka M, Vyzula R, Michalek J, Hajduch M, Slaby O. MiR-195, miR-196b, miR-181c, miR-21 expression levels and O-6-methylguanine-DNA methyltransferase methylation status are associated with clinical outcome in glioblastoma patients. Cancer Science. 2011;102(12):2186–90. [CrossRef]
  61. Lan F, Yue X, Xia T. Exosomal microRNA-210 is a potentially non-invasive biomarker for the diagnosis and prognosis of glioma. Oncol Lett. 2020. [CrossRef]
  62. Zhou Q, Liu J, Quan J, Liu W, Tan H, Li W. MicroRNAs as potential biomarkers for the diagnosis of glioma: A systematic review and meta-analysis. Cancer Science. 2018;109(9):2651–9. [CrossRef]
  63. Velu U, Singh A, Nittala R, Yang J, Vijayakumar S, Cherukuri C, Vance GR, Salvemini JD, Hathaway BF, Grady C, Roux JA, Lewis S. Precision Population Cancer Medicine in Brain Tumors: A Potential Roadmap to Improve Outcomes and Strategize the Steps to Bring Interdisciplinary Interventions. Cureus. 2024. [CrossRef]
  64. Silva PJ, Silva PA, Ramos KS. Genomic and Health Data as Fuel to Advance a Health Data Economy for Artificial Intelligence. BioMed Research International. 2025;2025(1):6565955. [CrossRef]
  65. Silva PJ, Rahimzadeh V, Powell R, Husain J, Grossman S, Hansen A, Hinkel J, Rosengarten R, Ory MG, Ramos KS. Health equity innovation in precision medicine: data stewardship and agency to expand representation in clinicogenomics. Health Research Policy and Systems. 2024;22(1):170. [CrossRef]
  66. Silva P, Janjan N, Ramos KS, Udeani G, Zhong L, Ory MG, Smith ML. External control arms: COVID-19 reveals the merits of using real world evidence in real-time for clinical and public health investigations. Front Med (Lausanne). 2023;10:1198088. Epub 20230706. PubMed PMID: 37484840; PMCID: PMC10359981. [CrossRef]
  67. d’Este SH, Nielsen MB, Hansen AE. Visualizing Glioma Infiltration by the Combination of Multimodality Imaging and Artificial Intelligence, a Systematic Review of the Literature. Diagnostics. 2021;11(4):592. [CrossRef]
  68. Holzinger A, Langs G, Denk H, Zatloukal K, Müller H. Causability and explainability of artificial intelligence in medicine. WIREs Data Min & Knowl. 2019;9(4):e1312. [CrossRef]
  69. AlBadawy EA, Saha A, Mazurowski MA. Deep learning for segmentation of brain tumors: Impact of cross-institutional training and testing. Medical Physics. 2018;45(3):1150–8. [CrossRef]
  70. Bologna G, Hayashi Y. Characterization of Symbolic Rules Embedded in Deep DIMLP Networks: A Challenge to Transparency of Deep Learning. Journal of Artificial Intelligence and Soft Computing Research. 2017;7(4):265–86. [CrossRef]
  71. Cui S, Traverso A, Niraula D, Zou J, Luo Y, Owen D, El Naqa I, Wei L. Interpretable artificial intelligence in radiology and radiation oncology. The British Journal of Radiology. 2023;96(1150):20230142. [CrossRef]
  72. Sangwan H. Quantifying Explainable AI Methods in Medical Diagnosis: A Study in Skin Cancer. Health Informatics; 2024.
  73. Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. Int J Comput Vis. 2020;128(2):336–59. [CrossRef]
  74. Lundberg S, Lee S-I. A Unified Approach to Interpreting Model Predictions. arXiv; 2017.
  75. Sahlsten J, Jaskari J, Wahid KA, Ahmed S, Glerean E, He R, Kann BH, Mäkitie A, Fuller CD, Naser MA, Kaski K. Application of simultaneous uncertainty quantification and segmentation for oropharyngeal cancer use-case with Bayesian deep learning. Commun Med. 2024;4(1):110. [CrossRef]
  76. Alruily M, Mahmoud AA, Allahem H, Mostafa AM, Shabana H, Ezz M. Enhancing Breast Cancer Detection in Ultrasound Images: An Innovative Approach Using Progressive Fine-Tuning of Vision Transformer Models. International Journal of Intelligent Systems. 2024;2024(1):6528752. [CrossRef]
  77. Marin T, Zhuo Y, Lahoud RM, Tian F, Ma X, Xing F, Moteabbed M, Liu X, Grogg K, Shusharina N, Woo J, Lim R, Ma C, Chen YE, El Fakhri G. Deep learning-based GTV contouring modeling inter- and intra- observer variability in sarcomas. Radiother Oncol. 2022;167:269–76. Epub 20211119. PubMed PMID: 34808228; PMCID: PMC8934266. [CrossRef]
  78. Abbasi S, Lan H, Choupan J, Sheikh-Bahaei N, Pandey G, Varghese B. Deep learning for the harmonization of structural MRI scans: a survey. Biomed Eng Online. 2024;23(1):90. Epub 20240831. PubMed PMID: 39217355; PMCID: PMC11365220. [CrossRef]
  79. Ensenyat-Mendez M, Íñiguez-Muñoz S, Sesé B, Marzese DM. iGlioSub: an integrative transcriptomic and epigenomic classifier for glioblastoma molecular subtypes. BioData Mining. 2021;14(1):42. [CrossRef]
  80. Dona Lemus OM, Cao M, Cai B, Cummings M, Zheng D. Adaptive Radiotherapy: Next-Generation Radiotherapy. Cancers. 2024;16(6):1206. [CrossRef]
  81. Weykamp F, Meixner E, Arians N, Hoegen-Saßmannshausen P, Kim J-Y, Tawk B, Knoll M, Huber P, König L, Sander A, Mokry T, Meinzer C, Schlemmer H-P, Jäkel O, Debus J, Hörner-Rieber J. Daily AI-Based Treatment Adaptation under Weekly Offline MR Guidance in Chemoradiotherapy for Cervical Cancer 1: The AIM-C1 Trial. JCM. 2024;13(4):957. [CrossRef]
  82. Vuong W, Gupta S, Weight C, Almassi N, Nikolaev A, Tendulkar RD, Scott JG, Chan TA, Mian OY. Trial in Progress: Adaptive RADiation Therapy with Concurrent Sacituzumab Govitecan (SG) for Bladder Preservation in Patients with MIBC (RAD-SG). International Journal of Radiation Oncology*Biology*Physics. 2023;117(2):e447–e8. [CrossRef]
  83. Guevara B, Cullison K, Maziero D, Azzam GA, De La Fuente MI, Brown K, Valderrama A, Meshman J, Breto A, Ford JC, Mellon EA. Simulated Adaptive Radiotherapy for Shrinking Glioblastoma Resection Cavities on a Hybrid MRI-Linear Accelerator. Cancers (Basel). 2023;15(5). Epub 20230302. PubMed PMID: 36900346; PMCID: PMC10000839. [CrossRef]
  84. Paschali M, Chen Z, Blankemeier L, Varma M, Youssef A, Bluethgen C, Langlotz C, Gatidis S, Chaudhari A. Foundation Models in Radiology: What, How, Why, and Why Not. Radiology. 2025;314(2):e240597. PubMed PMID: 39903075; PMCID: PMC11868850. [CrossRef]
  85. Putz F, Beirami S, Schmidt MA, May MS, Grigo J, Weissmann T, Schubert P, Höfler D, Gomaa A, Hassen BT, Lettmaier S, Frey B, Gaipl US, Distel LV, Semrau S, Bert C, Fietkau R, Huang Y. The Segment Anything foundation model achieves favorable brain tumor auto-segmentation accuracy in MRI to support radiotherapy treatment planning. Strahlenther Onkol. 2025;201(3):255–65. Epub 20241106. PubMed PMID: 39503868; PMCID: PMC11839838. [CrossRef]
  86. Kebaili A, Lapuyade-Lahorgue J, Vera P, Ruan S. Multi-modal MRI synthesis with conditional latent diffusion models for data augmentation in tumor segmentation. Comput Med Imaging Graph. 2025;123:102532. Epub 20250321. PubMed PMID: 40121926. [CrossRef]
  87. Baroudi H, Brock KK, Cao W, Chen X, Chung C, Court LE, El Basha MD, Farhat M, Gay S, Gronberg MP, Gupta AC, Hernandez S, Huang K, Jaffray DA, Lim R, Marquez B, Nealon K, Netherton TJ, Nguyen CM, Reber B, Rhee DJ, Salazar RM, Shanker MD, Sjogreen C, Woodland M, Yang J, Yu C, Zhao Y. Automated Contouring and Planning in Radiation Therapy: What Is ‘Clinically Acceptable’? Diagnostics. 2023;13(4):667. [CrossRef]
  88. Hussain S, Anwar SM, Majid M. Segmentation of glioma tumors in brain using deep convolutional neural network. Neurocomputing. 2018;282:248–61. [CrossRef]
Figure 1. Example radiotherapy segmentation for a patient with glioblastoma. Red: GTV (T1c). Green: GTV (FLAIR). Purple: GTV(T1c) + margin (CTV60), tucked away from the brainstem. Outer red: planning target volume 60 (3 mm margin on CTV60). Light blue: GTV(FLAIR) + 1 cm (CTV50), tucked away from the contralateral brain (the falx cerebri serving as an anatomic barrier). Salmon: planning target volume 50 (3 mm margin on CTV50). A: T1-weighted MRI (axial view). B: FLAIR MRI (axial view). C: Isodose level color key (cGy). D: CT (axial view). E: CT (coronal view). F: CT (sagittal view).
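The margin-expansion workflow described in the Figure 1 caption (GTV expanded by a clinical margin to form the CTV, then by a 3 mm setup margin to form the PTV) can be sketched as isotropic dilation of a binary mask. The function and toy volume below are illustrative assumptions for exposition only, not code from any cited study or treatment planning system.

```python
import numpy as np

def expand_margin(mask: np.ndarray, margin_mm: float, voxel_mm: float) -> np.ndarray:
    """Isotropic margin expansion: a voxel joins the expanded volume if its
    centre lies within margin_mm of any voxel already inside the mask.
    (Brute force; fine for a toy grid, not for clinical volumes.)"""
    inside = np.argwhere(mask)
    out = np.zeros_like(mask, dtype=bool)
    for p in np.indices(mask.shape).reshape(mask.ndim, -1).T:
        # nearest distance (in mm) from this voxel centre to the mask
        d = np.min(np.linalg.norm((inside - p) * voxel_mm, axis=1))
        if d <= margin_mm:
            out[tuple(p)] = True
    return out

# toy 1 mm-voxel grid with a single-voxel "GTV" at the centre
gtv = np.zeros((11, 11, 11), dtype=bool)
gtv[5, 5, 5] = True
ctv = expand_margin(gtv, margin_mm=2.0, voxel_mm=1.0)  # GTV + 2 mm -> CTV
ptv = expand_margin(ctv, margin_mm=3.0, voxel_mm=1.0)  # CTV + 3 mm -> PTV
assert ptv.sum() > ctv.sum() > gtv.sum()  # each margin strictly enlarges the volume
```

Clinical systems additionally edit the expanded contours against anatomic barriers (e.g., cropping CTV60 away from the brainstem, as in the figure), a step omitted in this sketch.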
Figure 2. Personalized precision medicine for patient-specific radiotherapy treatment.
Table 1. Summary of DL models for auto-segmentation of brain tumors, 2016–2025.
Table 2. Summary of pros and cons of manual vs. AI-based radiation treatment planning.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.