1. Introduction
The brain and spinal cord, collectively known as the Central Nervous System (CNS), play a vital role in regulating essential biological processes, including organizing, analyzing, decision-making, and communication across bodily systems [1]. Due to its complex anatomical and functional structure, the human brain is especially vulnerable to disorders that can disrupt these processes [2]. Diseases affecting the CNS, such as stroke, infections, migraines, and particularly brain tumors, pose significant diagnostic and therapeutic challenges [3].
Brain tumors arise from the abnormal proliferation of brain cells and result in mass formation that interferes with normal neurological function. These tumors are broadly classified into primary tumors, which originate within the brain, and secondary (or metastatic) tumors, which spread from other parts of the body [4]. While primary brain tumors may be benign or malignant and often stem from glial or neural cell structures [5], secondary brain tumors are invariably malignant and represent the most prevalent type of CNS cancer [6].
Globally, brain tumors affect approximately 700,000 people, with 86,000 new cases and 16,380 deaths reported in 2019 [7]. Among malignant brain tumors, gliomas are the most common, accounting for nearly 80% of cases [8]. Early and accurate detection of brain tumors is critical for effective treatment planning, yet it remains a major hurdle due to the heterogeneous nature of tumors, which can exhibit both low-grade and high-grade features. Manual diagnosis using Magnetic Resonance Imaging (MRI), although standard, is time-intensive, prone to variability, and highly dependent on radiologist expertise [8].
In recent years, Computer-Aided Diagnosis (CAD) systems have emerged as promising tools to overcome these limitations. By leveraging automation and pattern recognition, CAD systems offer potential benefits such as faster processing, consistent segmentation, and improved diagnostic accuracy, ultimately helping to reduce mortality and the burden on healthcare providers [9]. Accordingly, numerous CAD-based medical image segmentation approaches are being developed and evaluated for brain tumor detection [10].
The advent of Convolutional Neural Networks (CNNs) has significantly advanced the field of medical image analysis. CNN-based architectures, particularly those designed for semantic segmentation, have demonstrated remarkable accuracy and adaptability in various clinical applications, including brain tumor segmentation [11]. Among these, the U-Net architecture has shown exceptional performance in biomedical image segmentation due to its encoder-decoder structure and skip connections that preserve spatial information.
In this study, we focus on brain tumor segmentation using multimodal MRI scans, which poses one of the most complex challenges in medical imaging due to the need for high spatial resolution and tissue differentiation. To address this, we utilize the BraTS 2020 dataset, which provides four MRI modalities (T1, T1ce, T2, and FLAIR) for each patient, along with corresponding ground truth annotations of tumor subregions.
The main contributions of our study are as follows:
- We develop and implement a U-Net-based deep learning model specifically designed for accurate and efficient segmentation of brain tumors using 3D multimodal MRI scans from the BraTS 2020 dataset. The architecture incorporates skip connections that preserve spatial details and enhance the segmentation of complex tumor structures.
- Our model leverages optimized training strategies and hyperparameter tuning to achieve high segmentation performance while minimizing overfitting. This adaptive approach improves model generalization across diverse MRI scans.
- We provide a comprehensive evaluation through visualizations and performance metrics, confirming the model's ability to closely match expert-labeled ground truths and reducing the dependency on manual segmentation.
Through this approach, we aim to demonstrate that deep learning models can significantly reduce the time and labor involved in tumor segmentation, while maintaining clinical-grade accuracy suitable for integration into AI-assisted diagnostic systems.
2. Literature Review
According to [12], automatic brain tumor segmentation of 3D MRI images can benefit clinical professionals in diagnosis, since manual segmentation is difficult, repetitive, and error-prone, while 3D convolutions are memory- and computation-intensive. The proposed work uses a 2D U-Net architecture on the BraTS 2020 dataset to extract tumor areas from healthy tissue in an automated manner. All MRI sequences were tested with the model to identify the best results; the authors achieved 99.41% accuracy and a 93% DSC on the T1 sequence using the Adam optimizer with a learning rate of 0.001, supporting the efficiency of their technique. Different hyperparameters were used to train the model for resilience and performance consistency.
Ref. [13] found that physician experience and expertise affect brain cancer therapy, so an automated tumor detection system is crucial for radiologists and clinicians to identify brain tumors. The approach consists of three stages: preprocessing, ELM-LRF-based tumor classification, and image processing-based tumor area extraction. Initial noise removal was performed using nonlocal means and local smoothing approaches. The second step used ELM-LRF to classify cranial magnetic resonance (MR) images as benign or malignant, and the third segmented the tumors. The study used only mass-containing cranial MR images to save the physician time. Classification accuracy reached 97.18% in experiments, outperforming other recent research in the literature and demonstrating the effectiveness of the strategy for computer-aided brain tumor identification.
Ref. [14] used a CNN model to offer an automated segmentation method. Evaluated on the BraTS 2013 dataset, their method was found to be effective, with Dice coefficient (DC) estimates of 88%, 83%, and 77% for the complete, core, and enhancing regions, respectively.
Ref. [15] introduced a computerized method that can classify aberrant brain tumors as either low-grade glioma (LGG) or high-grade glioma (HGG) and differentiate between a normal and an abnormal brain. Their approach detected HGGs and LGGs with accuracy, specificity, and sensitivity of 99%, 98.03%, and 100%, respectively.
Ref. [16] noted that computer-aided brain tumor identification and surgery planning require correct tumor segmentation. Clinical diagnosis and therapy commonly rely on subjective segmentations, which are inaccurate and unreliable, so an objective, automated brain tumor segmentation method is needed; limited segmentation accuracy, reliance on prior knowledge, and human involvement remain open issues. That study presented a novel coarse-to-fine brain tumor segmentation method, a hierarchical system incorporating preprocessing, deep learning network classification, and postprocessing. The preprocessing step isolates MR image patches and feeds gray-level sequences to the deep learning network; a stacked autoencoder network extracts high-level abstract characteristics from the input and categorizes the image patches; and a morphological filter then segments the resulting binary image. On a real patient dataset, the described method showed improved accuracy and efficiency for brain tumor segmentation.
Ref. [17] noted that gliomas are the most common brain cancers and that successful diagnosis, treatment, and risk factor identification require accurate tumor segmentation and patient survival estimation. Their deep learning system uses MRI data to segment brain tumors and predict survival in glioma patients. Robust and accurate tumor segmentation is achieved using 2D volumetric convolutional neural network designs with a majority rule, which greatly reduces model bias and boosts performance. Their survival prediction method extracts radiomic characteristics from segmented tumor locations and uses a deep-learning-inspired 3D replicator neural network to determine the most effective features. The model accurately segments brain tumors and predicts the outcome of enhancing tumors, and was assessed on the BraTS 2020 benchmark dataset, yielding reliable and promising results.
Ref. [18] found that MRI brain tumor segmentation aids prognosis, treatment planning, tumor density analysis, and patient care. Brain tumors have different forms, shapes, locations, and visual features, making segmentation difficult. Deep Neural Networks (DNNs) can classify images, but their training demands substantial processing power and suffers from gradient diffusion difficulties. An Improved Residual Network (ResNet) is used to segment brain tumors efficiently in that work: information flow, residual blocks, and projection shortcuts are refined, speeding up learning and improving accuracy. On the BraTS 2020 MRI dataset, the proposed model beats CNN and Fully Convolutional Network (FCN) baselines by over 10% in accuracy, recall, and F-measure.
Ref. [19] addressed a binary classification task using brain tumor MRI images. They used VGG16 and AlexNet for feature extraction, applied recursive feature elimination (RFE), and finally used a Support Vector Machine (SVM) for classification, yielding an overall accuracy of 96%.
Ref. [20] used transfer learning and superpixel approaches for tumor identification and segmentation. The superpixel technique was used to divide the tumor into two groups, yielding an average Dice index of 0.93 against the ground truth data.
Ref. [7] suggested a 3D CNN (pre-trained VGG19) framework for tumor extraction, with transfer learning utilized for classification; it achieved an accuracy of 98.32% on the BraTS 2015 dataset.
Ref. [21] suggested a method in which segmentation is carried out using the OKM approach, whose essential elements are two common concepts, Otsu thresholding and K-Means clustering. Outcomes revealed a Dice coefficient of more than 0.70 across the cases.
Ref. [22] found that brain tumor segmentation from 3D images is one of the most significant and challenging medical image processing problems, because manual classification may lead to inaccurate prediction and diagnosis, and the difficulty grows with the amount of data. Due to their range of appearances and similarity to normal tissue, brain tumor areas are difficult to extract from MRI scans. That work presents a modified U-Net architecture within a deep-learning framework for brain tumor identification and segmentation from MRI images, tested on the Medical Image Computing and Computer-Assisted Intervention (MICCAI) BraTS 2020 dataset. Test accuracy on that dataset is 99.4%, and a comparison with previous articles shows their U-Net-based approach outperforming other deep learning techniques.
3. Methodology
The main research approach employed in this study is segmentation-based experimental research, specifically the identification and outlining of brain tumor areas within 3D multimodal MRI images. The study makes use of the BraTS 2020 dataset, which is available to the public and extensively utilized in brain tumor research. The dataset is pre-divided into training and testing subsets and comprises four MRI modalities per patient (T1, T1ce, T2, and FLAIR), together with expert-annotated segmentation masks outlining the tumor subregions.
To carry out tumor segmentation, we used a U-Net-based convolutional neural network specifically tailored for biomedical image segmentation. The model structure includes an encoder path (downsampling) and a decoder path (upsampling). In the encoder path, the model successively decreases the spatial resolution of the input images while extracting high-level features, essentially learning what is in the image; this aids the detection of relevant anatomical structures and tumor features. In the decoder path, the model uses transposed convolutions to upscale the feature maps and combines them with the matching encoder features via skip connections. This step allows the model to recover where the tumor areas are in the image and reconstructs spatial details accurately, generating segmentation masks of the same dimensions as the input images.
This pixel-level method allows the model to accurately separate different parts of the tumor from the surrounding brain tissue. The resulting segmentation maps are then evaluated using performance measures like Dice score, IoU, precision, recall, and F1 score to ensure the results are reliable and useful for clinical purposes.
Figure 1. Block diagram of proposed methodology.
Once the BraTS 2020 dataset was gathered, the MRI scans were pre-processed, including normalization, resizing, and ground-truth mask encoding. Segmentation was performed to detect the various regions of the tumor, namely edema, necrotic core, and enhancing tumor. During training, the U-Net model learned features associated with these tumor regions through its encoder-decoder structure and was trained to segment each region precisely at the pixel level. After training finished, the model produced segmentation results, which were stored for testing and then evaluated with performance metrics and visual comparisons to confirm the accuracy of the model. The process flow diagram shown in Figure 2 displays the entire pipeline of our setup, from data preparation to model testing.
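As a concrete illustration of the loading and preprocessing step, the sketch below stacks the four modalities and one-hot encodes the ground-truth mask for one patient. It assumes the public BraTS 2020 NIfTI naming convention and the nibabel reader; `load_case` is a hypothetical helper, and resizing to the network input size is omitted for brevity.

```python
import numpy as np
import nibabel as nib  # standard reader for the BraTS .nii.gz volumes

MODALITIES = ("t1", "t1ce", "t2", "flair")

def load_case(case_dir, case_id):
    """Stack the four MRI modalities and one-hot encode the ground-truth mask."""
    channels = []
    for m in MODALITIES:
        vol = nib.load(f"{case_dir}/{case_id}_{m}.nii.gz").get_fdata()
        # per-modality min-max normalization, as in our preprocessing step
        vol = (vol - vol.min()) / (vol.max() - vol.min() + 1e-8)
        channels.append(vol.astype(np.float32))
    image = np.stack(channels, axis=-1)  # (240, 240, 155, 4) in raw BraTS space

    seg = nib.load(f"{case_dir}/{case_id}_seg.nii.gz").get_fdata().astype(np.int64)
    seg[seg == 4] = 3  # BraTS labels enhancing tumor as 4; remap to contiguous 3
    mask = np.eye(4, dtype=np.float32)[seg]  # one-hot: bg, necrotic, edema, enhancing
    return image, mask
```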
3.1. Model Architecture
U-Net, created by [23], is a semantic segmentation architecture built on a fully convolutional network model. Semantic image segmentation aims to assign a meaningful class to each and every pixel in an image; because every pixel is predicted, this task is often referred to as dense prediction [24]. U-Net is widely used for segmenting medical images (Figure 3), and systems built on it can augment radiologists' analysis, significantly cutting down the time needed for diagnostic operations.
The model is composed of a contracting path and an expansive path. The contracting path downsamples the image according to the standard design of a convolutional network: it consists of the repeated application of two unpadded 3x3 convolutions, each followed by a rectified linear unit (ReLU) [25], and a 2x2 max pooling operation with stride 2 for downsampling. The number of feature channels doubles at each downsampling step. Each step of the expansive path involves upsampling the feature map with a 2x2 "up-convolution" that halves the number of feature channels, concatenating it with the correspondingly cropped feature map from the contracting path, and applying two 3x3 convolutions, each followed by a ReLU. Cropping is necessary because every convolution loses boundary pixels. The final layer maps each 64-component feature vector to the desired number of classes using a 1x1 convolution, with Softmax employed as the activation in this last layer. In total, the network contains 23 convolutional layers. The U-Net architecture utilized in this work is shown in Figure 4.
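For concreteness, a compact Keras sketch of such a U-Net follows. It deviates from [23] by using padded ("same") convolutions, so no cropping is needed, and the input shape (2D slices with the four stacked modalities) and filter counts are illustrative; `build_unet` and `conv_block` are hypothetical helpers, not the exact implementation used here.

```python
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # two 3x3 convolutions, each followed by ReLU, as in the U-Net design
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(128, 128, 4), n_classes=4):
    inputs = layers.Input(input_shape)
    skips, x = [], inputs
    # contracting path: conv block, then 2x2 max pooling; filters double each step
    for f in (64, 128, 256, 512):
        x = conv_block(x, f)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, 1024)  # bottleneck
    # expansive path: up-convolution halves the channels, then concatenate the skip
    for f, skip in zip((512, 256, 128, 64), reversed(skips)):
        x = layers.Conv2DTranspose(f, 2, strides=2, padding="same")(x)
        x = layers.concatenate([x, skip])
        x = conv_block(x, f)
    # 1x1 convolution with Softmax maps each pixel to a class
    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(x)
    return Model(inputs, outputs)
```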
The categorical cross-entropy, often used in multi-class classification problems, was employed here as the loss function [26] and is provided in Eq. (1):

$$\mathrm{CE} = -\sum_{i=1}^{n} l_i \log(p_i) \quad (1)$$

where $p_i$ is the Softmax probability for the $i$-th class, $l_i$ is the corresponding truth label, and $n$ is the number of classes. The most commonly used metric in semantic segmentation is the Intersection-over-Union (IoU), or Jaccard index, given in Eq. (2):

$$\mathrm{IoU} = \frac{|A \cap B|}{|A \cup B|} = \frac{TP}{TP + FP + FN} \quad (2)$$
To calculate the mean IoU, the area of overlap between the predicted segmentation $A$ and the ground truth $B$ is divided by the area of their union [27]. The Dice coefficient was also used as a performance metric [28], given in Eq. (3):

$$\mathrm{Dice} = \frac{2\,TP}{2\,TP + FP + FN} \quad (3)$$

where TP denotes true positives, FP false positives, and FN false negatives. The Dice loss accounts for both local and global information, and the Dice coefficient and mean IoU behave quite similarly.
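Eqs. (2) and (3), together with the precision, recall, and F1 scores reported later, reduce to simple counting on binarized masks. The NumPy sketch below is a per-class illustration under that assumption, not necessarily the exact evaluation script used in this study.

```python
import numpy as np

def counts(pred, truth):
    """TP, FP, FN for a binarized mask pair (one class at a time)."""
    tp = np.sum((pred == 1) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    return tp, fp, fn

def iou(pred, truth, eps=1e-8):
    tp, fp, fn = counts(pred, truth)
    return tp / (tp + fp + fn + eps)          # Eq. (2)

def dice(pred, truth, eps=1e-8):
    tp, fp, fn = counts(pred, truth)
    return 2 * tp / (2 * tp + fp + fn + eps)  # Eq. (3)

def precision_recall_f1(pred, truth, eps=1e-8):
    tp, fp, fn = counts(pred, truth)
    p = tp / (tp + fp + eps)
    r = tp / (tp + fn + eps)
    return p, r, 2 * p * r / (p + r + eps)    # F1 balances precision and recall
```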
3.2. Training Configuration
The U-Net model was trained over 10 epochs, showing consistent improvements in accuracy and loss during the process of training. Training was carried out with the following settings:
Loss Function: The model applied a mixture of Dice Loss and Categorical Cross-Entropy, which served to optimize segmentation accuracy while dealing effectively with class imbalance between tumor and non-tumor areas.
Optimizer: The Adam optimizer was used for its effectiveness in adapting learning rates during training and leading to quick and stable convergence.
Batch Size and Validation Split: These were selected based on system memory requirements and the need to balance generalization with training stability. The dataset was partitioned to save a portion of it for validation to track performance and prevent overfitting.
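A minimal Keras sketch of this configuration is given below. It reuses the hypothetical `build_unet` helper from Section 3.1, and the batch size, validation split, and equal loss weighting are illustrative assumptions rather than the exact values used in this study.

```python
import tensorflow as tf

def dice_loss(y_true, y_pred, eps=1e-6):
    # soft Dice over all classes; complements cross-entropy on imbalanced masks
    intersection = tf.reduce_sum(y_true * y_pred)
    denom = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)

def combined_loss(y_true, y_pred):
    # the mixture of Dice Loss and Categorical Cross-Entropy described above
    cce = tf.reduce_mean(tf.keras.losses.categorical_crossentropy(y_true, y_pred))
    return cce + dice_loss(y_true, y_pred)

model = build_unet()  # the sketch from Section 3.1
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=combined_loss,
              metrics=["accuracy"])
# x_train, y_train: preprocessed image/mask arrays from the loading step
history = model.fit(x_train, y_train,
                    batch_size=8,          # illustrative; chosen to fit memory
                    validation_split=0.2,  # held-out portion to watch overfitting
                    epochs=10)
```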
4. Results
This research evaluates the performance of the deep learning segmentation model through visual inspection, accuracy trends, and key performance metrics. The results show high segmentation accuracy, with strong correspondence between predicted and true masks. Measures such as the Dice coefficient, precision, and recall confirm the model's effectiveness in detecting brain tumors. The discussion highlights the clinical relevance, emphasizing the role of automated segmentation in improving diagnostic effectiveness, reducing radiologist workload, and enabling real-time use in medical imaging and telemedicine.
4.1. Visualization of Prediction
The image shown here displays a representative prediction of the U-Net model for MRI segmentation. It consists of three sub-images: the original MRI scan, the ground truth mask, and the predicted mask. The first image is a grayscale MRI scan in which anatomical structures are clearly visible. The ground truth mask, shown in red over a blue background, marks the region of interest manually labeled by experts. The predicted mask, rendered in the same style, represents the segmentation output generated by the model. The predicted mask closely resembles the ground truth mask, indicating that the model has successfully detected the target region. Although slight discrepancies exist, the model demonstrates strong performance in segmenting the intended structures, as reflected in the high evaluation metrics. The visualization further supports the effectiveness of the trained U-Net model for medical image segmentation.
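A three-panel figure of this kind can be produced with a few lines of matplotlib. In the hypothetical sketch below, the "bwr" colormap approximates the red-on-blue mask rendering described above.

```python
import matplotlib.pyplot as plt

def show_prediction(mri_slice, true_mask, pred_mask):
    """Three panels: original scan, expert ground truth, model prediction."""
    panels = [(mri_slice, "MRI scan", "gray"),
              (true_mask, "Ground truth", "bwr"),
              (pred_mask, "Predicted mask", "bwr")]
    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    for ax, (img, title, cmap) in zip(axes, panels):
        ax.imshow(img, cmap=cmap)  # "bwr" renders a binary mask red on blue
        ax.set_title(title)
        ax.axis("off")
    plt.tight_layout()
    plt.show()
```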
Figure 5. Sample prediction.
4.2. Accuracy and Loss Curves
The figure contains two line plots: the left shows training and validation accuracy, and the right shows training and validation loss over 10 epochs. In the accuracy plot, both training and validation accuracy start at around 0.9700 in the first epoch. At epoch 2, the validation accuracy reaches about 0.9825, while the training accuracy rises rapidly to more than 0.9830. At epoch 5, the training accuracy reaches about 0.9865 and the validation accuracy about 0.9860. In subsequent epochs both values improve progressively, ending at a final training accuracy of about 0.9900 and a validation accuracy slightly lower at about 0.9895, indicating strong generalization. The loss plot shows the training loss starting at about 0.145 in the first epoch and decreasing rapidly to about 0.060 by the second epoch. The validation loss follows a similar pattern, starting at about 0.062 and decreasing to about 0.045 by the fourth epoch. At epoch 7, the training loss is about 0.035 and the validation loss about 0.038. At the final epoch, the training loss reaches around 0.025 and the validation loss levels off at around 0.030. These trends suggest that the model is learning effectively, since the training and validation curves stay close together without significant overfitting. The final model reaches high accuracy with minimal loss, making it well suited to segmentation tasks.
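Curves like these come directly from the History object returned by Keras `model.fit` in the training sketch above; the following is a minimal, assumed plotting helper.

```python
import matplotlib.pyplot as plt

def plot_history(history):
    """Accuracy (left) and loss (right) for training vs. validation."""
    fig, (ax_acc, ax_loss) = plt.subplots(1, 2, figsize=(12, 4))
    ax_acc.plot(history.history["accuracy"], label="training")
    ax_acc.plot(history.history["val_accuracy"], label="validation")
    ax_acc.set_xlabel("epoch"); ax_acc.set_ylabel("accuracy"); ax_acc.legend()
    ax_loss.plot(history.history["loss"], label="training")
    ax_loss.plot(history.history["val_loss"], label="validation")
    ax_loss.set_xlabel("epoch"); ax_loss.set_ylabel("loss"); ax_loss.legend()
    plt.tight_layout()
    plt.show()
```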
Figure 6. Accuracy and loss over epochs.
4.3. Performance Metrics
The bar plot shows the performance measures of the trained U-Net model on the test set, alongside the segmentation results. The test accuracy is 0.9902, meaning that the model correctly classifies about 99% of pixels. The Dice coefficient, which measures the overlap between predicted and true segmentation masks, is 0.9858, showing a high level of similarity. The precision of 0.9935 indicates a low rate of false positives, and the recall of 0.9873 shows that the model identifies most of the positive cases. The F1 score, balancing precision and recall, is 0.9904, indicating high overall performance. The mean Intersection over Union (IoU) is 0.9811, showing that the model's predicted segmentations are very close to the ground truth. The segmentation results further validate the effectiveness of the model, as the resulting masks closely match the true regions of interest in the MRI images. Visual inspection of the segmented outputs confirms that the model accurately outlines the structures with minimal noise or false detections. These findings demonstrate that the U-Net model is highly reliable for MRI segmentation tasks, achieving precise and stable performance across test conditions.
Table 1. Results Table.

| Metric | Value |
| --- | --- |
| Test Accuracy | 0.9902 |
| Dice Coefficient | 0.9858 |
| Precision | 0.9935 |
| Recall | 0.9873 |
| F1 Score | 0.9904 |
| Mean IoU | 0.9811 |
Figure 7. Performance Metrics.
5. Discussion
This work illustrates that a deep learning approach for brain tumor segmentation from 3D MRI scans achieves high accuracy and reliability, making it a valuable tool for real-world clinical applications. The U-Net model's segmentation outputs closely match the manually labelled ground truth masks, suggesting minimal variability in outlining tumor boundaries. The model shows excellent performance in tumor detection and outlining, as reflected in a test accuracy of 0.9902 and a Dice coefficient of 0.9858, both of which are critical for treatment planning and early diagnosis. The precision of 0.9935 indicates an extremely low incidence of false positives, reducing the likelihood of unnecessary interventions, while the recall of 0.9873 ensures that most tumors are successfully detected. The mean IoU of 0.9811 further supports the effectiveness of the model in segmenting tumor structures with fine accuracy. These results show that deep learning-based segmentation can significantly reduce the workload of radiologists by automating the time-consuming process of manual tumor annotation. In clinical use, automated segmentation improves efficiency and reproducibility, and hence avoids the inter-observer variability normally found in manual assessments. The rapid processing ability of these models makes them appropriate for integration into real-time diagnostic systems, where timely decision-making is critical, especially in emergency situations. The model's ability to generalize across different MRI scans suggests its applicability to heterogeneous patient populations, which is important for enabling widespread clinical implementation. In addition, integrating AI-based segmentation models into telemedicine platforms can improve medical accessibility, enabling remote diagnosis and consultation for patients in regions lacking radiology expertise.
6. Conclusions
This research highlights the significance of deep learning in medical imaging, particularly brain tumor segmentation, where efficiency and accuracy are of paramount importance. The near-perfect accuracy of the U-Net model with minimal loss indicates its practical applicability in hospitals and diagnostic centers. One main advantage of automated segmentation is its ability to assist radiologists by providing reproducible and consistent tumor delineations, thereby reducing human error and lowering diagnostic discrepancies. Reduced processing time allows for streamlined treatment planning, improving patient outcomes through early interventions. The robust performance of the model on various MRI scans shows its scalability, making it suitable for integration into AI-assisted radiology pipelines. Subsequent studies can explore strengthening the model's performance by incorporating multi-modal imaging data, such as combining MRI and PET scans, to provide improved tumor characterization. In addition, explainability methods can be incorporated to give clinicians greater insight into the model's decision-making, thus enhancing trust and adoption in clinical practice. Another avenue for advancement is the deployment of the model on edge devices or cloud platforms, enabling real-time availability of AI-based diagnostics even in resource-constrained healthcare settings. The integration of these models with electronic health records (EHRs) and hospital management systems can improve patient care by enabling seamless access to automated segmentation results. This research demonstrates the viability of using deep learning for segmentation of brain tumors and makes it feasible for future integration into standard medical imaging software, thereby improving accuracy, efficacy, and convenience in healthcare solutions.
Author Contributions
N. Deena Nepolian: Writing - Original Draft Preparation, Writing - Review & Editing, Conceptualization, Methodology; M. Mary Synthuja Jain Preetha: Writing - Review & Editing, Formal Analysis; K. S. Vijula Grace: Writing - Original Draft Preparation, Writing - Review & Editing, Supervision. All authors read and approved the final manuscript.
Funding
The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.
Institutional Review Board Statement
This article does not contain any studies with human participants or animals performed by any of the authors.
Data Availability Statement
The data used to support the findings of this study are included in the article.
Acknowledgments
The authors would like to thank all the subjects who participated in this research study.
Conflicts of Interest
The authors declare that they have no conflicts of interest regarding the publication of this paper.
References
- D. Y. Lee, “Roles of mTOR signaling in brain development,” Exp. Neurobiol., vol. 24, no. 3, p. 177, 2015. [CrossRef]
- M. Mumtaz Zahoor, S. A. Qureshi, S. Hussain Khan, and A. Khan, “A New Deep Hybrid Boosted and Ensemble Learning-based Brain Tumor Analysis using MRI,” arXiv e-prints, p. arXiv-2201, 2022.
- M. Arabahmadi, R. Farahbakhsh, and J. Rezazadeh, “Deep learning for smart Healthcare—A survey on brain tumor detection from medical imaging,” Sensors, vol. 22, no. 5, p. 1960, 2022. [CrossRef]
- D. V. Gore and V. Deshpande, “Comparative study of various techniques using deep Learning for brain tumor detection,” in 2020 International conference for emerging technology (INCET), IEEE, 2020, pp. 1–4.
- S. Mahajan, A. K. Sahoo, P. K. Sarangi, L. Rani, and D. Singh, “MRI Image Segmentation: Brain Tumor Detection and Classification Using Machine Learning,” in International Conference on Data Analytics & Management, Springer, 2023, pp. 125–139.
- T. A. Soomro et al., “Image segmentation for MR brain tumor detection using machine learning: a review,” IEEE Rev. Biomed. Eng., vol. 16, pp. 70–90, 2022. [CrossRef]
- A. Rehman, M. A. Khan, T. Saba, Z. Mehmood, U. Tariq, and N. Ayesha, “Microscopic brain tumor detection and classification using 3D CNN and feature selection architecture,” Microsc. Res. Tech., vol. 84, no. 1, pp. 133–149, 2021. [CrossRef]
- S. Musallam, A. S. Sherif, and M. K. Hussein, “A new convolutional neural network architecture for automatic detection of brain tumors in magnetic resonance imaging images,” IEEE access, vol. 10, pp. 2775–2782, 2022. [CrossRef]
- M. I. Sharif, M. A. Khan, M. Alhussein, K. Aurangzeb, and M. Raza, “A decision support system for multimodal brain tumor classification using deep learning,” Complex Intell. Syst., pp. 1–14, 2021. [CrossRef]
- H. Kaldera, S. R. Gunasekara, and M. B. Dissanayake, “Brain tumor classification and segmentation using faster R-CNN,” in 2019 Advances in Science and Engineering Technology International Conferences (ASET), IEEE, 2019, pp. 1–6.
- A. Crimi and S. Bakas, Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: 6th International Workshop, BrainLes 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4, 2020, Revised Selected Papers, Part I, vol. 12658. Springer Nature, 2021.
- S. Montaha, S. Azam, A. K. M. Rakibul Haque Rafid, M. Z. Hasan, and A. Karim, “Brain tumor segmentation from 3D MRI scans using U-Net,” SN Comput. Sci., vol. 4, no. 4, p. 386, 2023. [CrossRef]
- A. Ari and D. Hanbay, “Deep learning based brain tumor classification and detection system,” Turkish J. Electr. Eng. Comput. Sci., vol. 26, no. 5, pp. 2275–2286, 2018. [CrossRef]
- S. Pereira, A. Pinto, V. Alves, and C. A. Silva, “Brain tumor segmentation using convolutional neural networks in MRI images,” IEEE Trans. Med. Imaging, vol. 35, no. 5, pp. 1240–1251, 2016. [CrossRef]
- N. Bhardwaj, M. Sood, and S. S. Gill, “Deep learning framework using CNN for brain tumor classification,” in 2022 5th International Conference on Multimedia, Signal Processing and Communication Technologies (IMPACT), IEEE, 2022, pp. 1–5.
- Z. Xiao et al., “A deep learning-based segmentation method for brain tumor in MR images,” in 2016 IEEE 6th international conference on computational advances in bio and medical sciences (ICCABS), IEEE, 2016, pp. 1–6.
- D. Rastogi et al., “Deep learning-integrated MRI brain tumor analysis: feature extraction, segmentation, and Survival Prediction using Replicator and volumetric networks,” Sci. Rep., vol. 15, no. 1, p. 1437, 2025. [CrossRef]
- M. Aggarwal, A. K. Tiwari, M. P. Sarathi, and A. Bijalwan, “An early detection and segmentation of Brain Tumor using Deep Neural Network,” BMC Med. Inform. Decis. Mak., vol. 23, no. 1, p. 78, 2023. [CrossRef]
- M. Toğaçar, Z. Cömert, and B. Ergen, “Classification of brain MRI using hyper column technique with convolutional neural network and feature selection method,” Expert Syst. Appl., vol. 149, p. 113274, 2020. [CrossRef]
- S. Ahuja, B. K. Panigrahi, and T. Gandhi, “Transfer learning based brain tumor detection and segmentation using superpixel technique,” in 2020 international conference on contemporary computing and applications (IC3A), IEEE, 2020, pp. 244–249.
- P. Tripathi, V. K. Singh, and M. C. Trivedi, “Brain tumor segmentation in magnetic resonance imaging using OKM approach,” Mater. Today Proc., vol. 37, pp. 1334–1340, 2021. [CrossRef]
- S. Sangui, T. Iqbal, P. C. Chandra, S. K. Ghosh, and A. Ghosh, “3D MRI Segmentation using U-Net Architecture for the detection of Brain Tumor,” Procedia Comput. Sci., vol. 218, pp. 542–553, 2023. [CrossRef]
- O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, Springer, 2015, pp. 234–241.
- J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 3431–3440.
- A. F. Agarap, “Deep learning using rectified linear units (ReLU),” arXiv preprint arXiv:1803.08375, 2018.
- M. Yeung, E. Sala, C.-B. Schönlieb, and L. Rundo, “Unified focal loss: Generalising dice and cross entropy-based losses to handle class imbalanced medical image segmentation,” Comput. Med. Imaging Graph., vol. 95, p. 102026, 2022. [CrossRef]
- A. Shaban, S. Bansal, Z. Liu, I. Essa, and B. Boots, “One-shot learning for semantic segmentation,” arXiv preprint arXiv:1709.03410, 2017. [CrossRef]
- S. K. Ghosh, A. Mitra, and A. Ghosh, “A novel intuitionistic fuzzy soft set entrenched mammogram segmentation under multigranulation approximation for breast cancer detection in early stages,” Expert Syst. Appl., vol. 169, p. 114329, 2021. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).