Brain tumor segmentation is a critical process in medical imaging, enabling early diagnosis and effective treatment planning. Traditional manual segmentation is time-consuming, prone to inter-observer variability, and requires skilled radiologists. While deep learning has shown promising results in medical image analysis, existing research often faces generalization challenges, suffers from class imbalance, or lacks architectures designed specifically for 3D MRI data. Many existing works focus on 2D segmentation, which limits spatial contextual understanding, or employ models with suboptimal accuracy and high computational complexity. This work addresses these gaps with a U-Net-based deep learning model tailored to 3D MRI scans from the BraTS 2020 dataset. Our approach achieves outstanding segmentation quality, as evidenced by a Dice coefficient of 0.9858 and a mean IoU of 0.9811, significantly outperforming conventional methods. The model markedly reduces false positives (precision: 0.9935) while maintaining high sensitivity (recall: 0.9873). The novelty of this study lies in an optimized deep learning pipeline that improves segmentation consistency, accelerates computation, and reduces manual labor. The model generalizes well across different MRI scans, making it suitable for real-world clinical deployment. Owing to its high reliability and flexibility, it can be readily integrated into AI-assisted radiology tools, telemedicine systems, and automated diagnostic pipelines, thereby improving the accessibility and efficiency of advanced medical imaging solutions. Future work could explore the integration of multi-modal imaging and deployment on edge devices to enable real-time diagnosis in resource-constrained healthcare environments.
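The Dice coefficient and IoU reported above are standard overlap metrics for segmentation masks. As an illustrative sketch (function names and toy volumes are our own, not taken from the paper), they can be computed on binary 3D masks as follows:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # Dice = 2|A ∩ B| / (|A| + |B|); eps guards against empty masks
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    # IoU = |A ∩ B| / |A ∪ B|
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Toy 3D volumes standing in for predicted and ground-truth tumor masks
pred = np.zeros((4, 4, 4), dtype=np.uint8)
target = np.zeros((4, 4, 4), dtype=np.uint8)
pred[1:3, 1:3, 1:3] = 1    # 8 predicted voxels
target[1:3, 1:3, :3] = 1   # 12 ground-truth voxels, 8 overlapping

print(round(dice_coefficient(pred, target), 3))  # 2*8/(8+12) = 0.8
print(round(iou(pred, target), 3))               # 8/12 ≈ 0.667
```

In practice, a soft (differentiable) variant of the Dice score is also commonly used as a training loss for segmentation networks, which helps counter the class imbalance between tumor and background voxels.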