Aerial images captured by monocular UAVs under complex lighting conditions (such as dusk and backlight) often suffer from overexposure and severe noise. To address this, a three-stage adaptive enhancement and restoration algorithm based on a "divide and conquer" strategy is proposed. First, a lightweight U-Net performs precise semantic segmentation of the input image's illumination component, generating a mask that divides the pixels into four regions: underexposed, normally exposed, overexposed, and severely overexposed. This mask serves as a navigation map for the subsequent differentiated processing. For underexposed regions, the algorithm employs Retinex-guided illumination decomposition, splitting them into reflectance and illumination maps, which are then corrected by a reflectance-recovery network and an illumination-adjustment network, respectively, to raise brightness and restore detail. To suppress the noise introduced during enhancement, a Generative Adversarial Network (GAN) trained in two stages is designed as an image-refinement module, effectively denoising the result and improving visual realism. Severely overexposed regions are treated as occlusions, and a second GAN framework performs context-aware inpainting to reconstruct the lost textures. Experimental results show that the proposed algorithm performs well on both a self-built dataset and several public datasets in objective metrics (such as NIQE) and subjective visual quality, with particularly clear advantages in noise suppression and overexposed-region restoration; this provides higher-quality input for downstream tasks such as object detection and 3D reconstruction. Ablation experiments further validate the effectiveness of each module.
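The mask-guided "divide and conquer" routing described above can be sketched as follows. This is a minimal illustration only: the luminance thresholds stand in for the paper's U-Net segmentation, and the per-region operations are simple placeholders for the Retinex decomposition networks and the two GAN modules, whose actual architectures are not specified here. All function names and threshold values are assumptions for illustration.

```python
import numpy as np

# Exposure-region labels (illustrative; the paper's U-Net would predict these).
UNDER, NORMAL, OVER, SEVERE = 0, 1, 2, 3

def segment_exposure(img, t_under=0.25, t_over=0.75, t_severe=0.95):
    """Classify each pixel of an RGB image in [0, 1] into four exposure regions.

    A crude luminance threshold stands in for the lightweight U-Net
    segmentation of the illumination component.
    """
    lum = img.mean(axis=-1)
    mask = np.full(lum.shape, NORMAL, dtype=np.uint8)
    mask[lum < t_under] = UNDER
    mask[lum > t_over] = OVER
    mask[lum > t_severe] = SEVERE  # overwrites OVER where severely clipped
    return mask

def enhance(img, mask):
    """Route each region to its (placeholder) processing branch."""
    out = img.copy()
    # Underexposed: gamma brightening as a stand-in for the Retinex-guided
    # reflectance-recovery / illumination-adjustment networks.
    out[mask == UNDER] = np.clip(out[mask == UNDER] ** 0.5, 0.0, 1.0)
    # Overexposed: mild tone compression as a stand-in for GAN refinement.
    out[mask == OVER] = np.clip(out[mask == OVER] * 0.9, 0.0, 1.0)
    # Severely overexposed: treated as occlusions; the paper inpaints these
    # from context with a second GAN. Here we only flag them.
    out[mask == SEVERE] = 1.0
    return out

rng = np.random.default_rng(0)
img = rng.random((8, 8, 3))
mask = segment_exposure(img)
result = enhance(img, mask)
print(mask.shape, result.shape)
```

The key design point carried over from the abstract is that the mask is computed once and then drives all downstream branches, so each region is processed only by the module suited to its degradation.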