Cloud cover remains a persistent challenge in optical remote sensing, limiting the usability of optical satellite imagery for continuous Earth observation. Synthetic Aperture Radar (SAR) data, in contrast, provides cloud-penetrating, all-weather imaging but lacks the spectral richness of optical observations and is less visually interpretable. Bridging these complementary modalities, this study investigates SAR-to-optical image translation using the Pix2Pix conditional generative adversarial network (cGAN). While existing research predominantly focuses on reconstructing only the visible (RGB) or near-infrared bands, this work employs the winter subset of the SEN12-MS dataset to address the full spectral range. The objectives are threefold: (i) to validate SAR-to-optical translation across all 13 Sentinel-2 spectral bands; (ii) to assess the reliability and reconstructability of each individual band; and (iii) to evaluate the translation model's performance for cloud removal. Experimental results show that the model effectively learns the SAR-to-optical mapping and achieves high reconstruction quality across all spectral bands, although band-wise analysis reveals that reconstruction accuracy varies with spectral characteristics. When applied to the SEN12-MS-CR dataset, the model generates cloud-free optical imagery that closely matches the reference data, achieving performance competitive with state-of-the-art models such as DiffCR. Two further ablation studies analyze the impact of different loss functions and of excluding the 60 m bands. Overall, the findings confirm the viability of SAR-to-optical translation for producing spectrally consistent, cloud-free optical imagery, thereby enhancing the temporal continuity of Earth observation data.