The paper addresses the limited accuracy of automated diagnostics for retinal pathologies such as diabetic retinopathy and age-related macular degeneration, limitations that stem from difficulties in modeling comorbidities, a reliance on paired multimodal data, and class imbalance. The proposed solution is a novel hierarchical deep learning architecture for multi-label classification of optical coherence tomography (OCT) data that enables cross-modal knowledge transfer from fundus imaging without requiring paired fundus images. This is achieved through modular specialization of the architecture and the application of contrast equalization, which creates a latent “bridge” between the OCT and fundus domains. The results demonstrate that the proposed approach achieves high accuracy (macro-F1 score of 0.989) and good calibration (Expected Calibration Error of 2.1%) in classification and staging tasks. Notably, it eliminates the need for fundus images for diabetic retinopathy staging in 96.1% of cases and surpasses traditional monolithic architectures on the macro-AUROC metric.
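
To make the described design more concrete, the minimal sketch below (not the authors' code; all module names, dimensions, and the cosine-based alignment term are illustrative assumptions) shows one way a shared OCT backbone, modular pathology and staging heads, and a latent bridge to a fundus-trained teacher could be wired together, with only the OCT branch needed at inference.

```python
# Minimal sketch, assuming a PyTorch-style implementation: a hierarchical
# multi-label OCT classifier with a shared latent "bridge" used for knowledge
# transfer from a fundus-trained teacher. Hyperparameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class OCTEncoder(nn.Module):
    """Shared convolutional backbone for OCT B-scans (placeholder depth)."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(128, latent_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(self.features(x).flatten(1))


class HierarchicalHeads(nn.Module):
    """Modular specialization: per-pathology presence heads (multi-label)
    plus a diabetic-retinopathy staging head on the shared latent."""
    def __init__(self, latent_dim: int = 256, n_pathologies: int = 4, n_dr_stages: int = 5):
        super().__init__()
        self.presence = nn.Linear(latent_dim, n_pathologies)  # multi-label logits
        self.dr_stage = nn.Linear(latent_dim, n_dr_stages)    # staging logits

    def forward(self, z: torch.Tensor):
        return self.presence(z), self.dr_stage(z)


def bridge_loss(z_oct: torch.Tensor, z_fundus_teacher: torch.Tensor) -> torch.Tensor:
    """Align OCT latents with teacher latents from contrast-equalized fundus
    images (a hypothetical cosine objective standing in for the paper's bridge)."""
    return 1.0 - F.cosine_similarity(z_oct, z_fundus_teacher, dim=-1).mean()


if __name__ == "__main__":
    encoder, heads = OCTEncoder(), HierarchicalHeads()
    oct_batch = torch.randn(8, 1, 224, 224)  # dummy OCT B-scans
    z = encoder(oct_batch)
    presence_logits, stage_logits = heads(z)
    # Training only: teacher latents would come from a fundus encoder; at
    # inference the fundus branch is dropped and only OCT is required.
    teacher_z = torch.randn_like(z)
    loss = (F.binary_cross_entropy_with_logits(presence_logits, torch.rand(8, 4))
            + F.cross_entropy(stage_logits, torch.randint(0, 5, (8,)))
            + bridge_loss(z, teacher_z))
    loss.backward()
```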