Whole-slide imaging has transformed histopathology into a data-intensive domain, with current approaches dominated by end-to-end deep learning models that encode morphology implicitly within latent representations. This implicit encoding limits interpretability, reproducibility, and cross-dataset generalization. This review positions histomics as an intermediate phenotype representation layer that maps histological images to structured, multi-scale descriptors of tissue morphology, spatial organization, and architectural context. A unified taxonomy of histomic features across biological scales is presented, along with an analysis of artificial intelligence frameworks spanning classical machine learning, deep learning, weakly supervised learning, and multimodal integration. The review then examines core failure modes in histomic pipelines, including segmentation dependence, feature instability, and domain shift, and assesses their impact on robustness and generalization. Emerging trends in representation learning and multimodal modeling are analyzed in the context of phenotype-centric inference. Overall, this work reframes histomics as a representation-driven paradigm and outlines directions for developing stable, interpretable, and generalizable computational pathology systems.