We introduce a divergence-based framework for structural normalization and constrained reconstruction in generative models for poetic translation. The central hypothesis is that a text admits a contextualized, language-independent structural representation capturing semantic, prosodic, rhetorical, cultural, and affective invariants independently of surface linguistic form. A normalization operator embeds each text into a domain-dependent structural manifold conditioned on a contextual knowledge state K_t. Reconstruction in a target language is formulated as divergence-minimizing projection under explicit constraint functionals. Structural preservation is quantified through domain-dependent divergences between the probability measures induced by structural representations. Cross-linguistic transfer is interpreted as analogical alignment between contextualized structural states. Because structural representation depends on the contextual knowledge state, epistemic updates modify the geometry of structural comparison and may induce time-indexed optimal realizations. The proposed formulation establishes a mathematical perspective on translation as constrained structural projection in contextualized measure spaces, separating relational invariants from surface realization and enabling controllable generative reconstruction under explicit structural validation.
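The reconstruction step described above can be sketched as a constrained minimization. The notation here (a normalization operator N, a divergence D, constraint functionals C_i with tolerances ε_i, and a target-language candidate set) is illustrative and not fixed by the abstract:

```latex
% Hypothetical formalization of divergence-minimizing reconstruction.
% N(. | K_t): normalization operator conditioned on knowledge state K_t,
%   mapping a text to a probability measure on the structural manifold.
% D: domain-dependent divergence between such measures.
% C_i: explicit constraint functionals with tolerances eps_i.
% Y_tgt: the set of candidate realizations in the target language.
\hat{y}_t \in \operatorname*{arg\,min}_{y \in \mathcal{Y}_{\mathrm{tgt}}}
  D\bigl( N(x \mid K_t),\; N(y \mid K_t) \bigr)
\quad \text{subject to} \quad
  C_i(y) \le \varepsilon_i, \quad i = 1, \dots, m .
```

The subscript t on the optimal realization reflects the abstract's point that updates to K_t can change the divergence geometry and hence the minimizer.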