In this paper we consider the congruence problem that arises in the post-analysis of Bayesian network models reconstructed from different datasets. Beyond its structure, a typical network numerically encodes relationship intensities, assigning a numerical score to each edge via the scoring criterion used in the reconstruction process. This score is rarely a directly interpretable quantity with proper units of measure and an absolute scale, and it often falls short of the desirable characteristics of a true metric. As a result, considerations of edge magnitude transfer poorly between similar networks originating from different sources. In this work, we address this problem by estimating the effect that the data-specific resolution limit has on conditional independence, as reflected by information-theoretic entropy, and by an appropriate modification of the MDL score that removes the inconsistency between its components in both meaning and units. We also numerically validate our findings and demonstrate additional performance advantages obtained through this modification.
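As background, and with notation introduced here purely for illustration rather than taken from this work, a common formulation of the MDL score for a candidate structure $G$ over discrete variables $X_1,\dots,X_n$, fitted to a dataset $D$ of $N$ samples, decomposes into a data-fit component and a model-complexity penalty:
\[
  \mathrm{MDL}(G, D) \;=\; N \sum_{i=1}^{n} \hat{H}\!\left(X_i \mid \mathrm{Pa}_G(X_i)\right) \;+\; \frac{\log N}{2}\,\bigl|\Theta_G\bigr|,
\]
where $\hat{H}\!\left(X_i \mid \mathrm{Pa}_G(X_i)\right)$ is the empirical conditional entropy of $X_i$ given its parents in $G$ and $|\Theta_G|$ is the number of free parameters of $G$. The first component is a data description length (in nats or bits) growing linearly in $N$, while the second is a parameter-coding penalty growing only logarithmically in $N$; this is the kind of mismatch in meaning and units between score components that the modification summarized above is meant to address.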