Conformal prediction (CP) provides distribution-free uncertainty quantification by constructing prediction sets with guaranteed coverage. In human-in-the-loop (HITL) decision systems, these sets naturally define deferral policies: cases with singleton sets proceed automatically, while those with multiple candidate labels require human review. Mondrian CP, which calibrates separately for each group, has been proposed to achieve group-conditional coverage, ensuring that every demographic group meets the target coverage level. However, we demonstrate through extensive experiments (832K evaluations across 14K configurations, 6 datasets, and 100 seeds) that this gain in coverage validity carries a significant cost: Mondrian CP increases deferral disparity by 143% relative to global CP, despite reducing coverage disparity by 26% on average. This coverage-deferral trade-off is fundamental: it persists across all datasets (p < 0.001), is invariant to HITL parameters, and varies monotonically with the shrinkage interpolation parameter γ. We prove a corresponding impossibility result for conformal prediction: under specific conditions, coverage parity and deferral parity cannot be achieved simultaneously when base rates differ between groups. We further show that standard fairness metrics (Equalized Odds, Average Odds Difference) are invariant to the choice of CP method, and we identify the deferral gap as a critical operational fairness metric that captures CP's distinct impact on who receives human review, a dimension invisible to standard EO metrics. Our findings yield actionable guidance: use Mondrian CP when group-conditional coverage is required, global CP when deferral fairness matters most, and shrinkage interpolation for a tunable balance between the two.
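
To make the set-based deferral mechanism concrete, the sketch below illustrates (under our own simplifying assumptions, not the paper's implementation) how global and Mondrian split-conformal calibration induce different deferral rates per group. The nonconformity score (1 minus the softmax probability of the true label), the synthetic data, and the helper names `conformal_qhat` and `prediction_sets` are illustrative choices only.

```python
# Minimal sketch: conformal prediction sets as a deferral policy.
# Assumptions (not from the paper): softmax-based nonconformity scores,
# synthetic calibration/test data, two demographic groups.
import numpy as np

rng = np.random.default_rng(0)

def conformal_qhat(scores_true, alpha):
    """Split-conformal threshold: finite-sample-corrected (1-alpha) quantile
    of the calibration nonconformity scores."""
    n = len(scores_true)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores_true, level, method="higher")

def prediction_sets(probs, qhat):
    """Include every label whose nonconformity score (1 - p) is <= qhat."""
    return (1.0 - probs) <= qhat

# Synthetic stand-in for a classifier's predicted probabilities.
n_cal, n_test, n_classes = 500, 200, 3
probs_cal = rng.dirichlet(np.ones(n_classes), size=n_cal)
y_cal = rng.integers(0, n_classes, size=n_cal)
groups_cal = rng.integers(0, 2, size=n_cal)
probs_test = rng.dirichlet(np.ones(n_classes), size=n_test)
groups_test = rng.integers(0, 2, size=n_test)

alpha = 0.1
scores_cal = 1.0 - probs_cal[np.arange(n_cal), y_cal]

# Global CP: a single threshold shared by all groups.
qhat_global = conformal_qhat(scores_cal, alpha)

# Mondrian CP: one threshold calibrated per group.
qhat_group = {g: conformal_qhat(scores_cal[groups_cal == g], alpha) for g in (0, 1)}

# Deferral policy: singleton set -> automatic decision; otherwise defer to a human.
for name, qhats in [("global", np.full(n_test, qhat_global)),
                    ("mondrian", np.array([qhat_group[g] for g in groups_test]))]:
    sets = prediction_sets(probs_test, qhats[:, None])
    defer = sets.sum(axis=1) != 1
    rates = [defer[groups_test == g].mean() for g in (0, 1)]
    print(f"{name}: deferral rate group0={rates[0]:.2f}, group1={rates[1]:.2f}, "
          f"gap={abs(rates[0] - rates[1]):.2f}")
```

Because Mondrian calibration uses group-specific thresholds, the resulting set sizes, and hence the printed per-group deferral rates, can diverge even when the global thresholds would have produced similar deferral rates; this is the deferral gap the abstract refers to.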