Artificial intelligence is increasingly embedded in decision-making across organizational and societal contexts, yet it is unclear whether individuals remain cognitively aligned with decisions generated under algorithmic conditions. Existing research has emphasized trust, fairness, and transparency, but offers limited insight into the cognitive mechanisms that sustain coherent human judgment during system-mediated decision processes.

Here we introduce perceptual integrity as a measurable construct capturing the extent to which individuals maintain interpretive coherence and decision authorship in human–AI interaction. We test this framework in a controlled experiment (N = 602) comparing algorithmic imposition with interpretive autonomy. Algorithmic imposition significantly reduced perceptual integrity relative to interpretive autonomy (t(600) = 4.21, p < 0.001, Cohen’s d = 0.38). Perceptual integrity significantly predicted trust in AI-assisted decisions (β = 0.36, p < 0.001) and partially mediated the relationship between decision condition and trust (indirect effect = 0.17, 95% CI [0.09, 0.27]).

These findings identify perceptual integrity as a cognitive mechanism linking decision structure to trust under system-mediated conditions. More broadly, they suggest that effective integration of algorithmic systems depends not only on system accuracy but also on preserving cognitive alignment during decision formation. This work provides a generalizable framework for understanding how humans remain engaged with decisions in increasingly automated environments.
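To make the mediation logic concrete, the sketch below estimates a product-of-coefficients indirect effect (a-path × b-path) with a nonparametric percentile bootstrap, the standard approach for CIs of the form reported above. It is a minimal illustration on simulated data, not the study's analysis code: the variable names (condition, integrity, trust), the effect sizes used to generate the data, and the choice of an OLS-based bootstrap are all assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulated stand-in for the experimental data (N = 602).
# condition: 0 = algorithmic imposition, 1 = interpretive autonomy.
n = 602
condition = rng.integers(0, 2, size=n)
integrity = 0.4 * condition + rng.normal(size=n)                 # perceptual integrity
trust = 0.36 * integrity + 0.1 * condition + rng.normal(size=n)  # trust in the decision
df = pd.DataFrame({"condition": condition,
                   "integrity": integrity,
                   "trust": trust})

def indirect_effect(data: pd.DataFrame) -> float:
    """a-path (condition -> integrity) times b-path (integrity -> trust,
    controlling for condition): the portion of the condition effect on
    trust carried through perceptual integrity."""
    a = smf.ols("integrity ~ condition", data=data).fit().params["condition"]
    b = smf.ols("trust ~ integrity + condition", data=data).fit().params["integrity"]
    return a * b

# Nonparametric bootstrap: resample rows with replacement, re-estimate
# the indirect effect, and take the 2.5th/97.5th percentiles as a 95% CI.
boot = np.array([
    indirect_effect(df.sample(frac=1.0, replace=True))
    for _ in range(2000)
])
point = indirect_effect(df)
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.2f}, 95% CI [{ci_lo:.2f}, {ci_hi:.2f}]")
```

Because the data here are simulated under assumed effect sizes, the printed estimate will not reproduce the reported indirect effect of 0.17; the sketch only shows the structure of the product-of-coefficients bootstrap.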