Human–artificial intelligence collaboration is increasingly treated as a static allocation problem—humans decide, machines compute—yet high-stakes workflows reveal a more fluid reality: leadership shifts multiple times within a single decision episode. This paper formalizes the Dynamic Authority Reversal (DAR) framework, which models intra-episode authority transitions across four states: Human-Leader/AI-Follower (HL), AI-Leader/Human-Follower (AL), Co-Leadership (CO), and Mutual Override (MO). Transitions are governed by four trigger classes—data superiority, contextual judgment requirements, risk thresholds, and ethics overrides—and are stabilized through hysteresis bands and safe-exit timers. The framework couples micro-level trust calibration with macro-level legitimacy by introducing the Reversal Register, an auditable log that binds each decision to the prevailing authority state, the trigger conditions, and a justifying explanation. Ten falsifiable propositions, prioritized by foundational importance and empirical tractability, are derived and linked to measurement constructs. Sector-specific implementation guidance is provided for healthcare and public administration, with attention to existing governance structures and regulatory frameworks. By operationalizing handovers rather than merely prescribing "human oversight," DAR advances both theory and practice: it equips researchers with testable hypotheses, furnishes practitioners with governance-ready instruments, and offers regulators an auditable architecture that preserves ultimate human accountability while enabling reversible AI leadership where contextually advantageous.
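
As a concrete reading of this architecture, the minimal Python sketch below shows how the four authority states, the four trigger classes, a hysteresis band, a safe-exit timer, and an append-only Reversal Register might fit together. Every identifier and threshold here (`DARController`, `propose_reversal`, the 0.7/0.4 band, the 30-second dwell time, and the rule that an ethics override restores human leadership) is an illustrative assumption, not the paper's formal specification.

```python
from dataclasses import dataclass, field
from enum import Enum
import time


class AuthorityState(Enum):
    HL = "Human-Leader/AI-Follower"
    AL = "AI-Leader/Human-Follower"
    CO = "Co-Leadership"
    MO = "Mutual Override"


class TriggerClass(Enum):
    DATA_SUPERIORITY = "data superiority"
    CONTEXTUAL_JUDGMENT = "contextual judgment requirement"
    RISK_THRESHOLD = "risk threshold"
    ETHICS_OVERRIDE = "ethics override"


@dataclass
class RegisterEntry:
    """One auditable record binding a reversal to its state, trigger, and rationale."""
    timestamp: float
    from_state: AuthorityState
    to_state: AuthorityState
    trigger: TriggerClass
    signal: float
    justification: str


@dataclass
class DARController:
    """Illustrative intra-episode authority controller (all parameters assumed)."""
    state: AuthorityState = AuthorityState.HL
    upper: float = 0.7        # hysteresis band: signal must exceed this to escalate
    lower: float = 0.4        # ...and fall below this to revert toward human leadership
    dwell_s: float = 30.0     # safe-exit timer: minimum seconds between reversals
    register: list = field(default_factory=list)   # the Reversal Register (append-only)
    _last_reversal: float = field(default=float("-inf"))

    def propose_reversal(self, to_state, trigger, signal, justification, now=None):
        """Commit a reversal only if the hysteresis and dwell-time checks pass.

        Ethics overrides (assumed here to always restore human leadership)
        bypass both checks, on the reasoning that a safety trigger should
        not be damped by stabilization machinery.
        """
        now = time.time() if now is None else now
        if trigger is TriggerClass.ETHICS_OVERRIDE:
            return self._commit(AuthorityState.HL, trigger, signal, justification, now)
        if now - self._last_reversal < self.dwell_s:
            return False  # safe-exit timer still running; suppress churn
        escalating = to_state in (AuthorityState.AL, AuthorityState.CO)
        if escalating and signal <= self.upper:
            return False  # signal not strong enough to hand leadership to the AI
        if not escalating and signal >= self.lower:
            return False  # signal has not collapsed enough to hand it back
        return self._commit(to_state, trigger, signal, justification, now)

    def _commit(self, to_state, trigger, signal, justification, now):
        """Log the transition in the Reversal Register, then change state."""
        self.register.append(RegisterEntry(now, self.state, to_state,
                                           trigger, signal, justification))
        self.state = to_state
        self._last_reversal = now
        return True


if __name__ == "__main__":
    ctrl = DARController()
    ctrl.propose_reversal(AuthorityState.AL, TriggerClass.DATA_SUPERIORITY, 0.85,
                          "Model confidence exceeds human baseline for this case class")
    print(ctrl.state, len(ctrl.register))
```

The two-threshold band means a reversal toward AI leadership and the reversal back do not share a single boundary, which is what keeps authority from oscillating when a trigger signal hovers near one cut-off; the register grows append-only, so every transition remains inspectable after the episode ends.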