Robotic assistive devices, such as exoskeletons, are increasingly employed in walking rehabilitation. Measuring both movement kinematics and cognitive workload is therefore essential for understanding this human-robot interaction in real-world contexts. To address this need, this study presents the validation of a framework integrating an inertial motion capture system (Xsens) and an eye-tracking sensor (Pupil Neon) within a Mixed Reality (Meta Quest 3) architecture. We developed an overground dual-task paradigm in which holographic numbers appear in the user’s peripheral vision, actively stimulating visuospatial attention while quantifying kinematic and cognitive output. To validate the framework, the protocol was tested on 30 healthy subjects across repeated exoskeleton training sessions. Statistical analyses revealed that the Coefficient of Multiple Correlation (CMC) and the Spectral Arc Length (SPARC), computed on the shank angular velocity, together with step length variability, exhibited significant time effects (p < 0.01), tracking the transition toward automatic gait. Concurrently, pupillometric data demonstrated a measurable reduction in neurocognitive demand: the Task-Evoked Pupillary Response (TEPR) decreased significantly across successive training sessions (p < 0.05). With this work, we validated a measurement protocol that provides a novel methodology for objectively evaluating motor and cognitive adaptation to wearable assistive devices.
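For readers unfamiliar with the smoothness metric, SPARC is conventionally computed as the negative arc length of the normalized magnitude spectrum of a speed (or angular-velocity) profile, with an adaptive frequency cutoff. The sketch below is a minimal NumPy implementation under assumed default parameters (4-level zero-padding, 10 Hz cutoff, 5% amplitude threshold); it illustrates the metric in general and is not the exact pipeline used in this study.

```python
import numpy as np

def sparc(movement, fs, padlevel=4, fc=10.0, amp_th=0.05):
    """Spectral Arc Length of a 1-D speed profile (more negative = less smooth)."""
    # Zero-pad for finer frequency resolution, then take the magnitude spectrum.
    nfft = int(2 ** (np.ceil(np.log2(len(movement))) + padlevel))
    f = np.arange(nfft) * fs / nfft
    Mf = np.abs(np.fft.fft(movement, nfft))
    # Restrict to the low-frequency band below the cutoff and normalize.
    sel = f <= fc
    f_sel, Mf_sel = f[sel], Mf[sel] / Mf.max()
    # Adaptive cutoff: keep the span of the spectrum above the amplitude threshold.
    inx = np.nonzero(Mf_sel >= amp_th)[0]
    f_sel = f_sel[inx[0]:inx[-1] + 1]
    Mf_sel = Mf_sel[inx[0]:inx[-1] + 1]
    # Arc length of the (frequency-normalized) spectrum curve.
    df = np.diff(f_sel / f_sel[-1])
    dM = np.diff(Mf_sel)
    return -np.sum(np.sqrt(df ** 2 + dM ** 2))
```

A smooth profile concentrates spectral power at low frequencies, yielding a short arc (SPARC closer to zero), while superimposed oscillations lengthen the spectral curve and drive SPARC more negative; this is the property exploited when tracking gait smoothness across training sessions.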