Neural decoding has demonstrated that population activity contains behaviorally relevant information, yet predictive accuracy alone does not constitute mechanistic explanation. Decoding models establish statistical mappings between neural responses and task variables but leave the underlying computational processes underdetermined. We argue that neural computation is more appropriately framed within a dynamical state-space perspective, in which population activity reflects the evolution of latent states governed by structured transition operators. Across empirical and theoretical work, neural trajectories increasingly appear as low-dimensional, nonlinear flows shaped by recurrent circuit structure and contextual inputs. This shift reframes the central scientific objective: not merely extracting representations, but learning the evolution operator that governs state transitions. However, even accurate reconstruction of latent dynamics does not guarantee mechanistic validity. Observational data typically constrain only an equivalence class of admissible operators, rendering the inferred dynamics structurally non-identifiable. We therefore propose that causal neural dynamics must be defined through perturbation and experimental design. By introducing directional constraints on state transitions, targeted interventions collapse equivalence classes and enable identification of operators that remain valid under manipulation. In this framework, evolution operators are treated as falsifiable hypotheses whose mechanistic status depends on predictive stability under perturbation. This perspective recasts neural modeling as the search for perturbation-validated dynamical laws governing population activity, moving the field from decoding-based description toward causal dynamical explanation.
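The non-identifiability claim above can be made concrete with a toy linear state-space model. This is an illustrative sketch, not the paper's method: a latent system x_{t+1} = A x_t observed through y_t = C x_t is observationally indistinguishable from any similarity-transformed copy (A' = T A T⁻¹, C' = C T⁻¹), so passive data constrain only an equivalence class of operators. A targeted intervention whose coupling into the latent state is known (here, an input matrix B held fixed across both models) separates the two. All matrices and variable names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ground-truth linear latent dynamics: x_{t+1} = A x_t, y_t = C x_t
A = np.array([[0.9, -0.2], [0.1, 0.8]])
C = rng.standard_normal((5, 2))
x0 = np.array([1.0, -1.0])

# Observationally equivalent model under an invertible change of basis T:
#   A' = T A T^{-1},  C' = C T^{-1},  x0' = T x0
T = np.array([[2.0, 1.0], [0.5, 1.5]])
Tinv = np.linalg.inv(T)
A2, C2, x02 = T @ A @ Tinv, C @ Tinv, T @ x0

def simulate(A, C, x0, steps=20, u=None, B=None):
    """Roll out y_t = C x_t with x_{t+1} = A x_t (+ B u_t when perturbed)."""
    x, ys = x0.copy(), []
    for t in range(steps):
        ys.append(C @ x)
        x = A @ x
        if u is not None:
            x = x + B @ u[t]
    return np.array(ys)

# Passive observation cannot distinguish the two operators:
y1 = simulate(A, C, x0)
y2 = simulate(A2, C2, x02)
print(np.allclose(y1, y2))  # True: identical observable trajectories

# A perturbation with KNOWN coupling B (fixed in latent coordinates,
# not transformed along with the model) collapses the equivalence class:
B = np.eye(2)
u = np.zeros((20, 2))
u[5] = [1.0, 0.0]
yp1 = simulate(A, C, x0, u=u, B=B)
yp2 = simulate(A2, C2, x02, u=u, B=B)
print(np.allclose(yp1, yp2))  # False: perturbation responses diverge
```

The design choice that does the work is holding B fixed across candidate models: an operator is retained only if its predicted response to the intervention matches the perturbed data, which is precisely the "predictive stability under perturbation" criterion stated above.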