Agentic AI systems operating autonomously over extended periods present challenges for human oversight, particularly when agents deviate from expected behavior. This paper explores Performance-Grounded Interpretability (PGI), an architectural approach in which explanations reference documented execution traces that have been evaluated against explicit performance criteria and retained in memory, rather than being generated through post-hoc linguistic rationalization. We present HCI-EDM (Human-Centered Interpretability via Evaluation-Driven Memory) as an implementation of PGI principles. In a controlled simulation of 120 episodes across logistics optimization tasks, HCI-EDM improved trust calibration metrics (mean trust score 4.62/5.0 vs. 3.87/5.0 for a chain-of-thought baseline, p < 0.001) and reduced decision comprehension time by 51% (20.7 s vs. 42.3 s, p < 0.001) under simulated oversight conditions. The system achieved 91% transparency (the proportion of independently verifiable explanations) compared to 43% for narrative baselines in this controlled setting. These results suggest that grounding explanations in documented performance history may provide one viable approach toward interpretable oversight of autonomous agents. This work presents an architectural exploration and controlled evaluation; it does not claim safety guarantees, correctness proofs, or deployment readiness.