Preprint
Article

This version is not peer-reviewed.

Learning Stable Update Rules: Iteration-Consistency Beyond the Training Horizon

Submitted: 22 February 2026

Posted: 03 March 2026


Abstract
We study time-generalization in neural networks by training a shared iterative cell under explicit supervision of computation length. Rather than treating depth as fixed or learned implicitly, we provide a deterministic target step schedule and penalize deviations from the prescribed execution length, enabling controlled evaluation beyond the training horizon.We perform experiments on three representative dynamical regimes: contracting (Euclidean GCD), attractor-aligned (Log-Fibonacci), and expanding additive (Log-Factorial). We find that models can generalize to larger inputs when the effective iteration depth remains within the trained regime, yet often fail when required computation length increases, even if input magnitudes are moderate. Failure modes track the stability properties of the underlying update dynamics: contraction dampens errors, attractor alignment bounds them over finite horizons, and additive accumulation induces systematic drift.These results suggest that algorithmic generalization depends not only on function approximation but on the stability of learned update rules under composition. Explicit step conditioning improves interpretability and stability of computation depth, but does not by itself guarantee robust extrapolation to longer iterative chains.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.

