Preprint
Article

This version is not peer-reviewed.

All You May Need Is the AI Theorem: Entropic Limits of Computable AI and the Emergence of Dynamic‑State Architectures

Submitted:

21 February 2026

Posted:

06 March 2026


Abstract
Contemporary large language models (LLMs) are radically stateless: at every inference step they recompute the entire context, retain no persistent state, and perform no local weight adaptation. This simplicity enables massive scaling but also imposes fundamental limits on stability, speed, and energy efficiency. Each generation step collapses a rich internal state into a single token, causing cumulative drift and extreme computational redundancy. I formulate the AI Theorem: no purely computational system that generates output iteratively and without an external source of negative entropy can maintain stable information for an unlimited number of steps. This represents an analogue of Shannon’s Data Processing Inequality for computational cognition and defines a theoretical boundary for all computable architectures. Building on this limit, I outline Dynamic‑State AI, an architecture with persistent state, local updates, and dynamic weights. It respects the AI Theorem while approaching its limit asymptotically, reducing drift and energy use. This paper proposes a conceptual limit and an architectural framework rather than empirical results.
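The abstract's analogy is to Shannon's Data Processing Inequality: for a Markov chain X → Y → Z, I(X; Z) ≤ I(X; Y), so repeated processing without fresh information can only degrade what the state carries about its origin. As a minimal toy illustration (not the paper's formal theorem), the sketch below applies a hypothetical doubly stochastic "noisy generation step" repeatedly to a sharply peaked state distribution; its Shannon entropy rises monotonically toward the uniform maximum, mirroring the claimed drift of iterative stateless generation. The transition matrix `T` and step count are illustrative assumptions.

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy of a probability vector, in bits."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Hypothetical doubly stochastic channel: each generation step keeps the
# current state with prob. 0.9 and leaks to the other states with prob. 0.05.
T = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])

p = np.array([1.0, 0.0, 0.0])  # initial state: fully determined (zero entropy)
entropies = []
for _ in range(50):
    entropies.append(entropy_bits(p))
    p = T @ p                   # one iterative "generation" step

# For a doubly stochastic T, entropy is non-decreasing at every step and
# converges to the maximum log2(3): information is irreversibly dissipated
# unless an external (negative-entropy) source intervenes.
```

The doubly stochastic assumption is what forces monotone entropy growth; a system with persistent state and local updates, in the paper's terms, corresponds to injecting information from outside this closed chain.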
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2026 MDPI (Basel, Switzerland) unless otherwise stated