Submitted:
01 June 2025
Posted:
04 June 2025
Abstract
Keywords:
1. Introduction
2. A Unified Structure of Agent Derived from First Principles
3. Definition and Mathematical Expression of Standard Agent Model
3.1. The definitions of each functional module and its corresponding capability
Information Input function (In)
- represents the space of all information that Agent can, in principle, perceive from its environment E.
- represents the space of raw perceptual data or information representations acquired by Agent from its environment; it is equivalent to an input information buffer and forms the initial content accessible for further processing by Agent's storage space.
Information Output function (Out)
- : The core internal state space managed by the Dynamic Storage module.
- : The space constituted by all possible additive environmental state changes caused by Agent's output. Each element represents a specific change command.
Dynamic Storage function (DS)
- : The information space managed by the Dynamic Storage module.
- : The information space of the Information Input function (In).
- : The information space of the Information Creation function (Cr).
Information Creation function (Cr)
Control function (Con)
- : The information space managed by the Dynamic Storage (DS) module.
- : The meta-command information space managed by the Control (Con) module.
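The five modules can be summarized in a minimal structural sketch. The class below is an illustrative reconstruction under my own naming (`info_in`, `info_create`, `info_out`, `control`, and the list-based storage are all assumptions), intended only to show how the Control function drives the other four:

```python
from dataclasses import dataclass, field

@dataclass
class StandardAgent:
    """Illustrative skeleton of the five-module Standard Agent Model.

    Module names follow the text: In (Information Input), Out (Information
    Output), DS (Dynamic Storage), Cr (Information Creation), Con (Control).
    """
    storage: list = field(default_factory=list)        # DS: internal state space
    meta_commands: list = field(default_factory=list)  # Con: meta-command space

    def info_in(self, raw_percept):
        """In: move raw perceptual data from the environment into storage."""
        self.storage.append(raw_percept)

    def info_create(self):
        """Cr: derive a new piece of information from what is already stored."""
        created = ("derived", tuple(self.storage))
        self.storage.append(created)
        return created

    def info_out(self):
        """Out: emit an environmental change command based on storage."""
        return ("change_command", len(self.storage))

    def control(self):
        """Con: read meta-commands and schedule the other functions."""
        for cmd in self.meta_commands:
            if cmd == "in":
                self.info_in("percept")
            elif cmd == "create":
                self.info_create()
```

The point of the sketch is structural: Con consumes meta-commands and invokes In/Cr (and, symmetrically, Out and DS), matching the control relationship described in this subsection.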
3.2. Theoretical Significance of Standard Agent Model
4. Classification and Relationships of Agent Based on Standard Agent Model
4.1. Classification of Agent
4.1.1. Three Fundamental Classifications of Agent Types
1. Absolute Zero Agent or Alpha Agent
2. Omniscient and Omnipotent Agent or Omega Agent
3. Finite Agent
4.1.2. Classification of 243 Agent Types
- State 0: Indicates the capability is zero.
- State 1: Indicates the capability is finite.
- State 2: Indicates the capability is infinite.
- Agent of Index 1: Capability vector (0, 0, 0, 0, 0) (components ordered In, DS, Cr, Out, Con), corresponding to the aforementioned Absolute Zero Agent.
- Agent of Index 2: Capability vector (1, 0, 0, 0, 0), representing Agent possessing only finite input capability.
- Agent of Index 122: Capability vector (1, 1, 1, 1, 1), representing Agent with finite capabilities in input, dynamic storage, creation, output, and control.
- Agent of Index 241: Capability vector (2, 2, 2, 0, 2), representing Agent with infinite input, dynamic storage, creation, and control capabilities, but no output capability.
- Agent of Index 242: Capability vector (2, 2, 2, 1, 2), representing Agent with infinite input, dynamic storage, creation, and control capabilities, but finite output capability.
- Agent of Index 243: Capability vector (2, 2, 2, 2, 2), corresponding to the aforementioned Omniscient and Omnipotent Agent.
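The 243-type classification follows from assigning each of the five capabilities one of three states. The sketch below assumes a 1-based lexicographic base-3 indexing (the paper's exact digit order is not recoverable from this text) and reproduces the endpoints that are independent of that ordering:

```python
from itertools import product

# Each of the five capabilities (In, DS, Cr, Out, Con) takes one of three
# states: 0 = zero, 1 = finite, 2 = infinite, giving 3**5 = 243 agent types.
STATES = (0, 1, 2)
CAPS = ("In", "DS", "Cr", "Out", "Con")

# Hypothetical indexing: lexicographic order, 1-based. Index 1, 122, and 243
# come out the same under any digit order (their base-3 digits are uniform).
vectors = list(product(STATES, repeat=len(CAPS)))

def vector_of(index):
    """Return the capability vector for a 1-based type index."""
    return vectors[index - 1]

print(len(vectors))    # 243 distinct agent types
print(vector_of(1))    # (0, 0, 0, 0, 0) -> Absolute Zero (Alpha) Agent
print(vector_of(122))  # (1, 1, 1, 1, 1) -> fully finite agent
print(vector_of(243))  # (2, 2, 2, 2, 2) -> Omega Agent
```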
4.2.1. Perception Relationship
- Unaware: Agent A completely fails to detect the presence of Agent B.
- Indirect Perception: Agent A infers the existence or state of Agent B by observing changes in the shared environment or traces left by B.
- Direct Perception: Agent A can directly receive signals from Agent B or recognize its specific identifiers through its input channels.
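The three perception levels above form a simple decision rule, sketched below; the two boolean predicates are illustrative assumptions, not formal quantities from the paper:

```python
def perception_level(direct_signal, env_trace):
    """Classify Agent A's perception of Agent B per the three levels.

    direct_signal : True if A's input channels receive B's signals or
                    recognize B's specific identifiers (assumed predicate).
    env_trace     : True if A observes shared-environment changes or
                    traces attributable to B (assumed predicate).
    """
    if direct_signal:
        return "Direct Perception"
    if env_trace:
        return "Indirect Perception"
    return "Unaware"

print(perception_level(True, False))   # Direct Perception
print(perception_level(False, True))   # Indirect Perception
print(perception_level(False, False))  # Unaware
```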
4.2.2. Communication Relationship
4.2.3. Interaction Relationship
5. The Evolutionary Dynamics of Agent: A Mechanism Based on Standard Agent Model
5.1. Agent Capability Space
5.2. Pole Intelligent Field Model
5.3. Net Intelligent Evolution Force and Net Intelligent Evolution Field
1. Regions Tending Towards Omega Pole:
2. Regions Tending Towards Alpha Pole:
3. Equilibrium/Steady States:
- Stable Equilibrium Points (Attractors): Nearby field lines converge towards these points; Agent tends to return to such a state after minor perturbations.
- Unstable Equilibrium Points (Repellers or Saddle Points): Nearby field lines diverge from these points; once Agent deviates, it continues to move away from such a state.

The existence and stability of equilibrium states determine whether Agent's capability structure tends to solidify at a specific level or will continuously evolve.
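The attractor/repeller distinction can be checked numerically for a one-dimensional field: an equilibrium is stable when the field's local slope there is negative and unstable when it is positive. The field F(x) = x(1 − x) below is a toy example of my own, not a field from the paper:

```python
def classify_equilibrium(F, x_star, h=1e-6):
    """Classify a 1-D equilibrium of dx/dt = F(x) by the local slope of F."""
    slope = (F(x_star + h) - F(x_star - h)) / (2 * h)  # central difference
    if slope < 0:
        return "stable (attractor)"
    if slope > 0:
        return "unstable (repeller)"
    return "degenerate"

# Toy net-evolution field with equilibria at x = 0 and x = 1.
F = lambda x: x * (1 - x)

print(classify_equilibrium(F, 0.0))  # unstable (repeller): trajectories leave 0
print(classify_equilibrium(F, 1.0))  # stable (attractor): trajectories return to 1
```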
5.4. Wisdom (W): The Intrinsic Metric of Agent Evolution and Its Dynamics Equation
1. When the Matthew Effect is dominant: the proportionality constant in the wisdom dynamics equation is positive.
2. When the Resilience Effect is dominant: the proportionality constant in the wisdom dynamics equation is negative.
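If the wisdom dynamics take the simplest proportional form dW/dt = kW (an assumption; the text only states that the proportionality constant is positive in the Matthew regime and negative in the Resilience regime), the two regimes give exponential growth and decay:

```python
import math

def wisdom(W0, k, t):
    """Closed-form solution of dW/dt = k*W, namely W(t) = W0 * exp(k*t).

    k > 0: Matthew Effect dominant    -> exponential growth of wisdom.
    k < 0: Resilience Effect dominant -> exponential decay toward zero.
    (The proportional form dW/dt = k*W is an assumption made here.)
    """
    return W0 * math.exp(k * t)

print(wisdom(1.0, 0.5, 2.0) > 1.0)   # True: growth in the Matthew regime
print(wisdom(1.0, -0.5, 2.0) < 1.0)  # True: decay in the Resilience regime
```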
5.5. System Construction of Generalized Agent Theory
6. Analysis of Intelligence and Consciousness Based on Generalized Agent Theory
6.1. Definition and Assessment Methods for Intelligence
6.2. Definition and Classification of Consciousness
Self-Consciousness
- Core Definition: Agent's control-space meta-commands are not empty and originate either from meta-commands inherent in the control space or from meta-commands generated by its own Information Creation function (Cr) and integrated into the control space.
- Characteristics: Reflects Agent controlling its own information processing capabilities (information input, output, dynamic storage, and creation) by following its own control commands.
- Mathematical Criterion:
Other-Consciousness
- Core Definition: Agent's control-space meta-commands are not empty and originate from meta-commands acquired by Agent through Information Input and integrated into the Dynamic Storage space.
- Characteristics: Reflects Agent essentially controlling its own information processing capabilities (information input, output, dynamic storage, and creation) by following control commands from other Agent instances.
- Mathematical Criterion:
Mixed Consciousness
- Core Definition: Agent simultaneously possesses self-consciousness and other-consciousness.
- Characteristics: Reflects Agent operating its own information processing capabilities (information input, output, dynamic storage, and creation) by following control commands both of its own and from other Agent instances.
- Mathematical Criterion:
Unconsciousness
- Core Definition: The capability of Agent's Control function (Con) is zero, meaning the control function is completely missing or ineffective and cannot read the control-space meta-commands; or the control-space meta-commands are empty.
- Characteristics: Agent does not possess a control function, or lacks effective commands for the control function to call upon.
- Mathematical Criterion:
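The four criteria above can be collected into a single decision rule; the function below is an illustrative encoding (the argument names and the set representation of meta-commands are assumptions, since the paper's formal criteria are not reproduced in this text):

```python
def classify_consciousness(con_capability, self_cmds, other_cmds):
    """Map Control capability and meta-command origins to a consciousness class.

    con_capability : capability of the Control function (0 means absent).
    self_cmds      : meta-commands inherent to the Agent or created by Cr.
    other_cmds     : meta-commands acquired from other Agents via In.
    """
    # No control function, or no meta-commands at all -> Unconsciousness.
    if con_capability == 0 or (not self_cmds and not other_cmds):
        return "Unconsciousness"
    if self_cmds and other_cmds:
        return "Mixed Consciousness"
    if self_cmds:
        return "Self-Consciousness"
    return "Other-Consciousness"

print(classify_consciousness(1, {"plan"}, set()))     # Self-Consciousness
print(classify_consciousness(1, set(), {"obey"}))     # Other-Consciousness
print(classify_consciousness(1, {"plan"}, {"obey"}))  # Mixed Consciousness
print(classify_consciousness(0, {"plan"}, set()))     # Unconsciousness
```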
7. Discussion of Important Problems in Physics Based on Generalized Agent Theory
7.1. Argument for the Universe as a Dynamically Evolving Agent
7.2. Interpretation of Objective Reality and Subjective Non-reality
- Agent As "Subjective Non-reality":
- Agent As "Objective Reality":
7.3. Essential Interpretation of Certainty and Uncertainty
- C represents the level of uncertainty; the capitalization of the symbol is based on the conceptual term "UnCertainty".
- represents the comprehensive capability of Agent.
- represents the effective complexity of the environment, assumed to be always positive.
- δ (delta) is a positive constant that adjusts the sensitivity of uncertainty to environmental complexity relative to capability.
- Capability Impact: The stronger Agent's capability, the lower the uncertainty C, tending towards 0 (complete certainty).
- Complexity Impact: The higher the environmental complexity, the higher the uncertainty C, tending towards high values (high uncertainty).
Behavior of Boundary Agent Types:
- For Omega Agent (comprehensive capability tending to infinity): uncertainty C tends towards 0, exhibiting complete certainty.
- When Agent approaches the capability of Alpha Agent (capability tending to zero): uncertainty C grows without bound, exhibiting extremely high uncertainty.
- For a strict Alpha Agent (capability exactly zero): the formula is not directly defined at this Pole (the capability term appears in the denominator), which is consistent with the view in Generalized Agent Theory that the concepts of certainty and uncertainty themselves lose their conventional meaning at this stage.
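The limiting behaviors above are all consistent with the simplest ratio form in which the capability sits in the denominator. The expression below is a hedged reconstruction, not the paper's verbatim formula: the labels A (comprehensive capability) and E (effective environmental complexity) are mine, and the exact functional form is not recoverable from this text.

```latex
% Reconstructed uncertainty relation (assumed form):
%   C : level of uncertainty, A : comprehensive capability (A > 0),
%   E : effective environmental complexity (E > 0), \delta > 0 : sensitivity.
C(A, E) \;=\; \delta \,\frac{E}{A}
% Limits matching the text:
%   A \to \infty (Omega Agent)       \Rightarrow C \to 0       (complete certainty)
%   A \to 0^{+}  (approaching Alpha) \Rightarrow C \to \infty  (extreme uncertainty)
%   A = 0        (strict Alpha)      : undefined, since A is in the denominator
```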
7.4. Essential Analysis of Time and Space
7.5. Analysis of the Unification of Classical Mechanics, Relativity, and Quantum Mechanics
7.5.1. Capability Analysis of Observers in the Three Theories
7.5.2. Subjective Interpretation of Spacetime Curvature and Wave Function Collapse
7.5.3. Thought Experiment for Unifying the Three Theories by Adjusting Observer Capability
7.5.4. Principles for Unifying the Three Major Physical Theories Based on Generalized Agent Theory
- Principle 1: Classical Mechanics – Originating from Omniscient and Passive Observation. When Observer Agent's capability configuration approaches that of the "Omniscient Agent", i.e., possessing infinite information acquisition and processing capabilities and not intervening in the observed system, the physical model it constructs naturally adheres to the laws of classical mechanics. From the perspective of this idealized observer, all phenomena in the Universe evolve in a strictly deterministic manner, perfectly predictable against an absolute spacetime background.
- Principle 2: Relativity – Originating from Information-Acquisition-Limited Passive Observation. If Observer Agent's capability shifts to being information-input-limited, for example due to the fundamental limit imposed by the speed of light on information propagation and cognitive limitations arising from the local equivalence principle, while its dynamic storage, creation, and control capabilities remain infinite and it still does not intervene in the system, this corresponds to a special type of high-order Finite Agent in GAT. In this case, the effective physical laws will tend towards those of relativity. The description of physical phenomena will be embedded within a relativistic spacetime structure, and its deterministic evolution will strictly follow causal connections defined by light cones.
- Principle 3: Quantum Mechanics – Originating from Comprehensively Capability-Limited Interactive Observation. Further, when all five core capabilities of Observer Agent are finite, particularly when its actions can produce non-negligible effects on the observed system, thus constituting a Finite Agent that fully interacts with its environment, the physical laws described in this context will necessarily embody the core features of quantum mechanics: system states evolve as probability amplitudes, information acquisition is fundamentally limited by the uncertainty principle, and the observation (measurement) process itself is an interaction that induces changes in the system's state (the observer effect), ultimately leading to probabilistic measurement outcomes.
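The three principles amount to a dispatch on the observer's capability vector. The sketch below encodes capabilities as 0/1/2 (zero/finite/infinite) and treats the Out capability as the "intervention" channel; both the encoding and that identification are illustrative assumptions, not the paper's formalism:

```python
def physical_regime(cap):
    """Map an observer capability vector to the effective physical theory.

    cap is a dict over the five capabilities with values 0/1/2
    (zero/finite/infinite); Out == 0 stands in for "does not intervene"
    (an illustrative assumption).
    """
    passive = cap["Out"] == 0
    if passive and all(cap[c] == 2 for c in ("In", "DS", "Cr", "Con")):
        return "classical mechanics"   # Principle 1: omniscient, passive
    if passive and cap["In"] == 1 and all(cap[c] == 2 for c in ("DS", "Cr", "Con")):
        return "relativity"            # Principle 2: input-limited, passive
    if all(cap[c] == 1 for c in ("In", "DS", "Cr", "Out", "Con")):
        return "quantum mechanics"     # Principle 3: fully finite, interactive
    return "intermediate regime"

print(physical_regime({"In": 2, "DS": 2, "Cr": 2, "Out": 0, "Con": 2}))  # classical mechanics
print(physical_regime({"In": 1, "DS": 2, "Cr": 2, "Out": 0, "Con": 2}))  # relativity
print(physical_regime({"In": 1, "DS": 1, "Cr": 1, "Out": 1, "Con": 1}))  # quantum mechanics
```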
7.6. Unified Interpretation of the Origin of Entropy and Observer-Dependence
- represents the entropy within the framework of Generalized Agent Theory (GAT);
- is the set of all possible microstates of the system;
- is the subjective probability distribution assigned by the observer to the system being in a given microstate when a given macrostate is observed, this distribution being determined by the observer's capability vector;
- is the number of microstates indistinguishable by the observer in a given macrostate, a larger value indicating a greater information deficit.
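Under the definitions above, the observer-relative entropy has the Shannon form over the observer's subjective distribution. The sketch below (with Boltzmann's constant set to 1, and `gat_entropy` a name of my own) checks that an observer who cannot distinguish Ω microstates, and therefore assigns them uniform probability, obtains entropy ln Ω, while a more discriminating observer obtains less:

```python
import math

def gat_entropy(p):
    """Observer-relative entropy S = -sum_i p_i * ln(p_i) over the subjective
    distribution p the observer assigns to the microstates (k_B = 1)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# An observer who cannot distinguish omega microstates within a macrostate
# assigns a uniform subjective distribution, giving S = ln(omega):
omega = 8
uniform = [1 / omega] * omega
print(abs(gat_entropy(uniform) - math.log(omega)) < 1e-12)  # True

# A more capable observer concentrating probability has a smaller deficit:
sharp = [0.97] + [0.03 / (omega - 1)] * (omega - 1)
print(gat_entropy(sharp) < gat_entropy(uniform))            # True
```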
8. Conclusion
Acknowledgments
References
- Agrawal, U., et al. (2024). Observing quantum measurement collapse as a learnability phase transition. Physical Review X, 14(4), 041012. [CrossRef]
- Mogi, K. (2024). Artificial intelligence, human cognition, and conscious supremacy. Frontiers in Psychology, 15, 1364714.
- Feng, L., et al. (2024). From observer to agent: On the unification of physics and intelligence science. Preprints, 2024100479.
- Edwards, D. J. (2025). Further N-frame networking dynamics of conscious observers in quantum cognitive systems. Frontiers in Computational Neuroscience, 19, 1551960.
- Liu, F., & Shi, Y. (2014). The search engine IQ test based on the internet IQ evaluation algorithm. Procedia Computer Science, 31, 1066-1073.
- Liu, F., Shi, Y., & Liu, Y. (2017). Intelligence quotient and intelligence grade of artificial intelligence. Annals of Data Science, 4, 179-191.
- Liu, F., & Shi, Y. (2020). Investigating Laws of Intelligence Based on AI IQ Research. Annals of Data Science, 7(3), 399-416. [CrossRef]
- Feng, L., Lv, B., & Liu, Y. (2025). Agent: A New Paradigm for Fundamental Units of the Universe. Preprints. [CrossRef]
- Nwana, H. S. (1996). Software agents: An overview. The Knowledge Engineering Review, 11(3), 205-244.
- Russell, S. J., & Norvig, P. (2021). Artificial Intelligence: A Modern Approach (4th ed.). Pearson.
- Wooldridge, M., & Jennings, N. R. (1995). Intelligent agents: Theory and practice. The Knowledge Engineering Review, 10(2), 115-152. [CrossRef]
- Park, J. S., O'Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2023). Generative Agents: Interactive Simulacra of Human Behavior. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST '23).
- Wooldridge, M. (2009). An Introduction to Multiagent Systems (2nd ed.). John Wiley & Sons.
- Shannon, C. E. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27(3), 379-423.
- Turing, A. M. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, s2-42(1), 230-265.
- Wiener, N. (1948). Cybernetics: Or Control and Communication in the Animal and the Machine. MIT Press.
- Ackoff, R. L. (1989). From data to wisdom. Journal of Applied Systems Analysis, 16(1), 3-9.
- Newell, A. (1982). The knowledge level. Artificial Intelligence, 18(1), 87-127.
- Boden, M. A. (2004). The Creative Mind: Myths and Mechanisms (2nd ed.). Routledge.
- Szathmáry, E., & Smith, J. M. (1995). The major transitions in evolution. Nature, 374(6519), 227-232.
- Lynch, M. (2007). The frailty of adaptive hypotheses for the origins of organismal complexity. Proceedings of the National Academy of Sciences, 104(suppl_1), 8597-8604. [CrossRef]
- Anderson, J. R. (1982). Acquisition of cognitive skill. Psychological Review, 89(4), 369–406.
- Wixted, J. T. (2004). The psychology and neuroscience of forgetting. Annual Review of Psychology, 55, 235-269. [CrossRef]
- Kaplan, J., et al. (2020). Scaling Laws for Neural Language Models. arXiv preprint arXiv:2001.08361.
- Zhao, W. X., et al. (2023). A survey of large language models. arXiv preprint arXiv:2303.18223.
- Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200-219.
- Tononi, G. (2008). Consciousness as integrated information: a provisional manifesto. The Biological Bulletin, 215(3), 216-242. [CrossRef]
- Dehaene, S., & Changeux, J. P. (2011). Experimental and theoretical approaches to conscious processing. Neuron, 70(2), 200-227. [CrossRef]
- Seth, A. K., & Bayne, T. (2022). Theories of consciousness. Nature Reviews Neuroscience, 23(7), 439-452.
- Powers, W. T. (1973). Behavior: The control of perception. Psychological Review, 80(5), 303–322.
- Dehaene, S., & Naccache, L. (2001). Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework. Cognition, 79(1-2), 1-37.
- Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21), 610–623.
- Shanahan, M. (2022). Talking about large language models. arXiv preprint arXiv:2212.03551.
- d'Espagnat, B. (1979). The quantum theory and reality. Scientific American, 241(5), 158-181. [CrossRef]
- Hoefer, C. (2016). Causal Determinism. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2016 Edition).
- Maudlin, T. (2012). Philosophy of Physics: Space and Time. Princeton University Press.
- Laplace, P. S. (1951). A Philosophical Essay on Probabilities (F. W. Truscott & F. L. Emory, Trans.). Dover. (Original work published 1814). [CrossRef]
- Einstein, A. (1916). The foundation of the general theory of relativity. Annalen der Physik, 354(7), 769-822.
- Heisenberg, W. (1930). The Physical Principles of the Quantum Theory (C. Eckart & F. C. Hoyt, Trans.). University of Chicago Press.
- Von Neumann, J. (1955). Mathematical Foundations of Quantum Mechanics (R. T. Beyer, Trans.). Princeton University Press. (Original work published 1932).
- Dirac, P. A. M. (1930). The Principles of Quantum Mechanics. Oxford University Press.
- Sklar, L. (1992). Philosophy of Physics. Westview Press.
- Fuchs, C. A., & Schack, R. (2013). Quantum-Bayesian coherence. Reviews of Modern Physics, 85(4), 1693–1715. [CrossRef]
- Boltzmann, L. (1877). Über die Beziehung zwischen dem zweiten Hauptsatze der mechanischen Wärmetheorie und der Wahrscheinlichkeitsrechnung, respective den Sätzen über das Wärmegleichgewicht. Wiener Berichte, 76, 373-435.
- Jaynes, E. T. (1957). Information theory and statistical mechanics. Physical Review, 106(4), 620–630. [CrossRef]
- Bennett, C. H. (1982). The thermodynamics of computation—a review. International Journal of Theoretical Physics, 21(12), 905-940.




| Agent Type | In | DS | Cr | Out | Con | Intelligence Level |
| --- | --- | --- | --- | --- | --- | --- |
| Calculator |  |  |  |  |  |  |
| Intelligent Car |  |  |  |  |  |  |
| Adult Human |  |  |  |  |  |  |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).