1. Introduction
Understanding how macroscopic physical laws emerge from microscopic interactions is a central challenge in modern physics. In this document, we introduce an agent-based framework where each agent possesses a local state and adheres to decision rules governed by intrinsic stochasticity and local interactions with its neighbors. This formalism is built upon several key axioms:
Locality: The future state of any agent depends solely on its own present state and that of its immediate neighbors.
Internal Persistence: Certain intrinsic properties of agents remain invariant over time, ensuring the preservation of identity.
Minimal Consensus: The collective evolution of the system corresponds to the minimization of a global cost functional, mirroring the principle of least action.
Strategic Indeterminacy: The stochastic nature of decision making introduces inherent uncertainty, analogous to mixed strategies in game theory.
Temporal Compatibility and Historical Optimization: The framework guarantees continuously differentiable evolution, accounting for memory effects in the agents’ decision processes.
Equitable Exchange: Symmetry in interactions ensures the conservation of global invariants in accordance with Noether’s theorem.
Drawing inspiration from both classical variational principles and strategic decision-making models found in game theory, the proposed framework demonstrates how simple local rules can lead to the emergence of complex global phenomena. Through variational calculus, the derivation of Euler–Lagrange equations and conservation laws is achieved, showing that the macroscopic dynamics obey well-known physical laws, such as energy conservation and wave propagation, while also capturing the probabilistic aspects inherent to microscopic dynamics.
The remainder of the document details the axiomatic foundations, mathematical derivations, and examples illustrating the transition from local agent-based decisions to global physical behavior. This synthesis offers a perspective that unites elements of classical physics, stochastic processes, and game theory, thereby contributing to a deeper understanding of emergent phenomena in complex systems.
Preliminaries and Global Setup
Let $\mathcal{A}$ be a countable set of fundamental agents, indexed by $a \in \mathcal{A}$. For each agent $a$, we define:
A state space $S_a$, equipped with its Borel $\sigma$-algebra $\mathcal{B}(S_a)$.
A local decision kernel
$$K_a : S_a \times S_{N(a)} \to \mathcal{P}(S_a),$$
where $S_{N(a)} = \prod_{b \in N(a)} S_b$ and $\mathcal{P}(S_a)$ denotes the set of Borel probability measures on $S_a$.
A finite neighborhood $N(a) \subset \mathcal{A}$, determined by a hypergraph $H = (\mathcal{A}, E)$, where each hyperedge $e \in E$ is a finite subset of $\mathcal{A}$.
The global configuration space is defined as:
$$S = \prod_{a \in \mathcal{A}} S_a,$$
and a configuration at time $t$ is represented by $s(t) = \big(s_a(t)\big)_{a \in \mathcal{A}} \in S$.
The evolution of the system is governed by the local decision kernels $K_a$, inducing a joint Markov process on $S$.
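To make the role of the decision kernels concrete, the following sketch implements one synchronous update of the joint process for agents on a ring, assuming a Gaussian kernel that relaxes toward the neighborhood mean; the kernel form, the ring topology, and the parameter values are illustrative choices, not part of the formalism.

```python
import numpy as np

rng = np.random.default_rng(0)

def decision_kernel(s_self, s_neighbors, coupling=0.5, noise=0.1):
    """Sample the next state of one agent from a toy local kernel K_a.

    Illustrative choice: relax toward the neighborhood mean and add
    Gaussian noise (any Borel probability measure on S_a would do)."""
    target = (1 - coupling) * s_self + coupling * np.mean(s_neighbors)
    return rng.normal(loc=target, scale=noise)

def step(state):
    """One synchronous update of the joint Markov process on S = prod_a S_a.
    Agents sit on a ring, so N(a) = {a-1, a+1}."""
    n = len(state)
    return np.array([
        decision_kernel(state[a], [state[(a - 1) % n], state[(a + 1) % n]])
        for a in range(n)
    ])

state = rng.normal(size=20)           # initial configuration s(0)
for _ in range(100):
    state = step(state)               # s(t) -> s(t+1)
print(state.std())                    # local averaging shrinks the dispersion
```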
Axiom I: Locality (Causal Decoupling)
The stochastic process $\big(s(t)\big)_{t \ge 0}$ is said to satisfy the locality condition if, for every agent $a \in \mathcal{A}$ and for every measurable set $B \in \mathcal{B}(S_a)$, it holds that:
$$\mathbb{P}\big(s_a(t+1) \in B \,\big|\, \mathcal{F}_t\big) = \mathbb{P}\big(s_a(t+1) \in B \,\big|\, s_a(t),\, s_{N(a)}(t)\big),$$
where the filtration $(\mathcal{F}_t)_{t \ge 0}$ is defined by
$$\mathcal{F}_t = \sigma\big( s_b(u) : b \in \mathcal{A},\ u \le t \big).$$
This axiom enforces that the future state of any given agent depends solely on its current state and the states of its local neighbors.
Axiom II: Internal Persistence (Invariant Quantities)
For each agent $a$, there exists a measurable function $Q_a : S_a \to \mathbb{R}$ such that for all time steps $t$,
$$Q_a\big(s_a(t+1)\big) = Q_a\big(s_a(t)\big).$$
This property guarantees the conservation of intrinsic quantities (such as mass, charge, or other topological invariants). The equivalence classes defined by the level sets $Q_a^{-1}(q)$ represent the persistence of agent identity over time.
Axiom III: Minimal Consensus (Emergent Action)
Every agent $a$ is associated with a local cost functional $C_a\big(s_a, \dot{s}_a, s_{N(a)}\big)$. The global action over a trajectory $s : [t_0, t_1] \to S$ is defined as:
$$\mathcal{S}[s] = \int_{t_0}^{t_1} \sum_{a \in \mathcal{A}} C_a\big(s_a(t), \dot{s}_a(t), s_{N(a)}(t)\big)\,dt.$$
Principle of Minimal Consensus. The system is postulated to evolve along the trajectory $s^*$ that minimizes the global action:
$$s^* = \underset{s \in \Gamma}{\arg\min}\ \mathcal{S}[s],$$
with $\Gamma$ denoting the set of admissible (differentiable) trajectories. This is the emergent analog of the classical principle of least action, where local decision optimizations collectively induce global dynamics.
Axiom IV: Strategic Indeterminacy (Probabilistic Dynamics)
Let $X_a$ and $P_a$ be square-integrable random observables associated with agent $a$. There exists a constant $\kappa_a > 0$ such that:
$$\operatorname{Var}(X_a)\,\operatorname{Var}(P_a) \;\ge\; \kappa_a^2.$$
This inequality emerges naturally from the stochastic structure of the decision kernel $K_a$, reflecting the intrinsic limitations on the simultaneous predictability of dual quantities associated with each agent.
Axiom V: Temporal Compatibility (Differentiability of Evolution)
Assume that the global configuration $s(t)$ is continuously differentiable, i.e.,
$$s \in C^1\big([t_0, t_1],\, S\big).$$
In particular, for each agent $a$, if an invariant observable $Q_a$ is defined (see Axiom II), then
$$\frac{d}{dt}\,Q_a\big(s_a(t)\big) = 0.$$
This condition ensures that the microscopic evolution of agents is temporally coherent, preserving the invariance properties necessary for the emergence of conserved physical quantities at a macroscopic level.
Axiom VI: Equitable Exchange (Symmetry of Interactions)
Let $G$ be a Lie group acting on the product state space $S_a \times S_b$ for any interacting pair $(a, b)$, with associated action:
$$\Phi_g : S_a \times S_b \to S_a \times S_b, \qquad g \in G.$$
For any $g \in G$, the transformation is given by
$$(s_a, s_b) \;\mapsto\; \Phi_g(s_a, s_b),$$
and we impose the conservation condition
$$I\big(\Phi_g(s_a, s_b)\big) = I(s_a, s_b),$$
where $I$ denotes an invariant scalar (which might represent energy, momentum, or a similar quantity). This axiom guarantees that agent interactions are governed by symmetries, leading to conserved quantities in accordance with Noether's theorem.
Axiom VII: Historical Optimization (Memory Effects)
Let $\tau > 0$ be a fixed memory timescale. The stochastic decision process of an agent is said to exhibit a memory effect if the update for agent $a$ depends on its finite history:
$$s_a(t+1) \;\sim\; K_a\big(\cdot \,\big|\, s_a(t), s_a(t-1), \dots, s_a(t-\tau),\ s_{N(a)}(t)\big).$$
This modification to the decision kernel introduces non-Markovian behavior, enabling past states to influence present decisions. Such memory effects are crucial in capturing the delayed responses observed in complex systems and in driving the temporal evolution towards an emergent global consensus.
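A minimal sketch of such a history-dependent update, assuming an exponentially weighted average over a window of length $\tau$ and Gaussian noise (both choices are illustrative, not prescribed by the axiom):

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)

def memory_kernel(history, s_neighbors, gamma=0.7, coupling=0.4, noise=0.05):
    """Non-Markovian update: the drift target mixes an exponentially weighted
    average of the agent's own history (window tau = len(history)) with the
    current neighborhood mean, then adds Gaussian noise."""
    weights = np.array([gamma ** k for k in range(len(history))])[::-1]
    own = np.dot(weights, np.array(history)) / weights.sum()
    target = (1 - coupling) * own + coupling * np.mean(s_neighbors)
    return rng.normal(target, noise)

tau, n = 5, 10
histories = [deque([rng.normal()] * tau, maxlen=tau) for _ in range(n)]
for t in range(200):
    current = [h[-1] for h in histories]          # states at time t
    for a in range(n):                            # ring neighborhood N(a)
        nbrs = [current[(a - 1) % n], current[(a + 1) % n]]
        histories[a].append(memory_kernel(histories[a], nbrs))
print([round(h[-1], 3) for h in histories])       # states drift toward consensus
```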
Global Setup: Agent-Based Configuration Space
We consider a set of fundamental agents $\mathcal{A}$, each with a local state $s_a(t) \in S_a$, evolving over time. The global configuration space is:
$$S = \prod_{a \in \mathcal{A}} S_a.$$
We aim to derive the Euler–Lagrange equations from first principles using the emergent axioms defined in the formalism.
Axiomatically Building the Action Functional
Local Dependencies (Axiom I: Locality)
Each agent’s evolution depends only on its own state and its neighborhood:
$$s_a(t+1) \;\sim\; K_a\big(\cdot \,\big|\, s_a(t),\, s_{N(a)}(t)\big).$$
Implication: The dynamics are locally constrained, which allows the definition of a local cost functional $C_a\big(s_a, \dot{s}_a, s_{N(a)}\big)$.
Existence of Conserved Quantities (Axiom II: Internal Persistence)
Each agent has a measurable invariant $Q_a : S_a \to \mathbb{R}$ such that:
$$\frac{d}{dt}\,Q_a\big(s_a(t)\big) = 0.$$
Implication: Agent identity and intrinsic quantities are preserved through time. These invariants will later be associated with conserved currents via Noether’s theorem.
Global Objective (Axiom III: Minimal Consensus / Least Action)
The global trajectory $s^*(t)$ is the one that minimizes the total action:
$$\mathcal{S}[s] = \int_{t_0}^{t_1} \sum_{a \in \mathcal{A}} C_a\big(s_a(t), \dot{s}_a(t), s_{N(a)}(t)\big)\,dt.$$
Interpretation: This is the emergent analogue of Hamilton’s principle, where agents collectively minimize the sum of their local costs.
Differentiability of Trajectories (Axiom V: Temporal Compatibility)
We assume that:
$$s_a(\cdot) \in C^1\big([t_0, t_1],\, S_a\big) \quad \text{for every agent } a,$$
which allows us to perform variational calculus.
Implication: The action functional becomes differentiable, and the calculus of variations is valid.
Defining the Emergent Lagrangian
We define a local Lagrangian for each agent using:
$$L_a\big(s_a, \dot{s}_a, s_{N(a)}\big) = T_a\big(\dot{s}_a\big) - V_a\big(s_a, s_{N(a)}\big),$$
where:
$T_a$ is a kinetic energy term (e.g., $T_a = \tfrac{1}{2} m_a \dot{s}_a^2$),
$V_a$ is a local potential, possibly including interactions with the neighbors in $N(a)$.
By Axiom I, this Lagrangian depends only on the local state and that of the neighbors.
The global Lagrangian becomes:
$$L = \sum_{a \in \mathcal{A}} L_a\big(s_a, \dot{s}_a, s_{N(a)}\big).$$
Hence, the action functional is written as:
$$\mathcal{S}[s] = \int_{t_0}^{t_1} L\big(s(t), \dot{s}(t)\big)\,dt.$$
Principle of Least Action: Variational Derivation
Let $s_a(t) \to s_a(t) + \epsilon\,\eta_a(t)$, with $\eta_a(t_0) = \eta_a(t_1) = 0$. Then:
$$\mathcal{S}[s + \epsilon\eta] = \int_{t_0}^{t_1} \sum_{a} L_a\big(s_a + \epsilon\eta_a,\ \dot{s}_a + \epsilon\dot{\eta}_a,\ s_{N(a)} + \epsilon\eta_{N(a)}\big)\,dt.$$
Expanding to first order:
$$\delta\mathcal{S} = \epsilon \int_{t_0}^{t_1} \sum_{a} \left( \frac{\partial L}{\partial s_a}\,\eta_a + \frac{\partial L}{\partial \dot{s}_a}\,\dot{\eta}_a \right) dt.$$
Integrating the second term by parts (using Axiom V):
$$\int_{t_0}^{t_1} \frac{\partial L}{\partial \dot{s}_a}\,\dot{\eta}_a\,dt = -\int_{t_0}^{t_1} \frac{d}{dt}\frac{\partial L}{\partial \dot{s}_a}\,\eta_a\,dt,$$
since boundary terms vanish. For $\delta\mathcal{S} = 0$ under arbitrary variations $\eta_a$, we obtain the Euler–Lagrange equations:
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{s}_a} - \frac{\partial L}{\partial s_a} = 0, \qquad a \in \mathcal{A}.$$
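As a quick consistency check, the derivation can be reproduced symbolically for a single agent with a quadratic potential; the sketch below uses SymPy's euler_equations helper, and the specific Lagrangian $L = \tfrac12 m\dot{s}^2 - \tfrac12 k s^2$ is chosen purely as an example.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
s = sp.Function('s')

# Single-agent Lagrangian L = (1/2) m sdot^2 - (1/2) k s^2 (illustrative choice)
L = sp.Rational(1, 2) * m * s(t).diff(t)**2 - sp.Rational(1, 2) * k * s(t)**2

# Euler-Lagrange equation d/dt(dL/dsdot) - dL/ds = 0,
# which should be equivalent to m s'' + k s = 0
print(euler_equations(L, s(t), t))
```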
Noether’s Theorem from First Principles: Axiomatic Derivation
We present an explicit derivation of Noether’s Theorem using only the axioms introduced in this framework. The goal is to show that every continuous symmetry of the emergent Lagrangian gives rise to a conserved quantity.
Differentiable Agent Evolution (Axiom V)
By Axiom V (Temporal Compatibility), the global configuration $s(t)$ is assumed to be continuously differentiable:
$$s \in C^1\big([t_0, t_1],\, S\big).$$
This allows us to apply variational calculus to derive the equations of motion.
Variational Principle (Axioms I, III)
According to Axiom III (Minimal Consensus), the system evolves along a trajectory $s^*(t)$ that minimizes the global action:
$$\mathcal{S}[s] = \int_{t_0}^{t_1} L\big(s(t), \dot{s}(t), t\big)\,dt,$$
where the global Lagrangian $L$ is defined as the sum of local Lagrangians:
$$L = \sum_{a \in \mathcal{A}} L_a\big(s_a, \dot{s}_a, s_{N(a)}\big).$$
By Axiom I (Locality), each local Lagrangian depends only on the state of agent $a$ and its neighbors $N(a)$.
Variational Derivative
Consider an infinitesimal variation $s_a(t) \to s_a(t) + \epsilon\,\eta_a(t)$ with $\eta_a(t_0) = \eta_a(t_1) = 0$. Then:
$$\delta\mathcal{S} = \epsilon\int_{t_0}^{t_1} \sum_a \left( \frac{\partial L}{\partial s_a}\,\eta_a + \frac{\partial L}{\partial \dot{s}_a}\,\dot{\eta}_a \right) dt.$$
Integrating the second term by parts and using the vanishing boundary condition:
$$\delta\mathcal{S} = \epsilon\int_{t_0}^{t_1} \sum_a \left( \frac{\partial L}{\partial s_a} - \frac{d}{dt}\frac{\partial L}{\partial \dot{s}_a} \right)\eta_a\, dt.$$
Since $\eta_a$ is arbitrary, we obtain the Euler–Lagrange equations:
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{s}_a} - \frac{\partial L}{\partial s_a} = 0.$$
Continuous Symmetry (Axiom VI)
Axiom VI (Equitable Exchange) states that for each interacting pair $(a, b)$, there exists a Lie group $G$ acting on $S_a \times S_b$ such that the interaction is symmetric under the transformation:
$$\Phi_g : S_a \times S_b \to S_a \times S_b, \qquad (s_a, s_b) \mapsto \Phi_g(s_a, s_b), \quad g \in G,$$
with the invariance condition:
$$I\big(\Phi_g(s_a, s_b)\big) = I(s_a, s_b).$$
Suppose the action of $G$ extends to the full configuration space $S$, with
$$s(t) \;\to\; s^{\epsilon}(t) = \Phi_{g(\epsilon)}\big(s(t)\big)$$
for a smooth path $g(\epsilon) \in G$ and $g(0) = e$ (the identity), such that:
$$L\big(s^{\epsilon}(t), \dot{s}^{\epsilon}(t), t\big) = L\big(s(t), \dot{s}(t), t\big) \quad \text{for all } \epsilon.$$
Conservation Law
Since the action is invariant under the transformation:
$$\frac{d}{d\epsilon}\,\mathcal{S}[s^{\epsilon}]\Big|_{\epsilon = 0} = 0.$$
Using the chain rule and the previous results (the Euler–Lagrange equations), we obtain:
$$0 = \frac{d}{d\epsilon} L\Big|_{\epsilon=0} = \sum_a \left( \frac{\partial L}{\partial s_a}\,\frac{\partial s_a^{\epsilon}}{\partial \epsilon} + \frac{\partial L}{\partial \dot{s}_a}\,\frac{\partial \dot{s}_a^{\epsilon}}{\partial \epsilon} \right)\Bigg|_{\epsilon=0} = \frac{d}{dt}\left( \sum_a \frac{\partial L}{\partial \dot{s}_a}\,\frac{\partial s_a^{\epsilon}}{\partial \epsilon}\Bigg|_{\epsilon=0} \right).$$
Define the Noether current:
$$J(t) = \sum_a \frac{\partial L}{\partial \dot{s}_a}\,\frac{\partial s_a^{\epsilon}}{\partial \epsilon}\Bigg|_{\epsilon=0},$$
which therefore satisfies $\dfrac{dJ}{dt} = 0$.
Internal Quantities (Axiom II)
Axiom II (Internal Persistence) posits the existence of invariants $Q_a : S_a \to \mathbb{R}$ such that:
$$\frac{d}{dt}\,Q_a\big(s_a(t)\big) = 0.$$
These invariants can often be directly associated with the symmetries of the Lagrangian. Thus, the conserved current $J(t)$ is functionally linked to the invariants $Q_a$, confirming the conservation law derived above.
The derivation above shows that:
Every continuous symmetry of the emergent Lagrangian (Axiom VI), under differentiable dynamics (Axiom V) and locality (Axiom I), gives rise to a conserved quantity (Axiom II), in accordance with Noether’s Theorem.
This demonstrates that conservation laws emerge directly from the axiomatic structure of local agent-based dynamics.
Role of Symmetries and Conservation Laws (Axiom VI)
Axiom VI (Equitable Exchange) postulates that interactions are symmetric under Lie group transformations $\Phi_g$, $g \in G$, and that some quantity $I$ is preserved:
$$I\big(\Phi_g(s_a, s_b)\big) = I(s_a, s_b).$$
If the Lagrangian is invariant under a continuous group $G$, then Noether’s theorem guarantees the existence of a conserved quantity:
$$J(t) = \sum_a \frac{\partial L}{\partial \dot{s}_a}\,\delta s_a, \qquad \frac{dJ}{dt} = 0,$$
where $\delta s_a$ denotes the infinitesimal change of $s_a$ under the group action. This connects Axioms II and VI directly with the conservation laws emerging from the variational structure.
Renormalization Group Flow and Emergent Constants
Microscopic dynamics are inherently dependent on local decisions and fluctuations. However, when these dynamics are averaged over many agents and over appropriate scales, the system may flow under a renormalization group (RG) transformation. Let the effective action at scale $\ell$ be denoted by $S_\ell$, with couplings $g(\ell)$. The RG flow is characterized by a beta function:
$$\beta(g) = \ell\,\frac{dg}{d\ell}.$$
The existence of fixed points $g^*$ such that $\beta(g^*) = 0$ signals the emergence of scale-invariant behavior and the appearance of physical constants as emergent parameters. In our framework:
The fixed point corresponds to a stable collective behavior of the agents.
Couplings in $S_\ell$ (e.g., effective masses, interaction strengths) converge to constant values.
The quantization via the path-integral formalism provides a bridge between the microscopic stochastic dynamics of individual agents and the macroscopic quantum-coherent behavior observed in physical systems. The renormalization process further explains how intrinsic fluctuations at the agent level, when averaged appropriately, lead to effective constants and conservation laws consistent with classical and quantum physics. This synthesis suggests that traditional parameters of physics may emerge naturally from underlying micro-dynamics governed by local decision rules.
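The following toy computation illustrates the idea of flowing to a fixed point; the cubic beta function and its coefficients are an illustrative assumption, not a result of the framework.

```python
import numpy as np

def beta(g, a=1.0, b=0.5):
    """Toy beta function beta(g) = a*g - b*g**3 (illustrative choice only;
    the framework does not prescribe a specific form)."""
    return a * g - b * g**3

# Integrate the flow dg/d(log ell) = beta(g) from a microscopic coupling.
g, dlog_ell = 0.1, 0.01
for _ in range(5000):
    g += dlog_ell * beta(g)

print(g)                    # the coupling flows toward the nontrivial fixed point
print(np.sqrt(1.0 / 0.5))   # analytic fixed point g* with beta(g*) = 0
```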
Example: Discrete Model on a Two-Dimensional Lattice
Let the agents be arranged on a regular lattice indexed by $(i, j) \in \mathbb{Z}^2$ with grid spacing $a$. Each agent has state $s_{i,j}(t) \in \mathbb{R}$. The local update rule is defined by:
$$s_{i,j}(t + \Delta t) = s_{i,j}(t) + \lambda \left[ \frac{s_{i+1,j}(t) + s_{i-1,j}(t) + s_{i,j+1}(t) + s_{i,j-1}(t)}{4} - s_{i,j}(t) \right],$$
where $\lambda \in (0, 1]$ is a relaxation parameter setting the strength of the local averaging. This update rule expresses that each agent adjusts its state according to the difference between its value and the average of its four neighbors.
Continuous Limit and the Diffusion Equation
Taking the limit $a \to 0$ (and $\Delta t \to 0$ in a compatible manner, with the ratio $D = \lambda a^2 / (4\,\Delta t)$ held fixed) leads to a continuous scalar field $u(x, y, t)$ defined on a domain in $\mathbb{R}^2$. In this limit, the discrete Laplacian
$$\Delta_a s_{i,j} = \frac{s_{i+1,j} + s_{i-1,j} + s_{i,j+1} + s_{i,j-1} - 4\,s_{i,j}}{a^2}$$
converges to the continuous Laplacian $\nabla^2 u = \partial_x^2 u + \partial_y^2 u$. Thus, the evolution equation becomes the standard diffusion (heat) equation:
$$\frac{\partial u}{\partial t} = D\,\nabla^2 u.$$
This example demonstrates the emergence of macroscopic diffusive dynamics from simple local interactions among agents arranged on a two-dimensional lattice. The process can be interpreted in the emergent framework as follows:
Individual agent updates, governed by local averaging rules, reduce local differences (gradients) in the state field.
The gradient descent (or energy minimization) approach leads naturally to a diffusion-like smoothing effect.
In the continuum limit, the dynamics converge to the diffusion (heat) equation, illustrating how local rules give rise to global macroscopic behavior.
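A short numerical sketch of the lattice rule above (periodic boundaries and the value of $\lambda$ are convenience choices) shows both features: the total of the field is conserved by the local averaging, while its dispersion decays as the dynamics smooth out gradients.

```python
import numpy as np

rng = np.random.default_rng(2)

def lattice_step(s, lam=0.2):
    """One synchronous update: each site moves a fraction lam toward the
    average of its four neighbors (periodic boundaries for simplicity)."""
    nbr_avg = 0.25 * (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
                      np.roll(s, 1, 1) + np.roll(s, -1, 1))
    return s + lam * (nbr_avg - s)

s = rng.normal(size=(64, 64))         # rough initial field s_{i,j}(0)
total0 = s.sum()
for _ in range(500):
    s = lattice_step(s)

print(abs(s.sum() - total0) < 1e-6)   # local averaging conserves the total
print(s.std())                        # gradients are smoothed out (diffusion)
```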
Game Theory
Modern physics has long sought to understand how complex global phenomena emerge from simple local interactions. A promising approach lies in the use of an axiomatic framework where agents—each possessing their own state and decision-making rule—interact on a local level while collectively giving rise to macroscopic laws. Simultaneously, game theory has provided powerful insights into the behavior of strategic players, whose local decisions based on limited information lead to emergent equilibria in complex systems.
In the framework presented here, the agents (or “players”) evolve according to a set of axioms. These axioms not only guarantee that the emergent dynamics respect global conservation laws and symmetries but also embody principles familiar in game theory. This document aims to provide a deep and extended analysis of each axiom from both the emergent physical laws perspective and its game-theoretic interpretation. The result is a narrative that connects deterministic variational principles with the stochastic, adaptive dynamics found in strategic games.
Internal Persistence: Conservation of Strategic Identity
The Internal Persistence axiom posits that specific intrinsic quantities associated with each agent are invariant over time. In physical systems, such invariants may represent conserved quantities such as energy, charge, or momentum. In game theory, this invariance mirrors the preservation of a player’s identity or core strategic properties across multiple rounds of a game.
For instance, in iterated games where players adjust their strategies over time, certain underlying preferences or dispositions may remain unchanged even as players adapt to the environment. These underlying strategic invariants ensure continuity; they provide a stable reference point that allows the overall system to evolve without degenerating into randomness. Internal persistence, therefore, guarantees that while agents may vary their external actions (or strategies) in response to local conditions, their fundamental characteristics are preserved, ensuring that the conservation laws observed at the macroscopic level have a micro-level origin.
From a game-theoretic perspective, such invariants serve as a baseline for strategy evaluation. They ensure that players are not entirely mutable but instead retain a consistent identity that influences their interactions and learning processes over time. This stable identity is essential for the emergence of long-term equilibria, just as the conservation of energy or momentum is key to the predictability of physical phenomena.
Minimal Consensus: Toward a Global Equilibrium
The Minimal Consensus axiom embodies the principle of least action, whereby the global trajectory of the system corresponds to the minimization of a total action functional. This total action, an aggregation of local cost functionals over time, plays a role analogous to a potential function in a game. In strategic terms, each agent is modeled as a player aiming to minimize its own cost or maximize its payoff, and the aggregation of these objectives leads naturally to the emergence of an equilibrium state.
In many games, especially in the context of potential games, the players’ local objectives can be combined into a single global potential function. An equilibrium—often a Nash equilibrium—is reached when no agent can unilaterally reduce its cost any further. Here, the global minimum of the action functional represents a state in which each agent’s local decision is optimally coordinated with the decisions of others. In this state of minimal consensus, no single agent finds it beneficial to deviate from its current strategy.
This convergence toward a minimal global action illustrates how local optimization, when consistently applied across all agents, leads to macroscopic order. It highlights the interplay between local decision-making and global dynamics, emphasizing that the collective behavior of the system is not imposed externally but arises naturally from the self-organizing interactions of its constituents.
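The sketch below illustrates this mechanism in a simple exact potential game on a ring, where each player's cost combines a private preference term with a disagreement penalty against its neighbors; the specific cost form and parameters are illustrative assumptions. Asynchronous best-response updates monotonically decrease the global potential until no player benefits from deviating.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 12
theta = rng.normal(size=n)            # each player's fixed private preference
x = rng.normal(size=n)                # current strategy profile

def potential(x):
    """Exact potential: private terms plus each ring edge counted once,
    matching C_a(x) = (x_a - theta_a)^2 + 0.5 * sum_{b in N(a)} (x_a - x_b)^2."""
    edges = 0.5 * np.sum((x - np.roll(x, -1))**2)
    return np.sum((x - theta)**2) + edges

def best_response(a, x):
    """Minimizer of C_a over x_a with the neighbors held fixed."""
    return (2 * theta[a] + x[(a - 1) % n] + x[(a + 1) % n]) / 4.0

print("initial potential:", round(potential(x), 4))
for sweep in range(50):               # asynchronous best-response dynamics
    for a in range(n):
        x[a] = best_response(a, x)
print("final potential:  ", round(potential(x), 4))   # approximate Nash equilibrium
```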
Strategic Indeterminacy: Embracing Uncertainty
The Strategic Indeterminacy axiom captures the intrinsic uncertainty in the decision-making process by introducing probabilistic elements into the evolution of each agent. Analogous to mixed strategies in game theory, this axiom recognizes that in many real-world scenarios, decisions are made under conditions of uncertainty. Rather than following a single deterministic path, agents sample from a distribution of potential actions.
In game theory, mixed strategies allow players to randomize their choices in order to optimize expected outcomes, especially when facing incomplete or imperfect information. The indeterminacy in physical systems, therefore, is not an aberration but a fundamental characteristic that contributes to the robustness and adaptability of the overall dynamics. It prevents the system from being trapped in suboptimal or overly rigid configurations by ensuring that a diversity of actions is maintained over time.
This probabilistic behavior introduces a form of strategic variability that is essential for exploring the vast space of possible configurations. It effectively allows the system to “experiment” with different strategies, gradually reinforcing those that lead to lower costs and higher overall stability. As a result, the global equilibrium that emerges is a reflection of both deterministic optimization and the stochastic nature of local decisions—a balance that is central to many complex adaptive systems.
Temporal Compatibility and Historical Optimization: Dynamics Over Time
The Temporal Compatibility axiom ensures that the evolution of the system is continuously differentiable, which is a necessary condition for applying variational calculus. In both physics and dynamic game theory, the continuity of the state evolution is crucial for stability and predictability. This axiom guarantees that small changes in time result in correspondingly small changes in state, thus allowing for a coherent and predictable evolution of the system.
In parallel, the Historical Optimization axiom emphasizes that the decision-making process is not memoryless. Agents base their current decisions not only on immediate local information but also on a finite history of their past states. This incorporation of memory is vital in dynamic games, where past performance and previous rounds of the game influence future strategies. The cumulative knowledge acquired from historical interactions enables agents to refine their strategies continuously, leading to more informed and effective decision-making over time.
The inclusion of memory effects allows the system to capture phenomena such as path dependence and hysteresis, where the history of the system affects its current dynamics. In game theory, this is akin to learning dynamics, where players adjust their strategies based on the outcomes of previous interactions. Over time, the interplay between immediate reactions and long-term historical optimization drives the system toward a stable equilibrium that reflects both instant feedback and accumulated experience.
Equitable Exchange and Symmetry: Balancing Strategic Interactions
The Equitable Exchange axiom states that interactions between agents must be symmetric, ensuring that the total invariant (such as energy, momentum, or any other conserved quantity) is preserved during exchanges. This symmetry is a cornerstone of modern physics, underpinning Noether’s theorem, which connects continuous symmetries with conservation laws. In a game-theoretic context, equitable exchange mirrors the idea of fairness and balanced interactions among players.
In strategic games, a symmetric environment implies that all players are subject to the same rules and conditions. No player has an inherent advantage solely due to asymmetries in the system. This balance is crucial for the emergence of fair and stable equilibria, as it prevents scenarios where one player can exploit the system at the expense of others. The conservation of invariants—whether it is a physical quantity or a strategic resource—ensures that the system remains in equilibrium over time and that local imbalances are corrected through symmetric interactions.
By enforcing symmetry at the local level, the Equitable Exchange axiom also facilitates the propagation of conservation laws throughout the entire system. This propagation is similar to ensuring that every strategic move in a game is checked by reciprocal actions, thus maintaining a stable, overall equilibrium that is robust against perturbations.
Synthesis: The Emergence of Global Order from Local Decisions
Integrating the aforementioned axioms, we arrive at a unified picture in which complex global behavior arises organically from the concerted interactions of many simple, strategically behaving agents. Each axiom—from locality to historical optimization—imposes a constraint that mirrors a strategic rule in a multi-agent game, and together they ensure that the system evolves toward a state of minimal global action.
In this synthesis, local decision-making is not an isolated or fragmented process but a component of a much larger, self-organizing system. The agents act as players in an iterated, dynamic game, where their limited information, conserved internal identities, mixed strategic choices, and memory of past interactions all contribute to the emergence of macroscopic equilibrium. Such an equilibrium is characterized by global conservation laws, stable symmetries, and the predictable behavior typically associated with physical systems governed by the principle of least action.
Moreover, this integrated perspective highlights that the mathematical and conceptual tools of game theory are not merely metaphors but provide rigorous methods for analyzing the dynamics of complex physical systems. By drawing on ideas such as Nash equilibrium, replicator dynamics, and mixed strategies, we gain valuable insights into how local decisions can collectively yield the robust, global phenomena observed in nature.
Energy Conservation
The objective of this section is to derive the conservation of energy from first principles using the agent-based action formalism, under the assumption that the global Lagrangian is invariant under time translations.
Assumptions and Setup
Let $s(t) = \big(s_a(t)\big)_{a \in \mathcal{A}}$ be the global configuration of the agent system. Suppose:
Each agent $a$ has a local Lagrangian $L_a\big(s_a, \dot{s}_a, s_{N(a)}\big)$, with the total Lagrangian given by:
$$L = \sum_{a \in \mathcal{A}} L_a.$$
The action is defined as:
$$\mathcal{S}[s] = \int_{t_0}^{t_1} L\big(s(t), \dot{s}(t), t\big)\,dt.$$
The Lagrangian is invariant under time translations, i.e., $\dfrac{\partial L}{\partial t} = 0$.
Noether’s Theorem for Time Translation
Noether’s theorem states that if the action is invariant under a continuous transformation of time, there exists a conserved quantity associated with that symmetry.
Infinitesimal Time Shift: Consider a transformation of the time variable:
$$t \;\to\; t' = t + \epsilon,$$
and the corresponding shift in the trajectory:
$$s_a(t) \;\to\; s_a(t + \epsilon) \approx s_a(t) + \epsilon\,\dot{s}_a(t).$$
Then the variation of the Lagrangian under this transformation is:
$$\delta L = \epsilon\,\frac{dL}{dt}.$$
If the Lagrangian is invariant (i.e., $\partial L / \partial t = 0$), this variation reduces to a total time derivative. However, since $L$ depends on $s$, $\dot{s}$, and $t$, we use the total derivative:
$$\frac{dL}{dt} = \sum_{a} \left( \frac{\partial L}{\partial s_a}\,\dot{s}_a + \frac{\partial L}{\partial \dot{s}_a}\,\ddot{s}_a \right) + \frac{\partial L}{\partial t}.$$
Using the Euler–Lagrange equations:
$$\frac{\partial L}{\partial s_a} = \frac{d}{dt}\frac{\partial L}{\partial \dot{s}_a},$$
we rearrange to obtain:
$$\frac{d}{dt}\left( \sum_a \frac{\partial L}{\partial \dot{s}_a}\,\dot{s}_a - L \right) = -\,\frac{\partial L}{\partial t}.$$
Therefore, if $\partial L / \partial t = 0$, i.e., if $L$ does not depend explicitly on time, then the quantity:
$$E \;=\; \sum_a \frac{\partial L}{\partial \dot{s}_a}\,\dot{s}_a - L$$
is conserved. That is, $\dfrac{dE}{dt} = 0$.
Emergent Conservation of Energy: If the global Lagrangian $L = \sum_a L_a$ is invariant under time translations (i.e., $\partial L / \partial t = 0$), then the emergent energy:
$$E = \sum_a \frac{\partial L}{\partial \dot{s}_a}\,\dot{s}_a - L$$
is a conserved quantity. This result follows directly from Noether’s theorem applied within the agent-based axiomatic framework.
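A numerical illustration, assuming a ring of harmonic agents with $L_a = \tfrac12 m\dot{s}_a^2 - \tfrac12 k (s_{a+1}-s_a)^2$ and a velocity-Verlet integrator (both choices are for illustration only): the emergent energy $E$ stays constant up to discretization error.

```python
import numpy as np

m, k, dt, n = 1.0, 1.0, 1e-3, 8
rng = np.random.default_rng(4)
s = rng.normal(size=n)                 # positions s_a
v = rng.normal(size=n)                 # velocities sdot_a

def force(s):
    """F_a = -dV/ds_a for V = 0.5*k*sum_i (s_{i+1} - s_i)^2 on a ring."""
    return k * (np.roll(s, -1) - 2 * s + np.roll(s, 1))

def energy(s, v):
    """E = sum_a (dL/dsdot_a) * sdot_a - L = kinetic + potential."""
    kinetic = 0.5 * m * np.sum(v**2)
    potential = 0.5 * k * np.sum((np.roll(s, -1) - s)**2)
    return kinetic + potential

E0 = energy(s, v)
for _ in range(20000):                 # velocity-Verlet integration of the EL equations
    a = force(s) / m
    s = s + v * dt + 0.5 * a * dt**2
    v = v + 0.5 * (a + force(s) / m) * dt
print(abs(energy(s, v) - E0))          # drift stays small: E is (numerically) conserved
```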
Axiom IV: Strategic Indeterminacy
Let $X_a$ and $P_a$ be square-integrable random observables associated with an agent $a$. Axiom IV asserts the existence of a constant $\kappa_a > 0$ such that
$$\operatorname{Var}(X_a)\,\operatorname{Var}(P_a) \;\ge\; \kappa_a^2.$$
This inequality embodies the intrinsic limitation on the simultaneous precision with which two observables can be measured; it arises from the stochastic nature of the decision kernel $K_a$.
Interpretation
In this formalism, we assume that $X_a$ and $P_a$ represent dual observables analogous to quantities such as position and momentum (or, more generally, state and flow). The local randomness in the agent’s decision-making process precludes deterministic access to these observables. Specifically, the decision kernel $K_a$ defines a probability distribution over the state space $S_a$, thereby inducing inherent randomness in the outcomes of $X_a$ and $P_a$.
Variance-Based Inequality
The variances of the observables are defined by
$$\operatorname{Var}(X_a) = \mathbb{E}[X_a^2] - \mathbb{E}[X_a]^2, \qquad \operatorname{Var}(P_a) = \mathbb{E}[P_a^2] - \mathbb{E}[P_a]^2.$$
Consequently, Axiom IV guarantees that
$$\operatorname{Var}(X_a)\,\operatorname{Var}(P_a) \;\ge\; \kappa_a^2.$$
This expression serves as a generalized form of the uncertainty principle, emerging directly from the probabilistic structure of agent decisions rather than from traditional operator or wavefunction methods.
Emergent Uncertainty Principle
Theorem. Let $X_a$ and $P_a$ be observables associated with an agent $a$, derived from the local decision kernel $K_a$. If the stochastic behavior of these observables satisfies Axiom IV, then
$$\sigma_{X_a}\,\sigma_{P_a} \;\ge\; \kappa_a,$$
where $\sigma_{X_a}$ and $\sigma_{P_a}$ denote the standard deviations of $X_a$ and $P_a$. This result implies that any increase in precision for one observable inevitably results in a corresponding increase in uncertainty for the other. The constant $\kappa_a$ quantitatively measures the intrinsic uncertainty inherent in the agent’s decision-making process.
Analogy with Quantum Mechanics
The inequality stated above is structurally analogous to the Heisenberg uncertainty principle,
$$\sigma_x\,\sigma_p \;\ge\; \frac{\hbar}{2},$$
where $\sigma_x$ and $\sigma_p$ denote the standard deviations of position and momentum, respectively. Unlike the universal constant $\hbar$ in quantum mechanics, the constant $\kappa_a$ is specific to each agent, emerging from local interactions, memory effects, or inherent randomness within the decision process. Hence, the agent-based formalism provides an emergent uncertainty principle that sets a fundamental lower bound on the joint predictability of dual observables.
Wave Equation
The objective of this section is to derive the classical wave equation from a one-dimensional chain of locally interacting agents, using the Lagrangian formalism of the agent-based framework.
Setup: Discrete Chain of Agents
Let agents be indexed by $i \in \mathbb{Z}$. Each agent has a state $s_i(t) \in \mathbb{R}$ representing its position at time $t$. The local Lagrangian is defined as:
$$L_i = \frac{1}{2}\,m\,\dot{s}_i^2 - \frac{1}{2}\,k\,\big(s_{i+1} - s_i\big)^2,$$
where $m$ is the mass associated with each agent and $k$ is the coupling (elastic) constant between neighboring agents. Thus, the total Lagrangian is:
$$L = \sum_i \left[ \frac{1}{2}\,m\,\dot{s}_i^2 - \frac{1}{2}\,k\,\big(s_{i+1} - s_i\big)^2 \right].$$
Euler–Lagrange Equations
The equation of motion for agent $i$ is obtained via the Euler–Lagrange equation:
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{s}_i} - \frac{\partial L}{\partial s_i} = 0,$$
where $s_i$ appears in both $L_i$ and $L_{i-1}$, so that $\dfrac{\partial L}{\partial s_i} = k\,(s_{i+1} - 2 s_i + s_{i-1})$. Therefore, the discrete equation of motion is:
$$m\,\ddot{s}_i = k\,\big(s_{i+1} - 2 s_i + s_{i-1}\big).$$
Continuum Limit
Let $a$ be the spatial distance between neighboring agents. Define a continuous function $u(x, t)$ such that:
$$s_i(t) = u(x_i, t), \qquad x_i = i\,a.$$
Using the second-order finite difference approximation:
$$s_{i+1} - 2 s_i + s_{i-1} \;\approx\; a^2\,\frac{\partial^2 u}{\partial x^2},$$
the equation becomes:
$$\frac{\partial^2 u}{\partial t^2} = c^2\,\frac{\partial^2 u}{\partial x^2}, \qquad c^2 = \frac{k\,a^2}{m},$$
which is the classical wave equation with propagation speed $c = a\sqrt{k/m}$.
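A direct simulation of the discrete chain (with an illustrative Gaussian initial displacement, periodic boundaries, and unit parameters) shows a disturbance propagating at approximately the predicted speed $c = a\sqrt{k/m}$:

```python
import numpy as np

n, m, k, a, dt = 400, 1.0, 1.0, 1.0, 0.05
x = np.arange(n) * a
s = np.exp(-0.5 * ((x - 50.0) / 5.0) ** 2)   # initial Gaussian displacement pulse
v = np.zeros(n)

def accel(s):
    """Discrete equation of motion: m * sddot_i = k * (s_{i+1} - 2 s_i + s_{i-1})."""
    return (k / m) * (np.roll(s, -1) - 2 * s + np.roll(s, 1))

T = 100.0
for _ in range(int(T / dt)):                  # velocity-Verlet integration
    a_now = accel(s)
    s = s + v * dt + 0.5 * a_now * dt**2
    v = v + 0.5 * (a_now + accel(s)) * dt

c = a * np.sqrt(k / m)                        # wave speed predicted by the continuum limit
print("predicted right-moving peak near x =", 50.0 + c * T)
print("measured peak position x =", x[100 + np.argmax(s[100:200])])  # approximately equal
```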
Fluctuation–Dissipation Theorem
The objective of this section is to derive an emergent version of the fluctuation–dissipation theorem (FDT) within the agent-based formalism, linking spontaneous fluctuations due to stochastic decisions to the system’s linear response to external perturbations.
Background: Classical FDT
In statistical physics, the FDT states that the response of a system in equilibrium to small external perturbations is directly related to the internal fluctuations occurring in equilibrium. Symbolically, for $t > t'$,
$$\chi_{AB}(t - t') = -\frac{1}{k_B T}\,\frac{\partial}{\partial t}\,\big\langle A(t)\,B(t') \big\rangle_{\mathrm{eq}},$$
where:
$A$ is the observable responding to the perturbation,
$B$ is the observable conjugate to the perturbation,
$\chi_{AB}$ is the linear response function,
$\langle \cdot \rangle_{\mathrm{eq}}$ denotes the equilibrium average.
Axiom IV: Strategic Indeterminacy
In the agent-based model, each agent $a$ evolves stochastically according to a local decision kernel:
$$s_a(t+1) \;\sim\; K_a\big(\cdot \,\big|\, s_a(t),\, s_{N(a)}(t)\big),$$
and has associated observables $X_a, P_a$ such that:
$$\operatorname{Var}(X_a)\,\operatorname{Var}(P_a) \;\ge\; \kappa_a^2.$$
This variance-based uncertainty encodes the amplitude of spontaneous fluctuations in the agent dynamics.
Perturbation Framework
Let us consider a small perturbation $h(t)$ applied to the agent’s decision cost functional:
$$C_a \;\to\; C_a - h(t)\,B(s_a),$$
where $B$ is the observable conjugate to the perturbation $h(t)$.
Define the response of an observable $A$ via the linear response function $\chi_{AB}$:
$$\big\langle \delta A(t) \big\rangle = \int_{-\infty}^{t} \chi_{AB}(t - t')\,h(t')\,dt'.$$
Derivation of Agent-Based FDT
Under the assumption that the unperturbed dynamics reach a local stochastic equilibrium governed by $K_a$, the correlation function
$$C_{AB}(t - t') = \big\langle A(t)\,B(t') \big\rangle_{\mathrm{eq}}$$
can be expressed in terms of the internal fluctuations due to the randomness induced by $K_a$.
Applying the fluctuation–response hypothesis (valid near equilibrium and under detailed balance), we obtain:
$$\chi_{AB}(t - t') = \frac{1}{T_{\mathrm{eff}}}\,\frac{\partial}{\partial t'}\,\big\langle A(t)\,B(t') \big\rangle_{\mathrm{eq}}, \qquad t > t',$$
where $T_{\mathrm{eff}}$ is an effective temperature parameter emerging from the stochastic structure of the agent’s kernel, analogous to the thermodynamic temperature.
Emergent Fluctuation–Dissipation Theorem: In agent-based systems governed by stochastic decision kernels satisfying Axiom IV, the linear response of an observable $A$ to a perturbation conjugate to $B$ satisfies:
$$\chi_{AB}(t - t') = \frac{1}{T_{\mathrm{eff}}}\,\frac{\partial}{\partial t'}\,\big\langle A(t)\,B(t') \big\rangle_{\mathrm{eq}},$$
where the fluctuations arise from the intrinsic stochasticity of local decision processes and the constant $T_{\mathrm{eff}}$ plays the role of an effective temperature.
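As an illustration, the sketch below uses an overdamped Langevin update as a stand-in for a stochastic decision kernel (an assumption, not the framework's general kernel) and checks the static form of the relation: the susceptibility estimated from equilibrium fluctuations, $\operatorname{Var}(A)/T_{\mathrm{eff}}$, matches the response of $\langle A \rangle$ to a small constant perturbation.

```python
import numpy as np

rng = np.random.default_rng(6)
gamma, T_eff, dt, steps = 1.0, 0.5, 0.01, 200000

def simulate(h=0.0):
    """Overdamped Langevin update used as a stand-in decision kernel:
    x <- x + (-gamma*x + h)*dt + sqrt(2*T_eff*dt)*xi,  xi ~ N(0, 1)."""
    x, xs = 0.0, np.empty(steps)
    for i in range(steps):
        x += (-gamma * x + h) * dt + np.sqrt(2.0 * T_eff * dt) * rng.normal()
        xs[i] = x
    return xs[steps // 10:]                    # discard the initial transient

x_eq = simulate(h=0.0)
chi_fluct = x_eq.var() / T_eff                 # fluctuation side: Var(A)/T_eff
x_pert = simulate(h=0.5)                       # drift is linear in h, so a moderate field suffices
chi_resp = x_pert.mean() / 0.5                 # response side: d<A>/dh
print(chi_fluct, chi_resp, 1.0 / gamma)        # both estimates approach 1/gamma
```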
Principle of Locality and Causality
The objective of this section is to derive the principle of locality and causality in agent-based systems, where the future state of an agent is determined by its current state and the states of its local neighbors, based on Axiom I (Locality).
Axiom I: Locality
Axiom I states that the evolution of the system is governed by local decision kernels. Specifically, for each agent $a$, the future state $s_a(t+1)$ depends only on the current state $s_a(t)$ of agent $a$ and the states of its neighbors, denoted by $s_{N(a)}(t)$:
$$\mathbb{P}\big(s_a(t+1) \in B \,\big|\, \mathcal{F}_t\big) = \mathbb{P}\big(s_a(t+1) \in B \,\big|\, s_a(t),\, s_{N(a)}(t)\big), \qquad B \in \mathcal{B}(S_a),$$
where $\mathcal{F}_t$ is the filtration at time $t$ and represents all the information up to time $t$. This implies that the evolution of an agent’s state is conditionally independent of the rest of the system given its own state and the states of its local neighbors. Hence, there is no direct influence from non-neighboring agents.
Causality in Local Interactions
Given the locality condition, the evolution of the system is causal. That is, the state of any agent at time $t+1$ is caused by the current state of the agent and the states of its immediate neighbors. The principle of locality implies that there are no instantaneous long-range interactions between distant agents, and thus the system evolves causally with respect to local information.
Formally: For any agent $a$, its future state at time $t+1$ is fully determined by the following:
$$s_a(t+1) = f\big(s_a(t),\, s_{N(a)}(t)\big),$$
where $f$ is a local function describing the agent’s decision rule based on its current state $s_a(t)$ and the states of its neighbors $s_{N(a)}(t)$. This local causality implies that no event can affect an agent’s state faster than the propagation of information within its local neighborhood.
Implications for Global Causality
From Axiom I, we can conclude that the overall system evolution is globally causal. If we know the state of the system at time $t$, denoted $s(t)$, the state at time $t+1$ is determined by:
$$s(t+1) = F\big(s(t)\big),$$
where $F$ is the map assembled from the local decision kernels of all agents. This implies that the global state evolution is deterministic and that local causes precede local effects.
Emergent Principle of Locality and Causality: In agent-based systems, governed by local decision kernels as described by Axiom I, the system’s evolution follows the principle of locality and causality. The future state of any agent depends solely on its current state and the states of its local neighbors, ensuring that no agent’s state is influenced by distant agents instantaneously. This structure underpins a globally causal evolution, where each agent’s future is determined by local interactions and not by distant or non-local events.
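The causal structure can be checked directly: perturbing a single agent and comparing against an unperturbed run, the set of affected agents grows by at most one neighbor per time step. The local rule, ring topology, and parameters below are illustrative.

```python
import numpy as np

def local_step(s, lam=0.3):
    """Deterministic local rule on a ring: move toward the neighbor average."""
    return s + lam * (0.5 * (np.roll(s, 1) + np.roll(s, -1)) - s)

n, steps, center = 101, 10, 50
base = np.zeros(n)
kicked = base.copy()
kicked[center] = 1.0                      # perturb a single agent at t = 0

for t in range(1, steps + 1):
    base, kicked = local_step(base), local_step(kicked)
    affected = np.nonzero(np.abs(kicked - base) > 1e-15)[0]
    radius = max(abs(affected - center))
    print(t, radius, radius <= t)         # the influence radius never exceeds t
```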
License
This work is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0).
Author Contributions
The author conceptualized and developed the theoretical framework, performed the mathematical analyses, interpreted the results, and prepared the manuscript.
Funding
This research was conducted without any specific financial support from public, commercial, or non-profit funding agencies.
Data Availability Statement
No new data were used in this study.
Conflicts of Interest
The author declares no potential conflicts of interest with respect to the research, authorship, or publication of this work.
References
- Anderson, P. W. (1972). More Is Different. Science, 177(4047), 393–396.
- Noether, E. (1918). Invariante Variationsprobleme. Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse, 235–257.
- Landau, L. D., & Lifshitz, E. M. (1976). Mechanics (3rd ed.). Pergamon Press.
- Feynman, R. P. (1948). Space-Time Approach to Non-Relativistic Quantum Mechanics. Reviews of Modern Physics, 20(2), 367–387.
- Schulman, L. S. (1981). Techniques and Applications of Path Integration. John Wiley & Sons.
- Wolfram, S. (2002). A New Kind of Science. Wolfram Media.
- Zurek, W. H. (2003). Decoherence, einselection, and the quantum origins of the classical. Reviews of Modern Physics, 75(3), 715–775.
- Newman, M. E. J. (2010). Networks: An Introduction. Oxford University Press.
- Axelrod, R. (1984). The Evolution of Cooperation. Basic Books.
- von Neumann, J. (1966). Theory of Self-Reproducing Automata. University of Illinois Press.
- Kirman, A. (1992). Whom or What Does the Representative Individual Represent? Journal of Economic Perspectives, 6(2), 117–136.
- Risken, H. (1989). The Fokker-Planck Equation: Methods of Solution and Applications (2nd ed.). Springer-Verlag.
- Ott, E. (1993). Chaos in Dynamical Systems. Cambridge University Press.
- Farmer, J. D., & Geanakoplos, J. (2009). The virtues and vices of equilibrium and the future of financial economics. Complexity, 14(3), 11–38.
- Helbing, D. (1995). Quantitative Sociodynamics: Stochastic Methods and Models of Social Interaction. Kluwer Academic Publishers.
- Strogatz, S. H. (1994). Nonlinear Dynamics and Chaos. Westview Press.
- Prigogine, I. (1980). From Being to Becoming: Time and Complexity in the Physical Sciences. W. H. Freeman.
- Bak, P. (1996). How Nature Works: The Science of Self-Organized Criticality. Copernicus.
- Hofbauer, J., & Sigmund, K. (1998). Evolutionary Games and Population Dynamics. Cambridge University Press.
- Fudenberg, D., & Levine, D. K. (1998). The Theory of Learning in Games. MIT Press.
- Stanley, H. E. (1971). Introduction to Phase Transitions and Critical Phenomena. Oxford University Press.
- Smith, J. M. (1982). Evolution and the Theory of Games. Cambridge University Press.
- Parisi, G. (1988). Statistical Field Theory. Addison-Wesley.
- Mazur, P., & Montroll, E. W. (1977). The Role of Fluctuations in Self-Organization. Physical Review A, 15(4), 2304–2310.
- Chandler, D. (1987). Introduction to Modern Statistical Mechanics. Oxford University Press.
- Smale, S. (1967). Differentiable Dynamical Systems. Bulletin of the American Mathematical Society, 73(6), 747–817.
- May, R. M. (1976). Simple mathematical models with very complicated dynamics. Nature, 261(5560), 459–467.
- Castellano, C., Fortunato, S., & Loreto, V. (2009). Statistical physics of social dynamics. Reviews of Modern Physics, 81(2), 591–646.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).