1. Introduction
Why do systems naturally increase their order? Spontaneous self-organization in complex systems—whether in convective flows, chemical oscillations, insect colonies, or neural networks—represents a remarkable convergence of physical laws and emergent dynamics. Understanding how structure and efficiency emerge from the interplay between stochasticity, feedback, and dissipation is a foundational problem in nonequilibrium statistical physics. Conventional macroscopic metrics such as entropy production, mutual information, fitness functions, or order parameters are extremely useful, but they often lack a universal, dimensionless form or a firm variational-dynamics origin, and they are often non-monotone under feedback. We resolve this by deriving a dimensionless action-based efficiency metric whose monotonic rise reflects fundamental path-space organization under maximum-caliber principles. This viewpoint treats our results as inference-level predictions rather than universal laws and is compatible—under appropriate steady-state boundary/flux constraints—with MEPP interpretations.
Several routes have been explored. Stochastic thermodynamics links path probabilities to entropy production and fluctuation theorems [1], and the Maximum-Entropy-Production Principle (MEPP) has been invoked for steady states [2,3] and proposed as an inference [19]. Variational extensions of the least-action principle—Onsager–Machlup, Graham, Freidlin–Wentzell—suggest that probable trajectories extremize generalized actions [4,5,6]. Yet no dimensionless, first-principles metric exists that (i) measures organizational efficiency, (ii) is monotonic during the transient growth phase, and (iii) applies across open, feedback-driven systems to quantify the degree of self-organization in open, stochastic systems. Beyond classical Onsager–Machlup, Graham, and Freidlin–Wentzell theory, a modern variational formulation for open, dissipative systems shows that irreversible processes and boundary exchanges (mass/heat ports) can be derived directly from a constrained action principle [7].
Here we derive a path-integral observable, the Average Action Efficiency (AAE, $\alpha$)—the number of system events per total physical action—and prove that it is a Lyapunov functional in feedback-driven self-organization (Theorem 1). We propose this as a model-independent, dimensionless, and variationally grounded metric for quantifying self-organization in open, feedback-driven systems. Stochastic-Dissipative Average Action Principles yield an identity linking the rise of AAE to the action variance and the noise-reduction rate. This defines three dynamical regimes: growth, steady plateau, and decay.
Empirically and theoretically, self-organized structures are often selected for their thermodynamic efficiency: they open channels that dissipate free energy otherwise inaccessible and are thus favored under the given drives and boundaries [8]. This motivates a system-agnostic, dimensionless efficiency such as AAE. Conceptually, what counts as “self-organization”—routes, detection, complexity, and domain dependence—remains debated [9]. We adopt an open-system, boundary-aware view consistent with these discussions, focusing on feedback-driven concentration of trajectories and a dimensionless efficiency (AAE).
Previous studies introduced empirical AAE through data and computational analyses, but lacked a path-integral foundation [10,11,12,13,14,15,16]. The present study supplies the missing theoretical backbone. Earlier applications were limited to specific systems [13,14,16], while the current formulation expands the applicability across systems within the self-organization regime.
AAE is the first dimensionless Lyapunov functional obtained from a stochastic action. Unlike the valuable Jarzynski-type [17] or Hatano–Sasa-type potentials [18], which lose monotonicity under feedback, AAE rises monotonically until saturation. This rise is governed by the variance of the action distribution and the time-dependent noise level in the system, offering a direct link between microscopic trajectories and macroscopic organization. AAE could solve the critical problem of quantifying self-organization efficiency in transient regimes, enabling variational design of feedback-controlled systems. We report experimental consistency in biological systems such as ATP synthase and validation in agent-based simulations. Additional examples, such as Belousov–Zhabotinsky patterns and convection, illustrate broader applicability.
While the main focus of the present study is the regime in which AAE rises monotonically under feedback-driven self-organization, we identify two additional dynamical regimes—saturation and decay—that complete a minimal classification. These regimes are described for conceptual completeness; their empirical or simulation-based treatment is left for future work. Likewise, a broader empirical survey and comparison with other metrics, such as entropy production, is deferred; AAE should be viewed as a complementary, dimensionless tool rather than a replacement for existing entropy- or information-based measures.
Information-theoretically, our canonical path weighting implies that path entropy decreases whenever feedback increases effective precision. Thus, the same mechanism that drives AAE to rise during self-organization corresponds to a monotonic drop in path entropy; at steady state both are constant, and under disorganization the trends reverse. This MaxCal viewpoint treats the result as an inference consequence rather than a universal law.
2. Time-Dependent Dynamics.
We refer to the increase in the probability of trajectories with lower action under stochastic and dissipative conditions as a Stochastic-Dissipative Least Action Principle (SDLAP). For the stochastic-dissipative framework see [1,4,5]. In open systems, the time-dependent path distribution evolves from ergodicity (a fully random state) to concentration around minimal-action paths as feedback amplifies low-action trajectories. This is because the endpoints of trajectories become defined at the sources and sinks of energy, instead of all points in the system being equally probable endpoints, as in closed systems or the initial random state.
Let $S[\gamma]$ be a time-independent action functional defined over system trajectories $\gamma$, and let the time-dependent path distribution be given by

$$P_t[\gamma] = \frac{1}{Z(t)}\, e^{-\beta(t)\, S[\gamma]},$$

where $Z(t) = \int \mathcal{D}\gamma\; e^{-\beta(t) S[\gamma]}$ is the time-dependent normalization (partition function), and $\beta(t)$ quantifies time-dependent inverse noise or uncertainty in the system and serves as the time-varying Lagrange multiplier. It arises naturally from minimal assumptions on stochastic dynamics. Increasing $\beta(t)$ (i.e., decreasing noise) concentrates the distribution around minimal-action paths. This behavior is driven by positive feedback mechanisms such as pheromone reinforcement in agent-based simulations.
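As a minimal numerical sketch of this concentration effect (the finite set of candidate paths and their randomly drawn actions below are illustrative assumptions, not part of the original model), the canonical weight $e^{-\beta S}$ shifts probability toward low-action paths as $\beta$ grows, pulling the ensemble-average action down:

```python
import numpy as np

def path_distribution(actions, beta):
    """Canonical path weights: P[gamma] proportional to exp(-beta * S[gamma])."""
    w = np.exp(-beta * (actions - actions.min()))  # shift exponent for numerical stability
    return w / w.sum()

rng = np.random.default_rng(0)
actions = rng.uniform(1.0, 10.0, size=1000)  # strictly positive actions, as in (C4)

means = []
for beta in [0.0, 0.5, 2.0, 8.0]:
    p = path_distribution(actions, beta)
    means.append(float(p @ actions))  # ensemble-average action <S>

# At beta = 0 the distribution is uniform; as beta rises, <S> falls toward the minimum.
print(means)
```

At $\beta = 0$ the mean equals the uniform-ensemble average; larger $\beta$ drives it toward the minimal action in the set, which is exactly the concentration mechanism described above.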
3. Foundational Conditions for Self-Organization. Valid for all $t \ge 0$.
- (C1) Regular feedback: $\beta(t) \in C^{1}$. Ensures differentiability of $Z(t)$ and $\langle S\rangle_t$.
- (C2) Positive inverse noise: $\beta(t) > 0$. Prevents divergence and ensures a well-defined path measure.
- (C3) Finite partition function: $Z(t) < \infty$, ensured by exponential decay of the path density $e^{-\beta(t) S[\gamma]}$.
- (C4) Strictly positive action: $S[\gamma] \ge S_{\min} > 0$ for all $\gamma$, with no explicit time dependence. Ensures $\alpha(t)$ is finite and meaningful.
- (C5) Integrability: $\langle S\rangle_t = \int \mathcal{D}\gamma\, S[\gamma]\, P_t[\gamma]$ is finite because $Z(t) < \infty$ and the path density decays exponentially in $S$. Needed for defining $\alpha(t)$ and for well-posed ensemble averages.
- (C6) Strictly positive residual variance: $\mathrm{Var}_t(S) > 0$ and finite, due to unavoidable fluctuations (thermal, behavioral, or quantum).
These assumptions are standard in statistical mechanics and stochastic thermodynamics. Future work will relax these constraints to treat evolving environments, adaptive feedback, and nonstationary noise models.
The ensemble average action at time $t$ is given by

$$\langle S\rangle_t = \int \mathcal{D}\gamma\; S[\gamma]\, P_t[\gamma],$$

and serves as a quantitative signature of organizational progress. We refer to it as a Stochastic-Dissipative Average Action Principle (SDAAP). As the system self-organizes, $P_t[\gamma]$ becomes increasingly peaked around low-action trajectories, and $\langle S\rangle_t$ decreases correspondingly [23].
At $t = 0$, the distribution is broad, approximating a uniform distribution over paths. Over time, positive feedback sharpens the distribution, reducing $\langle S\rangle_t$ as the system transitions from disordered to organized states. This reduction in average action reflects the system’s increasing alignment with low-action trajectories.
4. Dynamical Action Principles in Stochastic Dissipative Self-Organization
Lemma 1 (Path-weight identity). Under (C1, C2, C3, C5, C6), the time evolution of $\langle S\rangle_t$ follows from differentiation:

$$\frac{d\langle S\rangle_t}{dt} = -\dot\beta(t)\,\mathrm{Var}_t(S). \qquad (4)$$
Corollary 1 (Cost Lyapunov property). Assume conditions (C1), (C3), (C5), and (C6), and suppose $\dot\beta(t) > 0$ for all $t$. Then, by Lemma 1 and the path-weight identity, $\frac{d\langle S\rangle_t}{dt} = -\dot\beta(t)\,\mathrm{Var}_t(S) < 0$, so the ensemble-average action decreases monotonically. Hence, $\langle S\rangle_t$ is a Lyapunov functional for the dynamics. We refer to this strict, feedback-driven decay of $\langle S\rangle_t$ as the Stochastic–Dissipative Decreasing Average Action Principle (SDDAAP).
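The path-weight identity can be checked numerically on a finite path ensemble (a sketch under the assumption of arbitrary illustrative actions): the derivative of $\langle S\rangle$ with respect to $\beta$ equals minus the action variance, so any schedule with $\dot\beta > 0$ drives $\langle S\rangle$ down, as Corollary 1 states.

```python
import numpy as np

def ensemble_moments(actions, beta):
    """Mean and variance of the action under P proportional to exp(-beta * S)."""
    w = np.exp(-beta * (actions - actions.min()))
    p = w / w.sum()
    mean = float(p @ actions)
    var = float(p @ (actions - mean) ** 2)
    return mean, var

rng = np.random.default_rng(1)
S = rng.uniform(1.0, 5.0, size=500)   # illustrative positive path actions
beta, h = 1.3, 1e-5

# central finite difference of <S> with respect to beta
m_plus, _ = ensemble_moments(S, beta + h)
m_minus, _ = ensemble_moments(S, beta - h)
d_mean = (m_plus - m_minus) / (2 * h)

_, var = ensemble_moments(S, beta)
# path-weight identity: d<S>/dbeta = -Var(S), hence d<S>/dt = -beta_dot * Var(S)
print(d_mean, -var)
```

By the chain rule, $d\langle S\rangle/dt = \dot\beta\, d\langle S\rangle/d\beta = -\dot\beta\,\mathrm{Var}(S)$, so the sign of $\dot\beta$ alone decides the regime.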
Corollary 2 (Steady-state plateau under constant feedback).
Assume conditions (C1), (C3), (C5), and (C6), and let $\dot\beta(t) = 0$ for all $t$. Then, by Eq. (4), $\frac{d\langle S\rangle_t}{dt} = 0$, so the ensemble-average action remains constant in time. Hence, $\langle S\rangle_t$ is a cost Lyapunov functional in this marginal regime, with the system locked on a nonequilibrium steady-state attractor. We refer to this steady-state behavior as the Stochastic–Dissipative Least Average Action Principle (SDLAAP).
Corollary 3 (Disorganization under negative feedback).
Assume conditions (C1), (C3), (C5), and (C6), and let $\dot\beta(t) < 0$ for all $t$. Then, by Eq. (4), $\frac{d\langle S\rangle_t}{dt} > 0$, so the ensemble-average action increases monotonically. Hence, $\langle S\rangle_t$ ceases to be a Lyapunov functional for the system. We refer to this feedback-driven growth of $\langle S\rangle_t$ as the Stochastic–Dissipative Increasing Average Action Principle (SDIAAP): negative feedback amplifies noise, broadens the path distribution, and raises the action cost, i.e. the system de-organizes and moves away from its steady-state attractor.
5. Distinction from Established Formulations.
The framework introduced here generalizes classical variational principles such as those of Onsager–Machlup and Graham–Freidlin–Wentzell by incorporating an explicitly time-dependent inverse noise parameter $\beta(t)$ that evolves under internal feedback. In classical treatments, $\beta$ is constant and path distributions are either static or analyzed in the long-time limit, assuming asymptotic behavior. In contrast, the Stochastic–Dissipative Action framework introduced here captures the feedback-driven evolution of the path distribution $P_t[\gamma]$, including its sharpening around low-action trajectories in the self-organizing regime. This leads to a provable monotonic decrease in average action and a corresponding rise in average action efficiency (AAE) (Theorem 1), establishing a predictive variational principle for transient, nonequilibrium self-organization beyond static or equilibrium assumptions. Where MEPP-style arguments emphasize efficient dissipation under fixed drives [8], our path-ensemble identity explains the route to such efficiency: as feedback tightens ($\dot\beta > 0$), path entropy decreases and AAE rises monotonically.
6. Average Action Efficiency as a Predictive Metric
Although $\langle S\rangle_t$ reflects the evolving average action cost of system trajectories, it is an unnormalized, dimensional quantity. To compare organization levels across systems or under rescaling of units, we define the dimensionless Average Action Efficiency:

$$\alpha(t) = \frac{S_0}{\langle S\rangle_t}.$$

Here, $S_0$ is a reference unit of action chosen to render $\alpha(t)$ dimensionless. In quantum systems, $S_0 = h$; in simulations, $S_0$ can be taken as one simulation action unit; and in classical or biological systems, $S_0$ may reflect a system-specific action scale. This formulation preserves the monotonicity of $\langle S\rangle_t$, inherits its Lyapunov character under positive feedback, and ensures invariance under time or action rescaling. As such, $\alpha(t)$ provides a normalized theoretical measure of how efficiently a system organizes over time.
As feedback sharpens the path distribution $P_t[\gamma]$, the average action $\langle S\rangle_t$ decreases, and thus $\alpha(t)$ increases. This makes $\alpha(t)$ a natural, dimensionless order parameter for organizational progress in the self-organizing regime. A higher AAE indicates that the system achieves more organized behavior per unit action expended.
This definition aligns with established uses of action functionals in stochastic control and decision theory, where efficiency and cost are often quantified as functionals minimized over trajectories. Its inverse form ensures invariance under rescaling of time or action units, making AAE a dimensionless variational efficiency indicator across system classes. Thus a high AAE means the system is doing more with less: more events per unit action.
Theorem 1 (Monotonic Rise of AAE in the Self-Organization Regime). Assume conditions (C1)–(C6), and suppose $\dot\beta(t) > 0$. Then

$$\frac{d\alpha(t)}{dt} = \frac{S_0\,\dot\beta(t)\,\mathrm{Var}_t(S)}{\langle S\rangle_t^{2}} > 0,$$

so $\alpha(t)$ is a Lyapunov functional for the self-organization dynamics.
Proof. From the identity (Eq. (4)), we apply the chain rule to $\alpha(t) = S_0/\langle S\rangle_t$ to obtain:

$$\frac{d\alpha}{dt} = -\frac{S_0}{\langle S\rangle_t^{2}}\,\frac{d\langle S\rangle_t}{dt} = \frac{S_0\,\dot\beta(t)\,\mathrm{Var}_t(S)}{\langle S\rangle_t^{2}} > 0.$$

Hence, $\alpha(t)$ increases monotonically. □
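Theorem 1 can be illustrated on a finite path ensemble (illustrative actions and a simple linear $\beta(t)$ schedule, both assumptions for the sketch): driving $\beta$ upward produces a strictly increasing $\alpha$.

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.uniform(1.0, 10.0, size=1000)  # illustrative positive path actions
S0 = 1.0                               # reference action unit

def aae(beta):
    """alpha = S0 / <S> under the canonical path weight exp(-beta * S)."""
    w = np.exp(-beta * (S - S.min()))
    p = w / w.sum()
    return S0 / float(p @ S)

betas = np.linspace(0.0, 5.0, 50)      # a rising beta(t): positive feedback, beta_dot > 0
alphas = [aae(b) for b in betas]

# Theorem 1: alpha rises monotonically while beta_dot > 0, bounded by S0 / S_min
print(alphas[0], alphas[-1])
```

The sequence is strictly increasing and bounded above by $S_0/S_{\min}$, matching the saturation bound discussed below.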
Corollary 4 (Saturation at steady state). Assume conditions (C1)–(C6), and let $\dot\beta(t) = 0$. Then $\frac{d\alpha}{dt} = 0$, and the average action efficiency remains constant. This corresponds to the nonequilibrium steady-state plateau observed in self-organization.
Corollary 5 (Decline during disorganization). Assume conditions (C1)–(C6). If $\dot\beta(t) < 0$, then $\frac{d\alpha}{dt} < 0$. This decline of AAE suggests a transition toward disorder, for example due to increasing noise or weakening feedback.
These corollaries point toward a broader classification of dynamical regimes based on the sign of $\dot\beta(t)$, which we plan to investigate further in future work.
Remark 1 (Self-Organization Regime (SOR)).
Let the feedback strength, dissipation rate, and initial noise amplitude—each independently measurable in experiment or simulation—parameterize the dynamics. Define the self-organization regime as the region of parameter space in which feedback strength exceeds the dissipation rate beyond an empirical threshold and the initial noise amplitude lies within finite bounds set by empirical constants. Within this regime, positive feedback dominates over dissipation, and noise is sufficient to explore state space while maintaining finite fluctuations. As a result, $\dot\beta(t) > 0$ and $\mathrm{Var}_t(S) > 0$ hold, and the AAE is predicted to increase monotonically.
Remark 2 (Attainability of the optimum during growth).
On the growth interval, assume $\dot\beta(t) > 0$ and $\mathrm{Var}_t(S) > 0$. Then $\frac{d\alpha}{dt} > 0$ and $\frac{d\langle S\rangle_t}{dt} < 0$, so $\alpha(t)$ rises monotonically while $\langle S\rangle_t$ decreases. Since $\alpha(t)$ is bounded above and $\langle S\rangle_t$ is bounded below by $S_{\min}$, they converge as $t \to \infty$:

$$\langle S\rangle_t \to S_\infty = S_{\min} + \delta, \qquad \beta(t) \to \beta_{\max},$$

where $\delta > 0$ reflects irreducible fluctuations and $\beta_{\max}$ is the maximum inverse noise. $S_\infty$ is the long-time limit of the average action. The term $\delta$ captures irreducible noise (e.g., thermal or behavioral) that prevents perfect convergence.
In this saturation regime, $\mathrm{Var}_t(S)$ approaches a finite residual value. For a density of states $\rho(S) \propto (S - S_{\min})^{\mu}$, where $\mu$ is the spectral exponent near the band edge, one finds:

$$\mathrm{Var}_t(S) = \frac{\mu + 1}{\beta(t)^{2}},$$

though the scaling depends on the form of $\rho(S)$.
This scaling follows from a non-Gaussian density of states with power-law behavior near the band edge. Gaussian approximations predict a rapid collapse of fluctuations as noise decreases. However, the observed saturation under finite feedback ($\mathrm{Var}_t(S) \to \mathrm{Var}_\infty > 0$) implies a non-Gaussian density of states near the attractor, consistent with a power-law form.
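Under the power-law density of states written above (an assumption for illustration), the canonical ensemble over actions is a shifted Gamma distribution, so the variance scales as $(\mu+1)/\beta^{2}$. A direct numerical quadrature confirms this:

```python
import numpy as np

def edge_variance(mu, beta, s_min=1.0, s_max=200.0, n=400_000):
    """Action variance for density of states rho(S) ~ (S - s_min)^mu under weight exp(-beta*S)."""
    s = np.linspace(s_min, s_max, n)
    w = (s - s_min) ** mu * np.exp(-beta * (s - s_min))
    p = w / w.sum()                      # discrete quadrature of the normalized density
    mean = float(p @ s)
    return float(p @ (s - mean) ** 2)

mu = 1.5                                 # illustrative spectral exponent
for beta in [0.5, 1.0, 2.0, 4.0]:
    v = edge_variance(mu, beta)
    predicted = (mu + 1) / beta**2       # Gamma-distribution variance: (mu+1)/beta^2
    print(beta, v, predicted)
```

The agreement follows because $\rho(S)\,e^{-\beta S}$ with $\rho(S)\propto(S-S_{\min})^{\mu}$ is a Gamma density with shape $\mu+1$ and rate $\beta$.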
Corollary 6 (Ideal zero–noise limit).
Assume the system remains in the cooling regime, i.e. $\dot\beta(t) > 0$ for all $t$ and $\beta(t) \to \infty$ ($\delta \to 0$). Then the path measure collapses onto the minimal-action trajectory $\gamma^{*}$, and

$$\alpha(t) \to \alpha_{\max} = \frac{S_0}{S[\gamma^{*}]} = \frac{S_0}{S_{\min}}.$$

This formal limit illustrates the optimal efficiency attainable under vanishing noise, though it would require infinite cooling. It defines an ideal attractor toward which AAE converges asymptotically under persistent feedback (the principle of least action).
In agent-based systems such as pheromone-driven foraging, $\beta(t)$ can be derived from first principles, linking feedback strength and dissipation to the evolution of noise. Unlike free-energy approaches that require mutual-information corrections under feedback [24,25], AAE maintains strict monotonicity without modification.
Table 1. Classification of dynamical regimes for action (SDAAP) and AAE ($\alpha$).
| Regime | $\dot\beta(t)$ | $d\langle S\rangle_t/dt$ | Interpretation |
|---|---|---|---|
| Self-organization (SDDAAP) | $\dot\beta(t) > 0$ | $d\langle S\rangle_t/dt < 0$ | $\langle S\rangle_t$ Lyapunov ↓; $d\alpha/dt > 0$; $\alpha$ rises |
| Steady state/attractor (SDLAAP) | $\dot\beta(t) = 0$ | $d\langle S\rangle_t/dt = 0$ | NESS; $\alpha$ constant |
| Disorganization (SDIAAP) | $\dot\beta(t) < 0$ | $d\langle S\rangle_t/dt > 0$ | Lyapunov fails; $d\alpha/dt < 0$; $\alpha$ falls |
7. Connection to path entropy, MaxCal and MEPP during self-organization.
At any fixed time $t$, the canonical weight $P_t[\gamma]$ is the maximum-caliber solution given the current constraint on the mean action.
Here, the path entropy $H(t)$ is defined as the Shannon entropy of the path distribution $P_t[\gamma]$:

$$H(t) = -\int \mathcal{D}\gamma\; P_t[\gamma]\,\ln P_t[\gamma],$$

which quantifies the dynamical uncertainty over trajectories $\gamma$. Substituting the canonical form of $P_t[\gamma]$ yields:

$$H(t) = \beta(t)\,\langle S\rangle_t + \ln Z(t).$$

Under (C1)–(C6) and $\dot\beta(t) > 0$, $H(t)$ decreases monotonically, reflecting the feedback-driven concentration of paths. Its time derivative is:

$$\frac{dH}{dt} = -\beta(t)\,\dot\beta(t)\,\mathrm{Var}_t(S) \le 0,$$

with strict inequality when $\beta(t) > 0$ (C2), $\mathrm{Var}_t(S) > 0$ (C6), and $\dot\beta(t) > 0$.
Thus, self-organization appears as entropy reduction in path space, even though each instantaneous distribution remains the maximum-entropy one subject to tightening constraints. While the principle of maximum entropy production (MEPP) is not universally true, it can emerge from MaxCal when the steady-state boundary/flux constraints are specified appropriately [20]. For instance, if constraints include fluxes or currents, the steady state maximizing entropy production coincides with the MaxCal solution.
Thus, under $\dot\beta(t) > 0$, self-organization manifests as path entropy reduction. Additionally, MEPP arises from MaxCal for specific flux constraints, linking statistical inference (MaxCal), dynamics (self-organization), and nonequilibrium thermodynamics (MEPP).
Beyond justifying the canonical weight, MaxCal also constructs a path action: maximizing path entropy subject to dynamical constraints yields a canonical path distribution and an associated action functional, with the most-probable path obeying Euler–Lagrange equations [21].
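On a finite path set, the identity $H(t) = \beta(t)\langle S\rangle_t + \ln Z(t)$ and the monotone drop of path entropy under rising $\beta$ can be verified directly (a sketch with illustrative actions; the discrete sum stands in for the path integral):

```python
import numpy as np

def logsumexp(x):
    """Numerically stable log of sum of exp(x)."""
    m = x.max()
    return float(m + np.log(np.exp(x - m).sum()))

rng = np.random.default_rng(3)
S = rng.uniform(1.0, 10.0, size=800)   # illustrative positive path actions

entropies = []
for beta in np.linspace(0.0, 4.0, 30): # rising beta: tightening feedback
    logZ = logsumexp(-beta * S)
    p = np.exp(-beta * S - logZ)       # canonical path weights
    H = float(-(p * np.log(p)).sum())  # Shannon path entropy
    # canonical identity: H = beta * <S> + ln Z
    assert abs(H - (beta * float(p @ S) + logZ)) < 1e-8
    entropies.append(H)

# dH/dbeta = -beta * Var(S) <= 0: entropy falls as beta rises
print(entropies[0], entropies[-1])
```

Both checks mirror the derivation above: the identity holds exactly, and entropy never increases along the rising-$\beta$ schedule.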
8. Time-dependent action functionals.
If the environment evolves—changing topology or costs—then the action functional becomes time-dependent, $S_t[\gamma]$, and $\langle S\rangle_t$ changes for two reasons: the explicit time dependence of the action and the feedback-driven evolution of the path distribution. Preliminary results suggest that the Lyapunov monotonicity of AAE persists under adiabatic conditions. That is, when system parameters evolve slowly compared to internal relaxation times, the monotonic rise of AAE persists, indicating robustness of the self-organization principle under non-stationary but adiabatic conditions. Extending Theorem 1 to such non-autonomous dynamics is left for future work.
9. Average Action Efficiency as an Empirical Diagnostic
In experiments or simulations with $n$ agents, each performing $m$ system events (e.g., crossings between endpoints), let $S_{ij}(t)$ be the action of the $j$-th event of agent $i$ at time $t$. Here a system event denotes the elementary, repeatable transition whose accumulated action we track; e.g. a nest–food crossing in the ant model, a fluid-parcel turnover in thermal convection, or a catalytic cycle in a biochemical oscillator. This model serves as a minimal representation of distributed path optimization, analogous to strategies used in swarm robotics, collective neural computation, and reinforcement-based learning systems.
Both the number of events $N(t)$ and the accumulated action $Q(t) = \sum_{i=1}^{n}\sum_{j=1}^{m} S_{ij}(t)$ are time-dependent quantities that increase as the system self-organizes. They are evaluated at each instant $t$, not averaged over the full simulation.
To compare systems or detect phase transitions in simulations or experiments, we define the empirical Average Action Efficiency ($\alpha_{\mathrm{emp}}$). While $\langle S\rangle_t$ from the theoretical model reflects the ensemble-average action cost at time $t$, it does not capture the rate or sharpness of organizational change. In contrast, $\alpha_{\mathrm{emp}}$, computed from empirical observables—accumulated action $Q(t)$ and event count $N(t)$—quantifies how efficiently functionally relevant events are executed over time. A system may exhibit a low $\langle S\rangle_t$ even in a disordered state, if all actions are uniformly low; conversely, a high $\langle S\rangle_t$ may appear during rapid restructuring. Thus, $\langle S\rangle_t$ alone cannot resolve organizational intensity, whereas $\alpha_{\mathrm{emp}}$ provides a dynamic measure of emergent order.
The empirical average action per event is then:

$$\bar{S}(t) = \frac{Q(t)}{N(t)}, \qquad \alpha_{\mathrm{emp}}(t) = \frac{S_0\,N(t)}{Q(t)} = \frac{S_0}{\bar{S}(t)}. \qquad (6)$$

Here $\bar{S}(t)$ is the empirical mean action per event, computed from simulation or experimental data. This expression reflects how the system’s evolving efficiency, quantified by $\alpha_{\mathrm{emp}}(t)$, increases as the total action per event decreases during self-organization.
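A minimal sketch of this estimator (function and variable names are illustrative, not from the original): given the per-event actions recorded at time $t$, the empirical AAE is the event count over the accumulated action, in units of a reference action $S_0$.

```python
import numpy as np

def empirical_aae(event_actions, S0=1.0):
    """alpha_emp = S0 * N / Q: number of system events per unit of accumulated action."""
    Q = float(np.sum(event_actions))   # accumulated action Q(t)
    N = event_actions.size             # event count N(t)
    return S0 * N / Q

rng = np.random.default_rng(4)
# disordered phase: broad, costly events; organized phase: cheap, regular events
disordered = rng.uniform(5.0, 15.0, size=200)
organized = rng.uniform(1.0, 2.0, size=200)

print(empirical_aae(disordered), empirical_aae(organized))
```

The organized ensemble, whose events cost less action each, yields the higher efficiency, as the definition intends.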
Figure 1 schematically illustrates the cyclic logical flow between first principles, consequences, derivations, empirical action averages, and data verification, as described in this paper.
Due to finite-sample fluctuations and path variability, the empirical estimate $\bar{S}(t)$ differs from the ensemble average $\langle S\rangle_t$, but converges to it as the number of realizations increases. Pooling $R$ statistically independent runs, we construct the empirical histogram. Let $\gamma^{(r)}$ denote the $r$-th trajectory sampled from simulation or experiment. The empirical path distribution is then given by

$$\hat{P}_t[\gamma] = \frac{1}{R}\sum_{r=1}^{R} \delta\!\left[\gamma - \gamma^{(r)}\right],$$

where $\delta[\cdot]$ is a Dirac delta functional that assigns all weight to the sampled path $\gamma^{(r)}$. It approximates the path distribution $P_t[\gamma]$. The empirical average action then satisfies:

$$\langle S\rangle_t^{\mathrm{emp}} = \frac{1}{R}\sum_{r=1}^{R} S\!\left[\gamma^{(r)}\right] \longrightarrow \langle S\rangle_t \quad (R \to \infty).$$

Because $\hat{P}_t[\gamma]$ converges statistically to the theoretical $P_t[\gamma]$ and $\langle S\rangle_t^{\mathrm{emp}} \to \langle S\rangle_t$ as the number of samples increases, the empirical efficiency $\alpha_{\mathrm{emp}}(t)$ inherits the same dynamical trend predicted by Theorem 1. Consequently, in the organizational growth phase where $\dot\beta(t) > 0$ and $\mathrm{Var}_t(S) > 0$, we observe in simulation that $\alpha_{\mathrm{emp}}(t)$ increases monotonically, providing support for the variational prediction. We illustrate the theory using an agent-based ant-model simulation, where agents explore a grid while depositing and responding to pheromones (Figure 2). The collective optimization of trails between food and nest shares key feedback mechanics with many other systems [16]. Multi-run averaging suppresses fluctuations.
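The convergence of the pooled estimate to the ensemble average can be sketched as follows (sampling from a known finite path ensemble stands in for repeated simulation runs; all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
beta = 2.0
S_all = rng.uniform(1.0, 10.0, size=2000)      # finite stand-in for the path ensemble
w = np.exp(-beta * (S_all - S_all.min()))
p = w / w.sum()
exact = float(p @ S_all)                        # ensemble average <S>_t

# pooled empirical estimate over R independent "runs"
estimates = {R: float(rng.choice(S_all, size=R, p=p).mean()) for R in (10, 100, 10_000)}
for R, est in estimates.items():
    print(R, abs(est - exact))                  # error shrinks roughly like 1/sqrt(R)
```

With a fixed canonical target, the sampling error of the pooled mean decays at the usual Monte Carlo rate, which is why multi-run averaging suppresses fluctuations in the simulations above.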
Additional simulation observations in Figure 2 include: (1) a decrease in the variance $\mathrm{Var}_t(S)$ during organization; (2) higher plateau $\alpha$ values in systems with more agents; (3) faster convergence and steeper transitions with larger populations.
This formulation enables direct tests of monotonicity (Theorem 1) and saturation behavior (Remark 2) in experimental and simulated systems. In agent-based models, such as pheromone-guided ant foraging, trajectories $\gamma^{(r)}$ can be obtained from repeated runs under fixed control parameters, allowing $\alpha_{\mathrm{emp}}$ to be calculated as a function of time and system configuration. Consistent with [8], the specification of drives and ports determines which dissipative structures are viable and tends to select more efficient ones. In our notation, such selection pressure appears as trajectory concentration and a rising AAE.
10. Empirical and Operational Validation
In biological systems, AAE can also be directly estimated using established techniques. For instance, in ATP synthase, system events ($N$) are defined as single ATP synthesis events, while the total action ($Q$) is measured calorimetrically or by torque-based methods [26]. This provides a benchmark for operationalizing AAE. The accumulated action per rotational cycle divided by the number of ATP turnover events yields an action per event within 1% of Planck’s constant $h$ [26]. This biochemical nanomachine therefore saturates the Lyapunov bound predicted by Theorem 1, confirming that AAE is both physically meaningful and experimentally measurable. Similar operational definitions could extend to other biological cycles (e.g., Krebs, Calvin), where standardized energy–time metrics allow AAE to be tracked experimentally despite remaining challenges.
11. Computational Feasibility
Modern trajectory engines already output the two ingredients of AAE: (i) the Onsager–Machlup (OM) or Euclidean action along each path—computed natively in path-integral and ring-polymer molecular dynamics packages or neural-network OM minimizers [27,28]; and (ii) the event count $N(t)$ (e.g., proton hops, hydrogen-bond switches, lattice transitions) recorded during the same simulation. As a result, $\alpha_{\mathrm{emp}}$ can be computed on the fly in ab initio or machine-learning MD without extra sampling overhead. This enables direct comparison of Eq. (6) with high-dimensional data. Since $Q$ and $N(t)$ are already produced by trajectory optimizers [27,28], AAE requires no exhaustive path enumeration and is immediately compatible with standard MD outputs. MaxCal-based path "Metropolis" samplers let one generate trajectory ensembles for any action $S[\gamma]$, enabling direct tests of our predictions [22].
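A sketch of such a sampler (the one-dimensional Onsager–Machlup discretization, the drift, and all parameters below are illustrative assumptions, not the cited implementation): local Metropolis moves with weight $e^{-\beta S[\gamma]}$ generate path ensembles whose typical action drops as $\beta$ rises.

```python
import numpy as np

def om_action(path, dt=0.1, drift=lambda x: -x):
    """Discretized Onsager-Machlup-style action for dX = drift(X) dt + noise (illustrative)."""
    dx = np.diff(path)
    return 0.5 * float(np.sum((dx / dt - drift(path[:-1])) ** 2)) * dt

def metropolis_paths(n_steps, length, beta, rng):
    """Sample paths with weight exp(-beta * S[path]) via single-site Metropolis updates."""
    path = rng.normal(0.0, 1.0, size=length)
    S = om_action(path)
    for _ in range(n_steps):
        trial = path.copy()
        i = rng.integers(1, length)            # keep the initial point fixed
        trial[i] += rng.normal(0.0, 0.3)
        S_trial = om_action(trial)
        dS = S_trial - S
        if dS <= 0 or rng.random() < np.exp(-beta * dS):  # Metropolis acceptance
            path, S = trial, S_trial
    return path, S

rng = np.random.default_rng(6)
_, S_hot = metropolis_paths(5000, 20, beta=0.1, rng=rng)
_, S_cold = metropolis_paths(5000, 20, beta=10.0, rng=rng)
print(S_hot, S_cold)   # higher beta concentrates sampling on lower-action paths
```

The high-$\beta$ (low-noise) chain relaxes to much lower actions than the low-$\beta$ chain, reproducing in miniature the trajectory concentration that the theory attributes to tightening feedback.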
12. Reaction–Diffusion Systems
In oscillatory chemical media such as the Belousov–Zhabotinsky (BZ) reaction, stabilized spiral and target patterns propagate regular events and suppress dissipative irregularities. Prior variational analyses of reaction–diffusion fronts show that coherent patterns tend to maximize the event count $N$ (e.g., redox oscillations) while maintaining an approximately constant aggregate action $Q$, estimated from reaction–diffusion Lagrangians [29]. This implies a monotonic increase in $\alpha_{\mathrm{emp}}$ during pattern formation, in agreement with Theorem 1, and identifies a chemical analogue of the dynamics described here.
13. Practical Advantages
The path-integral formulation enables AAE to be applied across systems with arbitrary action functionals, including those with nonlinear dissipation, feedback, or noise. Unlike entropy-based or MEPP-derived metrics that rely on fluxes or gradients, AAE captures the statistical concentration of system trajectories around minimal-action paths. Its two required observables—the event rate (e.g., turnovers, oscillations, hops) and the integrated action Q (from dissipation or Onsager–Machlup functionals)—are routinely accessible in single-molecule assays, calorimetry, or molecular dynamics. Because these quantities are already computed in modern simulations and experiments, AAE offers a lightweight and broadly applicable diagnostic for self-organization, without requiring full microstate resolution or path enumeration.
14. Conclusions
A path-integral derivation shows that the Average Action Efficiency (AAE) is a dimensionless Lyapunov functional that rises monotonically—and thus quantitatively tracks and bounds efficiency—in any feedback-driven, self-organizing stochastic system. Starting from the Stochastic–Dissipative framework and the three action principles (SDDAAP–growth, SDLAAP–plateau, SDIAAP–decline), we proved a Lyapunov theorem: in the self-organization regime AAE rises monotonically and saturates at a finite optimum. Corollaries show that AAE remains constant at steady state and decreases when feedback reverses, completing a minimal dynamical taxonomy. This result links the dynamics of self-organization to the statistical focusing of trajectories around minimal-action paths. AAE fills a long-standing gap between macroscopic entropy-based measures and system-specific order parameters: it is variationally grounded, requires no tuning constants once a reference action scale is fixed, and predicts organizational trends through an elementary Lyapunov identity.
AAE operationalizes path-entropy reduction during trajectory concentration. Interpreting the exponential weight as a Maximum-Caliber (MaxCal) inference result clarifies that our findings are model-level predictions rather than universal laws; under appropriate steady-state flux/boundary constraints, this perspective is compatible with MEPP as an inference consequence, without requiring it here.
Because AAE depends only on an event count and an integrated action, it is measurable via single-molecule tracking or swarm trajectory analysis, enabling direct experimental falsification. Agent-based simulations of ant foraging validate the theoretical prediction: the empirical AAE increases as the system organizes and converges to the predicted attractor value. Single-enzyme data for ATP synthase reach the theoretical optimum. These results outline clear routes for measuring AAE in hydrodynamic, chemical, and biological settings. In experimental settings, AAE can be estimated from single-particle tracking, trajectory ensembles, or path statistics in systems ranging from molecular motors and active matter to cell migration and robotic swarms, where actions along paths can be inferred from energy usage, timing, or trajectory regularity. AAE operationalizes the efficiency bias discussed in dissipative-structure accounts [8], providing a dimensionless, model-independent indicator that increases during organization regardless of microscopic details.
Practically, the theorem provides a variational design rule: real-time control of measurable variance and noise reduction can maximize self-organization efficiency in synthetic systems—from swarm robotics to catalytic reactors. Future work will extend the theory to time-dependent action functionals and test whether known entropy-based principles emerge as limiting cases. This program enables generalized variational diagnostics of nonequilibrium organization.
Funding
This research received no external funding.
Acknowledgments
The author acknowledges support from Assumption University through Faculty Development and Course Load Reduction grants, as well as support for undergraduate summer research. Additional institutional support was provided by Worcester Polytechnic Institute.
Conflicts of Interest
The author declares no conflicts of interest.
References
- Seifert, U. Stochastic thermodynamics, fluctuation theorems and molecular machines. Rep. Prog. Phys. 2012, 75, 126001.
- Dewar, R.C. Maximum entropy production and the fluctuation theorem. J. Phys. A: Math. Gen. 2005, 38, L371.
- Endres, R.G. Entropy production selects nonequilibrium states in multistable systems. Sci. Rep. 2017, 7, 14437.
- Machlup, S.; Onsager, L. Fluctuations and irreversible process. II. Systems with kinetic energy. Phys. Rev. 1953, 91, 1512.
- Graham, R. Covariant formulation of non-equilibrium statistical thermodynamics. Z. Phys. B Condens. Matter 1977, 26, 397–405.
- Freidlin, M.I.; Wentzell, A.D. Random Perturbations of Dynamical Systems; Springer, 1998.
- Gay-Balmaz, F.; Yoshimura, H. A variational formulation of nonequilibrium thermodynamics for discrete open systems with mass and heat transfer. Entropy 2018, 20, 163.
- Ueltzhöffer, K.; Da Costa, L.; Cialfi, D.; Friston, K. A drive towards thermodynamic efficiency for dissipative structures in chemical reaction networks. Entropy 2021, 23, 1115.
- Nigmatullin, R.; Prokopenko, M. Thermodynamic efficiency of interactions in self-organizing systems. Entropy 2021, 23, 757.
- Georgiev, G.; Georgiev, I. The least action and the metric of an organized system. Open Syst. Inf. Dyn. 2002, 9, 371.
- Georgiev, G.; Daly, M.; Gombos, E.; Vinod, A.; Hoonjan, G. Increase of organization in complex systems. Int. J. Math. Comput. Sci. 2012, 6, 1477.
- Georgiev, G.Y. A quantitative measure, mechanism and attractor for self-organization in networked complex systems. In Self-Organizing Systems (IWSOS 2012), Delft, The Netherlands, 15–16 March 2012; Proceedings; pp. 90–95.
- Georgiev, G.Y.; Henry, K.; Bates, T.; Gombos, E.; Casey, A.; Daly, M.; Vinod, A.; Lee, H. Mechanism of organization increase in complex systems. Complexity 2015, 21, 18–28.
- Georgiev, G.Y.; Chatterjee, A.; Iannacchione, G. Exponential Self-Organization and Moore’s Law: Measures and Mechanisms. Complexity 2017, 2017.
- Butler, T.H.; Georgiev, G.Y. Self-Organization in Stellar Evolution: Size-Complexity Rule. In Efficiency in Complex Systems: Self-Organization Towards Increased Efficiency; 2021; pp. 53–80.
- Brouillet, M.; Georgiev, G.Y. Modeling and Predicting Self-Organization in Dynamic Systems out of Thermodynamic Equilibrium: Part 1: Attractor, Mechanism and Power Law Scaling. Processes 2024, 12, 2937.
- Jarzynski, C. Nonequilibrium equality for free energy differences. Phys. Rev. Lett. 1997, 78, 2690.
- Hatano, T.; Sasa, S.-i. Steady-state thermodynamics of Langevin systems. Phys. Rev. Lett. 2001, 86, 3463.
- Dewar, R.C. Maximum entropy production as an inference algorithm that translates physical assumptions into macroscopic predictions: Don’t shoot the messenger. Entropy 2009, 11, 931–944.
- Virgo, N. From maximum entropy to maximum entropy production: A new approach. Entropy 2010, 12, 107–126.
- Davis, S.; González, D.; Gutiérrez, G. Probabilistic inference for dynamical systems. Entropy 2018, 20, 696.
- González Diaz, D.; Davis, S.; Curilef, S. Solving equations of motion by using Monte Carlo Metropolis: Novel method via random paths sampling and the maximum caliber principle. Entropy 2020, 22, 916.
- Jülicher, F.; Ajdari, A.; Prost, J. Modeling molecular motors. Rev. Mod. Phys. 1997, 69, 1269.
- Sagawa, T.; Ueda, M. Generalized Jarzynski equality under nonequilibrium feedback control. Phys. Rev. Lett. 2010, 104, 090602.
- Horowitz, J.M.; Vaikuntanathan, S. Nonequilibrium detailed fluctuation theorem for repeated discrete feedback. Phys. Rev. E 2010, 82, 061120.
- Nath, S. Novel molecular insights into ATP synthesis in oxidative phosphorylation based on the principle of least action. Chem. Phys. Lett. 2022, 796, 139561.
- Tuckerman, M.E. Path integration via molecular dynamics. In Quantum Simulations of Complex Many-Body Systems: From Theory to Algorithms; 2002; Vol. 10, p. 269.
- Wang, B.; Jackson, S.; Nakano, A.; Nomura, K.-i.; Vashishta, P.; Kalia, R.; Stevens, M. Neural Network for Principle of Least Action. J. Chem. Inf. Model. 2022, 62, 3346–3351.
- Stavek, J.; Sipek, M.; Sestak, J. The application of the principle of least action to some self-organized chemical reactions. Thermochim. Acta 2002, 388, 441–450.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).