1. Introduction
The challenge of establishing a robust physical foundation to explain the emergence of organized biological systems has long perplexed scientists. A comparable situation arose in the early 20th century within the fields of physics and chemistry. At the time, scientists believed that if they could determine the physical laws governing each individual component of a chemical system, they would be able to describe the system’s properties as a function of these physical variables. While this idea was theoretically sound, it proved impractical, as the prevailing theories of physical science were insufficient for such tasks until the development of quantum theory.
A similar predicament persists in the relationship between physics and biology. In theory, understanding the physical evolution of each constituent part of a biological system should allow us to describe any biological system in its entirety. However, this goal faces significant challenges, not only because of the immense computational complexity involved, but also because our current physical knowledge appears incomplete. A more general principle still seems to be missing, one that could account for the compatibility between quantum and classical physics as well as the spontaneous emergence of order [1,2,3,4,5,6,7].
In this context, the paper aims to demonstrate that the key to linking physics with the formation of biological systems lies in the fundamental physical structure of the universe. Specifically, it explores how the macroscopic classical world, which emerges through the quantum self-decoherence mechanism, naturally reveals a principle of order generation and matter self-assembly.
Without these essential links, grasping how life arises, along with phenomena like intelligence, consciousness, free will, and the dynamics of social structures, within a unified physical framework will likely remain one of the most elusive goals in contemporary science.
2. From Self-Organization to Life, Consciousness and Intelligence
As shown in the first part of this work, from the perspective of universal computation of future states, the spacetime substrate evolves according to the laws of quantum mechanics, efficiently computing successive global quantum states. Then, after a finite time interval, the process of decoherence projects these states into classical configurations, generating the macroscopic evolution we observe. If the universe is effectively performing such computations, its dynamical evolution may be seen as falling within the class of Bounded-Error Quantum Polynomial time, that is, solvable in polynomial time by a quantum computing system with bounded error. In this view, reality unfolds as a quantum computation, punctuated by classical outcomes emerging from the repeated collapse of the wavefunction.
2.1. The Computational Framework of Universe Evolution
The 4-D discrete spacetime structure that arises from the finite speed of light together with quantum uncertainty (see (60, 67) in part one) allows the universe's evolution to be treated as analogous to the development of a discrete computation.
In this case, as with any computation, the universal one exhibits the typical characteristics inherent to the nature of such processes [8]:
Discretization arising from the finite nature of computational resources. One key argument concerns the inherent limitations of any computer computation, namely the finite nature of computational resources. The capacity to represent or store information is restricted to a finite number of bits, and the available floating-point operations per second (FLOPS) are likewise limited. Consequently, achieving a truly ‘continuous’ simulated reality in the strict mathematical sense is unattainable. In a computer computation, infinitesimals and infinities cannot exist, as their representation would require infinite information. This constraint necessitates quantization, whereby spacetime is divided into discrete cells and effects propagate across them at limited velocities. This characteristic is consistent with the minimum uncertainty of quantum mechanics combined with the finite speed of light (see §2.4 in part one).
Finite maximum speed of information transfer. Another common issue in computer computation arises from the inherent limitation of computing power in terms of the speed of executing calculations. Objects within the simulation cannot surpass a certain speed, as doing so would render the simulation unstable and compromise its coherence. Any propagating process cannot travel at an infinite speed, as such a scenario would require an impractical amount of computational power. Therefore, in a discretized representation, the maximum velocity for any moving object or propagating process must conform to a predefined minimum single-operation calculation time. This computational requirement aligns with the finite speed of light or of information transmission in the real universe.
Discretization must be dynamic. The use of fixed-size discrete grids clearly wastes computational resources in spacetime regions where there are no bodies and there is nothing to calculate (so that we could place there just one big cell, saving computational resources). On the one hand, the need to increase the size of the simulation requires lowering the resolution; on the other hand, it is possible to achieve better resolution with smaller cells in the simulation. This dichotomy is already familiar to those creating vast computerized cosmological simulations [9]. The problem is attacked by varying the mass quantization grid resolution as a function of the local mass density and other parameters, leading to the so-called Automatic Tree Refinement (ATR). The Adaptive Moving Mesh Method, a similar approach [10,11,12,13] to that of ATR, would be to vary the size of the cells of the quantized mass grid locally, as a function of kinetic energy density, while at the same time varying the size of the local discrete time-step, which should be kept per-cell as a 4th parameter of space, in order to better distribute the computational power where it is needed the most. By doing so, the grid would become distorted, with different local cell sizes. In a 4D simulation this effect would also involve time being perceived as flowing differently in different parts of the simulation: faster in regions of space with more local kinetic energy density, and slower where there is less. From this standpoint, flat spacetime can be represented by a single large cell, while regions with higher curvature, and consequently more complex energy density dynamics, require a greater cell density to be accurately described. Moreover, the deformation of cells consequent to their variable density distribution gives rise to apparent forces [14], offering a possible explanation for the emergence of gravity and its geometrical nature. Although not immediately apparent, this computational characteristic aligns closely with the concept of gravity in General Relativity. Specifically, the idea of gravity as the expression of an optimization process is consistent with advances in computational physics and information theory [15,16], which demonstrate that Newton's law of gravity and Einstein's field equations can emerge naturally from information-theoretic principles. Verlinde's model [16] was later reformulated by Vopson [15], who showed that the same outcome can be obtained without the arbitrary introduction of holographic screens, instead invoking the second law of infodynamics together with the mass–energy–information equivalence principle. Within this framework, the fundamental law of gravity appears as an optimization process in which matter moves through space to reduce information entropy, thereby not contradicting but rather reinforcing the computational nature of universal reality.
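To make the idea of dynamic discretization concrete, the following minimal 1-D sketch (in Python; the refinement rule, thresholds, and names are illustrative choices of ours, not taken from the cosmological codes cited above) splits cells wherever the local particle content exceeds a threshold, so that resolution concentrates where there is something to compute while nearly empty regions remain covered by a few large cells.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Cell:
    left: float   # left edge of the cell
    right: float  # right edge of the cell

    @property
    def size(self) -> float:
        return self.right - self.left

def mass_in(cell: Cell, particles: List[float]) -> int:
    """Count particles (unit mass each) falling inside the cell."""
    return sum(cell.left <= x < cell.right for x in particles)

def refine(cells: List[Cell], particles: List[float],
           max_mass: int = 4, min_size: float = 1e-3) -> List[Cell]:
    """Recursively split any cell whose mass content exceeds max_mass.

    Sparse regions stay as single large cells; dense regions end up
    covered by many small cells (higher local resolution).
    """
    refined: List[Cell] = []
    for c in cells:
        if mass_in(c, particles) > max_mass and c.size > min_size:
            mid = 0.5 * (c.left + c.right)
            refined += refine([Cell(c.left, mid), Cell(mid, c.right)],
                              particles, max_mass, min_size)
        else:
            refined.append(c)
    return refined

if __name__ == "__main__":
    # A cluster of particles near x = 0.8 and a nearly empty region elsewhere.
    particles = [0.79, 0.80, 0.81, 0.82, 0.83, 0.84, 0.15]
    grid = refine([Cell(0.0, 1.0)], particles)
    for c in grid:
        print(f"cell [{c.left:.3f}, {c.right:.3f})  size={c.size:.3f}  "
              f"mass={mass_in(c, particles)}")
```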
Maximal efficiency of computational method. In principle, there are two ways to compute the future states of a system. The first relies on a classical apparatus composed of conventional bits, which, unlike qubits, cannot create, maintain, or exploit superpositions of states, as they are purely classical entities. The second employs quantum computation, using a system of qubits governed by quantum laws—most notably the evolution of superposed states, which effectively constitutes a form of parallel computing—to perform calculations. The requirement of maximal computational efficiency aligns with the quantum-gravitational foundation of spacetime, which, through self-decoherence, unfolds into the macroscopic real universe.
However, the capabilities of the classical and quantum approaches to predict the future state of a system differ. This distinction becomes evident when considering the calculation of the evolution of a many-body system. In the classical approach, computer bits must compute the position and interactions of each particle at every calculation step. This becomes increasingly challenging (and less precise) due to the chaotic nature of classical evolution. In principle, classical N-body simulations are straightforward, as they primarily entail integrating the 6N ordinary differential equations that describe particle motions. However, in practice, the number of particles, N, is often exceptionally large (on the order of millions to tens of billions, as in simulations carried out at the Max Planck Society's Supercomputing Centre in Garching, Germany). Moreover, the computational expense becomes prohibitive due to the quadratic increase in the number of particle-particle interactions that need to be computed. Consequently, direct integration of the differential equations requires a large increase of calculation and data storage resources for large-scale simulations.
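The quadratic cost mentioned above can be made explicit with a toy direct-summation force evaluation (a schematic sketch in arbitrary units, not one of the production codes used in cosmological simulations): the number of particle-particle interactions per step grows as N(N-1)/2.

```python
import itertools
import math
import random

def direct_forces(positions, masses, G=1.0, softening=1e-3):
    """One brute-force evaluation of pairwise gravitational forces (toy units).

    Returns the force on each particle and the number of pair interactions,
    which scales as N * (N - 1) / 2.
    """
    n = len(positions)
    forces = [[0.0, 0.0, 0.0] for _ in range(n)]
    pairs = 0
    for i, j in itertools.combinations(range(n), 2):
        dx = [positions[j][k] - positions[i][k] for k in range(3)]
        r2 = sum(d * d for d in dx) + softening**2
        f = G * masses[i] * masses[j] / r2
        inv_r = 1.0 / math.sqrt(r2)
        for k in range(3):
            forces[i][k] += f * dx[k] * inv_r
            forces[j][k] -= f * dx[k] * inv_r
        pairs += 1
    return forces, pairs

if __name__ == "__main__":
    for n in (10, 100, 1000):
        pos = [[random.random() for _ in range(3)] for _ in range(n)]
        _, pairs = direct_forces(pos, [1.0] * n)
        print(f"N = {n:5d}  pair interactions = {pairs}")
```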
On the other hand, quantum evolution does not require defining the state of each particle at every step. It addresses the evolution of the global wave of superposed states for all particles. Eventually, when needed, or when decoherence is induced or spontaneously occurs, the classical state of each particle at a specific instant is obtained (calculated) through the wavefunction decay. From this standpoint, "calculated" is equivalent to "measured". This represents a form of optimization that sacrifices knowledge of the classical state at each step and is content with knowing the classical state of each particle at fewer discrete time instants. This approach allows for a quicker computation of the future state of reality with a lesser use of computational resources.
The advantage of quantum computation over classical computation can be heuristically illustrated by the challenge of finding a global minimum. When using methods like classical maximum gradient descent or similar approaches, the pursuit of the global minimum, such as in the determination of prime factors, results in an exponential increase of computational complexity (time) as the number of bits of the number to be factored rises.
In contrast, employing the quantum gradient descent method allows us to identify the global minimum in linear or, at least, polynomial time. This can be loosely conceptualized as follows: in the classical case, it is akin to having a ball fall into each valley sequentially to find a minimum, after which the values of the individual minima must be compared before the overall minimum is determined. The quantum method is akin to simultaneously using a large number of balls spanning the entire energy spectrum. Consequently, at each barrier between two minima (thanks to quantum tunneling), some of the balls can explore the next minimum almost simultaneously. This simultaneous exploration (quantum computing) greatly reduces the time required to probe the entire set of minima. Afterward, wavefunction decay enables the measurement (or detection) of the outcome, identifying the minimum at the classical location of each ball.
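The ball analogy can be caricatured with an ordinary classical script (a loose illustration only: the "simultaneous" exploration below is just a multi-start search on a rugged 1-D landscape, with no genuine quantum parallelism or tunneling, and the landscape itself is an arbitrary test function chosen by us).

```python
import math
import random

def landscape(x: float) -> float:
    """A rugged 1-D potential with many local minima."""
    return 0.02 * x * x + math.sin(3.0 * x) + 0.5 * math.cos(7.0 * x)

def descend(x: float, step: float = 1e-3, iters: int = 2000) -> float:
    """Crude gradient descent from a single starting point (one 'ball')."""
    for _ in range(iters):
        grad = (landscape(x + 1e-5) - landscape(x - 1e-5)) / 2e-5
        x -= step * grad
    return x

def sequential_search(starts):
    """Drop one ball per valley, one after another, then compare the minima."""
    best = None
    for s in starts:
        x = descend(s)
        if best is None or landscape(x) < landscape(best):
            best = x
    return best

def multi_start_search(n_balls: int, lo: float, hi: float):
    """Launch many balls over the whole domain 'at once' and keep the best."""
    xs = [descend(random.uniform(lo, hi)) for _ in range(n_balls)]
    return min(xs, key=landscape)

if __name__ == "__main__":
    random.seed(0)
    seq = sequential_search(starts=[-8, -6, -4, -2, 0, 2, 4, 6, 8])
    par = multi_start_search(n_balls=50, lo=-9.0, hi=9.0)
    print(f"sequential : x = {seq:+.3f}, V = {landscape(seq):+.4f}")
    print(f"multi-start: x = {par:+.3f}, V = {landscape(par):+.4f}")
```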
If we aim to create a simulation on a scale comparable to the vastness of the Universe, we must find a way to address the many-body problem. Currently, solving this problem remains an open challenge in the field of Computer Science. However, Quantum Mechanics appears to be a promising candidate for making the many-body problem manageable.
This sheds light on the reason why physical properties remain undefined until measured; from the standpoint of the computational analogy, it is a direct consequence of the quantum optimization algorithm, in which properties are computed only when necessary. Moreover, the combination of coherent quantum evolution with wavefunction collapse has been proven to constitute a Turing-complete computational process, as evidenced by its application in Quantum Computing for performing computations [17].
Classical computation places the determination of prime factors among the problems not known to be solvable in polynomial time (loosely referred to here as NP problems), whereas quantum computation renders it a polynomial (P) problem via Shor's algorithm. However, not all problems considered NP in classical computation can be reduced to P problems by utilizing quantum computation. This implies that quantum computing may not be universally applicable in simplifying all problems, but only a certain limited class.
An even more intriguing aspect is the possibility of an algorithm capable of addressing the intractable many-body problem, thereby challenging classical approaches. In principle, this would render tractable an entire class of problems characterized by phenomenological representations describable within quantum physics, through the application of quantum computing. However, highly abstract mathematical problems, such as the lattice problem [18], fall outside this framework and may remain intractable. At present, the most well-known successful examples, aside from Shor's algorithm [19] for prime factorization, include Grover's algorithm [20] for inverting black-box functions.
If we accept that universal computation is grounded in the principle of efficiency maximization, potentially by reducing NP problems to P problems, we must also accept that such a reduction cannot generally be achieved. Therefore, in principle, the universal computation cannot simulate any possible universe with arbitrary particle behavior and physical laws, but only a subset, such as one governed by quantum physics. This perspective suggests that the laws of physics are not preordained, but rather emerge from the architecture of the computational algorithm itself, which is deliberately designed to efficiently solve specific classes of problems.
If various instances of universe-like particle simulations were employed to tackle distinct problems, each instance would exhibit different Laws of Physics governing the behavior of its particles.
Therefore, since all the physical laws emerging in a discrete computation are expressions of the problem it seeks to solve, in order to address this aspect we still need to integrate into the SQHM framework a fundamental, as yet undefined, law of reality describing how order emerges spontaneously.
Once this objective is achieved, the SQHM framework reveals that the transition from the self-organizing behavior of matter to the emergence of living systems proceeds through a sequence of critical stages, including the stabilization of ordered structures and the development of replicative and adaptive capacities that require explicit consideration.
By integrating the order-generation law derived from stochastic hydrodynamics with the principles of natural evolution, the theory suggests that intelligence, free will, and consciousness are emergent functions that constitute the ultimate problem-solving goal of universal computation, and are therefore intrinsic characteristics of the physics of the universe.
2.2. Macroscopic Evolution and Far from Equilibrium Order Generation
Even though the quantum basis for the evolution of the macroscopic universe, within whose non-deterministic physical framework free will can be accommodated, has been established in the first part of this work, a scientific bridge between physics and biology is still lacking.
The computational analogy emphasizes that the physical laws governing our observed reality arise from the structure of the underlying computational algorithm, intrinsically shaped by the problem it seeks to solve.
One observed phenomenon driven by the physics of this reality is the tendency toward matter self-organization and the generation of living systems.
At present, the physical law that establishes it is not yet fully defined and its formal structure is not well delineated. It has only been formulated for specific cases, such as atmospheric turbulence [4], electrochemical instabilities [3,7], stationary conditions not very far from equilibrium [1,2,5], and other particular cases discussed by Grandy [6].
In this section, we will show that, based on the SQHM, it is possible to define an energy-dissipation function whose stationarity governs the evolution of macroscopic far from equilibrium behavior and possibly leads to order generation.
2.2.1. The Classical Nature of Gases and Mean-Field Fluids: The SQHM Kinetic Equation
When the De Broglie wavelength approaches infinity (relative to the physical length of our system), the quantum potential suppresses mass density fluctuations , leading both the probability in the phase space (where is the particle density in a phase space point) to converge to , to and to . Consequently, the SQHM converges to quantum mechanics, and acquires its full quantum meaning, as in Eqs. (1–4) in part one of this work.
Nevertheless, it is important to note that the SQHM equations of motion (Eqs. 7–9 and 14–16 in part one) were derived under the condition that the system's physical length is on the order of the De Broglie wavelength [21]. They are therefore applicable primarily to mesoscale systems, to quantum–stochastic dynamics, such as those described by the Langevin–Schrödinger equation [22], and to the transition from quantum dynamics to the classical macroscopic regime.
The main consequence of this approximation is that the noise (see Eq. (7) of part one) depends solely on time, whereas, over macroscopic distances, it generally depends on both time and space.
Therefore, in order to derive the dynamics of macroscopic systems, we have to start from the full SQHM equations [22,23] (and references therein), in which the force noise owns the correlation function given in [24]. In a macroscopic system, when the quantum correlation length is much smaller than the physical length of the system (e.g., the mean particle distance or the free molecular path for an isotropic gas phase), the classical local dynamics is achieved and the force noise acquires a white-noise spectrum, whose correlation function, from (3-4), reads as in (5).
From Eq. (5), it is worth noting that classical mechanics, emerging as large-scale incoherent quantum mechanics, is inherently stochastic, with fluctuations representing an irreducible remnant of the quantum nature of spacetime subject to GBN. Consequently, deterministic classical mechanics is merely a theoretical abstraction that can never be fully realized in practice—a limitation rooted in the quantum foundations of the universe and the geometric nature of gravity.
In this case, when considering a system containing a vast number of particles interacting through a sufficiently weak potential (such as the Lennard–Jones potential), we can define a statistical local cell, with a side length exceeding the range of interaction of the quantum potential (see (28) in part one of this work), containing a large number of quantum-decoupled molecules (treated as elementary constituents). Under this assumption, the overall system can be ideally partitioned into numerous classical subsystems, the statistical local cells, with randomly distributed characteristics. These subsystems are quantum-uncorrelated, as quantum entanglement is confined to a few individual molecules (in gases, only colliding bi-molecular quantum interactions have physical relevance), which may interact only at the very thin boundaries between adjacent statistical local cells.
This allows us to express the statistical distribution of these subsystems (of N particles) from the SQHM evolution and, therefore, independently of the establishment of local thermodynamic equilibrium.
Since, in principle, for sufficiently rarefied gas phases (and even for fluids above the critical temperature of the quantum phase; see § 2.2 in part one) each molecule can be treated as an uncorrelated quantum subsystem, the system can be described by the statistical single-particle distribution (see (22) below) [25], which follows from the corresponding SQHM distribution.
In this case, we can use the single-molecule hydrodynamic representation (see Eqs. (1–4, 7–16, part one of the work)), since in the classical macroscopic regime each molecule excludes the others from the volume it occupies, rather than the exact hydrodynamic representation for many-body systems (see Appendix A.2).
Under this approximation, the SQHM mass density of a single molecule is subject to three forces. The first, of Hamiltonian origin, is described by the phase space velocity vector built from the particle Hamiltonian and the mean-field potential of the cloud of the other molecules. The second contribution comes from the force inputs of colliding molecules. The third one refers to the ballistic-diffusive enlargement of the molecular mass density packet (assumed pseudo-Gaussian) induced by the SQHM dynamics [23,26] through the quantum potential and the noise.
As a matter of fact, there is a very large difference between the characteristic time of molecular collisions and that of the noise driving the SQHM mass density diffusion. This makes it difficult to describe the overall kinetics of Equations (6-8). Therefore, to obtain a meaningful overall framework, we focus on the molecular kinetics on which the ballistic-diffusive SQHM dynamics acts [23].
In order to do that, we consider the evolution of averages of phase space velocities. Furthermore, on time intervals much larger than the characteristic molecular collision time, the term due to random molecular collisions can be described by a stochastic differential equation with a drift and a random white noise, leading to a stochastic overall phase space velocity field described by the Fokker-Planck phase space conservation equation [27,28], from which the phase space density of particles follows, which in its turn also obeys the Fokker-Planck equation [28].
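As a minimal illustration of what a "drift plus random white noise" stochastic differential equation means in practice, the sketch below integrates a one-dimensional Langevin-type equation dv = -γ v dt + sqrt(2D) dW with the Euler–Maruyama scheme; the coefficients and their values are illustrative placeholders, not the SQHM expressions discussed in this section. The sampled stationary variance approaches D/γ, the value predicted by the corresponding Fokker-Planck equation.

```python
import math
import random

def euler_maruyama(v0: float, gamma: float, D: float,
                   dt: float, n_steps: int) -> float:
    """Integrate dv = -gamma*v*dt + sqrt(2*D)*dW and return the final value."""
    v = v0
    sqrt_2D = math.sqrt(2.0 * D)
    for _ in range(n_steps):
        dW = random.gauss(0.0, math.sqrt(dt))   # white-noise increment
        v += -gamma * v * dt + sqrt_2D * dW     # drift + stochastic kick
    return v

if __name__ == "__main__":
    random.seed(1)
    gamma, D, dt, steps, n_traj = 1.0, 0.5, 1e-2, 2000, 300
    finals = [euler_maruyama(0.0, gamma, D, dt, steps) for _ in range(n_traj)]
    var = sum(v * v for v in finals) / n_traj
    print(f"sampled stationary variance ~ {var:.3f} "
          f"(Fokker-Planck prediction D/gamma = {D / gamma:.3f})")
```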
Equation (16) is not sufficient to define the evolution law of the statistical distribution (see (22) below), which requires an additional relation. Traditionally, one way forward is to consider the Boltzmann kinetic equation. However, semi-empirical relations between thermodynamic forces and fluxes, such as Fick's law, are also employed. Unlike the Boltzmann equation [28], Onsager's semi-empirical relations between forces and fluxes, valid under local thermodynamic equilibrium [29], lack detailed information about the structure of the thermodynamic coefficients they contain (treated as empirical constants), which is essential for fully interpreting their physical meaning. Moreover, the local equilibrium condition restricts the applicability of such additional semi-empirical relations.
Here, in order to obtain the far-from-equilibrium kinetics from the SQHM in a mathematically closed form, we assume the semi-empirical relation of the form
It is important to note that the validity of Equation (17) extends beyond the regime of local equilibrium, as the SQHM kinetics continue to apply even in far-from-equilibrium conditions. This is due to the fact that the quantities involved, including the one defined subsequently in Equation (22) within the SQHM framework, retain their validity outside local equilibrium.
2.3. The Mean Phase Space Volume of the Molecular Mass Density
In order to assign a precise meaning to Equation (16), we must define the term arising from the ballistic-diffusive enlargement of the molecular mass density induced by the SQHM [23]. In the preceding section, we identified (for gases and mean-field fluids) three concurrent dynamics:
I. Free expansion of a molecule's probability mass distribution (PMD), between two consecutive collisions, when surrounding molecules are at distances well beyond the quantum correlation length.
II. Upon molecular collision, shrinkage of the molecular PMD, constrained by the reduced free volume available to it owing to the presence of the colliding partner, which limits the accessible space.
III. The diffusion of molecules, in terms of their mean position, as a consequence of molecular collisions.
Point I is justified by the fact that, immediately after a collision, the molecule resumes its SQHM dynamics, undergoing both ballistic and diffusive free expansion. The ballistic phase dominates at the onset of the expansion, when the colliding molecule rapidly moves away and the molecular mass density remains highly localized, within a physical length scale slightly larger than the quantum correlation lengths. As the expansion continues and the distribution extends much beyond these distances, the diffusive regime becomes predominant [23].
Point II is supported by the fact that the rarefied phase exhibits classical behavior, in which molecules remain distinguishable and, confined to their respective spatial domains, mutually exclude one another from the volumes they occupy. In this regime, no quantum superposition state sharing space with neighboring molecules occurs. However, as two molecules approach each other, their exclusive volumes shrink, reaching a minimum at the onset of collision, where the system of the colliding molecules may enter a fully quantum regime, possibly involving a momentary two-molecule superposition of states. Once the molecules separate and their mutual distance exceeds both relevant quantum correlation lengths, the expansion kinetics described in Point I resumes.
As a consequence of the free expansions and shrinkages of the molecular PMD (which we may take to correspond to a pseudo-Gaussian wavefunction), together with the associated momentum distribution, each molecule will occupy in time a mean phase space volume (MPSV), which we can pose as an average over the molecules within the local phase-space domain, assuming equality between time averages and phase-space averages.
Moreover, since the MPSV of a molecule, in a given phase space volume, has to be a fraction of that available per molecule, and since it is larger when the SQHM fluctuations are stronger and the molecular collision time τ is longer, we can assume a proportionality relation in which the number of molecules in the local domain appears. The value of the proportionality parameter is determined by the free energy constant at thermodynamic equilibrium, to which it is directly related [25].
The MPSV generated by the SQHM dynamics (7) for a single molecule is, in this regime, predominantly diffusive, with a diffusion coefficient governed by the parameter representing the mean fluctuation amplitude induced by the GBN in the quantum potential within the SQHM equations of motion. Notably, in principle, this parameter is distinct from the molecular temperature, which is related to the amplitude of the molecule's own energy fluctuations.
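The ballistic-then-diffusive expansion invoked above can be visualized with a toy persistent-random-walk model (a schematic analogy, not the SQHM dynamics itself; all parameter values are arbitrary): for times shorter than the collision time the spread of positions grows roughly linearly with time, while for much longer times it grows roughly as the square root of time.

```python
import math
import random

def cloud_width(n_walkers: int, n_steps: int, dt: float,
                speed: float, collision_time: float) -> list:
    """Standard deviation of positions of 1-D walkers that move at constant
    speed and re-randomize their direction, on average, every collision_time."""
    xs = [0.0] * n_walkers
    vs = [random.choice((-speed, speed)) for _ in range(n_walkers)]
    p_collision = dt / collision_time          # chance of a 'collision' per step
    widths = []
    for _ in range(n_steps):
        for i in range(n_walkers):
            if random.random() < p_collision:
                vs[i] = random.choice((-speed, speed))
            xs[i] += vs[i] * dt
        mean = sum(xs) / n_walkers
        widths.append(math.sqrt(sum((x - mean) ** 2 for x in xs) / n_walkers))
    return widths

if __name__ == "__main__":
    random.seed(2)
    dt, tau = 1e-3, 0.1
    w = cloud_width(n_walkers=500, n_steps=3000, dt=dt, speed=1.0,
                    collision_time=tau)
    # For t << tau the width grows roughly as speed * t (ballistic regime);
    # for t >> tau it grows roughly as sqrt(2*D*t), with D ~ speed**2 * tau.
    for step in (10, 50, 1000, 3000):
        print(f"t = {step * dt:5.2f}   width = {w[step - 1]:.3f}")
```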
Therefore, by using both the definition of the statistical distribution and the identities (17, 21) (the last equality holding when the distribution can be assumed practically point-like), a relation follows for stationary states, leading to (24). Furthermore, by introducing (24) into the semi-empirical relation (17), written in explicit form, relation (25) follows.
Equations (25), and later (30), reflect the classical behavior of gases and mean-field liquids, in which each molecule excludes others from the phase-space region into which its mass density expands. As a consequence, depending on the local physical conditions, the available phase space for nearby molecules is reduced, driving them to migrate toward regions with a larger accessible phase-space volume.
This phenomenon originates from the molecular flux per unit phase-space volume induced by the SQHM dynamics, which drives molecules toward regions with greater available phase-space volume per particle, and hence in the direction opposite to the corresponding gradient.
Furthermore, following the empirical evidence that quantum phenomena emerge as the thermodynamic temperature approaches zero, the GBN temperature and the molecular temperature are interconnected. In the SQHM model, the asymptotic zero-temperature state corresponds to a pure quantum state, which is reached when the GBN fluctuations vanish. Since a perfectly deterministic quantum global state forbids molecular energy randomization, we conjecture that the two temperatures are coupled, and that either can vanish if and only if the other does. The above conjecture suggests that a reduction in quantum potential fluctuations is accompanied by an increase in quantum entanglement, which in turn suppresses the randomness of molecular motion, randomness that is associated with molecular thermal energy fluctuations. More precisely, when the GBN fluctuations vanish, classical molecular localization breaks down, and the system transitions to a perfectly quantum, deterministic state. Conversely, the disappearance of energy fluctuations in the quantum system implies that deterministic quantum mechanics is exactly realized.
Although this transition is theoretically complex, empirical evidence allows us to assert that vacuum fluctuations and thermal fluctuations are coupled. Furthermore, since this coupling is mediated by gravitational interactions (see §2), it can be assumed to be weak, allowing us to postulate the following series expansion:
In cases where the thermal energy does not receive contributions from the internal structure of the molecules (as is typical for many gases and classical mean-field fluids at room temperature), isotropic molecules can be treated as structureless, point-like particles interacting via a centrally symmetric potential. In this situation, the relevant directions of variation are aligned, leading to the relations given below.
Thus, Eq. (27) can be simplified, from which it follows, at first order, the relation (30). Therefore, by using the one-particle distribution (15-16), Equation (16) can be recast and, by introducing (30) into it, the kinetic equation finally follows.
2.4. Far from Equilibrium Maximum Hydrodynamic Free Energy Dissipation in Stationary States
The existence of such a state function far from equilibrium allows for the formulation of a formal criterion for evolution. Insights into its nature can be gained by noting that, at local equilibrium, it converges to the (normalized) free energy [25]. Rooted in the SQHM framework, we can conceptualize it as the normalized quantum hydrodynamic free energy.
In order to derive a criterion of evolution far from equilibrium, we observe that its total differential can be written as the sum of two terms: the "dynamic differential" and the "stochastic differential" (35). In (35) the stochastic velocity vector evolves through pathways with negative increments of the function. Moreover, by utilizing (29) we can recognize that it is anti-parallel to the corresponding gradient and, therefore, we can affirm that criterion (36) holds along the relaxation pathway. If we speak in terms of the positive amount of dissipation of normalized quantum hydrodynamic free energy (traceable to thermodynamic free-energy dissipation at local equilibrium [25]), we obtain criterion (37).
It should be noted that the validity of criteria (36) and (37) is restricted to systems composed of
(i) structureless, point-like particles,
(ii) not undergoing chemical reactions, and
(iii) interacting through sufficiently weak, centrally symmetric potentials (e.g., Lennard-Jones type).
Although this criterion is widely applicable in classical gas physics and mean-field fluid theory, both of which are of paramount importance to life, it is not generally valid.
2.5. Stability and Maximum Hydrodynamic Free Energy Dissipation in Quasi-Isothermal Stationary States
To clarify the practical significance of the criterion provided in (37), we analyze the spatial kinetics far from equilibrium.
Given a quantity per particle, its spatial density, and its first moment, and by applying the standard mathematical procedure [30], the equation of motion (31) can be transformed into a spatial kinetic equation, as follows:
Given that, at local thermodynamic equilibrium, this function reduces to the free energy, it is worth investigating the case in which the condition below holds, where the "mechanical" temperature is defined with a constant determined at thermodynamic equilibrium.
Since life typically emerges in environments characterized by small thermal gradients, we assume local thermal equilibrium in the local spatial volume , but conditions far from equilibrium with respect to electrochemical and mechanical variables.
On this assumption, after some manipulations [25], the quantum hydrodynamic free energy at constant volume on a local spatial volume can be expressed in terms of the local internal energy and the macroscopic kinetic energy of the system, together with a "source" term and an out-of-equilibrium contribution; at local thermal equilibrium, it involves the mean entropy and the mean hydrodynamic free energy (for brevity, we omit the adjective quantum) of the local domain.
Equation (44) can be used to determine the properties of the matter phase based on electrochemical and mechanical variables far from local equilibrium.
Both contributions are zero at local equilibrium, as the local gradients vanish.
Moreover, by making the above terms explicit, we obtain an expression in which the vector perpendicular to the infinitesimal element of the boundary surface appears. It is worth noting that, for potentials that are not functions of the momenta, the corresponding term in (51) can be taken out of the integral.
2.6. Quasi-Isothermal Systems at Constant-Volume: Maximum Free Energy Dissipation
The significance of stationary quasi-isothermal states, far from equilibrium under constant volume conditions, arises from the operational context observed in living systems. The constant volume condition is attributable both to the low compressibility of the fluids, such as water, and to that of the polymers comprising these systems.
When considering the overall setup (the system together with its environment), the energy reservoir can in some cases maintain the system in a stationary state over long laboratory time scales, even as the combined system (system plus reservoirs) relaxes toward global equilibrium.
Moreover, we assume that both the system and the energy reservoir are at constant volume, and that the reservoir is thermally isolated. Without loss of generality, we may further assume that the energy reservoirs are much larger than the energy handled by the system per unit time and act on it reversibly. Under these conditions, the decrease in the reservoir’s free energy equals the free energy transferred to the system through volume forces.
For stationary states under approximately isothermal conditions at constant volume (with fixed boundaries), the relations below follow, where the suffixes "sur" and "vol" refer to contributions coming from the boundary surface and the volume of the system, respectively. Therefore, (44) reads
Moreover, under quite general conditions, when the system departs from local electro-mechano-chemical equilibrium while remaining in local thermal equilibrium, in the case of most real gases and related fluid phases, such as van der Waals fluids, the assumptions below can be made, from which the subsequent relations follow. Furthermore, if the free energy supplied by the reservoir is dissipated as heat and then reversibly transferred to the environment through a surface maintained at constant temperature, the local energy conservation equation can be written explicitly and, finally, recast by means of (57).
Since, in a stationary state, the relevant quantities are constant and do not depend explicitly on time, the free-energy dissipation is maximized with respect to the admissible variations, so that the complementary quantity is minimized with respect to the possible choices.
Therefore, for a classical phase of molecules undergoing elastic collisions and no chemical reactions, under quasi-isothermal conditions at constant volume, the maximization of hydrodynamic free-energy dissipation corresponds to the maximization of thermodynamic free-energy dissipation.
This is consistent with Sawada's findings [3], which experimentally showed that, once a steady-state configuration is reached in electro-convective instability, the system achieves maximum free energy dissipation [7].
2.7. Quasi-Isothermal Constant-Volume Systems Without Reversible Free-Energy Reservoirs: Maximum Heat Transfer
In general, when it is not possible to control the reversible free-energy supply from the reservoirs, as in the case of atmospheric instabilities, the free-energy dissipation of the reservoirs differs from that released by the system. In the absence of an explicit expression for the free-energy dissipation, Eq. (61), under the assumption of local thermal equilibrium, leads to Equation (62), which reflects the 'maximum heat transfer' principle formulated by Malkus and Veronis [4], relevant to both fluid dynamics and atmospheric turbulence.
2.8. Theoretical Contextualization of Order Generation via SQHM
In far-from-equilibrium states, it is possible to define the hydrodynamic free energy and the hydrodynamic distribution function that describe the statistical state of a classical system and its kinetics.
Once the equations describing the evolution of the system are defined, the principle of maximal dissipation of the stochastic differential of the normalized hydrodynamic free energy emerges in far-from-equilibrium stationary states.
This principle does not contradict the earlier formulations proposed by Sawada [3], Malkus and Veronis [4], and even that by Prigogine [1]; rather, it is consistent with them and helps elucidate the nuanced interrelations among these theories. The derivation in (61-62) is not generally valid, but holds as a particular case under quasi-isothermal conditions for a classical gas and a mean-field fluid phase.
The proposed principle highlights free-energy–based dissipation as the fundamental physical mechanism propelling the emergence of order in fluids and gas phases.
Under far-from-equilibrium conditions, any system that seeks to dissipate its hydrodynamic free energy as rapidly as possible is compelled to traverse configurations in which order arises. Sawada's electro-convective experiments effectively confirm this principle under quasi-isothermal and isovolumetric conditions, typical of fluid systems (e.g., water), with potential relevance to processes associated with the emergence of life.
3. From the Generation of Order to the Formation of Stable, Organized Structures
The possibility of inducing order in classical phases, such as in fluids and gases, doesn’t inherently result in the creation of stable well-organized structures, with proper form and functions, required for life. Achieving the stability and form retention akin to solids is crucial, and this necessitates rheological properties characteristic of solids. Thus, nature had to address the challenge of replicating the rheology of solids while retaining the classical properties of fluids or gases such as molecular mobility and diffusion, in which order can be generated.
The solution offered by nature to this predicament lies in bi-phasic materials. Bi-phasic aggregations, such as gels, are composed of a solid matrix, often polymeric in living organisms, with pores filled by a fluid. In these formations, the solid matrix may constitute as little as 2-3% of the total weight, yet it imparts significant rubber-like elasticity to the overall structure, with a substantial shear elastic modulus [31]. This generally far exceeds the modulus of aerogels, whose pores are filled with gas. Gels, due to their incompressible interstitial fluid, outperform aerogels in forming organized structures with solid consistency and complex functionalities [31], and were therefore strongly favored over aerogels in the development of living structures.
However, the path to the formation of living systems is far from straightforward, since the synthesis of natural bi-phasic materials, characterized by chemical affinity between the network and the interstitial fluid [31], was constrained by the availability of molecules in the cosmic chemical environment.
Once gels paved the way for the emergence of stable, organized solid structures and life, the next challenge was to identify a suitable interstitial fluid—one that was both versatile and widely available, and that could operate at temperatures compatible with the active functions of living systems, including energy dissipation and information storage into the structure.
3.1. The Fluid Problem
The availability of fluids in planetary environments, beyond terrestrial water, imposes specific constraints. Here, we enumerate some possible cases:
Europa (moon of Jupiter): Europa is covered by a thick ice crust, but beneath this ice, there might be a subsurface ocean. The exact composition of this ocean is not well-known, but it is believed to be a mixture of water and various salts.
Titan (moon of Saturn): Titan has lakes and seas made of liquid hydrocarbons, primarily ethane and methane. The surface conditions on Titan, with extremely low temperatures and high atmospheric pressure, allow these substances to exist in liquid form.
Enceladus (moon of Saturn): Similar to Europa, Enceladus is an icy moon with evidence of a subsurface ocean. The composition of this ocean is also likely to include water and possibly some dissolved minerals.
Venus: Venus has an extremely hot and hostile surface, but some scientists have proposed the existence of “lava oceans” composed of molten rock. These would be much hotter and denser than typical water-based oceans on Earth.
Exoplanets: The discovery of exoplanets with diverse conditions has expanded the possibilities for liquid environments. Depending on the atmospheric and surface conditions, liquids other than water and methane could exist, such as ammonia, sulfuric acid, or various exotic compounds.
Additional restrictions arise from the requirement to construct bi-phasic materials comprising a polymer network. Beyond water, which is abundant in the universe, the concept of bi-phasic materials involving a polymer network and interstitial liquids such as methane or ethane remains theoretically plausible. However, their feasibility depends on several factors, including material compatibility and environmental conditions such as temperature and pressure.
3.2. The Information Storing Problem
As previously noted, polymer networks in biphasic materials provide structural support, while liquids act as fillers or components that impart specific properties, including electro-convective diffusional transport of molecules, support of chemical reactions, energy supply, storage and transport, and the maintenance of ordering functions such as tissue repair and information processing.
In the case of biphasic materials, several considerations come into play:
Compatibility: The polymer network and the interstitial liquid should exhibit chemical affinity in order to form a stable biphasic material [31]. This may involve considering the chemical and physical interactions between the polymer and the liquid phase.
Multi-configurational expression of polymer gel-based systems: Among polymers, amino acids assemble into a multitude of protein molecules whose folding processes, influenced by fluid dynamics and environmental conditions, give rise to a vast repertoire of molecular conformations. These conformations underpin numerous gel-like structures and functions, such as those of chromosomes, nucleoli, the nuclear matrix, the cytoskeleton, the extracellular matrix, and more.
Structural Integrity: The stability and structural integrity of the biphasic material over time need to account for factors such as the potential for phase separation or degradation of the polymer network.
Regarding the first two points, it is crucial to evaluate whether amino acids can facilitate protein formation in alternative fluids, such as methane or ethane. This is because their various folding configurations in aqueous solutions enable the creation of an information storage mechanism through the folding-unfolding process. In fact, beginning with a defined protein system (the chromosomal complement of an embryonic cell), an entire living organism is generated through a cascade process (unfolding set). Conversely, an individual living organism can preserve information about its structure by storing it in an encrypted format. A similar issue is currently under investigation in computer science, where researchers are seeking techniques to compress files to a minimal set of instructions (folding procedure), from which the complete original file can be extracted (unfolding). Therefore, the emergence of proteins in biphasic living structures represents another indispensable step towards the formation of complex living systems. On this matter, we note that the process of protein formation typically entails intricate biochemical processes taking place in aqueous environments. Proteins, as substantial biomolecules composed of amino acids, undergo a highly specific and complex synthesis that hinges on the interactions among amino acids within aqueous solutions.
Methane, on the other hand, is a hydrophobic (water-repelling) molecule and is not conducive to the types of interactions required for the formation of proteins in the way we understand them on Earth.
Therefore, when contemplating the temperature and external pressure conditions necessary for ethane or methane to exist in a liquid phase, conditions that significantly influence factors crucial for life, such as the diffusion and mobility of chemical species, it becomes evident that water, combined with carbon-based chemistry, stands out as the most conducive fluid for supporting life. We can therefore anticipate that, if any ordered structure is discovered in ethane or methane fluids in the presence of sufficient energy sources, it will not attain the complexity of those generated in water.
The processes of life, including the formation of proteins, are intricately tied to the properties of water as a solvent. In aqueous environments, amino acids can interact through hydrogen bonding and other forces to fold into specific three-dimensional structures, forming proteins. Protein folding is fundamental for information storage, which in turn is essential for maintaining specific tasks and the diversification of living functions in different organs, and for preserving information about the overall organization of living organisms necessary to establish their reproduction cycles. This process is also propaedeutic to the establishment of the selection process required for the perfecting of living structures.
With this perspective, the solution for the existence of highly sophisticated living organisms lies in structures composed of aqueous gels. These structures not only provide support but also host active functions by facilitating the flow of ions and molecular species, sustaining the maintenance or strengthening of the structure and the storage of information, and producing, as final waste, the dissipation of free energy and the products of the associated reactions.
Basic active functions encompass various aspects, such as the mass transport essential for the plastic reconstruction of damaged tissues, organs or other living structures, the contractile muscle function responsible for macroscopic movements, the deposition of biochemical substances in neurons for information storage or to support psychological functions underlying consciousness and so on. From this viewpoint, the continuous energy-based reshaping and treatment of information within living organisms are the driving forces behind life, making the material itself merely the continuously rebuilt and modeled substrate.
It is worth noting that this energetic back-function mechanism, whereby the material substrate is continuously reshaped by energy, constitutes the fundamental process that characterizes living systems. Indeed, natural intelligence, consciousness, and intentionality can be understood as expressions of this role of energy, through which living beings are perpetually renewed in response to their environment. This common basis of all structures and organs in a living organism, concurring to the formation of the so-called body-mind, represents a major challenge in the pursuit of consciousness in artificial, so-called intelligent machines [32].
The difference between a living organism and an equal pile of atoms of carbon, hydrogen, oxygen, nitrogen and small quantities of salt is the energy flow: the first thinks, speaks and walks, the pile of the same quantity and type of matter does not. This empirical observation raises a fundamental question: if the material substrate is the same, where does life reside? All the uniqueness we value in a person or a living organism arises from the synergistic relationships among its elementary components—namely, the cells—relationships that are formed and maintained through continuous exchanges of energy and information, constantly reshaping the whole.
On this basis, we can also distinguish between the artificial intelligence of a machine and biological intelligence [32]: in the former, the only current is a flow of electrons whose "plastic" function is confined to the state of the computer bits. By contrast, natural living systems exhibit numerous chemical fluxes, and their plastic functions involve entire structural networks.
3.3. The Emergence of Complexity from Free Energy’s Drive to Order: Intrinsic Natural Intelligence
The evolution of life toward more complex functions was facilitated by the availability of gel-like materials exhibiting both solid-like rheology and fluid-like properties. These materials enabled the localization of specific functions to defined regions, allowing interactions that shaped the living structure. In this way, intricate forms and functions co-emerged.
The acquired shape and ordered functions were maintained over time, enabling evolutionary selection to enhance order in organized systems. This specialization of functions contributed to the ascent of the pyramid of life's perfection, involving the development of living organisms.
To gain an overall understanding, it is helpful to outline the key elements required for the formation of living structures:
Level 1 – Driving Force: Free energy dissipation.
Level 2 – Classical Material Phases: Temporary order generation in fluids and gases.
Level 3 – Mechanism of Order Maintenance and Complex Rheological Structure Formation: Stable order acquisition in biphasic systems, featuring a solid network permeated by fluid.
Level 4 – Carbon-Based Polymeric Networks in Aqueous Solution: The most functional system for energy and mass storage and transport, as well as information processing via protein folding.
Level 1 is available and active throughout the universe. Level 2 is widespread across planets and interstellar space in the form of hydrogen, water, methane, ethane, and other elements. Level 3 requires the sustained presence of liquid phases, supported by energy gradients and sources, over long timescales, and is feasible only on planets of suitable size with a magnetosphere. Level 4 is possible on planets located in the temperate zones around medium-sized stars that have accreted debris and asteroids originating from supernovae.
The long chain of conditions leading to the emergence of living organisms with a high level of structural organization and functionality makes life a highly unlikely state, requiring numerous resources and components from all over the universe. From this standpoint, reality almost appears as if the entire universe were a laboratory purposefully prepared for this outcome.
Furthermore, from the perspective of universal 4-D spacetime, the formation of the Earth, dating back about 5 billion years, spans a significant fraction of the universe's lifetime. This suggests that the entire timescale of the universe is also required for the development of life, from LUCA through bacteria, archaea, and eukaryotes. Hence, if life were to arise elsewhere in the universe, it would represent a parallel, contemporaneous expression of existence; no consecutive or sequential emergence of life processes could have been generated by our reality to date. The very problem that the universe, in both its spatial and temporal extension, appears designed to address is precisely the generation of life.
3.4. From Ordered Structures to Living Systems
So far, we have highlighted that the mere existence of a 'force' propelling material systems to organize themselves as rheological biphasic polymeric and protein gels, far from thermodynamic equilibrium, is not sufficient to generate living organisms; a second stage is necessary.
It’s clear that order and structure are essential in the formation of living organisms. To grasp what life truly is, or to understand the difference between a collection of molecules and the same amount of matter in a living organism, consider the following thought experiment: if we were to hypothetically separate the molecules of a living organism, moving them far apart in a reversible manner, life would instantly stop. However, if we then return those molecules to their exact original positions, life would reappear. So, if the matter itself doesn’t change between these two states, what, then, defines life?
This thought experiment clearly illustrates that life emerges from the interactions among components and is, at its core, an immaterial phenomenon—primarily an energetic and informational process driven by the synergistic activity of its parts.
From this perspective, we may generalize that the driving principle behind organization gives rise to living organisms and underpins the emergence of complex dynamic systems composed of numerous small functional components, shaping their behavior and evolution. Consequently, social structures, too, should be regarded as part of the framework of living systems.
In this context, the progression of the universe can be understood as a problem-solving process—an expression of the natural intelligence embedded in all of nature and reality. Our aim here is to explain how the organizing tendency of energy naturally gives rise to processes that we recognize as intelligent—processes directed at addressing environmental challenges and discovering optimal evolutionary pathways. This perspective highlights a fundamental property of nature: consciousness and intelligence cannot arise from an immutable material substrate. Instead, they emerge from an energy-driven, self-modifying structure aimed at improving efficiency within a given environmental context.
Let us conduct a more in-depth analysis of this point. Let H_j be the Hamiltonian function describing the j-th part of the living system (even an individual in a social structure). Then, the total Hamiltonian function of the N decoupled parts (when they are far away from each other) is given by the sum of the individual Hamiltonians H_j.
Meanwhile, when the N parts are interacting together in the "living" state, the Hamiltonian function is formally composed of interaction terms stemming from groups with an increasing number of components, where the two-body terms refer to the mutual interaction between pairs of molecules (or elements), the three-body terms to triplets, and so on, up to the interaction among all N molecules together. Therefore, the synergistic phenomena between parts in a living system, composed of N elements such as cells in a body or individuals in a society, are contained in the term (65) collecting all interactions of order two and higher.
Given the number of groups of k elements, which reads the binomial coefficient C(N, k) = N!/(k!(N-k)!), the total number of interaction terms, with k spanning from 2 to all N elements of the system, reads (67) the sum of C(N, k) over k from 2 to N, equal to 2^N - N - 1, which exhibits an exponential-like increase as a function of the number of simultaneously interacting elements in the group.
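A direct count makes this growth explicit. The short script below (a plain numerical check; the values of N are arbitrary) sums the binomial coefficients C(N, k) for k = 2..N and verifies the closed form 2^N - N - 1 used above.

```python
from math import comb

def synergistic_terms(n: int) -> int:
    """Number of interaction terms among groups of 2..n elements."""
    return sum(comb(n, k) for k in range(2, n + 1))   # equals 2**n - n - 1

if __name__ == "__main__":
    for n in (3, 5, 10, 30, 100):
        exact = synergistic_terms(n)
        closed_form = 2**n - n - 1
        assert exact == closed_form
        print(f"N = {n:3d}: interaction terms = {exact}  (~ 2^N = {2**n})")
```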
The importance of the synergistic contributions in the interaction terms in (65) naturally diminishes as the group size increases in systems without particular symmetries. For instance, consider the first synergistic term, concerning the interaction between couples of individuals in a social system: it is at the basis of reproduction and is of paramount importance with respect to synergistic activities involving three or more elements, which concern the production of goods, services, and life protection.
Nevertheless, even though the high-order terms give decreasing contributions, their number increases so quickly that their total effect may be as important as that of the preceding ones.
In fact, considering the number $M$ in (67), for systems with $N$ hugely large (as in complex living systems or social organizations) the binomial coefficient can be approximated by the formula

$$\binom{N}{k} \simeq 2^{N} \sqrt{\frac{2}{\pi N}}\; e^{-\frac{2}{N}\left(k-\frac{N}{2}\right)^{2}}, \qquad (68)$$

which, for $k = N/2$, reaches its highest value

$$\binom{N}{N/2} \simeq 2^{N} \sqrt{\frac{2}{\pi N}}, \qquad (69)$$

well expressing how the number $M$ grows exponentially in systems with a huge number $N$ of total components.
For the roughly $N \sim 10^{13}$ cells of a biological system, (69) gives values of order

$$M \sim 2^{10^{13}}, \qquad (70)$$

and likewise for social aggregations such as cities or nations, with $N \sim 10^{6}\!-\!10^{8}$ individuals, for which

$$M \sim 2^{10^{6}}\!-\!2^{10^{8}}. \qquad (71)$$

Since $M$ in (70)–(71) is really huge, the synergistic terms furnish a relevant contribution to the system's evolution.
For instance, let us consider the case where each synergistic contribution in the interaction Hamiltonian terms of (65) involves an energetic contribution of order $\epsilon$ times that of a single element $H_j$, such as

$$\big\langle H^{(k)} \big\rangle \sim \epsilon\, \big\langle H_j \big\rangle. \qquad (72)$$

Even for $\epsilon$ very small, the total synergistic output,

$$E_{syn} \sim \epsilon\, M\, \big\langle H_j \big\rangle, \qquad (73)$$

results much higher than the sum of the single outputs $N \langle H_j \rangle$, with a gain ratio that reads

$$G = \frac{E_{syn}}{N \big\langle H_j \big\rangle} \sim \frac{\epsilon \left(2^{N} - N - 1\right)}{N}, \qquad (74)$$

where, typically, $\epsilon$ is determined by the system's structure and remains independent of $N$, which, in contrast, can attain very large values.
Nonetheless, it should be noted that in large groups not every individual can directly interact with all the others; therefore, effective synergistic groups usually have $k \ll N$ members. However, in highly organized systems such as living organisms or cities, where it is common to see thousands or even tens of thousands of people engaged in collaborative activities, the synergistic contributions remain significant. Furthermore, with the advent of modern communication technologies, the number of synergistic interactions continues to increase rapidly.
From this, we can infer that individuals living in a city could never achieve, in isolation, the same well-being that they can accomplish through coordinated efforts. This cooperation is essential for ensuring a comfortable and healthy life. The exponential growth of synergistic outputs also explains the rise of megacities, particularly in developing countries, where urbanization is a key driver for improving poor living standards. The synergistic perspective also clearly explains the positive effects of resource redistribution among different social categories. The same effect operates in biological systems, where complex functions could not be achieved without the synergistic cooperation between organs and the millions of cells therein.
To illustrate the benefits of synergy, we can consider a simple example: a group of individuals facing extremely cold conditions. When they huddle together, the surface area for heat loss decreases, allowing them to survive in very low temperatures. On the other hand, if they remain apart, they would lose more energy and survival would become impossible under the same conditions. A similar principle applies to securing food in early human civilizations, which catalyzed the development of social structures.
The key factor in triggering the effect of synergistic collaborative interaction lies in the specialization of functions, whereby each group of N members concentrates all their effort on achieving a specific outcome that could never be accomplished by a single individual, who would otherwise need to handle numerous tasks and pursue multiple objectives simultaneously. This holds true both for biological systems and for social ones. In the latter case, this problem constitutes a fundamental basis of survival science, illustrating that even starting a fire is an exceptionally challenging task for an isolated individual lacking the technological means and knowledge provided by society.
In an agriculture-based early society with a sparse population, where families operate as mostly self-sufficient units, the synergistic effects are limited to a small number of individuals, leading to modest outputs. In such contexts, the desire to have many children, within groups of families with genetic ties, arises from the need to increase the potential for synergy, although this effect is generally capped at around a few tens of individuals.
The major turning point came with the rise of industrial society, when hundreds, thousands, or even hundreds of thousands of people could collaborate efficiently. This surge in synergistic activity fueled the wealth and well-being of modern civilization. These dynamics explain why people migrated from rural areas to cities and megacities, and why, paradoxically, poorer regions often have larger cities—a well-known global trend.
Furthermore, in industrial cities, where dense populations and industrial infrastructure already fulfill the need for social and economic synergy, the necessity for large families diminishes. As a result, populations in industrialized societies tend to stabilize or even gradually decline. However, this effect is offset by the increasing efficiency of farmers, who require fewer and fewer workers to produce the same output. This solution, emerging from natural intelligence, can enhance personal income and health without increasing the overall consumption of Earth’s finite resources, thereby suggesting a rational and efficient— and thus genuinely intelligent—approach to the so-called ecological energy transition.
It is also important to recognize that, since energy is a conserved quantity, the increase in output from energetic resources is proportional to those resources, whereas the increase in output from organization can grow exponentially. Given that the Earth system is finite, organizational synergy is the key mechanism capable of generating growth and improvement at constant energetic expense. Therefore, it should be the primary focus of any nation and of any ecological transition strategy.
Although not fully understood, the concept of efficiency is embedded in the universal computation and has been continuously pursued over time. The dynamics described here drive the development of ever more efficient systems, capable of overcoming increasingly difficult challenges and propagating themselves throughout the real world.
In this way, a natural selection mechanism is spontaneously embedded within living and social systems. This inherent drive for improvement in organized structures and their functions represents a form of natural intelligence that permeates universal reality and drives its evolution.
4. Evolution, Free Will, and Consciousness: A Bounded Probabilistic Path Beyond Determinism and Probabilism
The SQHM theoretical framework addresses the problem of time that arises in 4D classical General Relativity, where, given the initial conditions, 4D spacetime leads to predetermined future states. In this context, time does not flow but merely functions as a ‘coordinate’ within 4D spacetime, losing the dynamic significance it holds in real life. The same holds from the quantum-mechanical standpoint: even if the future state can be given in the form of a superposition of states, it is predetermined by the initial conditions. This deterministic property is broken by the introduction of the GBN, which induces quantum self-decoherence to the classical state and introduces a probabilistic character into the outcome of quantum evolution (see § 2.1 of part one). In the presence of the GBN, the predetermination of future states depends on the inherent randomness of the GBN itself. Even if, in principle, the GBN may be pseudorandom, resulting from spacetime evolution since the Big Bang, it appears random to all subsystems of the universe, with an encryption key that remains inaccessible. Furthermore, even if from inside reality we could have proof of the pseudo-random nature of the GBN, featuring a high level of randomness, the challenge of deciphering the key remains insurmountable [33] and the encryption key practically irretrievable. The detailed definition of all GBN noisy ‘wrinkles’ across the universe is beyond the reach of any finite subsystem, as the speed of light is finite and gathering complete information from the entire universe would require a practically infinite time frame. Therefore, from inside the “universal reality computation”, the GBN appears truly random.
This is essentially equivalent to having the seed encoding the GBN outside the computed environment, inaccessible to us. In such a case, we are effectively dealing with an instance of a one-time pad, which, being equivalent to secure deletion, is known to be unbreakable and permanently unrecoverable.
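To make the one-time-pad analogy concrete, the following minimal sketch (the message and key are arbitrary illustrative choices) shows why a ciphertext produced with a truly inaccessible key carries no recoverable information: any plaintext of the same length corresponds to some key, so all decryptions are equally plausible.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each byte of the message with one key byte."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"next classical state"
key = secrets.token_bytes(len(message))      # plays the role of the inaccessible GBN seed
ciphertext = xor_bytes(message, key)

# With the key, decryption is exact; without it, any guess is as good as any other:
assert xor_bytes(ciphertext, key) == message
alternative = b"a totally different "        # same length as the original message
fake_key = xor_bytes(ciphertext, alternative)
assert xor_bytes(ciphertext, fake_key) == alternative
print("ciphertext:", ciphertext.hex())
```

In the text's analogy, the GBN plays the role of the key: as long as it cannot be reconstructed from inside reality, the computed future remains effectively encrypted.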
Since the key of the GBN is irretrievable, and the indeterminacy of future states consequently grows exponentially, we can understand the limitations of free will. In this context, the temporal horizon for predicting future states (before they occur) is necessarily limited within reality. Given the limited precision available, among the many possible future states it is possible to determine or influence the outcome only within a certain time frame, suggesting that free will is bounded.
Another characteristic of the free will existing in the universal reality can be inferred from the following observation. Since the decision about which state of reality is realized is not directly determined by preceding events beyond a certain past time interval (in the SQHM, this results from 4D quantum decoherence and the decoupling from preceding instants), we can conclude that such a decision is not predetermined.
This arises because universal computation unfolds through successive quantum collapses into classical states. At each moment, multiple possible future states coexist in quantum superposition, and the collapse induced by decoherence provides a genuine opportunity to select which of these potential realities becomes actualized.
From the perspective of the computational algorithm, the decoherence-induced selection implemented by the physical laws drives the progression toward the intended outcome according to a predefined procedure—one based on computing the optimal next configuration of the universe, in which efficiency plays a central role. Within realized reality, this process manifests as what we perceive as an ‘inherent natural intelligence,’ capable of selecting the best subsequent state to achieve its goal. In this sense, genuine free will is ensured by the multiverse structure of the future and by the effective decoupling of present states from past quantum correlations.
Free will, in this context, emerges as a form of bounded probabilism inherent to universal evolution. It takes advantage of probabilism to eliminate determinism from universal evolution, but it is also a process bounded from below, preventing it from falling into an indeterminism ruled solely by chance (the ultimate nightmare of human thought), as the maximization of free-energy dissipation minimizes information loss as much as possible.
From the perspective of living individuals—subsystems that pursue the same overarching goal as the universe that generated them—this process enables the free selection of the most favorable future state from their point of view. This mirrors the universe’s own mode of operation: the drive toward an optimal future state is embedded in the fundamental physical laws emerging from its computation, directing the cosmos toward the final resolution of the problem that the computation itself aims to solve.
Universal progression is thus fundamentally rooted in the drive for maximum efficiency. From the inception of its operation, this is achieved through the optimization of the ‘computation,’ with the real-time realization of the next optimal state thereby inducing order and organization.
4.1. The Emergent Forms of Intelligence
The discrete computational nature of reality shows that a form of natural intelligence has evolved through a selection rule based on the assumption that ‘the most efficient system is the best solution’. At a fundamental level, the emergence of living systems is rooted in the physical laws governing matter self-assembly driven by free-energy dissipation. Here, efficiency plays a crucial role, improved by a process that we can identify as natural intelligence.
While at the universal level the achievement of the most efficient state is conceived from the point of view of the overall system, from the standpoint of a single living organism the best future state is referred to the organism itself, even if attaining it may be destructive for the environment and for other individuals.
Therefore, this criterion does not yield a unique outcome, as it gives rise to two distinct methodologies. The original and foundational one is synergistic intelligence, which seeks collaborative actions aimed at sharing resources or constructing a more efficient system or structure from which the entire global system can benefit. The alternative, or collateral, form is predatory intelligence, wherein the subject acquires resources by overcoming or destroying the antagonist. The former has played a crucial role in shaping organized systems—such as living organisms—as well as social structures and their behaviors. The latter, by contrast, represents a kind of shortcut: it achieves immediate results, but often at the expense of long-term productivity or systemic coherence. From this perspective, synergistic intelligence is more demanding and represents a higher form of intelligence that is destined to prevail over extended timescales. Moreover, since intelligence arises from the synergistic construction underlying matter’s self-assembly and ordered functioning, it appears as the only genuine form of intelligence—superior to the so called “predatory intelligence.” The latter, when adopted, reflects a form of negligence, a lack of ability to grasp the long-term, superior outcomes achievable through synergistic intelligence.
Predatory intelligence is inherently self-contradictory: if taken to the extreme, once it has prevailed and destroyed everything else, it is left with only two options—destroying itself or changing its method. Thus, even if the predatory intelligence of living systems might lead to total destruction, from the standpoint of the universal problem-solving algorithm, evolution will continue toward the assembly of new living systems, likely more advanced and capable of producing different outcomes. A clear example of this dynamic is the possibility of a nuclear war, which represents an expression of humanity’s predatory intelligence.
Humans may think they rule the world, yet in truth, humankind remains under the constant and challenging examination of nature itself.
4.2. Dynamical Consciousness
Following the quantum, yet macroscopically classical dynamics of the SQHM, all objects, including living organisms, within the computed reality undergo re-calculations (definition of their classical state), at every time step after decoherence. This process irreversibly propels them forward into the future within reality. The reality that emerges from this framework also appears to give rise to the organization of matter and the formation of complex structures through an emergent process that we can recognize as natural intelligence endowed with free will. The fundamental concept driving this evolution is efficiency, from determining the universe’s classical future state to shaping living organisms. Life itself can be seen as the ultimate expression of efficiency, preserving as much energy and information as possible from dissipation.
On this basis, we are in a position to formulate the assumption that the mind of living systems—considered as a macroscopic classical object—follows the quantum-to-classical decoherence mechanism, which gives rise to the efficient computation of its next future state, consistently guiding it toward the goal of achieving the optimal future state of the entire living organism.
In the universal computation’s task of achieving the optimal future state—in terms of generating order and maximizing information conservation—consciousness and intelligence in living organisms emerge to express, maintain, and pursue the same overarching goal: preserving life, sustaining organization, and improving efficiency. Within this framework, free will is inherent in the choice of event-chains that can lead to the desired outcome. Each living system makes use of this potential to find its own best possible future state, replicating, at a lower scale, the free-will functioning of the universal computation. From this perspective, consciousness, rationality, and free will—though present in varying degrees among living beings—are not exclusive to humans, but are also found in animals, and in different forms across all living organisms.
4.3. On the Computability of Consciousness
Once we have outlined how consciousness, intelligence, and free will emerge, the concept of their computability naturally arises as the ability to replicate the evolution of the mind in living organisms. In this context, we note that at least three elements are involved: a self-evolving structure, the environment to which it is intimately connected, and an efficient computational mechanism capable of performing the basic operations of the mind in real time.
Even though so-called artificial intelligence systems have been developed in recent years, a major problem remains behind the scenes.
It concerns the fact that the abstract possibility of computing, or mimicking, consciousness does not mean that we can effectively do it.
In fact, since the demand for computing resources grows exponentially with data complexity, adding the functionalities required to fully replicate consciousness may currently be unfeasible. The core problem in realizing an ‘artificial consciousness’ lies in the need for an efficient computational process, such as that used in the universal computation. Traditional classical methods generally exhibit poor scalability, with computational complexity escalating sharply as the number of variables grows. The underlying mechanism employed by nature is that, in generating consciousness in living organisms, the universal computation utilizes the same optimal methodology it applies to compute the discrete states of classical reality, in accordance with the behavior of matter and the processes leading to the emergence of life.
The real challenge lies in the efficiency of calculations, particularly in relation to the scalability of the problem. The significant time and energy required to implement a classical computation of quantum phenomena—such as consciousness and all aspects of reality governed by universal computation—render such approaches non-scalable, effectively classifying artificial consciousness as an NP problem.
Therefore, quantum computing becomes essential to make the problem scalable, potentially transforming classically intractable (NP-like) problems, such as replicating consciousness, into efficiently solvable ones.
This raises serious questions about the classical computability of full consciousness, especially if we aim to reproduce temporal consciousness by incorporating mechanisms such as ‘retention’ and ‘protention,’ along with the integration of multiple sensory inputs and higher-level functions of the biological mind [34,35]. The efficiency and low energy cost with which the mind produces its behavior reveal the hidden use of efficient operating algorithms.
If quantum mechanics, which gives rise to classical reality, is capable of bringing about life and the emergence of living beings, then, in principle, having an efficient algorithm to fully capture quantum mechanics would allow us to reproduce universal natural evolution on computers. In practice, however, this remains unfeasible: algorithms that represent quantum mechanics using classical bits are still inefficient and scale exponentially, making the computation of life an NP-hard problem. Indeed, we are not even able to simulate the functioning of a relatively simple biological mind—not even that of an ant—because doing so with classical bits remains computationally intractable.
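A back-of-the-envelope estimate makes the scaling problem tangible (a minimal sketch; the assumption of 16 bytes per complex amplitude is illustrative): an exact classical representation of an n-qubit state requires 2^n complex amplitudes.

```python
def state_vector_memory_gib(n_qubits: int, bytes_per_amplitude: int = 16) -> float:
    """Memory (in GiB) needed to store all 2^n complex amplitudes of an n-qubit state."""
    return (2 ** n_qubits) * bytes_per_amplitude / 2 ** 30

for n in (10, 30, 50, 80):
    print(f"{n:3d} qubits -> {state_vector_memory_gib(n):.3e} GiB")
```

Around 50 qubits the requirement is already beyond the total memory of today's largest classical machines, which is why brute-force classical simulation of quantum dynamics, let alone of a biological mind described in these terms, scales exponentially.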
The way out is that, even if we do not know of an efficient quantum algorithm, we can empirically make use of what the universe itself employs—namely quantum wavefunction collapse—as a means of computing the outputs underlying the functioning of qubits. Therefore, the only plausible ‘efficient algorithm’ for realizing or replicating mind-like behavior is quantum computation.
From this perspective, the development of quantum computing must focus on ensuring effectiveness. However, challenges remain in creating large-scale qubit systems for the development of quantum computation methods with large computational resources, which are essential for accurately mimicking ‘real-time’ mind behaviors (see
Appendix A.1).
Given the inherent nature of universal time progression—where optimal performance is achieved through quantum computation involving stepwise quantum evolution and wavefunction collapse (output extraction)—a living system, in order to generate real-time consciousness and intelligence, must inevitably adopt an analogous, highest-performing strategy by replicating the intelligent universal method of quantum computation.
The computational efficiency of the mind is evidently justified by the complexity of its functioning, as described by various well-established models in neuroscience [34]. These models conceptualize consciousness in the biological mind as a three-level process [34,37]. Starting from the outermost level and moving inward, these levels include cognitive computation, an orientative emotional stage, and, at the most fundamental level, a discrete-time acquisition routine with an adaptive scanning rate. This mechanism gives rise to the subjective perception of time dilation, where brief moments can be experienced as significantly extended periods in the subject's mind. The routine captures previous states (retention), which are stored and processed to generate expectations for future moments (protention [36]) anticipated at the next acquisition step. The comparison between predicted and actual states then informs decision-making at higher levels. This discrete-time acquisition of external inputs, used to construct a unified representation of reality [38,39,40], closely parallels the emergence of the classical world from a discrete microscopic spacetime quantum structure.
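Purely as a schematic illustration of the discrete-time acquisition routine just described (a toy sketch: the linear extrapolation used for protention and the adaptive-rate rule are illustrative assumptions, not the models cited above), retention, protention, and the prediction-error comparison can be written as a simple loop:

```python
def acquisition_loop(signal, base_dt=1.0, k=0.5):
    """Toy retention/protention cycle: store past samples (retention), extrapolate the next
    one (protention), compare it with the actual input, and adapt the scanning interval."""
    retention = []                     # stored past states
    for actual in signal:
        if len(retention) >= 2:        # protention: linear extrapolation of the last two samples
            predicted = retention[-1] + (retention[-1] - retention[-2])
        else:
            predicted = retention[-1] if retention else actual
        error = abs(actual - predicted)
        dt = base_dt / (1.0 + k * error)   # surprises shorten the scanning interval
        retention.append(actual)
        yield predicted, actual, error, dt

samples = [0.0, 0.1, 0.2, 0.3, 2.0, 2.1, 2.2]   # a sudden jump at the fifth sample
for predicted, actual, error, dt in acquisition_loop(samples):
    print(f"predicted {predicted:.2f}  actual {actual:.2f}  error {error:.2f}  dt {dt:.2f}")
```

The unexpected jump produces a large prediction error and a shortened acquisition interval, a crude analogue of the subjective time dilation mentioned above.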
The key advance realized by the biological mind may lie in its ability to transpose and exploit this universal computational procedure at the macroscopic level. In three-dimensional systems, this process typically occurs at the microscopic de Broglie scale—on the order of molecular dimensions, much smaller than biological cells. However, in systems with ordered two-dimensional or even one-dimensional structures (as shown in § 2.2 of the first part of this work), it can be extended to much larger distances, since the quantum coherence length can reach higher values. Just as superconductive polymers, owing to their one-dimensional polymeric structure, are among the most promising candidates for achieving high-temperature superconductivity [41]—thus opening the way to the assembly of large qubit systems—microtubules in neurons might similarly give rise to macroscopic, transient quantum-coherent superpositions of states involving multiple neurons. These states would then undergo decoherence, leading to the final classical state of each neuron, corresponding to the computed outcome (evaluation, awareness, or decision).
This approach aligns with the Penrose–Hameroff–Diósi line of thought [
42], according to which a quantum-mechanical substrate for consciousness can explain various features of human cognition and behavior.
4.4. Intentionality of Consciousness
Intentionality is contingent upon the fundamental function of intelligence, which empowers the intelligent system to respond to environmental situations. When potentially beneficial objectives are identified, intentionality is activated to initiate action. However, this reliance is constrained by the underlying laws of physics, which explicitly express the task the universal computation has been set to solve.
Generally speaking, in our reality, consciousness, intelligence, and free will address all the needs essential for the development of order, namely life and increasingly efficient organized structures, including basic requirements such as the necessity to maintain the integrity and functions of the assembled structure.
Essentially, the intelligent system is calibrated to adhere to the physics of the universal computing method, advancing to the next instant by searching for the optimal state in the most efficient possible way, namely through quantum computation.
This problem-solving disposition is, however, constrained by physical interaction with the environment. From this perspective, when we turn to artificial machines, it becomes evident that they cannot develop intentionality solely through computation, since they lack integration with the real world and, to date, do not employ quantum state superposition to efficiently compute outcomes, decisions, and convictions. In biological intelligence, by contrast, structure is deeply intertwined with the environment through the physical processes that drive reality forward. A living system not only produces energy and manages information but also engages in a reciprocal dynamic in which energy shapes and develops the wetware of living structures and their functions (such as structural plastic repair and organ regulation and stimulation), including the very structures responsible for intelligence, through the transfer and delivery of matter to neurons (e.g., neurotransmitters) and the shaping of their connections.
In contrast, a computational machine lacks the capacity for its hardware to interact dynamically with the external environment and be modified by its inputs. As a result, such machines remain outside the processes of natural selection, self-enhancement, and genuine free will that enable systems to choose their optimal future states. In other words, intentionality, the driving force behind the pursuit of desired solutions, cannot be fully or authentically developed through computational procedures executed by immutable hardware. Intentionality, as an expression of free will, is exclusive to continually evolving intelligent systems whose structure and functionality are seamlessly integrated with their environment and continuously updated. Reality accomplishes this through the physical self-evolution of natural polymeric wetware in living systems. Therefore, consciousness does not arise solely from computation; it also emerges as the expression of a self-modifiable structure. In living systems, this accords with the concept of body-mind unity.
It is worth noting that self-modification in computers can be partially replicated at the software level, as is currently done through genetic programming—a machine learning technique that automatically generates computer programs to solve problems. This approach mimics the principles of natural selection and employs genetic operators such as crossover and mutation to evolve a population of programs, with the goal of finding the most effective solution to a given task.
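A minimal sketch of this idea (illustrative only: the target behavior, fitness measure, operator rates, and population settings are arbitrary choices, not a reference implementation) evolves small arithmetic programs, represented as expression trees, by selection, crossover, and mutation:

```python
import random

OPS = {"add": lambda a, b: a + b, "sub": lambda a, b: a - b, "mul": lambda a, b: a * b}

def random_tree(depth=3):
    """Grow a random expression over {x, constants, add, sub, mul}."""
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.5 else random.uniform(-2, 2)
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, tuple):
        op, left, right = tree
        return OPS[op](evaluate(left, x), evaluate(right, x))
    return tree                                   # numeric constant

def size(tree):
    return 1 if not isinstance(tree, tuple) else 1 + size(tree[1]) + size(tree[2])

def fitness(tree, xs, target):
    """Squared error plus a small parsimony penalty to keep programs compact."""
    return sum((evaluate(tree, x) - target(x)) ** 2 for x in xs) + 0.01 * size(tree)

def subtrees(tree, path=()):
    yield path, tree
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):
            yield from subtrees(child, path + (i,))

def replace(tree, path, new):
    if not path:
        return new
    node = list(tree)
    node[path[0]] = replace(node[path[0]], path[1:], new)
    return tuple(node)

def crossover(a, b):
    path, _ = random.choice(list(subtrees(a)))
    _, donor = random.choice(list(subtrees(b)))
    return replace(a, path, donor)

def mutate(tree):
    path, _ = random.choice(list(subtrees(tree)))
    return replace(tree, path, random_tree(depth=2))

def evolve(target, generations=40, pop_size=60):
    xs = [i / 5 for i in range(-10, 11)]
    population = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda t: fitness(t, xs, target))
        survivors = population[: pop_size // 3]                    # truncation selection
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return min(population, key=lambda t: fitness(t, xs, target))

best = evolve(target=lambda x: x * x + x)
print("best evolved program:", best)
```

Over the generations the population drifts toward expressions that reproduce the target behavior, mirroring, at the software level, the selection-driven self-modification that the text attributes to natural wetware.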
4.5. Summary of the Section
Considering that the tendency toward maximum entropy is not universally valid, but rather that the most efficient energy dissipation, with the formation of order and living structures, is the emergent law, we are positioned to narrow down the goal motivating the universal computation to two possibilities: the generation of life and/or the realization of efficient, intelligent, and conscious systems.
As the physical laws, along with the resulting evolution of reality, are embedded in the problem that the computation seeks to address, intentionality and free will are inherently manifested within the reality to achieve the computation’s objective. All living functions, including consciousness, intelligence, and free will, are deeply rooted in the quantum foundations of macroscopic reality. They rely on, and reach their highest level of macroscopic expression through, the same algorithm that the universe employs to advance efficiently through time, driving matter’s self-assembly and the emergence of ordered structures with complex functionalities.
Classical computation renders the solution of complex phenomena such as consciousness non-scalable, since these involve the simultaneous integration of numerous inputs and a nonlinear adaptive response. Quantum computing, with its ability to perform computations in parallel and handle superposed and entangled states, theoretically offers a platform to model these complex processes efficiently. Even if consciousness may require quantum processes, we are still far from being able to build quantum systems that are sufficiently stable and scalable to simulate complex cognitive functions or even consciousness. The current limitations in error correction, room-temperature quantum coherence, and large-scale qubit integration remain major obstacles.
5. Broader Implications and Applications
The insights derived from this framework extend far beyond biology and physics, reaching into domains such as social dynamics and economics. The principle of seeking the most efficient solution—an intrinsic feature of universal behavior—shapes not only natural processes but also the structures and interactions that define human society. This perspective offers a novel lens for analyzing phenomena such as economic behavior, inflation control, urban aggregation, and even the cyclical patterns of war and peace. By understanding these processes as rooted in the same natural laws that govern physical reality, the theory suggests new strategies for addressing global challenges with greater effectiveness and foresight.
Such knowledge holds the potential to help humanity confront some of its most persistent problems, reshaping collective behavior for the benefit of the entire Earth system and, ultimately, beyond. To the extent that we understand how the universe evolves and what actions facilitate and promote organization, efficiency and life, we can more effectively evaluate facts, guide our decisions, and enhance both our lives and our psychological well-being. These implications extend far beyond the scope of the present work. Nevertheless, in this spirit,
Appendix A.3 provides a few illustrative examples and potential applications, demonstrating how this perspective may inform and improve real-world decision-making.
5.1. Usefulness of Economic Recessions: The Universal Intelligence Perspective
One of the major debates in economic science revolves around the role of recessionary phases. While the significance and usefulness of expansive economic stages are widely understood—where investments aimed at boosting production output in terms of goods and services are considered the natural course of action—opinions regarding recessionary economic periods are varied. Many individuals believe that recessions are inherently negative and should be avoided altogether. However, from the perspective of how reality unfolds, it can be argued that these economic phases play an important role.
During a recession, consumption typically slows down, and businesses experience reduced incomes, often pushing them toward failure. However, experience demonstrates that the enterprises that emerge from recessions have developed new organized structures that maintain the same output with fewer expenses or, alternatively, produce more and different goods with the same financial outlay. In doing so, they avoid failure and are able to continue paying salaries, sustaining consumption levels. As this attitude spreads throughout society, consumption stops diminishing, layoffs tend to halt, and the recession ends. In essence, recessionary economic phases stimulate an increase in the efficiency of the production system.
It is worth mentioning that, since energy is conserved, the energetic output grows roughly linearly with means of production of the system, while the organization can exponentially increase the output at constant energy consumption. This clearly indicates that economic recession phases are more important than expansionary ones in terms of social progress, and shows that the main task of central authority over time is to enhance societal efficiency through continuous organizational action, while also moderating expansion to prevent excessively dissipative behavior.
Hence, if expansion is characterized by an increase in the production system and is energy-intensive, recession represents a phase focused on efficiency enhancement, where the restructuring of productive resources becomes paramount. However, it is essential to differentiate between various types of recessions: a moderate recession and a severe recession, commonly referred to as a depression. In the case of a depression, the recession is so severe and profound that it disintegrates collaborative activities among people, leading to the collapse of social organization and structures, with irreversible damage and a drastic drop in efficiency. As a result, the productive system suffers irreparable destruction, making restructuring for efficiency enhancement much more difficult. Businesses lay off workers, who remain idle at home and lose their skills. This causes a sharp decline in social efficiency rather than growth. Instead of the continuous reshaping of society through the restructuring of work systems, there is a genuine collapse, which severely impoverishes the means of production, sharply reduces the synergies among people, and forces the reconstruction of social organization.
From the perspective of the synergistic behavior within modern societies, we can grasp the actions necessary to prevent recessions from descending into depressions.
Just as in a living organism, in the economic fabric we must sustain the synergy among people. It is imperative to avert the collapse of enterprises at all costs: financial institutions, workers’ associations, and governmental authorities must collaborate to preserve the capacity of workers to cooperate effectively despite monetary discrepancies and shortfalls in financial records.
Frequently, the downfall of an enterprise stems from a mere bookkeeping issue, where the numbers fail to adhere to rigidly defined rules, often involving amounts that are insignificant compared with the damage caused by their enforcement. During severe recessions, it becomes crucial to set aside economic formalities and focus on preserving the synergies within productive structures through deep restructuring aimed at fostering higher levels of organization and increased efficiency.
6. Discussion: Free Will and Predictability in Goal-Driven Universal Computation
The SQHM framework is anticipated, to some extent, by similar approaches, such as that proposed by Seth Lloyd [43], who describes the universe as a quantum Turing machine—a physical system performing quantum computation at the most fundamental level, evolving the state of the universe as if executing a quantum program—or by constructor theory [44], which proposes that the universe is governed not by empirical physical laws but by fundamental transformations of matter, categorized as the underlying operations of the universe and analogous to the fundamental instructions of a computer’s CPU. In this framework, the universal computer is redefined as a ‘universal constructor.’ However, the theory faces the challenge of time, which is not inherent to the framework and must therefore be explicitly defined.
Compared to constructor theory, the SQHM framework is grounded in well-verified analytical physical theories, such as quantum mechanics subjected to gravity arising from dynamically curved spacetime.
It introduces the idea that the progression of the universe originates from quantum processes embedded within gravitational spacetime, giving rise to an emergent classical reality at the macroscopic scale. This emergent reality operates through an efficient mechanism for advancing forward, effectively realizing a form of quantum computation aimed at maximizing efficiency. Although the precise quantum algorithm driving this process remains unknown, we attempt to replicate the universal method empirically by employing qubits in so-called quantum computers.
The randomness introduced by the GBN makes the universal computation fundamentally unpredictable for an internal observer. Even if this observer uses the same algorithm as the computation to predict future states, the lack of access to the same noise source quickly causes its predictions to diverge. This is because each fluctuation has a significant impact on the wavefunction’s decay. In other words, for the internal observer, the future is effectively “encrypted” by the GBN noise.
Although the GBN noise may be considered pseudo-random—since it is ultimately determined by the initial conditions of the Big Bang—the evolution of the computation remains practically unpredictable.
Only an observer with access to the original seed could, in principle, accurately predict future states or reverse the arrow of time. While pseudo-random noise can theoretically be deconstructed, extracting the underlying seed would be computationally intractable. As a result, the presence of the GBN effectively “encrypts” the outcome of the universal computation, owing to the high degree of entropy introduced by this noise.
Moreover, the laws of physics are not fixed or predetermined; rather, they emerge from the algorithm of the computation itself, which is designed to solve specific problems. For instance, if we create a computer simulation of airplanes flying through the air and analyze their behavior—where lift is generated by pressure differences across the wings—we naturally derive the laws of aerodynamics. Likewise, simulating ships at sea would lead us to observe and formulate the principles of fluid dynamics. In the same way, the physical laws we observe in reality are emergent expressions of the algorithm that drives the universe forward—and, ultimately, of the purpose for which this universal computation is designed.
In a fluctuating 4-D spacetime, classical mechanics emerges from quantum mechanics. As coherence in quantum systems is lost, determinism breaks down, giving rise to a different mode of evolution. Moreover, the decoherence of quantum dynamics, which produces emergent classical behavior, is driven by the curvature dynamics of spacetime, even when these are structured as pseudo-random phenomena rather than purely random fluctuations. From this perspective, the stochasticity of spacetime background oscillations is not necessary for quantum decoherence; rather, it arises from an inherent incompatibility between quantum mechanics and the variable geometrical nature of gravity.
As discussed in §2.2 of Part I of this work, when the force arising from the quantum potential is screened at large distances by its fluctuating component (see (2)), the macroscopic system undergoes quantum decoherence and classical behavior emerges. It is important to note that the stochasticity of this term is not strictly necessary. Rather, it is the variability of spacetime curvature that dynamically alters the local density of mass distribution, generating an apparent signal from the quantum potential, which in turn loses its capacity to maintain coherence in the quantum evolution of the mass-density distribution itself.
Therefore, the pseudo-random GBN, with an effectively inaccessible encryption key, reproduces stochastic-like behaviors that are practically indistinguishable from truly random ones [45]. Consequently, the problem that the universal computation aims to solve is defined by the structure of its computational algorithm, encrypted through pseudo-random noise whose key is inaccessible, and is perceived by any internal observer as purely random.
Hence, we can move beyond the notion of unpredictable probabilism characteristic of conventional stochastic mechanics and adopt a new framework in which the universe evolves by selecting optimal future states and establishing a well-defined trajectory of evolution. On this basis, the evolution of the universal computation is determined, although it does not exhibit strict determinism. Rather, it is governed by the preservation of order to the greatest extent possible, ensured by the maximal dissipation condition on an energy-type function, which constrains pure randomness into a form of bounded probabilism.
From this perspective, Einstein’s famous quote, “God does not play dice with the universe,” takes on new meaning. In fact, the universal simulation does not engage in randomness, since an evolutionary criterion is established within the computation itself based on the problem it aims to solve. However, from within our reality, we cannot access the seed of this noise, and thus it appears genuinely random to us.
If the aim of the universal computation is to develop a physics capable of generating organized living, and therefore “intelligent” structures, through evolution, improving their organization and efficiency over time, then the universe’s way of solving the problem of achieving the best possible future state reflects a genuine form of inherent intelligence—one endowed with free will: the freedom it exercises in choosing that future.
From the perspective of universal computation, it progresses toward its intended solution through a method specifically designed for that purpose. Within the reality, this process appears as what we perceive to be genuine universal intelligence with free will. This free will is made possible by the existence of the future as a multiverse and by the loss of quantum coupling with past states (predetermination).
The universe progresses through a process of selecting the ‘best’ future state, one that is neither strictly deterministic nor entirely probabilistic and unpredictable. This probabilistic nature is constrained in a way that drives matter toward self-assembly and organization: the increase of entropy is bounded from below, as the maximization of free-energy dissipation minimizes information loss as much as possible.
While the universal computation pursues this objective from the global point of view, on the individual level free will becomes the ability to “choose how to reach the best possible future state for oneself”. Through the process of natural selection, therefore, two competing approaches emerge:
Synergistic intelligence, which seeks to build harmonious organizations that increase efficiency;
Predatory intelligence, which seeks immediate advantages for itself while causing greater overall harm. This latter form of intelligence challenges the first, driving it to develop increasingly sophisticated regulations and structures in the real world, in alignment with the aims of universal computation.
7. Conclusion
Based on the SQHM framework, this work proposes an analogy in which the universe evolves through a computation-like process governed by emergent physical laws. This perspective challenges traditional interpretations of reality and has profound implications for the enduring debate between probabilism and determinism.
It advances the idea that intelligence and free will are embedded in the very fabric of universal reality, acting as mechanisms that guide the cosmos toward increasingly efficient and organized future states—where order is preserved to the greatest possible extent. The drive toward maximal efficiency, interpreted as a manifestation of intelligence, originates from the physical foundation of universal evolution, which unfolds through a “smart” quantum computation algorithm. In this view, locally self-collapsing quantum wave functions compute classical local states, giving rise to classical dynamics that, far from equilibrium, enable matter self-assembly and, eventually, living systems. These systems replicate the same organizing tendency, thereby expressing intelligence and free will themselves.
Within this framework, the universe is governed by a form of bounded probabilism, in contrast to the notion of total, unpredictable randomness that would reduce reality to a mere cosmic game of dice. The emergence of physical laws as solutions to a computational problem offers a fresh perspective on cosmic evolution. Inspired by concepts from theoretical physics, this approach emphasizes that, since the key to the pseudo-random GBN is inaccessible, the bounded probabilism inherent in the evolution of physical reality is experienced as genuine randomness.
The universal computation is directed toward generating order through matter self-assembly and the emergence of life. The continual search for the optimal future state thus represents a genuine expression of intelligence and free will. This principle persists across all living systems, which—within this computational-like reality—participate in the universal computational intent. However, since the future state of the universe is encrypted by an inaccessible key, from inside free will is limited to a finite temporal horizon and can never be absolute: no individual can foresee all future events or determine actions that would compel the rest of the universe to conform to their control.
Free will, therefore, manifests in living beings as the tendency to identify and pursue optimal future states, yet remains confined to the individual level—expressed through paths that may be either cooperative or predatory in nature.
8. Concluding Remarks
This work, together with its first part, presents a groundbreaking and cross-disciplinary proposal: by unifying aspects of gravity, quantum physics, classical mechanics, and non-equilibrium thermodynamics within a single theoretical framework, it offers a deeper understanding of complex phenomena such as matter self-assembly, emergence of life, free will, intelligence, and consciousness.
Despite the model’s complexity and its speculative nature, the quantum-gravitational hydrodynamic foundations are supported by the accurate prediction of the Lindemann constant and the helium fluid–superfluid transition, and they remain testable through experimental verification, such as photon-entanglement experiments over planetary distances. Furthermore, the SQHM framework demonstrates that in one-dimensional structures, quantum coherence can be significantly enhanced, providing strong evidence that superconducting polymer networks could enable the development of quantum computing systems with a large number of qubits operating near room temperature, while also supporting the hypothesis that quantum phenomena may occur within neuronal microtubules. Moreover, although SQHM unifies multiple theoretical aspects, it also offers concrete predictions and potential practical applications to real-world domains such as economic control, societal development and organization, and international relations.
Last but not least, it is interesting to note that, since the kinetics of spacetime curvature depends on the motion of the mass–energy density distribution within it [46,47], while the dynamics of spacetime curvature, in turn, perturbs the quantum potential and affects the course of quantum evolution, it follows that quantum mechanics and gravity are intrinsically coupled, forming a single quantum-mechanical gravitational problem.
Although the fully coupled problem is generally quite cumbersome, it can be approximately decoupled in the limit of weak fields and sufficiently dilute mass densities by assigning a stochastic form to the GBN dynamics. Within this framework, gravity emerges from the distribution of mass densities that evolve quantum-mechanically at microscopic scales (according to the SQHM) and, through decoherence, transition to classical behavior at macroscopic scales, thereby generating the gravitational field described by general relativity.
Nonetheless, the problem remains incomplete: as we approach the submicroscopic regime, where the underlying (discrete) structure of spacetime begins to produce observable effects, such as those found in elementary particle physics, the fields representing the manifestation of mass within spacetime must themselves be quantized, thereby giving rise to quantized gravity and increasing the overall complexity of the problem.
Moreover, it is worth noting that the emergence of quantum decoherence beyond a finite distance is not caused by the stochasticity of the Gravitational Background Noise (GBN) itself, but rather by the dynamical variability of spacetime curvature. In combination with the chaotic nature of classical trajectories and the weakness of interparticle interactions, this variability leads to a rapid and substantial departure from quantum behavior, namely, decoherence, which destroys quantum superpositions on very large distances and drives the wavefunction to decay (collapse) into stable macroscopic states even if the global system is deterministic and the GBN is pseudo-random. The macroscopic classical dynamics stems from a gravitational interference process that perturbs and modifies the quantum potential responsible for restoring the linearity of quantum mechanics and preserving its coherent evolution.
Even though the spacetime background eliminates the need for an external environment by introducing intrinsic macroscopic classical behavior, the question remains of how quantum decoherence—and the resulting classical arrow of time—acquires its irreversible character. In a globally deterministic system where universal mass densities are coupled to the spacetime background, the GBN is fundamentally pseudo-random. Although the GBN may be pseudo-random (implying that, in principle, both wavefunction collapse and the arrow of time could be reversed), the recoverability of its key, or even its very existence, plays a pivotal role. Concerning recoverability, the vastness of the universe and, even more decisively, the finite speed of light constitute the major insurmountable obstacle, resolvable only on enormously long time scales. Nonetheless, even if such a long-time observation were possible, on that scale cosmic inflation—an effect driven by large-scale repulsive gravitational dynamics [48,49,50,51,52]—comes into play, introducing a continuous extraction of information from the universe’s matter content, ensuring a monotonic increase in entropy and subtracting the information that would be needed to recover the noise key, thereby making wavefunction collapse and the arrow of time genuinely non-invertible.
The time required to acquire enough information to recover the noise key can be roughly estimated to be of the order of the recurrence time; on such a time scale, in principle, the reversibility of both wavefunction collapse and the arrow of time could be achieved. However, since the expansion of the universe effectively pushes the recurrence time to infinity (i.e., no finite recurrence time exists for the universe), the reversibility of both wavefunction collapse and the arrow of time is likewise displaced to infinity and will therefore never be realized.
Moreover, even if (for the sake of argument) the key to the pseudo-random GBN were known, it would still be impossible, in principle, to reverse both the wavefunction collapse and the arrow of time. By the end of the aforementioned processes, spacetime has expanded and is no longer in its initial state. Reversing both the collapse and the arrow of time would therefore require undoing the global dynamics of the universe’s expansion.
From this perspective, the question of whether irreversibility is genuine, or merely a long-lasting yet ultimately temporary, local irreversibility allowing possible future anti-entropic behaviors, lies in the very nature of cosmic evolution—specifically, whether the universe–spacetime system is deterministic and cyclic [53], or instead represents a transition from an initial quantum-gravitational non-equilibrium state to a stationary, quantum final state.
Even if the long-range repulsive Newtonian force supports the latter hypothesis, the cyclic, deterministic—or teleological—nature of the cosmos would still be encoded in the quantized properties of spacetime and its fields, which are accessible only within a unified quantum-gravity framework.
Appendix A
Appendix A.1
The universal computational analogy helps us understand the fundamental limits of replicating biological intelligence, free will, and consciousness. These limits become evident when we consider the analogical and informational implications of the pigeonhole principle [54], according to which information compression inevitably entails the loss of information [55], precision, and, consequently, computational power.
Therefore, a computational subsystem—being contained within a larger system (the universe)—cannot process the same amount of information, nor can it achieve greater computational power in terms of speed or precision. Consequently, any human-made computer, even one employing an extensive network of qubits and utilizing the same “smart” computational method as the universe, cannot operate faster or more accurately than the universe itself.
This limitation likewise extends to the development of intelligent systems, with respect to both the efficiency and complexity of their functions, including biological consciousness, free will, intentionality, and, ultimately, real intelligence (measured in terms of energy cost per unit of information produced or processed).
This principle reinforces the conclusion that no artificial or biological subsystem can match the universe’s computational efficiency, though it may approximate it to a limited extent in order to emulate or construct a conscious and intelligent system.
Appendix A.2
Quantum Hydrodynamic Representation for Many-Body Systems
The Madelung quantum hydrodynamic representation transforms the Schrödinger equation for the complex wave function $\psi(q_1,\dots,q_N,t) = |\psi|\,e^{\,iS/\hbar}$ of an $N$-particle system into two equations of real variables [56,57,58]: the conservation equation for the mass density $|\psi|^{2}$,

$$\frac{\partial |\psi|^{2}}{\partial t} + \sum_{j=1}^{N} \nabla_{j} \cdot \left( |\psi|^{2}\, \frac{\nabla_{j} S}{m_{j}} \right) = 0,$$

and the motion equation for the momenta $p_{j} = \nabla_{j} S$,

$$\frac{\partial S}{\partial t} + \sum_{j=1}^{N} \frac{\left(\nabla_{j} S\right)^{2}}{2 m_{j}} + V + V_{qu} = 0,$$

where

$$\dot{q}_{j} = \frac{p_{j}}{m_{j}} = \frac{\nabla_{j} S}{m_{j}},$$

and where the quantum potential reads

$$V_{qu} = - \sum_{j=1}^{N} \frac{\hbar^{2}}{2 m_{j}} \frac{\nabla_{j}^{2} |\psi|}{|\psi|}.$$
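As a minimal numerical illustration of these definitions (a sketch for a single particle in one dimension; the grid, mass, and wave-packet width are arbitrary choices in units with hbar = m = 1), the quantum potential of a Gaussian amplitude can be evaluated on a grid and compared with its analytic form:

```python
import numpy as np

hbar, m = 1.0, 1.0
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

sigma = 1.5
amplitude = np.exp(-x**2 / (4.0 * sigma**2))          # |psi| for a Gaussian packet (unnormalized)

# Quantum potential V_qu = -(hbar^2 / 2m) * (d^2|psi|/dx^2) / |psi|
second_derivative = np.gradient(np.gradient(amplitude, dx), dx)
V_qu = -(hbar**2 / (2.0 * m)) * second_derivative / amplitude

# Analytic result for a Gaussian amplitude: V_qu = (hbar^2 / 2m) * (1/(2 sigma^2) - x^2/(4 sigma^4))
V_exact = (hbar**2 / (2.0 * m)) * (1.0 / (2.0 * sigma**2) - x**2 / (4.0 * sigma**4))
print("max deviation from the analytic form:", np.max(np.abs(V_qu - V_exact)[100:-100]))
```

Note that the quantum potential depends on the shape of the amplitude (through its second derivative divided by the amplitude itself) and not on its overall magnitude, which is the property the main text invokes when discussing how curvature fluctuations perturb it.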
Appendix A.3
Natural Intelligence and a Bio-Inspired Approach to Improve Economic Dynamics
Some examples of how natural intelligence has addressed and successfully solved certain challenges can clearly illustrate how engaging with reality in accordance with the fundamental laws of our universe can lead to improvements in life and well-being, both of which are inseparably linked to the efficiency of the structures and relationships that emerge. From this perspective, the more we deepen our understanding of reality and its laws, the better we will know which actions foster life and support its beneficial development, and which instead undermine it—ranging from individual relationships to large scale initiatives carried out by policymakers.
Appendix A.3.1 Glucose-Insulin Control System: Insights for Control of Economic Expansion-Recession Cycles
An interesting example of a strategy that natural intelligence has been able to build is the glucose-insulin control developed in higher-level living organisms.
In Figure 1 we can see the response of the insulin blood concentration to a step infusion of glucose. As expected, over time the insulin concentration grows in order to lower the glucose content, but what is relevant is the initial peak of insulin at the start of the glucose infusion. This initial overshoot, even if not proportionate to what is immediately required by the current blood glucose concentration, is useful for increasing the speed and efficiency of glucose normalization. It can be seen as an efficient strategy that accounts for the inertia in the reaction-diffusion dynamics of bodily fluids, anticipating the desired effect.
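The anticipatory peak can be mimicked with a toy feedback model (a sketch only; the equations, gains, lag, and units are illustrative assumptions, not physiological values). A purely proportional controller is compared with one that adds a derivative, i.e. anticipatory, term, which issues an immediate insulin surge at the onset of the infusion and keeps the glucose excursion smaller despite the lag between command and effect:

```python
def simulate(kd, kp=0.6, tau=10.0, t_end=200.0, dt=0.05):
    """Toy glucose regulation with a lag between the insulin command and its effect:
    dG/dt = infusion - c * I_eff * G,   dI_eff/dt = (I_cmd - I_eff) / tau."""
    c, G0 = 0.08, 5.0
    G, I_eff, prev_G = G0, 0.0, G0
    peak_G, early_cmd = G0, 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        infusion = 0.5 if t < 60.0 else 0.0              # step glucose infusion
        dGdt_est = (G - prev_G) / dt
        I_cmd = max(0.0, kp * (G - G0) + kd * dGdt_est)  # kd > 0 adds the anticipatory term
        if abs(t - 2.0) < dt / 2:                        # command shortly after the onset
            early_cmd = I_cmd
        prev_G = G
        G += dt * (infusion - c * I_eff * G)
        I_eff += dt * (I_cmd - I_eff) / tau
        peak_G = max(peak_G, G)
    return early_cmd, peak_G

for kd in (0.0, 4.0):
    early, peak = simulate(kd)
    print(f"kd = {kd}: insulin command at onset = {early:.2f}, peak glucose = {peak:.2f}")
```

With the anticipatory term the command spikes right at the start, compensating the lag and reducing the peak excursion, which is the qualitative behavior highlighted in Figure 1.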
Appendix A.3.2 Application to Financial Dynamics
It is intriguing to apply the efficiency of insulin delivery in regulating blood glucose concentration to the problem of controlling inflation in economic systems. In this analogy, blood glucose corresponds to inflation, while insulin represents the interest rate in the monetary system—the latter working to restrain the former.
Broadly speaking, these dynamics resemble the prey–predator model described by the Lotka–Volterra differential equations [59,60]. In the economic setting, the “prey” consists of circulating money used for investment and consumption, while the “predator” is the interest rate on borrowing: the higher the rate, the more money is drawn out of free circulation. When banks profit from higher lending rates, the availability of money in the financial system decreases. Conversely, when the slowdown caused by scarce liquidity reduces profits, debtors default on repayments. Banks are then compelled to lower interest rates, allowing the “prey” (available money) to grow again, thereby restoring investment and production, which in turn brings banks back to profitability.
Since in normal insulin–glucose cycles there is no tissue damage, the economic counterpart of these oscillations should be associated with moderate recessions rather than destructive crises.
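A minimal numerical sketch of this analogy (with hypothetical coefficients chosen only for illustration) integrates the Lotka–Volterra equations with circulating money as the prey and the lending rate as the predator:

```python
import numpy as np

# Minimal Lotka-Volterra sketch of the money/interest-rate analogy described
# above (coefficients are hypothetical and chosen only for illustration):
#   M = freely circulating money ("prey"), R = lending interest rate ("predator")
#   dM/dt = a*M - b*M*R    (money grows with activity, is withdrawn by high rates)
#   dR/dt = d*M*R - c*R    (ample circulation sustains rate rises; scarcity forces cuts)
a, b, c, d = 1.0, 0.5, 1.0, 0.2
dt, steps = 0.001, 40000

M, R = 8.0, 1.0
history = np.empty((steps, 2))
for i in range(steps):
    dM = (a - b * R) * M
    dR = (d * M - c) * R
    M += dM * dt
    R += dR * dt
    history[i] = M, R

# The trajectory cycles around the equilibrium (M*, R*) = (c/d, a/b) = (5, 2),
# mirroring the moderate expansion-recession oscillations described in the text.
print("mean money in circulation:", history[:, 0].mean())
print("mean interest-rate level :", history[:, 1].mean())
```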
Appendix A.3.3 Bio-Inspired Control of Inflation
Figure 4 shows the pattern of inflation in response to the typical scenario of raising interest rates to mitigate it. The rate of change in inflation slows progressively as the gap between the equilibrium interest rate and the current rate widens. The onset of declining inflation is then taken as the signal to begin lowering the interest rate. However, even after rates are reduced, inertia causes the pace of disinflation to keep accelerating, potentially driving inflation too low or even into dangerous deflation. Moreover, since interest rates cannot effectively fall much below zero, reversing the trend once inflation drops below zero is particularly difficult, and deflation may persist for a prolonged period.
Figure 4.
Inflation dynamics following interest rate increase. A decline in inflation follows the gradual increase in interest rates. This decline is not immediate; due to inertia, it occurs only once interest rates have peaked significantly above the equilibrium level. At that point, inflation begins to fall more rapidly, even as interest rates start to decrease, which may drive inflation dangerously close to deflation.
Figure 5 illustrates the inflation dynamics using the insulin–glucose strategy in the context of interest rate increases. A sudden rise in interest rates immediately curbs inflation, leading to a subsequent decline. Afterwards, a reduction in interest rates can be anticipated, moderating the pace of the decline in inflation. If the desired inflation level has not yet been reached, a new, smaller peak in interest rates followed by another reduction can be applied. By six months the decline in inflation has already been anticipated, and the interest rate returns to a normal level much earlier and remains significantly lower than in the usual interest-rate hike shown in Figure 4.
Figure 5.
Viewed through the insulin–glucose analogy, inflation dynamics suggest that a sharp rise in interest rates, followed by a gradual easing, may accelerate the reduction of inflation while countering the long-term downward trend.
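The contrast between the scenarios of Figure 4 and Figure 5 can be sketched with a toy linear model (hypothetical transmission lag, gains and rate paths; not a calibrated macroeconomic model), in which inflation responds to the lagged gap between the policy rate and its equilibrium level:

```python
import numpy as np

# Toy comparison (hypothetical model and parameters, for illustration only) of
# the two rate paths discussed above. Inflation pi reacts to the *lagged*
# policy-rate gap, mimicking monetary transmission inertia:
#   d(r_eff)/dt = (r - r_eff) / tau          (transmission lag)
#   d(pi)/dt    = -k * (r_eff - r_neutral)   (tightening above neutral lowers inflation)
dt, horizon = 0.01, 36.0                     # months
t = np.arange(0.0, horizon, dt)
tau, k, r_neutral, pi_target = 4.0, 0.1, 2.0, 2.0

def conventional_rate(m):
    """Gradual hike to a plateau, then slow easing (Figure 4 scenario)."""
    if m < 22.0:
        return min(r_neutral + 0.35 * m, 6.0)
    return max(6.0 - 0.35 * (m - 22.0), r_neutral)

def bio_inspired_rate(m):
    """Sharp early hike, prompt easing, smaller second peak (Figure 5 scenario)."""
    if m < 5.0:   return 8.0
    if m < 12.0:  return 4.0
    if m < 16.0:  return 5.5
    if m < 20.0:  return 3.75
    return r_neutral

def run(rate_path):
    pi, r_eff = 8.0, r_neutral               # start from high inflation
    trace = np.empty_like(t)
    for i, m in enumerate(t):
        r_eff += (rate_path(m) - r_eff) / tau * dt
        pi += -k * (r_eff - r_neutral) * dt
        trace[i] = pi
    return trace

for name, path in (("conventional", conventional_rate), ("bio-inspired", bio_inspired_rate)):
    pi_t = run(path)
    month_at_target = t[np.argmax(pi_t <= pi_target)]
    print(f"{name:>12}: target reached at ~{month_at_target:.1f} months, "
          f"minimum inflation {pi_t.min():.2f}%")
```

In this toy setting the front-loaded, bio-inspired path reaches the target sooner and avoids the undershoot into deflation produced by the slower conventional path, mirroring the qualitative behaviour described above.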
Appendix A.3.4 Synergistic Intelligence in International Trade Relations: Self-Adjusting Duties
The natural forces that drive the creation of ordered structures and living systems reflect the spontaneous tendency of energy to manage information, manifesting as a form of natural intelligence. This intelligence represents the expression of a universal algorithm that shapes the progression of reality, aiming to refine and enhance the efficiency of living systems and structures over time.
As shown by (73-74), efficiency and people's well-being increase exponentially with the growth of synergistic interactions, which, in turn, depend on the number of people engaging with one another. Consequently, the economic output of nations that collaborate far exceeds that of the same nations operating in isolation.
However, for an optimal productive state to be achieved, all nations must be able to contribute to overall production. This means that we must avoid both scenarios: nations acting in isolation and, in a collaborative system, a single nation monopolizing the production of goods and services.
Currently, existing economic approaches to balancing global trade generally fall into two categories:
Protectionism, which imposes tariffs to shield national economies but ultimately hinders or significantly slows down collaborative market exchanges and draws money out of free circulation.
Liberalism, which allows unrestricted free trade without safeguards.
Neither of these approaches is fully effective, as both can lead to severe economic consequences. Protectionism may result in high inflation with a significant decline in economic efficiency and people's well-being, while unchecked liberalism can cause the collapse of some national productive systems, leading to widespread poverty in large areas of the world while benefiting only a few.
From the perspective of synergistic intelligence, it becomes clear that duties with variable tariffs can help prevent both of these extreme scenarios. By fostering a climate of collaborative trust among nations, they enable high levels of trade while ensuring that productive and industrial systems continue to function constructively and in a balanced manner on a global scale.
Duties with variable tariffs can be easily structured by first setting a baseline duty percentage for a fully unbalanced trade scenario (let's call this the "full duty"). The actual tariff applied is then calculated by multiplying this full duty by the real trade imbalance, defined as the difference between the monetary values of the two trade flows divided by the total trade volume. This system has to be organized by category of goods, and can also be extended to services and e-commerce.
For example, if two nations trade goods with a 60%-40% imbalance—meaning that 60% of the goods (by monetary value) flow in one direction while 40% flow in the other—the effective duty would be 20% of the full duty.
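A minimal sketch of this rule (the function name and the normalization by total bilateral trade are illustrative assumptions consistent with the worked example above):

```python
def self_adjusting_duty(exports_value: float, imports_value: float,
                        full_duty: float) -> float:
    """Effective tariff rate for one category of goods, as described above.

    The full duty (the rate applied to a fully one-sided trade) is scaled by
    the trade imbalance |exports - imports| / (exports + imports).
    """
    total = exports_value + imports_value
    if total == 0.0:
        return 0.0                      # no trade in this category, no duty
    imbalance = abs(exports_value - imports_value) / total
    return full_duty * imbalance

# Worked example from the text: a 60%-40% imbalance with a 100% "full duty"
# yields an effective duty of 20% of the full duty.
print(self_adjusting_duty(60.0, 40.0, full_duty=1.0))   # -> 0.2
```

Applied per category of goods, the effective duty falls to zero as trade approaches balance and rises toward the full duty as it becomes one-sided.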
This approach shifts the reaction of trade partners away from imposing retaliatory tariffs and instead encourages them to support the import of certain goods in order to facilitate the export of others within the same category. For instance, a nation with abundant iron production but a relatively small number of circulating cars could promote car imports in order to induce lower tariffs on its iron exports.
What makes this approach effective? Firstly, it eliminates trade wars between nations by offering an alternative response that fosters cooperation rather than retaliation. Secondly, by encouraging the import of different types of goods as a way of lowering duty tariffs, it stimulates overall trade growth.
These and many other beneficial actions can be undertaken based on the insights gained from understanding the mechanisms that govern reality. However, their scope is so vast that they cannot be fully addressed in a single work and are therefore left for future analyses.
References
- I. Prigogine, Bulletin de la Classe des Sciences, Académie Royale de Belgique 31 (1945) 600–606.
- I. Prigogine, Étude Thermodynamique des Phénomènes Irréversibles, (Desoer, Liège 1947).
- Y. Sawada, Progr. Theor. Phys. 66, 68–76 (1981).
- W.V.R. Malkus and G. Veronis, J. Fluid Mech. 4 (3), 225–260 (1958).
- L. Onsager, Phys. Rev. 37 (4), 405–426 (1931).
- W.T. Grandy, Entropy and the Time Evolution of Macroscopic Systems, (Oxford University Press, 2008).
- M. Suzuki and Y. Sawada, Phys. Rev. A 27 (1) (1983).
- Chiarelli, S. (2025). On the Striking Similarities Between Our Universe and a Simulated One. IPI Letters, 3(4), O74-O78. [CrossRef]
- A. A. Klypin, S. Trujillo-Gomez, J. Primack, Dark Matter Halos in the Standard Cosmological Model: Results from the Bolshoi Simulation, ApJ 740, 102 (2011). [CrossRef]
- Berger, Marsha J., and Joseph Oliger, Adaptive mesh refinement for hyperbolic partial differential equations, Journal of Computational Physics 53.3 (1984): 484-512.
- Berger, Marsha J., and Phillip Colella, Local adaptive mesh refinement for shock hydrodynamics, Journal of Computational Physics 82.1 (1989): 64-84.
- Huang, Weizhang, and Robert D. Russell. Adaptive moving mesh methods. Vol. 174. Springer Science & Business Media, 2010.
- Gnedin, N.Y.; Bertschinger, E. Building a cosmological hydrodynamic code: Consistency condition, moving mesh gravity and slh-p3m. Astrophys. J. 1996, 470, 115. [CrossRef].
- Kravtsov, A.V.; Klypin, A.A.; Khokhlov, A.M. Adaptive Refinement Tree: A New High-Resolution N-Body Code for Cosmological Simulations. Astrophys. J. Suppl. Ser. 1997, 111, 73. [CrossRef].
- Vopson, M. M., Is gravity evidence of a computational universe? AIP Advances 15, 045035 (2025). [CrossRef]
- Verlinde, E. On the origin of gravity and the laws of Newton. J. High Energ. Phys. 2011, 29 (2011). [CrossRef]
- M. Kliesch, T. Barthel, C. Gogolin, M. Kastoryano & J. Eisert, Dissipative Quantum Church–Turing Theorem, Physical Review Letters, Vol. 107, No. 12, Article 120501, 12 September 2011. [CrossRef]
- Micciancio, D.; Goldwasser, S. Complexity of Lattice Problems: A Cryptographic Perspective; Springer Science & Business Media: Berlin, Germany, 2002; Volume 671.
- Monz, T.; Nigg, D.; Martinez, E.A.; Brandl, M.F.; Schindler, P.; Rines, R.; Wang, S.X.; Chuang, I.L.; Blatt, R. Realization of a scalable Shor algorithm. Science 2016, 351, 1068–1070. [CrossRef] [PubMed].
- Long, G.-L. Grover algorithm with zero theoretical failure rate. Phys. Rev. A 2001, 64, 022307.
- Chiarelli, P. Can fluctuating quantum states acquire the classical behavior on large scale? J. Adv. Phys. 2013, 2, 139–163.
- Chiarelli, S.; Chiarelli, P. Stochastic Quantum Hydrodynamic Model from the Dark Matter of Vacuum Fluctuations: The Langevin-Schrödinger Equation and the Large-Scale Classical Limit. Open Access Libr. J. 2020, 7, e6659. [CrossRef]
- Chiarelli, P. (2019). Stochastically-induced quantum-to-classical transition: The Lindemann relation, maximum density at the He lambda point and water-ice transition. In Advances and Trends in Physical Science Research Vol. 2 (pp. 46-63). Book Publisher International. [CrossRef]
- Chiarelli, P. Quantum-to-Classical Coexistence: Wavefunction Decay Kinetics, Photon Entanglement, and Q-Bits. Symmetry 2023, 15, 2210. [CrossRef]
- Chiarelli, P.: Far from Equilibrium Maximal Principle Leading to Matter Self-Organization. Journal of Advances in Chemistry, Vol. 5, No. 3, 2013, pp. 753-783.
- Weiner, J. H., Statistical Mechanics of Elasticity. John Wiley & Sons, Inc. (1983) USA, pp. 406–409.
- Gardiner, C.W. Handbook of Stochastic Methods, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 1985; pp. 80–101, ISBN 3-540-61634-9.
- Rumer, Y.B.; Ryvkin, M.S. Thermodynamics, Statistical Physics, and Kinetics; Mir Publishers: Moscow, Russia, 1980; pp. 444-65.
- Rumer, Y.B.; Ryvkin, M.S. Thermodynamics, Statistical Physics, and Kinetics; Mir Publishers: Moscow, Russia, 1980; pp. 557-62.
- Rumer, Y.B.; Ryvkin, M.S. Thermodynamics, Statistical Physics, and Kinetics; Mir Publishers: Moscow, Russia, 1980; pp. 496, 511-521.
- De Rossi, D., Chiarelli, P., Polyelectrolyte Intelligent Gels: Design and Applications, Chap.15, Ionic Interactions in Natural and Synthetic Macromolecules, First Edition. Edited by Alberto Ciferri and Angelo Perico. © 2012 John Wiley & Sons, Inc. Published 2012 by John Wiley & Sons, Inc.
- Chiarelli, P., Artificial and Biological Intelligence: Hardware vs Wetware, American Journal of Applied Psychology 2016; 5(6): 98-103. [CrossRef]
- Chandra, S.; Paira, S.; Alam, S.S.; Sanyal, G. A comparative survey of symmetric and asymmetric key cryptography. In Proceedings of the 2014 International Conference on Electronics, Communication and Computational Engineering (ICECCE), Hosur, India, 17–18 November 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 83–93.
- Husserl, E., The phenomenology of internal time-consciousness (J. S. Churchill, Trans.). Indiana University Press, (1964).
- Husserl, E., Ideas: General Introduction to Pure Phenomenology. Translated by W. R. B. Gibson, Macmillan, 1931.
- Damasio, A. (2010). Self Comes to Mind: Constructing the Conscious Brain. Pantheon Books.
- Damasio, Antonio. Feeling & Knowing: Making Minds Conscious. Penguin Random House, 2022.
- Seth, Anil K., Keisuke Suzuki, and Hugo D. Critchley. “An interoceptive predictive coding model of conscious presence.” Frontiers in psychology 2 (2012): 18458. [CrossRef]
- Ao, Yujia, et al. “Intrinsic neural timescales relate to the dynamics of infraslow neural waves.” NeuroImage 285 (2024): 120482. [CrossRef]
- Craig AD. The sentient self. Brain Struct Funct. 2010 Jun;214(5-6):563-77. [CrossRef]
- Sun, D., Minkov, V.S., Mozaffari, S. et al. High-temperature superconductivity on the verge of a structural instability in lanthanum superhydride. Nat Commun 12, 6863 (2021). [CrossRef]
- Hameroff, S., and Penrose, R., “Consciousness in the universe: A review of the ‘Orch OR’ theory.” Physics of Life Reviews 11.1 (2014): 39-78. [CrossRef]
- Lloyd, S., The Universe as Quantum Computer, A Computable Universe. December 2012, 567-581. [CrossRef]
- Deutsch, D., Marletto, C., “Constructor theory of information”. Proceedings of the Royal Society A. 471 (2174) 20140540. arXiv:1405.5563. (2014). [CrossRef]
- Zurek, W. Decoherence and the Transition from Quantum to Classical—Revisited Los Alamos Science Number 27. 2002. Available online: https://arxiv.org/pdf/quantph/0306072.pdf (accessed 10 Jun 2003).
- Chiarelli, P., The Gravity of the Classical Klein-Gordon Field, Symmetry 2019, 11, 322. [CrossRef]
- Chiarelli, P., Quantum Geometrization of Spacetime in General Relativity, BP International, 2023, ISBN 978-81-967198-7-6 (Print), ISBN 978-81-967198-3-8 (eBook). [CrossRef]
- Milgrom, M. (1983). A modification of the Newtonian dynamics. The Astrophysical Journal, 270, 365-370.
- Bekenstein, J. D. (2006). Tensor-Vector-Scalar-Gravity. Physical Review Letters, 97(17), 171301.
- Moffat, J.W. (2005). Scalar–tensor–vector gravity theory. Journal of Cosmology and Astroparticle Physics, 2006, 004.
- De Felice, A., Tsujikawa, S. f(R) Theories. Living Rev. Relativ. 13, 3 (2010). [CrossRef]
- Chiarelli, P. Quantum Effects in General Relativity: Investigating Repulsive Gravity of Black Holes at Large Distances. Technologies 2023, 11, 98. [CrossRef]
- Penrose, Roger (2004). The Road to Reality. Alfred A. Knopf. ISBN 978-0-679-45443-4.
- Dirichlet, P. G. L. (1834). Über die Annäherung algebraischer Zahlen durch rationale Zahlen. Abhandlungen der Königlichen Preußischen Akademie der Wissenschaften zu Berlin, 45–81.
- Wootters, W., Zurek, W., “A Single Quantum Cannot be Cloned”. Nature. 299 (5886): 802–803, (1982). [CrossRef]
- E. Madelung, Z. Phys. 40, 322–326 (1926).
- I. Bialynicki-Birula, M. Cieplak and J. Kaminski, Theory of Quanta, (Oxford University Press, NY 1992).
- Jánossy, L. Zum hydrodynamischen Modell der Quantenmechanik. Eur. Phys. J. 1962, 169, 79.
- Lotka, A. J., Elements of Physical Biology. Baltimore: Williams & Wilkins (1925).
- Volterra, V., Fluctuations in the abundance of a species considered mathematically. Nature, 118(2972), 558–560 (1926). [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).