2. Quantum Properties of Spacetime with Fluctuating Gravitational Background
At the quantum level, spacetime undergoes stochastic fluctuations driven by GBN, relics of the Big Bang, and the relativistic dynamics of massive bodies. In what follows, we will demonstrate that these fluctuations generate curvature wrinkles that disturb quantum systems, ultimately leading to decoherence and the transition to classical behavior.
The non-locality of quantum mechanics, as exemplified by the EPR paradox, presents significant challenges in reconciling it with the locality of relativistic spacetime [1,2]. The stochastic quantum hydrodynamic approach incorporates these spacetime fluctuations into the quantum potential, effectively suppressing long-range quantum coherence. This mechanism enables the transition to classical mechanics without requiring an external environment or ad hoc assumptions, thus addressing the fundamental issues of the quantum-to-classical transition and wavefunction collapse.
Recent advancements in decoherence theory confirm that interactions with fluctuations play a crucial role in localizing quantum states [3,4,5,6]. The proposed SQHM builds on these insights, treating spacetime itself as a self-fluctuating system, devoid of an external environment, that gradually loses quantum coherence on scales larger than both the de Broglie wavelength and the theory-defined range of interaction of the quantum potential [7].
2.1. The Stochastic Quantum Hydrodynamic Model
The Madelung quantum hydrodynamic representation transforms the Schrödinger equation

$$i\hbar\,\partial_t\psi = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi + V(q)\,\psi \qquad (1)$$

for the complex wave function $\psi = |\psi|\,e^{\frac{i}{\hbar}S}$ into two equations of real variables [8,9,10]: the conservation equation for the mass density $|\psi|^{2}$

$$\partial_t|\psi|^{2} + \nabla\cdot\big(|\psi|^{2}\,\dot q\big) = 0 \qquad (2)$$

and the motion equation for the momentum $p = m\dot q = \nabla S$

$$\dot p = -\nabla\big(V(q) + V_{qu}\big), \qquad (3)$$

where the quantum potential reads

$$V_{qu} = -\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2}|\psi|}{|\psi|}. \qquad (4)$$
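For the reader's convenience, the following compact sketch of the standard Madelung decomposition, using only the definitions above, shows how (2)-(4) follow from (1):

$$\text{imaginary part of (1):}\qquad \partial_t|\psi|^{2} + \nabla\cdot\Big(|\psi|^{2}\,\tfrac{\nabla S}{m}\Big) = 0,$$

$$\text{real part of (1):}\qquad \partial_t S + \frac{(\nabla S)^{2}}{2m} + V(q) - \frac{\hbar^{2}}{2m}\frac{\nabla^{2}|\psi|}{|\psi|} = 0,$$

where the first line is the continuity equation (2), and the gradient of the second (the quantum Hamilton-Jacobi equation) yields the motion equation (3) with the quantum potential (4).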
Following the hypothesis that considers the GBN as a source of quantum decoherence [11], and introducing it into the Madelung quantum hydrodynamics, Equations (1-4) yield a generalized stochastic quantum model capable of describing the wavefunction collapse dynamics and the measurement process, leading to a fully self-consistent generalized quantum theory [12,13].
To incorporate the presence of the GBN into Equations (1–4), we must account for how it perturbs the quantum physical system. The first contribution arises from the energy content of the GBN, which, according to the principle of mass-energy equivalence in General Relativity, can be represented as a locally fluctuating mass density. As shown in [7,13], this effect of the GBN can be described by the following assumptions:
The mass density of the GBN is described by its wavefunction $\psi_{gbn}$ with density $|\psi_{gbn}|^{2}$;
The associated energy density of the GBN is proportional to $|\psi_{gbn}|^{2}$;
The associated mass of the GBN is defined by the mass-energy identity $m_{gbn} = E_{gbn}/c^{2}$.
The GBN mass density is assumed to be approximately non-interacting with the mass of the physical system (since the gravitational interaction is sufficiently weak to be disregarded in the Hamiltonian potential $V(q)$ in (3)), so that the wavefunction of the overall system factorizes as $\psi_{tot} \cong \psi\,\psi_{gbn}$.
Additionally, given that the energy density of the GBN is quite small, its mass density is presumed to be significantly smaller than the body mass densities typically encountered in physical problems. Hence, with $m_{gbn} \ll m$, the mass appearing in Equations (3-4) can be taken to be that of the physical system alone.
The second effect of the GBN arises from the gravitational (geometrical) nature of its mass/energy distribution, which induces local variations in the mass density of the physical system. These fluctuations in the mass density of the physical system can be understood by noting that gravitational waves are metric perturbations that cause spacetime itself to oscillate. This oscillation leads to contractions and elongations of distances, similar to the effects observed in gravitational wave detections at the LIGO and Virgo observatories. As a result, when the wavefunction undergoes such spatial elongations or contractions, the system's mass density distribution correspondingly decreases or increases. In this way, spatial contractions and elongations influence the quantum potential energy, since it depends on the mass concentration of the physical system.
By introducing the GBN-induced mass density fluctuations into the quantum potential (4), and following the procedure outlined in references [7,13], Equations (1–4) lead to a system of stochastic quantum differential equations of motion.
For systems whose physical size is on the order of the De Broglie wavelength, the resulting quantum-stochastic hydrodynamic equations governing the evolution of the complex field acquire a simpler, Langevin-type form [7,13], given by the motion equation (7) with its friction and Gaussian force-noise terms, together with the associated definitions (8-9).
Given the physical length $L$ of the system, the diffusion coefficient in (7) can be readjusted as in (10), where the prefactor depends on the characteristics of the system [7,13] and $\lambda_c$ is the De Broglie length, defining the physical distance below which quantum coherence is maintained in the presence of fluctuations. The friction coefficient $\beta$ takes the form (11), where the semiempirical dissipation parameter, expressing the ability of the system to dissipate energy, obeys the condition (12) [7,13], incorporating the time-reversal symmetry of quantum mechanics.
From a general perspective, the solution of the stochastic quantum hydrodynamic model (SQHM) for macroscopic systems, encompassing their large-scale behavior, is not captured by (7-12), because the force fluctuations induced by the quantum potential, owing to its non-local nature, possess a finite correlation length and correlation time that depend on the spatial point [7,13]. Only for microscopic systems, whose physical length is much smaller than both the De Broglie length and this correlation length, does the force noise become Markovian.
On scales much smaller than the De Broglie length, the amplitude of the fluctuations, given by the diffusion coefficient in (10), tends to zero, indicating a form of noise damping induced by the quantum potential that has no counterpart in classical stochastic systems. From the quantum hydrodynamic standpoint, the mass density distribution behaves like an elastic membrane that becomes increasingly resistant to deformation at wavelengths smaller than the De Broglie wavelength, thereby suppressing the manifestation of gravitational background fluctuations (a quantum effect on geometric gravity).
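A minimal numerical sketch of this self-damping, assuming purely for illustration an exponential cutoff $D(L) = D_0\,e^{-(\lambda_c/L)^{2}}$ (a placeholder reproducing only the limiting behavior described above, not the paper's exact Equation (10)):

```python
# Illustrative, assumed form of a scale-dependent diffusion coefficient that
# vanishes below the De Broglie length lambda_c, as described in the text.
# The functional form is a placeholder, not the paper's Equation (10).
import math

D0 = 1.0        # large-scale noise amplitude (arbitrary units)
lambda_c = 1.0  # De Broglie length (arbitrary units)

def D(L):
    """Assumed damped diffusion coefficient: D -> 0 for L << lambda_c."""
    return D0 * math.exp(-(lambda_c / L) ** 2)

for L in (0.2, 0.5, 1.0, 2.0, 10.0):
    print(f"L/lambda_c = {L:5.1f}  ->  D/D0 = {D(L):.3e}")
```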
It is straightforward to see that for $T \to 0$, or equivalently for vanishing noise amplitude, the deterministic Madelung quantum limit, in the form of the quantum hydrodynamic analogy (1-4), is recovered, together with the vanishing of the friction coefficient and the divergence of the De Broglie length $\lambda_c$.
Conventional quantum mechanics is recovered for null noise (or, equivalently, for microscopic systems whose physical length is much smaller than the De Broglie length).
Nonetheless, it must be observed that, under special conditions, the SQHM leads to quantum Brownian behavior [7,13].
In the stochastic quantum hydrodynamic representation, the quantum mass density $|\psi|^{2}$ is generalized to the probability mass density (PMD), determined by the probability transition function (PTF) [14] obeying the Smoluchowski conservation equation [14] for the Markovian process (7). This establishes the phase-space mass density conservation, which leads to the spacetime mass density distribution in the long-time limit.
In the context of (7-9), $\psi$ does not denote the quantum wavefunction; rather, it represents the generalized quantum-stochastic probability wave, which reduces to the quantum wavefunction in the zero-noise limit.
It is worth noting that the SQHM equations (7-9), stemming from the noisy curvature wrinkles of spacetime (a form of dark energy), describe a self-fluctuating quantum system in which the noise is an intrinsic property of the reference system, not generated by an environment. An in-depth discussion of the true randomness or pseudo-randomness of the gravitational background noise is provided in part two of this work.
2.2. Emerging Classical Behavior
When the quantum potential is manually set to zero in the equations of motion of quantum hydrodynamics (1-4), the classical equation of motion emerges [9,10]. However, despite the apparent validity of this claim, such an operation is not mathematically sound, as it alters the essential characteristics of the quantum hydrodynamic equations. Specifically, it eliminates the stationary configurations, i.e., the quantum eigenstates, since the balancing of the quantum potential force against the Hamiltonian force, which establishes their stationary condition [9,10], is removed. Consequently, even a small quantum potential cannot be disregarded in conventional quantum mechanics, which is the zero-noise 'deterministic' limit of the SQHM (7-9).
Conversely, in the stochastic generalization it is possible to correctly neglect the quantum potential in (7) when its force is much smaller than the force noise, which, by (7), leads to condition (17) in a coarse-grained description over elemental cells of sufficiently large side.
It is worth noting that, despite the noise having zero mean, the mean of the fluctuations of the quantum potential is not null. This non-null mean contributes to the frictional dissipative force in Equation (7) [7,13]. Consequently, the stochastic sequence of noise inputs disrupts the coherent dynamic evolution of the quantum superposition of states, leading it to frictionally decay to a stationary mass density distribution. Moreover, by observing that the stochastic force noise grows with the size of the system, for macroscopic systems condition (17) can be satisfied under condition (20).
Since systems obeying (20) may exhibit a rather extended meso-scale in the transition between quantum and classical behavior, a more stringent requirement, such as (21), can be imposed in cases where the transition from the quantum to the classical regime is well-defined and the macroscopic behavior is completely free of non-local quantum potential interactions.
Recognizing that relation (22) holds for linear systems, we can readily observe that such systems are incapable of generating macroscopic classical phases. Generally speaking, as the Hamiltonian potential strengthens, the wave function localization increases and the quantum potential behavior at infinity becomes more prominent.
In fact, by considering a mass density of the form $|\psi|^{2} \propto e^{-P(q)}$, where $P(q)$ is a polynomial of order $n$, it becomes evident that a vanishing quantum potential interaction at infinity is achieved if and only if $n < 1$.
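This criterion can be verified directly from definition (4) (a short check; only the leading large-$|q|$ term matters):

$$|\psi| = e^{-P/2}\quad\Longrightarrow\quad V_{qu} = -\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2}|\psi|}{|\psi|} = -\frac{\hbar^{2}}{2m}\left[\frac{(\nabla P)^{2}}{4} - \frac{\nabla^{2}P}{2}\right] \sim |q|^{2(n-1)},$$

which tends to zero at infinity only for $n < 1$; for a linear (harmonic) system, with Gaussian density and $n = 2$, the quantum potential even grows quadratically at infinity.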
On the other hand, for gas phases of particles interacting through the Lennard-Jones potential, whose long-distance wave function is given in [15], the quantum potential (25) develops the quantum force (26), which can lead to classical behavior in a sufficiently rarefied phase [7,13].
It is interesting to note that in (25) the quantum potential coincides with the hard-sphere potential of the “pseudo-potential Hamiltonian model” of the Gross-Pitaevskii equation [16,17], where $a$ is the boson-boson s-wave scattering length.
By observing that, to fulfill condition (21), it is sufficient to require that the quantum potential force be negligible beyond the De Broglie length, it is possible to define the quantum potential range of interaction $\lambda_{qu}$ as in (28) [7,13], where the prefactor depends on the strength of the Hamiltonian interaction.
Relation (28) provides a measure of the range of interaction associated with the non-local quantum potential beyond the De Broglie length, beyond which it fully fluctuates.
It is worth noting that the quantum non-local interaction extends up to a distance on the order of the larger of the two lengths $\lambda_c$ and $\lambda_{qu}$. Below $\lambda_c$, due to the self-damping of the quantum potential fluctuations, even a small quantum potential contributes to the emergence of quantum behavior.
Therefore, quantum non-local effects can be extended either by increasing $\lambda_c$ through lowering the temperature, or by strengthening the Hamiltonian potential, which leads to larger values of $\lambda_{qu}$.
For instance, by (22), larger values of $\lambda_{qu}$ can be achieved by extending the linear range of the Hamiltonian interaction between particles.
Therefore, in the latter case, when examining phenomena at intermolecular distances where the interaction can be approximated as linear, the behavior exhibits quantum characteristics (e.g., X-ray diffraction from a crystalline lattice). In contrast, at the macroscopic scale, such as in the case of elastic sound waves, whose wavelengths are much larger than both the De Broglie wavelength and the range of the quantum potential, classical behavior emerges.
Finally, it is worth noting that dimensionality can strongly influence the distance over which quantum coherence is maintained. For example, since the form of the Coulomb interaction between charges arises from Gauss’s law and the way field lines spread in space, in D spatial dimensions the flux of the electric field through a hypersphere of radius r must equal the enclosed charge. This determines how the field decays with distance. Accordingly, Coulomb’s law takes different forms in different dimensions: $E \propto 1/r^{2}$ in 3D, $E \propto 1/r$ in 2D, and $E = \text{const}$ in 1D.
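The scaling quoted above follows in one line from the flux argument:

$$E(r)\,S_{D}(r) = \frac{q}{\varepsilon_{0}}, \qquad S_{D}(r) \propto r^{D-1} \quad\Longrightarrow\quad E(r) \propto \frac{1}{r^{D-1}},$$

so the field, and hence the effective interaction, weakens less rapidly with distance as the spatial dimension D decreases.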
As a consequence, the lower the spatial dimension, the stronger the effective Hamiltonian potential interaction, and therefore the longer the quantum coherence length $\lambda_{qu}$, leading to an enhanced tendency to exhibit macroscopic quantum behavior. The pursuit of high-temperature superconductivity in polymers, with promising prospects for achieving it at room temperature, is fundamentally rooted in the quasi-one-dimensional character of polymer molecules, combined with the quantum mechanical properties arising from the correspondingly enhanced coherence length.
2.3. Measurement Process and Decoherence
Throughout the course of a quantum measurement, the sensing component of the experimental setup and the system under examination undergo an interaction that can be adequately described by conventional quantum mechanics. This interaction ends when the measuring apparatus is relocated a considerable distance away from the system. Within the SQHM framework, this relocation is subject to precise conditions, as it must exceed the characteristic distances $\lambda_c$ and $\lambda_{qu}$.
Following this relocation, the macroscopic measuring apparatus manages the "interaction output." This typically involves a classical irreversible process, characterized by the time arrow, leading to the determination of the macroscopic measurement result.
Consequently, the GBN, at the origin of the lengths $\lambda_c$ and $\lambda_{qu}$ as well as of the phenomenon of decoherence, assumes a pivotal role in the measurement process. Decoherence facilitates the establishment of a large-scale classical framework, ensuring authentic quantum isolation between the measuring apparatus and the system, both before and after the measurement event.
This effective realization of the quantum-isolated state of the system, at both the initial and final stages, holds paramount significance in determining the temporal duration of the measurement and in collecting statistical data through a series of independent repeated measurements.
It is crucial to underscore that, within the confines of the SQHM, merely relocating the measured system to an infinite distance before and after the measurement, as commonly practiced, falls short of guaranteeing the independence of the system and the measuring apparatus if either $\lambda_c$ or $\lambda_{qu}$ is unbounded. Therefore, the existence of a macroscopic classical reality remains indispensable for the execution of the measurement process in quantum mechanics, and it is not possible to perform a measurement within a perfectly closed quantum universe.
2.4. Minimal Uncertainty and Quantum-to-Classical Transition
The SQHM naturally extends the uncertainty principle to conjugate variables in 4D spacetime subject to curvature fluctuations.
Before proceeding, it is important to clarify that the temperature $T$ appearing in the noise amplitude is not the physical temperature of the system but rather a measure related to the mean energy of the fluctuations of the spacetime GBN.
Only when the quantum coherence of the universe as a whole is broken, and the corresponding De Broglie wavelength has become much smaller than its physical size, can it be considered as divided into classical subsystems. The fluctuation amplitude of these subsystems is then determined by their interaction with the surrounding environment. In this context, the amplitude of energy fluctuations experienced by the system becomes linked to molecular temperature.
2.5. Minimum Measurement Uncertainty in Spacetime with a Fluctuating Background
Any dynamical theory that seeks to describe the evolution of a physical system across different scales and magnitudes must inherently account for the transition from quantum mechanical behavior to the emergent classical phenomena observed at macroscopic levels. The fundamental differences between these two regimes are embodied in the quantum uncertainty principle, which reflects the intrinsic incompatibility of simultaneously measuring conjugate variables. Moreover, quantum entanglement appears fundamentally incompatible with the finite speed of information and interaction propagation in local classical relativistic mechanics.
If a system fully adheres to the principles of quantum mechanics within a certain physical length scale, given by the larger of $\lambda_c$ and $\lambda_{qu}$, where its subparts do not possess individual identities, then an independent observer must remain at a distance greater than this length, both before and after the process, in order to perform a measurement on it.
Therefore, due to the finite speed of propagation of interactions and information, the process cannot be executed in a time frame shorter than the time light takes to cross this length.
Furthermore, considering the Gaussian noise in (7) with the diffusion coefficient proportional to $T$, we find that the mean value of the energy fluctuation is of order $kT/2$ per degree of freedom. As a result, a nonrelativistic scalar structureless particle of mass m exhibits the energy variance (31), from which the indeterminacy product (32) follows.
It is noteworthy that this product remains constant, as the increase of the energy fluctuation with the square root of $T$ precisely offsets the corresponding decrease in the minimum acquisition time. This outcome also holds true when establishing the uncertainty relations between the position and momentum of a particle with mass m.
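The compensation can be sketched as follows, assuming the standard scaling $\lambda_c \propto \hbar/\sqrt{mkT}$ for the De Broglie length (the paper's exact prefactor is not reproduced here):

$$\Delta t_{min} \simeq \frac{\lambda_c}{c} \propto T^{-1/2}, \qquad \Delta E \propto T^{+1/2} \quad\Longrightarrow\quad \Delta E\,\Delta t_{min} \simeq \text{const} \sim \hbar.$$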
If we acquire information about the spatial position of a particle with precision $\Delta q$, we effectively exclude the space beyond this distance from the quantum non-local interaction of the particle, and a corresponding condition on $\Delta q$ must consequently be required. Moreover, given the variance of the particle's relativistic momentum induced by the fluctuations, the SQHM indeterminacy relation (34) follows. Equating (34) to the quantum uncertainty value, as in (35-36), we find the physical length below which the deterministic limit of the SQHM, namely standard quantum mechanics, is realized.
As far as the theoretical minimum uncertainty of quantum mechanics is concerned, obtainable from the minimum indeterminacy (35-36) in the limits of vanishing noise and small velocity, in the non-relativistic limit we recover the standard position-momentum uncertainty relation together with the energy-time relation (41).
It is worth noting that, by accounting for the finite speed of light, the SQHM extends the uncertainty relations to all conjugate variables in four-dimensional spacetime, even though, in standard quantum mechanics, the energy-time uncertainty relation cannot be rigorously defined due to the absence of a time operator.
However, since the energy-time uncertainty principle (41) is fundamental for introducing the concept of virtual particles in quantum field theory, this relation has been formulated in quantum mechanics through the Mandelstam-Tamm time-energy inequality, which is derived from the Robertson uncertainty relation using the generalized Ehrenfest theorem.
Furthermore, it is interesting to note that, in the relativistic limit of quantum mechanics, given the finite speed of light, the minimum acquisition time of information in the quantum (zero-noise) limit becomes infinite (43).
The result (43) indicates that performing a measurement in a fully quantum mechanical global system is not feasible, as its duration would be infinite.
Given that quantum non-locality is restricted to domains with physical lengths on the order of $\lambda_c$ and $\lambda_{qu}$, and that the information contained in a measurement cannot be transmitted faster than the speed of light (otherwise violating the uncertainty principle), local realism is established within macroscopic physics, where domains of order $\lambda_c$, $\lambda_{qu}$ reduce to a point.
The paradox of 'spooky action at a distance' is an artifact confined to microscopic scales (smaller than $\lambda_c$ and $\lambda_{qu}$), and arises in the low-velocity approximation $c \to \infty$. This leads to non-relativistic quantum mechanics, which appears to permit instantaneous transmission of interactions across space.
It is also noteworthy that, in the presence of noise ($T > 0$), the measured indeterminacy undergoes a relativistic correction, resulting in the minimum uncertainty relations (44) and (45), which give a significant contribution for light particles (of very small mass) in a high-energy quantum fluctuating system.
In summary, within the SQHM framework, minimal uncertainty arises from the combined effects of energy fluctuations due to GBN and the finite speed of information propagation, which together impose a lower bound on the time required for information acquisition.
Furthermore, a relativistic correction to the uncertainty principle, particularly significant for light particles, emerges naturally, indicating that a continuous mass spectrum approaching zero is not physically viable, as it leads to a divergence in uncertainty. This suggests that a discrete mass spectrum, excluding the zero-mass case, is necessary for the establishment of a physically consistent universe.
2.6. Theoretical Validation of SQHM: The Lindemann Constant at the Melting Point of a Quantum Lattice
A validation test for the SQHM can be conducted by comparing its theoretical predictions with experimental data on the transition from a quantum solid lattice to a classical amorphous fluid. Specifically, we show that the SQHM can theoretically derive the Lindemann constant at the melting point of a solid lattice, representing the quantum-to-classical transition threshold, something that has remained unexplained within the frameworks of both conventional quantum and classical theories [18].
For a system of Lennard-Jones interacting particles, the quantum potential range of interaction $\lambda_{qu}$ follows from (25-26, 28), where $\bar r$ represents the distance up to which the interatomic force is approximately linear, and $r_{0}$ denotes the atomic equilibrium distance.
Assuming that, to preserve quantum coherence within the quantum lattice, the atomic wave function (around the equilibrium position) extends over a distance smaller than the quantum coherence length, the square root of its variance must remain smaller than $\lambda_{qu}$; equality corresponds to the melting point.
Based on these assumptions, the Lindemann constant, defined at the melting point as the ratio between the root-mean-square atomic displacement and the equilibrium distance $r_{0}$ [18], can be expressed through the melting condition above and thereby calculated theoretically; with the values of $\bar r$ and $r_{0}$ typically found at the melting point, this leads to a value consistent with the measured range.
2.7. The Fluid-Superfluid 4He Transition
While the Lindemann constant follows from a quantum-to-classical transition governed by the strength of the Hamiltonian interaction, which determines the quantum potential interaction length, another validation of the SQHM can be obtained from its predictions on transitions induced by a change of the De Broglie physical length, such as the 4He fluid-to-superfluid transition.
Given that the De Broglie distance is temperature-dependent, it impacts the fluid-superfluid transition in monomolecular liquids at extremely low temperatures, when it becomes equal to the mean molecular distance, as observed in 4He. The approach to this scenario is elaborated in reference [19], where the 4He-4He interaction is modeled by a potential well whose depth is given by the Lennard-Jones well depth $\epsilon$ and whose width is set by the mean 4He-4He interatomic distance $r_{0}$.
As the superfluid transition temperature is approached, the De Broglie length increasingly overlaps the 4He-4He wavefunction within the potential well. We therefore observe a gradual increase of the 4He superfluid concentration over a finite temperature interval.
A more precise assessment, utilizing the potential-well approximation for the molecular interaction [18], yields a value for the Lindemann constant consistent with measured values, falling within the range of 0.2 to 0.25 [18].
Therefore, fully superfluid 4He occurs as soon as the De Broglie length covers the whole 4He-4He potential well, whereas no superfluid 4He is present as long as the De Broglie length remains smaller than the well.
Therefore, given the temperature dependence of the De Broglie length, when it spans the full potential well all pairs of 4He atoms enter the quantum state, and the superfluid ratio of 100% is attained at the corresponding temperature, computed with the 4He atomic mass of about $6.64\times10^{-27}$ kg, consistent with the experimental data from reference [20].
At the intermediate coverage at which the superfluid-to-normal 4He density ratio of 50% is reached, the predicted temperature is in good agreement with the experimental data measured in reference [20].
Furthermore, by employing the conventional superfluid ratio of 38% at the λ-point of 4He, the transition temperature determined by the model is in good agreement with the measured superfluid transition temperature of about 2.17 K [20].
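As an order-of-magnitude cross-check of this picture, the snippet below evaluates the standard thermal de Broglie wavelength $\lambda = h/\sqrt{2\pi m k T}$ (used here as a proxy; the paper's $\lambda_c$ may differ by an O(1) prefactor) and compares it with the roughly 3.6 Å mean 4He-4He distance:

```python
# Order-of-magnitude sketch: thermal de Broglie wavelength of 4He vs. T.
# The formula is the standard textbook proxy, not the paper's exact lambda_c.
import math

h = 6.626e-34    # Planck constant [J s]
kB = 1.381e-23   # Boltzmann constant [J/K]
m = 6.64e-27     # 4He atomic mass [kg]

def thermal_de_broglie(T):
    """Thermal de Broglie wavelength in metres at temperature T [K]."""
    return h / math.sqrt(2.0 * math.pi * m * kB * T)

for T in (4.2, 2.17, 1.0):
    print(f"T = {T:5.2f} K  ->  lambda = {thermal_de_broglie(T) * 1e10:5.2f} angstrom")
# Near the lambda-point (2.17 K) the result (~6 angstrom) becomes comparable to
# the ~3.6 angstrom mean interatomic distance, in line with the argument above.
```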
It is worth noting that there are two ways to establish quantum behavior in a classical reality. One approach involves lowering the temperature, effectively increasing the de Broglie length. The second approach is to strengthen the Hamiltonian interaction among the particles, enhancing the quantum potential length of interaction. The latter effect can be achieved in practice by increasing the distance over which the Hamiltonian interaction remains linear.
The transition between quantum solids and classical fluids, identified by the Lindemann constant, and the fluid-superfluid transition at extremely low temperatures provide experimental confirmations of the emerging macroscopic classical behavior as a form of decoherent quantum behavior, affecting the underlying physics of phenomena such as viscosity and lattice properties, including X-ray diffraction and electron conductivity.
From this standpoint, we can conceptualize classical mechanics as emerging from a self-consistent decoherent outcome of quantum mechanics when a fluctuating spacetime reference background is involved.
It is also important to highlight that the limited strength of the Hamiltonian interaction over long distances is the key factor allowing classical macroscopic behavior to manifest.
Moreover, by observing that systems featuring interactions weaker than linear ones are classically chaotic, it follows that classical chaoticity is a widespread characteristic of classical reality.
In this respect, the strong divergence of chaotic trajectories of motion due to high Lyapunov exponents also facilitates the destruction of the quantum coherence maintained by the quantum potential, by leading to high values of the dissipation parameter in (12).
Finally, it is worth noting that dense matter subjected to strong gravitational potentials, such as in black holes or at the Big Bang, exhibits fully quantum behavior. Consequently, classical behavior can only be attributed to the later, inflated phase of the universe, where gravity tends to follow Newtonian dynamics.
3. Bridging Quantum and Classical Mechanics
The transition from quantum to classical mechanics is a cornerstone of this framework. Classical mechanics emerges as an effective theory from the interplay of quantum dynamics and stochastic gravitational perturbations. The SQHM extends Madelung’s quantum hydrodynamics by introducing a stochastic quantum pseudo-potential [7,13], which governs the collapse of wavefunctions and the loss of quantum entanglement at macroscopic scales.
3.1. Dynamics of Wavefunction Collapse
The Markov process (7), in the limit of slow kinetics (see Eq. (64) below), can be described by the Smoluchowski equation for the Markov probability transition function (PTF) [14]

$$P(q\mid q_{0},\,t+\tau) = \int P(q\mid z,\,\tau)\,P(z\mid q_{0},\,t)\,dz, \qquad (58)$$

where the PTF $P(q\mid z,\tau)$ is the probability that the system located at point $z$ is transferred to point $q$ in the time interval $\tau$. The conservation of the PMD shows that the PTF displaces the PMD according to the rule [14]

$$|\psi(q,\,t+\tau)|^{2} = \int P(q\mid z,\,\tau)\,|\psi(z,\,t)|^{2}\,dz. \qquad (59)$$
Generally, in the quantum case, the Fokker–Planck equation (FPE) cannot be obtained by a series development [14] of Equation (58): the functional dependence of the quantum potential on the PMD, and hence on the PTF, produces non-Gaussian terms [14].
Nonetheless, if at the initial time the PMD is stationary (e.g., a quantum eigenstate) and close to the long-time final stationary distribution, it is possible to assume that the quantum potential is approximately constant in time, behaving, for all intents and purposes, like a Hamiltonian potential (60). The quantum potential being constant over time in this case, the stationary long-time solutions can be approximately described by the Fokker–Planck equation (61), leading to the stationary quantum configuration (62).
In ref. [7] the stationary states of a harmonic oscillator obeying (62) are shown. The results show that the quantum eigenstates are stable, with only a small change in their variance when subjected to fluctuations.
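A classical-analogue sketch of this relaxation behavior (an illustration only: an overdamped Langevin particle in a harmonic well, standing in for the approximately time-constant potential of Eq. (61); all parameter names are illustrative):

```python
# Overdamped Langevin relaxation toward a Gaussian stationary state,
# integrated with the Euler-Maruyama scheme. Illustrative parameters only.
import math
import random

beta, D, dt = 1.0, 0.5, 1e-3   # friction, diffusion, time step (arb. units)
omega2 = 1.0                   # harmonic stiffness (arb. units)
n_steps, n_traj = 5000, 500

qs = [2.0] * n_traj            # all trajectories start far from equilibrium
for _ in range(n_steps):
    for i in range(n_traj):
        drift = -omega2 * qs[i] / beta
        qs[i] += drift * dt + math.sqrt(2.0 * D * dt) * random.gauss(0.0, 1.0)

mean = sum(qs) / n_traj
var = sum((q - mean) ** 2 for q in qs) / n_traj
print(f"mean ~ {mean:.3f} (expected 0); variance ~ {var:.3f} "
      f"(expected D*beta/omega2 = {D * beta / omega2:.3f})")
```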
3.2. Evolution of Quantum Superposition of States Subjected to Stochastic Noise
The quantum evolution of (non-stationary) superpositions of states involves the integration of the motion Equation (7). Here we consider the simpler case, excluding very fast kinetics [21], given by (64). By utilizing both the Smoluchowski Equation (58) and the associated conservation Equation (59) for the PMD, it is possible to integrate Equation (64) by means of its second-order discrete expansion (65-66), in which the noise has Gaussian zero mean and unitary variance; its probability function, built in the midpoint approximation on the solutions of the associated deterministic problem, leads to the PTF (73).
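A minimal sketch of such a second-order (midpoint) discrete stochastic update (illustrative only: the drift function below stands in for the Hamiltonian-plus-quantum force, and the names are not the paper's notation):

```python
# Midpoint (second-order) discrete step with additive Gaussian noise,
# sketching the structure of the discrete expansion used to build the PTF.
import math
import random

def f(q):
    """Illustrative drift (e.g., a linearized force field)."""
    return -q

def midpoint_step(q, dt, D):
    """Deterministic midpoint predictor, then noisy update (variance 2*D*dt)."""
    q_mid = q + 0.5 * dt * f(q)
    return q + dt * f(q_mid) + math.sqrt(2.0 * D * dt) * random.gauss(0.0, 1.0)

q, dt, D = 1.0, 0.01, 0.1
for _ in range(1000):
    q = midpoint_step(q, dt, D)
print(f"sample endpoint after relaxation: q = {q:.3f}")
```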
Generally speaking, since the quantum potential is a function of the PMD, the evolution of the PTF (73) presents an indeterminacy: the velocity at the following instant depends on the quantum potential values, and therefore on values of the PMD that are not yet known at the current moment (a problem induced by the non-locality of the quantum potential).
Nevertheless, given that for T → 0 we have convergence to the deterministic limit, for very small amplitudes of the GBN we can proceed by successive orders of approximation [7,13] (75-77); utilizing the midpoint approximation, the zero-order PTF reads as in (78) and, finally, by applying Equations (75-76), the PTF in the continuous limit reads as in (80).
The resolution of the recursive expressions (76-80) offers the advantage of being applicable to nonlinear systems that are challenging to handle using conventional approaches [22,23,24,25].
3.3. General Features of Relaxation of Quantum Superposition of States
The classical Brownian process admits the stationary long-time solution (81) [26], where relation (82) leads to the expression (83) [26].
In the quantum case, the corresponding quantity in Equation (81) cannot be expressed in closed form, because it is contingent on the particular relaxation path the system follows toward the steady state. This path is significantly influenced by the initial conditions (the initial PMD and its momentum field), as well as by the initial time at which the quantum superposition of states is subjected to the fluctuations.
We assume that, prior to the initial time, the microscopic system follows deterministic quantum behavior. At the initial time, it is then brought into contact with the fluctuations of the measuring system, which is part of the universe whose nature is rendered classical by the GBN.
This means that the outcome of a measurement depends on the initial time at which the system is subjected to measurement, when it is perturbed by fluctuations originating from the macroscopic part of the measuring apparatus.
Therefore, if we repeat the measurement on the same quantum superposition of states, we will statistically obtain different outcomes.
Furthermore, with respect to this aspect, from (65) we can see that the evolution depends on the exact sequence of stochastic noise inputs. This fact becomes more critical in systems with a classically chaotic Hamiltonian, since very small differences can lead to relevant divergences of the trajectories in a short time. Therefore, in principle, different stationary configurations (quantum eigenstates) can be reached even when starting from identical superpositions of states at the same initial time. Therefore, in classically chaotic systems, Born’s rule can also be applied to repeated measurements on the quantum state superposition of a single system.
It is also worth noting that, to have finite quantum lengths $\lambda_c$ and $\lambda_{qu}$, which are necessary to ensure that the measuring apparatus can be quantum-decoupled from the measured system at both the initial and final times, the nonlinearity of the overall system (system plus environment) is also required.
Quantum decoherence, which leads to the decay of superposition states, is significantly enhanced by the pervasive classical chaotic behavior observed in real systems.
In contrast, a perfectly linear universal system would preserve quantum correlations on a global scale and would never allow quantum decoupling between the system and the experimental apparatus performing the measurement. In this case, we observe a form of quantum-classical correspondence, where the quantum averages replicate classical behavior and the large-scale quantum behavior becomes indistinguishable from the classical one. Furthermore, it is important to note that, since the quantum decoupling of subsystems is not possible in linear systems, the mere assumption of systems separate from an environment subtly introduces classical assumptions (the existence of finite $\lambda_c$ and $\lambda_{qu}$, and quantum decoherence) into the nature of the overall supersystem.
Furthermore, since Equation (7) is valid only in the leading-order approximation [7,13] (a slow relaxation process with small-amplitude fluctuations), in cases of large fluctuations occurring over a timescale much longer than the relaxation period, transitions may occur to configurations not captured by (80), potentially leading from a stationary eigenstate to a new superposition of states. In this case, relaxation will once again proceed toward another stationary state. Therefore, the PTF given by (78) or (80) describes the relaxation process occurring during the time interval between two large fluctuations, rather than the system’s evolution toward a statistical mixture.
Due to the extended timescales associated with large fluctuations, only a system composed of a vast number of particles (for instance a gas, where molecules with quantum internal states act as quasi-independent subsystems) can explore the full spectrum of possible configurations. In such cases, large fluctuations arise from inter-particle collisions, gradually driving the subsystems toward a statistical mixture.
3.4. EPR Paradox and Pre-Existing Reality in the SQHM
The SQHM emphasizes that, despite the well-defined, reversible, and deterministic structure of quantum theory, its foundations remain incomplete. In particular, it points out that the measurement process is not addressed within the deterministic Hamiltonian framework of standard quantum mechanics, but instead relies on probabilistic postulates introduced externally to the theory’s mathematical formalism.
The SQHM reveals that standard quantum mechanics is essentially the deterministic, "zero-noise" limit of a broader quantum-stochastic theory, which arises from the fluctuations of spacetime subjected to the stochastic gravitational background.
In this context, zero-noise quantum mechanics defines the deterministic evolution of the system's initial "probabilistic wave." However, the SQHM suggests that the term "probabilistic wave" is somewhat misleading, as it embodies the probabilistic nature of the measurement process, something standard quantum mechanics cannot fully describe and explain. Since the SQHM provides a framework that accounts for both wavefunction collapse and the measurement process, "state wave" appears to be a more accurate terminology, with the probabilistic character associated with the evolution prescribed by the SQHM rather than being an intrinsic property of the system's state.
Moreover, SQHM reinstates the principle of determinism into quantum theory by clarifying that conventional quantum mechanics describes the evolution of the deterministic limit of the system's "state wave." The apparent probabilistic outcomes arise from the influence of fluctuating gravitational backgrounds.
SQHM also addresses the long-standing question of whether reality exists prior to measurement. While the Copenhagen interpretation suggests that reality only emerges when a measurement forces the system into a stable eigenstate, SQHM proposes that the world naturally self-decays through macroscopic-scale decoherence. In this view, only stable macroscopic eigenstates persist, establishing a lasting reality that exists even before measurement occurs.
With regard to the EPR paradox, the SQHM shows that, in a perfectly deterministic (coherent) quantum universe, it is impossible to fully decouple the measuring apparatus from the system, and therefore impossible to realize a measurement within a finite time interval. Such decoupling can only be achieved in a large-scale classical supersystem, a weakly bounded quantum system embedded in 4D spacetime with a fluctuating background. In this scenario, quantum entanglement, driven by the quantum potential, extends only over a finite distance. Accordingly, the SQHM reinstates local relativistic causality at the macroscopic level, where the quantum scale reduces to a point in the presence of the GBN.
While the Lennard-Jones interparticle potential produces a sufficiently weak force, leading to a microscopic range of quantum non-local interaction and a large-scale classical phase, photons retain their quantum properties at the macroscopic scale, since they possess an infinite quantum potential range of action, as shown in reference [7]. As a result, photons are ideal particles for experiments designed to demonstrate the features of quantum entanglement over long distances.
In order to describe the standpoint of the SQHM on this argument, we can analyze the output of experiments with two entangled photons traveling in opposite directions in the state

$$|\psi\rangle = \tfrac{1}{\sqrt{2}}\big(|V\rangle_{1}|V\rangle_{2} + e^{i\varphi}\,|H\rangle_{1}|H\rangle_{2}\big),$$

where $|V\rangle$ and $|H\rangle$ are the vertical and horizontal polarization states, respectively, and $\varphi$ is a constant phase coefficient.
Photons “one” and “two” impinge on polarizers A (Alice) and B (Bob), with polarization axes positioned at angles $\theta_{A}$ and $\theta_{B}$ relative to the horizontal axis, respectively. For our purpose, we can assume $\varphi = 0$.
The probability that photon “two” also passes through Bob’s polarizer is $\cos^{2}(\theta_{A} - \theta_{B})$.
As widely held by the majority of the quantum physics community, when photon “one” passes through polarizer A with its axis at an angle $\theta_{A}$, the state of photon “two” instantaneously collapses to a linearly polarized state at the same angle $\theta_{A}$, resulting in the combined state $|\theta_{A}\rangle_{1}|\theta_{A}\rangle_{2}$.
In the context of the SQHM, which is able to describe the kinetics of the wavefunction collapse, the collapse is not instantaneous; and, following the Copenhagen standpoint, we must rigorously assert that the state of photon “two” is not defined before its measurement at polarizer B.
Therefore, after photon “one” passes through polarizer A, from the standpoint of the SQHM we have to assume a combined state in which photon “two” is described by the state of its interaction with the residual quantum potential field generated by photon “one” when absorbed at polarizer A. The spatial extension of the field of photon two, in the case in which the photons travel in opposite directions, is double that crossed by photon one at its absorption. In this regard, it is noteworthy that the quantum potential is not proportional to the intensity of the field but to its second derivative. Therefore, a minor high-frequency perturbation of the field at the tail of photon two (during the absorption of photon one) can give rise to a significant quantum potential force field.
When the residual part of the two entangled photons also passes through Bob’s polarizer, it undergoes the transition with the corresponding probability. The duration of the interaction between the second photon and Bob’s polarizer, i.e., the measurement process involving full wavefunction collapse or absorption, is at least equal to the photon’s spatial extension divided by the finite speed of light. Therefore, the time required to transfer the information about the measurement of the first photon to the location of the second photon’s detection does not violate local relativistic causality.
Since the decay time of the second photon in Bob’s polarizer depends on the spatial extension of the residual component of the entangled pair, it can be measured with high precision in experiments conducted over planetary distances. An experiment using measurement devices with Alice’s polarizer located on the Moon and Bob’s on Mars has been proposed in Ref. [7].
Summarizing, the SQHM reveals the following key points:
The SQHM posits that quantum mechanics represents the deterministic limit of a broader quantum stochastic theory;
Classical reality emerges at the macroscopic level as a decohering, self-decaying global quantum state, representing a preexisting reality prior to measurement.
The measurement process is feasible in a classical macroscopic world because truly quantum-decoupled and independent systems can exist—namely, the system and the measuring apparatus, both before and after the measurement.
Determinism is acknowledged within standard quantum mechanics under the condition of zero GBN.
Locality is achieved at the macroscopic scale, where quantum non-local domains condense to punctual domains.
Determinism is recovered in quantum mechanics as the zero-noise limit of SQHM, since the probabilistic nature of quantum measurement is introduced by the presence of GBN.
The maximum light speed of the propagation of information and the local relativistic causality align with quantum uncertainty;
The SQHM interprets the GBN as playing the role of a hidden variable [27]: the Bohm non-local hidden-variable theory ascribes the indeterminacy of the measurement process to the unpredictable pilot wave, whereas the SQHM attributes its probabilistic nature to the fluctuating gravitational background. This background is challenging to determine, both because it predominantly originates from early-generation processes during the Big Bang and because of the weak nature of gravitational interactions. In the context of Santilli's non-local hidden-variable approach in IsoRedShift Mechanics [28], it is possible to demonstrate a direct correspondence between the non-local hidden variable and the GBN [27]. Furthermore, it must be noted that the resulting unpredictability and probabilistic nature of wavefunction decay and measurement outcomes is further compounded by the inherently chaotic nature of classical dynamics and the randomness introduced by the GBN.
3.5. The SQHM in the Context of the Objective-Collapse Theories
The SQHM offers a physical perspective that closely aligns with the so-called objective-collapse theories [29,30,31,32].
In objective collapse theories, the Schrödinger equation is augmented with additional nonlinear and stochastic terms, referred to as spontaneous collapses, that serve to localize the wave function in space. The resulting dynamics ensures that, for microscopic isolated systems, the impact of these new terms is negligible, leading to the recovery of usual quantum properties with only minute deviations.
An inherent amplification mechanism operates to strengthen the collapse in macroscopic systems comprising numerous particles, overpowering the influence of quantum dynamics. Consequently, in a sufficiently rarefied phase, the wave function for these particles is consistently well-localized in space, behaving practically like a point in motion following Newton's laws.
In this context, collapse models offer a comprehensive depiction of both microscopic and macroscopic systems, circumventing the conceptual challenges linked to measurements in quantum theory. Prominent examples of such theories include the Ghirardi–Rimini–Weber model [29], the continuous spontaneous localization model [30], and the Diósi–Penrose model [31,32].
On the other hand, the SQHM addresses the problem of wavefunction collapse not from a semi-empirical perspective, treating it as an experimental given, but derives it entirely from a theoretical framework that incorporates the effects of the GBN, interpreted as a phenomenon of General Relativity, into quantum physics. In doing so, it proposes an innovative approach that effectively tackles critical issues within this class of theories and highlights how our limited understanding of how quantum mechanics can be reconciled with general relativity underlies the difficulties in interpreting certain fundamental physical phenomena.
A key achievement in the SQHM lies in resolving the wavefunction 'tails' problem by introducing the interaction length of the quantum potential alongside the De Broglie wavelength. Beyond these characteristic scales, the quantum potential can no longer sustain coherent Schrödinger behavior in the wavefunction’s tails.
The SQHM also highlights that there is no need for an external environment in order to introduce fluctuations, demonstrating that the quantum stochastic behavior responsible for wave-function collapse can be an intrinsic property of the system in a spacetime with fluctuating metrics due to the gravitational background. Furthermore, situated within the framework of relativistic quantum mechanics, which aligns seamlessly with the finite speed of light and information transmission, the SQHM establishes a clear connection between the uncertainty principle and the invariance of light speed.
The theory also derives, within a fluctuating quantum system, the indeterminacy relation between energy and time—an aspect not expressible in conventional quantum mechanics—providing insights into measurement processes that cannot be completed within a finite time interval in a truly quantum global system.
Notably, the theory proposes a potential explanation for the measurement of entangled photons through an Earth-Moon-Mars experiment [7].
The SQHM approach offers a promising perspective for resolving key issues such as the EPR paradox and the von Neumann measurement problem. By explicitly modeling the stochastic effects of GBN, SQHM provides a rigorous framework for the emergence of irreversibility and classical behavior—without violating the fundamental principles of quantum mechanics.
4. The Discrete Nature of Spacetime
Within the framework of the SQHM, incorporating the measurement uncertainty in a fluctuating quantum system and the maximum attainable velocity $c$, so that the momentum indeterminacy is bounded by $\Delta p \le mc$, it follows that the uncertainty relation $\Delta q\,\Delta p \ge \hbar/2$ leads to

$$\Delta q \ \ge\ \frac{\hbar}{2mc} \ =\ \frac{\lambda_{C}}{2}, \qquad (86)$$

where $\lambda_{C} = \hbar/(mc)$ is the Compton length.
Identity (86) reveals that the maximum concentration of a body of mass $m$, compatible with the uncertainty principle and the repulsive force of the quantum potential against mass concentration, is within an elemental volume with a side length equal to half of its Compton wavelength. This is equivalent to saying that, in order to confine the spatial position of a body to Compton-wavelength precision, an energy on the order of its rest mass is required. Further increasing the precision would require additional energy (embodied in the quantum potential), which in turn leads to the generation of new mass.
This result holds significant implications for black hole (BH) formation. To form a BH, all the mass must be compressed within a sphere of the gravitational radius $R_{g} = 2Gm/c^{2}$, which by (86) cannot be smaller than the Compton length $\lambda_{C}$, giving rise to the relationship

$$\frac{2Gm}{c^{2}} \ \ge\ \frac{\hbar}{mc},$$

which further leads to the condition

$$m^{2} \ \ge\ \frac{\hbar c}{2G},$$

indicating that the BH mass must obey the relation

$$m \ \ge\ \sqrt{\frac{\hbar c}{2G}}, \qquad (89)$$

on the order of the reduced Planck mass.
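The numerical value of this bound follows directly (a quick check; constants in SI units):

```python
# Minimum black-hole mass from equating the gravitational radius 2Gm/c^2
# with the Compton length hbar/(m c): m_min = sqrt(hbar*c/(2*G)).
import math

hbar = 1.0546e-34  # reduced Planck constant [J s]
c = 2.998e8        # speed of light [m/s]
G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]

m_min = math.sqrt(hbar * c / (2.0 * G))   # minimum BH mass [kg]
m_planck = math.sqrt(hbar * c / G)        # standard Planck mass [kg]

print(f"m_min    = {m_min:.3e} kg")                      # ~1.5e-8 kg
print(f"m_planck = {m_planck:.3e} kg (m_min = m_planck/sqrt(2))")
```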
Identity (89) has been derived here in a vacuum with zero background fluctuations. If positive temperature is considered, as in (44-45), the uncertainty relations acquire temperature-dependent corrections, leading through (90-92) to the modified mass bound (93).
Result (93) shows that, in the presence of a fluctuation amplitude (temperature) greater than zero, a black hole with Planck mass becomes unstable, requiring additional mass to achieve stability. It is worth noting that this temperature-driven instability could be the mechanism that destabilized the pre-big bang black hole [33], potentially triggering the big bang through the energy randomization generated by its matter-antimatter interaction and progressive annihilation, leading to its extrusion beyond the gravitational radius and to spacetime inflation.
Result (86) demonstrates that the maximum mass density, constrained by quantum and relativistic laws, is attained when the mass is confined within a sphere whose diameter equals half the Compton wavelength. This is enforced by the repulsive quantum potential, which becomes stronger the greater the mass localization. Consequently, within the gravitational radius, the black hole’s mass cannot collapse into a singular point; rather, the gravitational force is overcome by the quantum one upon reaching a sphere with a radius equal to the Compton wavelength. In the equilibrium condition at BH collapse, the gravitational force and the quantum potential exactly counteract each other.
Assuming that spacetime has a discrete structure, and considering the nature of an elemental volume (defined as the region within which mass density exists and is therefore uniformly distributed), the hypothesis that the Planck length represents the side of the smallest discrete elemental volume becomes untenable. Such an assumption would make it impossible to compress the mass of large black holes into a sphere with a diameter significantly smaller than half the Compton wavelength, the scale at which quantum-gravitational equilibrium is reached [33,34,35]. Consequently, the minimal elemental length of spacetime must be so small that the resulting physical behavior remains practically indistinguishable from the continuous framework of General Relativity. In contrast, treating the Planck length as the smallest discrete unit of spacetime leads to a description of the universe that contradicts the gravitational predictions of General Relativity for large black holes.
It is important to emphasize that the above statement does not violate the postulates of loop quantum gravity, as the minimal discrete length element of spacetime does not necessarily coincide with the minimal measurable length below which no further increment in distance can be detected. Attempting to do so would require an amount of energy sufficient to create a black hole with a mass exceeding that of a Planck-mass black hole.
On the other hand, since existing black holes compress their mass into a core smaller than the Planck length [33,34,35], a discretized spacetime would require elemental cells of even smaller volume. In the computational analogy (described in part two of this work), the maximum grid density of spacetime is determined by the size of these fundamental spacetime cells.
Thus, it is important to consider the assumption that the smallest discrete spacetime distance corresponds to the minimum possible Compton wavelength, generated by the maximum available mass/energy, which is the mass/energy of the pre-big bang black hole as well as of the universe [36].
This provides a criterion to rationalize the universe's mass/energy, explaining why the mass of the universe is not smaller than it is, as it is intrinsically tied to the minimum length of the discrete spacetime element.
If the pre-big-bang black hole was an anomalous fluctuation gravitationally confined within an elemental cell of spacetime, the mass/energy of the universe cannot be smaller than what it is.
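For a sense of scale, the snippet below evaluates this minimal Compton wavelength for an assumed universe mass of order $10^{53}$ kg (a commonly quoted order-of-magnitude estimate, not a figure taken from this paper):

```python
# Order-of-magnitude sketch of the minimal Compton wavelength associated with
# the maximum available mass/energy. M_universe is an assumed estimate.
hbar = 1.0546e-34    # reduced Planck constant [J s]
c = 2.998e8          # speed of light [m/s]
M_universe = 1.5e53  # assumed mass estimate [kg]
l_planck = 1.616e-35 # Planck length [m], for comparison

lambda_min = hbar / (M_universe * c)  # minimal Compton length [m]
print(f"lambda_min = {lambda_min:.2e} m")                    # ~2e-96 m
print(f"lambda_min / l_Planck = {lambda_min / l_planck:.2e}")
```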
4.1. Implications of Spacetime Discretization
Since, for existing large-mass black holes, such as the supermassive black holes at the centers of galaxies, the core of condensed mass is on the order of the Compton wavelength [33,34,35], which is much smaller than the Planck length, the elemental spacetime cells would have to be even smaller in order to effectively reproduce the continuous description of General Relativity. Otherwise, we would expect significant deviations from Einstein’s gravitational theory in such regimes. Spacetime discretization is, however, expected to emerge only at extremely small scales, near the Big Bang, where the quantized nature of spacetime interferes with the classical geometry of gravity.
4.1.1. The Arrow of Time and Free Will
At this stage, in order to analyze the universal evolution with the characteristics of the SQHM, let us first consider the local evolution in a domain of spacetime much larger than both the De Broglie length and the quantum coherence length [7,13]. After a certain characteristic time, these sufficiently macroscopic superpositions of states, evolving according to the motion equation (7), decay into one of their eigenstates and lead to a stable state that, surviving the fluctuations, constitutes a lasting, measurable state. We can define it as reality since, owing to its stability, it gives the same result even after repeated measurements. Moreover, due to macroscopic decoherence, local domains in different locations are quantumly disentangled from each other, so their decays to stable eigenstates are not simultaneous. Given the perceived randomness of the GBN, the wavefunction decay can be assumed to distribute stochastically across local spatial domains. This results in a large-scale classical reality within spacetime, where each local domain undergoes spontaneous decay at different times. As a consequence, these locally stationary, classically stable states collectively give rise to an emergent, globally classical structure with stochastic dynamics.
From the standpoint of the SQHM, the universal progression exploits quantum evolution for a finite period and, through decoherence, derives the classical N-body state at discrete instants via wavefunction collapse.
It then proceeds for a while by quantum evolution until decoherence gives rise to the next classical state, avoiding the need for continuous classical computation of N-body states. Instead, classical states emerge only when the quantum state collapses, as in a measurement. This self-collapse essentially constitutes a genuine computation of the next classical state, analogous to how a quantum computer derives classical output from qubits.
In practice, the universe performs a form of computational optimization, accelerating the derivation of its future states by leveraging a qubit-like quantum computation process.
Reversing this reasoning, we are able to use qubits (even if we may not know their operating algorithm) and build quantum computers precisely because they mimic a property inherently embedded in the universe, suggesting that the universe itself functions as a quantum computer.
4.1.2. Nature of Time and Classical Reality: The Universal “Pasta Maker”
Working with a discrete spacetime offers advantages that are already supported by lattice gauge theory [37]. This theory demonstrates that, in such a scenario, the path integral becomes finite-dimensional and can be assessed using stochastic simulation techniques, such as the Monte Carlo method.
In our scenario, the fundamental assumption is that the optimization procedure underlying universal computation is capable of generating the evolution of physical reality. The key advantage of this approach is that the universe simulates quantum mechanics over finite time intervals in polynomial (P) time, and then computes the corresponding classical state via wavefunction collapse. This effectively transforms the classical many-body evolution from a computationally intractable problem into a polynomial-time (P) one, enabling efficient computation.
At the onset of the subsequent moment, the array of potential quantum states (in terms of quantum superposition of states) encompasses the classical states of realization. Consequently, the future states form a quantum multiverse where each individual state is potentially attainable depending on events (such as the chain of wave-function decay processes) occurring beforehand. As the present unfolds, marked by the quantum decoherence process leading to the attainment of a classical state, the past is generated, ultimately resulting in the realization of the singular classical reality: the Universe.
From this perspective, the classical state does not extend across the full spacetime in a literal sense, but rather appears continuous at the macroscopic scale. A useful analogy is a gas of molecules: at the macroscopic level, the collective quantum superposition rapidly decoheres, while within each microscopic domain, smaller than the de Broglie wavelength, the internal quantum superposition states of individual molecules persist. However, in the macroscopic view, molecules are effectively reduced to point-like entities, and information about their internal quantum states is lost, as it does not contribute to classical macroscopic observation.
Figure 1. The Universal “Pasta-Maker”.
Moreover, if many configurations of the universe can be realized in the future, we cannot know which one will actually occur beyond a certain temporal horizon. By contrast, the past consists of fixed events—the realized universe—which we can know but cannot alter.
It is precisely this multitude of possible future states that provides the foundation for the existence of free will. Nonetheless, the underlying law governing the emergence of order (information handling) still needs to be addressed in order to fully explain how free will arises (see the second part of this work).
In this context, we can metaphorically illustrate spacetime and its irreversible universal evolution as an enormous pasta maker. In this analogy, the future multiverse is represented by a blob of unshaped flour dough, inflated because it contains all possible states. This dough, extending up to the 4D surface of the present, is then pressed into a thin pasta sheet, representing the quantum superposition projective decay to the classical state realizing the universe. From this standpoint, classical states are ‘re-calculated’ at each instant (in each macroscopic domain) and carried into the future. For instance, we can think of our body as being recalculated at each subsequent state, appearing progressively older as it is carried forward in time. However, in this process, as our present state (both bodily and mental) becomes quantumly decoupled from the previous instant, we simultaneously lose access to, and awareness of, the past universe. As a result, we perceive the external world as a reality in continuous transformation.
The 4D surface boundary between the future multiverse and the past universe marks the instant of present time. At this point, the irreversible process of decoherence occurs, entailing the computation, or reduction, to the present classical state and generating the direction of time flow. This specific moment defines the present in reality, a notion absent from the relativistic spacetime framework, where time is merely a coordinate that does not flow. The equations of general relativity are timeless in a sense: they describe the whole of spacetime at once. In the quantum theory of gravity (see, for example, the Wheeler-DeWitt equation [38]), there is no time at all; everything seems "frozen," and there is no clear way to define "evolution" or "change" as we experience it.
The SQHM marks the point at which irreversibility occurs, setting it apart from other positions along the time axis and giving rise to the dynamics of what we experience as real, unidirectional flowing time.
8. Conclusions
This work presents an ambitious and interdisciplinary proposal, aiming to unify aspects of gravity, quantum physics and classical mechanics within a single theoretical framework.
By bridging diverse domains of physics, this approach integrates quantum and classical physics into a coherent model, particularly through the use of stochastic quantum hydrodynamics. A central idea is the incorporation of stochastic perturbations arising from gravitational background noise, further reinforcing the perspective that the interplay between these two regimes underlies the intricate and emergent nature of our macroscopic reality.
The stochastic quantum hydrodynamic model accurately predicts both the Lindemann constant and the fluid-to-superfluid transition in helium. Furthermore, it addresses the problem of photon quantum entanglement in the context of a finite speed of information transmission, an effect that could be tested in a long-distance, planetary-scale experiment.
The use of Madelung’s quantum hydrodynamic analogy, combined with gravitational stochastic noise, offers a novel framework for understanding how quantum phenomena transition into the classical regime. This approach also tackles fundamental unresolved issues, including the EPR paradox, quantum–classical coexistence, and the origin of time's flow in reality — key challenges in modern theoretical physics.
The work aligns with a school of thought that acknowledges the self-decoherence into a macroscopic reality without requiring a measurement process to determine it. By incorporating quantum uncertainty along with the constraints of finite light speed and information transmission, the theoretical framework proposes that physical spacetime has a discrete nature, allowing for an analogy with the evolution of the universe as a discrete computational simulation. The optimization of this computational process, which leads to the emergence of macroscopic reality, is achieved through the decoherence process, which appears to function as the 'calculation' of the classical state, much like qubits operate in quantum computers.
From this standpoint, classical states are 'recalculated' at discrete instants and propagated into the future. As a result, macroscopic reality manifests a dynamical behavior of continuous macroscopic transformation, where the irreversible nature of quantum decoherence gives rise to the unidirectional flow of time. According to this framework, reality is constrained neither by deterministic laws nor by an unpredictable, probabilistic, measurement-dependent actualization.