Preprint
Article

This version is not peer-reviewed.

Black Holes and Kullback-Leibler Divergence, Decomposing Path-Dependent Processes

Submitted:

27 February 2025

Posted:

28 February 2025


Abstract
This work develops an information geometric framework that unifies the principle of stationary action with the minimization of Kullback-Leibler divergence in stochastic systems to probe the shortcomings of General Relativity. By reformulating path probabilities using maximum entropy methods, we decompose complex, path-dependent distributions into symmetric sub-components, thereby isolating error terms that correspond to missing dynamics in black hole mechanics. Analytical and computational analyses reveal that an entropy-maximizing state ΔDKL=0 signifies an ideal match between theory and observation, while deviations expose hidden mechanisms underlying curvature evolution. Extending the approach with Fisher Information Geometry and Ricci flow, we derive a novel relation, γ(T·ds/dE)=dθ/dτ, which links thermodynamic quantities with spacetime expansion dynamics. This connection offers fresh insights into phenomena such as black hole evaporation and event-horizon viscosity. Our findings underscore the necessity of incorporating path-dependence into gravitational models, providing a possible avenue for reconciling classical and quantum descriptions of spacetime evolution.

1. Introduction

General Relativity achieved remarkable success in predicting a wide range of astronomical observations. However, there remain narrow regimes where Einstein’s theory falls short. In these domains, an Information Geometry approach offers new techniques to probe the underlying deficits. In this paper, we demonstrate that the principle of stationary action is equivalent to minimizing the Kullback-Leibler (KL) divergence in stochastic systems. This equivalence allows us to decompose complex, path-dependent distributions into symmetric sub-components. By leveraging the additivity property of divergence, we isolate an “error” distribution that captures the information where General Relativity fails to match observation.
Building on established work that connects maximum entropy with stationary action to recover dynamical laws [1,2,3], our formulation expresses the KL divergence in terms of the path-dependent distribution P[\Gamma], preserving the Euler-Lagrange equations of classical mechanics. Through computational and analytical methods it is shown that \Delta D_{KL} = 0, an entropy maximum, corresponds to an ideal state where theory and observation align. Specifically, one can decompose a complex distribution into sub-distributions whose properties are invariant under specific transformations. These invariances are closely linked to conserved quantities and informational symmetries, akin to Noether's theorem. This model-to-observation deficit is leveraged to analyze the "missing" dynamics of General Relativity by deriving a measure for multi-path dependence. Finally, we derive a novel relationship between entropy, energy, and the expansion scalar:
\gamma \frac{T\,dS + S\,dT}{dE} = \frac{dV}{d\tau}
which offers fresh insight and a more intuitive understanding of black hole relaxation and viscosity.

2. Methods and Established Theory

2.1. Review of the Stochastic Action Principle (SAP)

The Stochastic Action Principle (SAP) provides a framework whereby the principle of stationary action is equivalent to maximized entropy (Wang, 2009). First hypothesized in Jaynes's (1957) Principle of Maximum Entropy [4], this idea has been built up by various researchers over the years. For brevity, we concentrate on the formalisms and experimentation provided by Wang's Stochastic Action Principle derived from Shannon entropy, and Yahalom's (2022) work starting from the Fisher information perspective. To start, path information is encoded as Shannon entropy in relation to the path probability distribution. Specifically, rather than looking at a singular path, we instead look at the average path taken by a large number of particles. For entropy S, path \Gamma, and distribution P:
S(\Gamma) = -\sum_{\Gamma} P[\Gamma] \ln P[\Gamma]
The action A is measured via the action functional:
A[\Gamma] = \int_{t_a}^{t_b} L\big(x(t), \dot{x}(t), t\big)\, dt
We then determine the path distribution over \Gamma by maximizing entropy subject to our knowledge of the system, constrained by the average action:
\langle A \rangle = \sum_{\Gamma} P[\Gamma] A[\Gamma]
The entropy-maximization framework follows from normalization together with fixed \langle A \rangle:
\delta \Big( S - \eta \sum_{\Gamma} P[\Gamma] A[\Gamma] - \alpha \sum_{\Gamma} P[\Gamma] \Big) = 0
The functional J to be extremized, with Lagrange multipliers \alpha, \eta enforcing the constraints, is:
J = S - \alpha \Big( \sum_{\Gamma} P[\Gamma] - 1 \Big) - \eta \Big( \sum_{\Gamma} P[\Gamma] A[\Gamma] - \langle A \rangle \Big)
Taking the variation with respect to P[\Gamma] for a small variation:
\frac{\partial J}{\partial P[\Gamma]} = 0 \;\Rightarrow\; -\big( \ln P[\Gamma] + 1 + \alpha + \eta A[\Gamma] \big) = 0
or
\ln P[\Gamma] = -(1 + \alpha) - \eta A[\Gamma]
and since 1 + \alpha is constant, exponentiating both sides gives:
P[\Gamma] = \frac{1}{Q} e^{-\eta A[\Gamma]}, \quad Q = \sum_{\Gamma} e^{-\eta A[\Gamma]}
This is the final path probability distribution; more precisely, the probability of a path decays exponentially with its action. In discrete notation for paths k from a to b:
p_{ab}(k) = \frac{1}{Q} e^{-\eta A_{ab}(k)}
With extremal condition:
\ln P[\Gamma] + \eta A[\Gamma] = \text{const}
This shows the balance between maximizing entropy and minimizing the action cost term, i.e. the "Maximum Path Entropy Principle". These results were shown in a computational experiment by Wang (2009), in which a Gaussian cloud of dust particles travels along paths that obey this principle. They also find the Shannon entropy:
S_{ab} = -\int_{D} p_{ab} \ln p_{ab}\, dr
which mimics the thermodynamic relation dQ = dU + dW, with U = \bar{E} = \sum_i p_i E_i. Finally, we find:
\delta^2 p_{ab} = -\eta\, p_{ab}\, \delta^2 A, \quad \eta = \frac{1}{2mD} > 0
At the most probable path \delta p_{ab} = 0, so \delta A = 0 and \delta^2 A > 0; in other words, the least-action path is the most probable path [5]. When noise vanishes, the stochastic dynamics simply tend towards the regular dynamics:
S_{ab} = \ln Z + \eta \bar{A}
implying \delta A = 0, our stationary action principle, a perspective supported by maximum entropy methods in statistical physics [6].
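The maximum-path-entropy result above can be checked numerically. The following is a minimal toy sketch (the action values and \eta are illustrative assumptions, not taken from Wang's experiment): it builds the distribution p_k = e^{-\eta A_k}/Q over a handful of discrete paths and verifies both that the least-action path is the most probable one and the identity S = \ln Z + \eta\langle A\rangle.

```python
import numpy as np

def path_distribution(actions, eta):
    """Normalized path probabilities p_k = exp(-eta * A_k) / Q (the result above)."""
    w = np.exp(-eta * np.asarray(actions, dtype=float))
    return w / w.sum()

eta = 2.0
actions = np.array([1.0, 2.5, 0.3, 4.0])   # hypothetical action values A_k
p = path_distribution(actions, eta)

# The least-action path is the most probable path:
assert np.argmax(p) == np.argmin(actions)

# Shannon path entropy S = -sum p ln p satisfies S = ln Z + eta * <A>:
Z = np.exp(-eta * actions).sum()
S = -(p * np.log(p)).sum()
assert np.isclose(S, np.log(Z) + eta * (p * actions).sum())
```

Increasing \eta concentrates the distribution on the least-action path, which is the classical limit discussed in the next subsection.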

2.2. Recovering Classical and Quantum Mechanics from SAP and Fisher Information

Recovering classical mechanics is straightforward. In the limit where the parameter \eta is large (small fluctuations, or low temperature in the statistical-mechanics analogy), the weight e^{-\eta A[\Gamma]} becomes extremely small unless A[\Gamma] is at its minimum. This is analogous to the Feynman path-integral construction, which for classical systems yields the Euler-Lagrange equations and therefore all of classical mechanics.
For Quantum Mechanics this is less straightforward, as the path integral weighs each path by a complex exponential:
P[\Gamma] \propto e^{iA[\Gamma]/\hbar}
For this we must allow a non-vanishing "spread" of paths, by modifying the maximum entropy approach to work with complex probability amplitudes rather than probabilities in an entropic dynamics framework [7]. By incorporating interference between paths, the resulting theory reproduces the quantum dynamics governed by the Schrödinger equation. In effect, the classical principle of least action is replaced by a principle where all paths contribute, weighted by e^{iA[\Gamma]/\hbar}.
An alternative construction has been shown to lead to the quantum Hamilton-Jacobi equation and ultimately the Schrodinger equation reproducing quantum dynamics as shown in [8] and then Dirac Equation [3].

2.3. Extending to Fisher Information Geometry

Now we will briefly review some concepts from Fisher Information Geometry required for our later analysis. Chief among these is that information geometry has a dualistic structure: two conjugate, torsion-free affine connections coupled with the metric. Analogous geometric ideas also emerge in nonlinear sigma models, where metric flows are interpreted via renormalization group methods (see Friedan, 1985 [9]). Dual parallel transport that is metric-compatible yields 2^k types of k-gons, in contrast with Riemannian manifolds, which exhibit opposite holonomies (net rotation of a vector), i.e. Einstein field symmetry/geodesic congruence. These features are not necessarily true for information manifolds when: dual affine connections exist (e-connection, m-connection), connections have torsion, connections are non-metric (Weyl), or non-Abelian gauge fields or spin structures are present (i.e. transporting a spinor).
This paper argues that these unique properties of an information manifold actually allow us to better model spacetime dynamics, particularly via the Kullback-Leibler divergence, which exhibits D_{KL}(P \| Q) \neq D_{KL}(Q \| P), or more precisely, the two are not necessarily equal.

2.4. Kullback-Leibler Divergence

Divergence between two distributions P and Q is:
D_{KL}(P \| Q) = \sum_x P(x) \ln \frac{P(x)}{Q(x)}
This canonical definition and its fundamental properties are standard in information theory (see, e.g., Cover and Thomas, 2006 [10]). For appropriate distributions, the available work of P relative to Q is:
F_{neq} - F_{eq} = kT\, D_{KL}(P \| Q)
In other words, the available work exists relative to equilibrium [11]. Moreover, this phase-space perspective on dissipation, which links thermodynamic work to the asymmetry between forward and reverse path probabilities, is thoroughly discussed in Kawai, Parrondo, and Van den Broeck (2007) [11]. Here D(Q \| P) asks how inefficient it is to use P to model Q. The directional imbalance of the distributions vanishes when:
\Delta D = \big| D(P \| Q) - D(Q \| P) \big| = 0
It should be intuitive that this is a maximal-entropy state: there are more microstates available when D(P \| Q) = D(Q \| P). This is already well established as a consequence of:
D(P \| u) = \sum_{x \in X} P(x) \log \frac{P(x)}{u(x)} = \log |X| - H(P)
This is analogous to Fick's law of normal diffusion (flux proportional to the negative concentration gradient). For any process that distorts distributions P or Q, where P and Q are dependent distributions, there is a tendency to maximize entropy (i.e. moving from high-density to low-density states). This emerges from the balanced log-ratio condition:
\Delta D = \sum_x \big( P(x) + Q(x) \big) \ln \frac{P(x)}{Q(x)} = 0
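The balanced log-ratio condition can be verified directly, since D(P\|Q) - D(Q\|P) = \sum_x (P+Q)\ln(P/Q) is an algebraic identity. A minimal numerical sketch (the random distributions are our own illustrative choice):

```python
import numpy as np

def kl(p, q):
    """Discrete KL divergence D(p || q)."""
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
p = rng.random(5); p /= p.sum()
q = rng.random(5); q /= q.sum()

# Identity: D(P||Q) - D(Q||P) = sum_x (P + Q) ln(P/Q)
delta = kl(p, q) - kl(q, p)
balanced = float(np.sum((p + q) * np.log(p / q)))
assert np.isclose(delta, balanced)

# When P = Q the log ratio is zero everywhere and Delta D = 0 trivially:
assert np.isclose(kl(p, p), 0.0)
```

For distinct but symmetric pairs (e.g. equal-variance Gaussians, treated in the next subsection), the positive and negative log-ratio contributions cancel and \Delta D = 0 even though D(P\|Q) > 0.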

2.5. Computing Divergence in identifiable Distributions

There is a rich body of literature showing that various distributions have this property. For example, Gaussian distributions exhibit rotational invariance, reflecting the conservation of angular momentum (see, e.g., [10]), and location-shift families demonstrate translation invariance, which underpins momentum conservation. Similarly, the Cauchy distribution's reflection and scale invariance, along with its invariance under Möbius transformations, mirrors conformal symmetries as discussed in [9], while nonequilibrium thermodynamic approaches have linked path-dependent entropy production to these symmetry properties [11,12]. Furthermore, the axial symmetry of von Mises-Fisher distributions [13] and the dual affine structure inherent to Bregman divergences [14] further corroborate how such statistical symmetries are intimately connected to fundamental conservation laws. These distributional properties can all be verified by a fairly trivial computational experiment. The distributions which produce \Delta D = 0 all have symmetry properties which can be directly related to invariance (conservation under translation). From a range of distributions analyzed, below are some notable distributions for which \Delta D = 0 (with properties to note):
  • Gaussian distributions (same covariance, frame of reference invariance, related to time symmetry),
  • location shift families (integrated depends on θ 1 θ 2 , related to translation symmetry),
  • Cauchy distributions of the same family (same scale; reflection and scale invariant; some Möbius transformations),
  • Antipodal or Permuted Discrete Distributions (mirror image or group rotation, rotational symmetry),
  • Von Mises-Fisher (preferred direction spin up/down, symmetric under mean direction but not concentration),
  • Bregman divergences (exponential-family distributions are symmetric when the "distance" d is essentially Euclidean, implying no extra gauge or affine structure and that the connection used is the usual Levi-Civita one (see, e.g., Amari and Nagaoka, 2000 [?])).
This should not be too surprising: the distributions on an information manifold can be viewed essentially as field states.
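The "fairly trivial computational experiment" mentioned above can be sketched with the Gaussian case, using the closed-form KL divergence between univariate normals (a standard formula; the parameter values are illustrative): with equal variances the divergence is symmetric, so \Delta D = 0, and breaking the variance match breaks the symmetry.

```python
import numpy as np

def kl_gauss(mu1, s1, mu2, s2):
    """Closed-form KL divergence D(N(mu1, s1^2) || N(mu2, s2^2))."""
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

# Same variance (location-shift family): KL reduces to (mu1-mu2)^2 / (2 s^2),
# which is symmetric in the two means, so Delta D = 0.
assert np.isclose(kl_gauss(0.0, 1.0, 2.0, 1.0), kl_gauss(2.0, 1.0, 0.0, 1.0))

# Different variances: the translation symmetry is broken and Delta D != 0.
assert not np.isclose(kl_gauss(0.0, 1.0, 0.0, 2.0), kl_gauss(0.0, 2.0, 0.0, 1.0))
```

Analogous checks (shifting a Cauchy location parameter, permuting a discrete distribution) confirm the other entries in the list above.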

2.6. Decomposing Observed Distributions

The obvious next step is to examine whether an observed distribution signal, which may appear random or chaotic, can actually be represented by a series of decomposable symmetric distributions. Given this, we look towards methods to linearly separate a mixed distribution into its component distributions. This is fairly straightforward using Fourier and signal-analysis techniques. In particular, we can find the marginal likelihood for a given model M_i to be a fit within a mixed solution from:
p(y \mid M_i) = \int p(y \mid \theta_i, M_i)\, p(\theta_i \mid M_i)\, d\theta_i
The posterior probability for model M i from Bayes Rule:
p(M_i \mid y) = \frac{p(y \mid M_i)\, p(M_i)}{p(y)}
and from Bayes Factor (see, e.g., Kass and Raftery, 1995 [15]):
BF_{ij} = \frac{p(y \mid M_i)}{p(y \mid M_j)}
Alternative checks that a given decomposition is the "correct" one can be made using AIC/BIC, penalized likelihood/MAP estimation, or other techniques. The important thing in this analysis is to establish a high degree of confidence that a given signal is decomposed correctly into its sub-signals; if uncertainty exists between configurations, this must be factored into the analysis. To better constrain the problem, we can determine boundary conditions based on physical law. Confidence is further increased by evaluating the time evolution of the distribution, as each composable signal has a unique time-evolution signature when it is perturbed and drifts back to its idealized state.
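The marginal-likelihood and Bayes-factor machinery above can be sketched with a deliberately simple example (the data, models, and discrete prior are illustrative assumptions, not from the paper): the marginal likelihood is computed by summing the likelihood over a grid prior on the mean, and the Bayes factor compares a free-mean model against a fixed-mean model.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(loc=1.0, scale=1.0, size=50)   # data drawn near mu = 1

def log_marginal_likelihood(y, mus, prior):
    """log p(y|M) = log sum_k p(y|mu_k, M) p(mu_k|M), Gaussian likelihood, sigma = 1."""
    loglik = np.array([np.sum(-0.5 * (y - mu)**2 - 0.5 * np.log(2 * np.pi))
                       for mu in mus])
    logterms = loglik + np.log(np.asarray(prior))
    m = logterms.max()                 # log-sum-exp for numerical stability
    return m + np.log(np.exp(logterms - m).sum())

mus = np.linspace(-3, 3, 61)
prior = np.full(len(mus), 1.0 / len(mus))      # flat discrete prior on mu

log_ml_free = log_marginal_likelihood(y, mus, prior)      # M_i: mu free
log_ml_zero = log_marginal_likelihood(y, [0.0], [1.0])    # M_j: mu fixed at 0

log_bf = log_ml_free - log_ml_zero     # log BF_ij
assert log_bf > 0   # data centered near mu = 1 favor the free-mean model
```

The same comparison applied to candidate mixture decompositions is what selects the most credible set of symmetric sub-distributions.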

2.7. Complex Components (Phase Space Projection)

If we have a distribution that exists in phase space but can only be observed in \mathbb{R}^3, then we inevitably get complex structure and unpredictability. The result is a large variation in the probability "certainty" of the linear separation. In other words, a single smooth distribution may appear in \mathbb{R}^3 as having multiple modes. If we are optimizing for \mathbb{R}^3 space, then D(Q \| P) again will exhibit mode-seeking behavior, where under time evolution the divergence (Q \| P) will re-center itself around one mode which acts pseudo-"orthogonal" to the leftover modes. This mode-seeking behavior is conceptually very similar to the superposition of states. Selecting one distribution automatically causes cascading effects in all remaining distribution patterns, mimicking wave-function collapse. Similarly, we can only capture one mode at a time, so the higher the convergence on one mode, the less information, and therefore the higher the uncertainty, we have about the remaining, unselected modes. For a complex function
f(z) = u(x^\mu) + i\, v(x^\mu)
Projected onto the real axis we define projection:
g(x^\mu) = \operatorname{Re}\big(f(x^\mu)\big), \quad h(x^\mu) = \operatorname{Im}\big(f(x^\mu)\big)
A complex function is simply a sum of exponentials, whose real projection exhibits interference patterns:
f(z) = \sum_n e^{i k_n z}, \quad \operatorname{Re}\big(f(z)\big) = \sum_n \cos(k_n x)
In terms of complex Fourier modes this is simply:
f(z) = \sum_n A_n e^{i k_n z}
An interesting pattern emerges: for any complex distribution, the most likely decomposition of the real distribution must include multiple sub-distributions. When this pattern is renormalized, the distribution not captured by mode optimization is free for further signal decomposition. The reader may note this is very similar to the way the Standard Model has essentially been constructed:
\mathcal{L}_{SM} = \mathcal{L}_{gauge} + \mathcal{L}_{fermion} + \mathcal{L}_{Higgs} + \mathcal{L}_{Yukawa}
where each Lagrangian term represents its own path and action minimization, and therefore the fields can be written in terms of a path distribution. Conceptually, we can think of this process as entropic maximization tending towards phase alignment.

2.8. Path Dependence

Instead of just a simple point distribution P(x), the probability of being in state x, we can consider the probability of observing a particular path, P[\Gamma], where \Gamma can be either discrete steps in a path or a continuous time evolution. The path-dependent divergence, with the path-integral measure \mathcal{D}\Gamma defined as the limit of the product measure over all discretized time steps of the path with appropriate normalization, is:
D_{KL}(P \| Q) = \int \mathcal{D}\Gamma\, P[\Gamma] \ln \frac{P[\Gamma]}{Q[\Gamma]}
This tells us how surprised we would be for a path to deviate from the assumed dynamics Q, rather than the observed dynamics P. Entropy production is related to the ratio of forward and backward path probabilities; the total entropy production is given by the KL divergence [16]. Where \tilde{\Gamma} is the time-reversed trajectory:
\Delta S_{tot}[\Gamma] = \ln \frac{P[\Gamma]}{Q[\tilde{\Gamma}]}
And the average over all trajectories in a stochastic system is:
\langle \Delta S_{tot} \rangle = D_{KL}\big(P[\Gamma] \,\|\, Q[\tilde{\Gamma}]\big)
Similar relations for nonequilibrium steady-state entropy production have been derived by Hatano and Sasa (2001) [12], further reinforcing the connection between path-dependent dynamics and irreversible work. This average KL divergence quantifies the total entropy production for traveling a path, giving us the irreversibility of a process or dynamic (cf. Seifert, 2012 [17]). The "reverse" divergence D(Q[\Gamma] \| P[\Gamma]) quantifies the error in the observed model when trying to capture idealized measures; this is akin to asking what the system inefficiency or signal loss is.
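The trajectory-averaged entropy production above can be made concrete with a toy system of our own choosing (a biased random walk, not a system from the cited works): the forward path probability is p^{n_+}(1-p)^{n_-}, the time-reversed path swaps the step counts, and the average of \ln(P[\Gamma]/P[\tilde{\Gamma}]) equals the KL divergence between forward and reversed path ensembles, which is non-negative and vanishes only for the unbiased (reversible) walk.

```python
import numpy as np
from math import comb

p, N = 0.7, 10                      # step-right probability, number of steps
n_plus = np.arange(N + 1)           # possible counts of +1 steps
counts = np.array([comb(N, k) for k in n_plus])

# Forward ensemble and the ensemble of time-reversed paths:
P_fwd = counts * p**n_plus * (1 - p)**(N - n_plus)
P_rev = counts * p**(N - n_plus) * (1 - p)**n_plus

# <Delta S_tot> = D_KL(forward || reversed); binomial counts cancel in the ratio.
delta_S = float(np.sum(P_fwd * np.log(P_fwd / P_rev)))
analytic = N * (2 * p - 1) * np.log(p / (1 - p))   # closed form for this walk
assert delta_S >= 0
assert np.isclose(delta_S, analytic)
```

Setting p = 0.5 makes forward and reversed ensembles identical and drives the entropy production to zero, the reversible limit.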

2.9. Identifying Multi-Path Dependence

The dual affine structure of information manifolds provides the perfect grounding for evaluating processes with multiple separable path-dependent processes. Examples are easy to find: an object placed in front of a fluid flow will experience path dependency from the geometry of the space (assuming constant curvature), while the displacement of fluid flow due to relative differences in inertia results in fluid being forced around the object in turbulence. This is consistent with the thermodynamic metrics approach [18]. Here, not only is the object affected by the flow geometry, but the geometry itself is affected by the presence of the object. If we remove the object, its impact can still be seen until equilibrium is re-established and normal, unimpeded flow returns. The total divergence can be separated by the Leibniz rule, or additivity of KL divergence, so we can decompose these two separate but related path-dependent processes into:
D_{KL}(P \| Q) = D_{KL}(P \| P_1) + D_{KL}(P_1 \| Q) + \sum_x \big[ P(x) - P_1(x) \big] \ln \frac{P_1(x)}{Q(x)}
The cross term vanishes if P_1(x)/Q(x) is constant (since \sum_x [P(x) - P_1(x)] = 0 by normalization) or if P(x) = P_1(x) [10], which is the case if an observed distribution can be perfectly modeled by two separate path models.
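The decomposition above, including its cross term, is a finite-sum identity and can be verified numerically. A minimal sketch with arbitrary (randomly generated, illustrative) distributions:

```python
import numpy as np

def kl(p, q):
    """Discrete KL divergence D(p || q)."""
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(2)
P  = rng.random(6); P  /= P.sum()    # observed distribution
P1 = rng.random(6); P1 /= P1.sum()   # intermediate distribution
Q  = rng.random(6); Q  /= Q.sum()    # baseline / theoretical distribution

# D(P||Q) = D(P||P1) + D(P1||Q) + sum_x [P - P1] ln(P1/Q)
cross = float(np.sum((P - P1) * np.log(P1 / Q)))
assert np.isclose(kl(P, Q), kl(P, P1) + kl(P1, Q) + cross)

# The cross term vanishes when P = P1, leaving the purely additive form:
cross_zero = float(np.sum((P1 - P1) * np.log(P1 / Q)))
assert np.isclose(cross_zero, 0.0)
```

Choosing P_1 so that P_1/Q is constant over the support likewise kills the cross term, since the bracketed differences sum to zero.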
If a composite distribution can be viewed as a mixture or sum of sub-distributions (A + B = C in some sense), then its total divergence with respect to a baseline Q generally decomposes into the sum of the divergences of the sub-distributions, plus additional cross terms. Those cross terms vanish under specific conditions. In this treatment we assume symmetry in the sense that the density ratios relative to Q are constant (or nearly so) over their support. Under these conditions, the cross terms in the full decomposition vanish, justifying the simplified additive form, with \Delta D_{12} = D_1 - D_2 relating the baseline distribution Q, the final distribution P, and an intermediate distribution P_1. As such, for a distribution (P \| Q) comprised only of symmetric distributions, with D_1 = (P_1 \| Q) and D_2 = (P \| P_1) respectively:
\Delta D_{KL}(P, Q) = \Delta D_{KL}(P_1, Q) + \Delta D_{KL}(P, P_1), \quad \Delta D_1 = -\Delta D_2
Given that the maximum entropy is found at \Delta D_{KL} = 0, if (P, Q) is in a state of maximum entropy, then its two sub-components must either be reciprocals or equal (i.e., symmetric themselves). If they are independent paths, they should tend towards individual net divergence of zero, but small perturbations are essentially guaranteed to be antisymmetric in a sense. This should make sense: the differences between the two composite distributions produce the divergence of the underlying total distribution.
This is very powerful, without requiring any information about a missing path process, we can say that we can measure the energy gap between a theoretical model, and our observed distribution. From this we can gain insight or even potentially recover the path information which will give rise to the underlying mechanisms that our theoretical models do not currently capture.

2.10. Quantum Reference Frame Invariance

The reader might pause here because D_{KL} can be related to entropy, and in this time evolution we have \Delta D_1 = -\Delta D_2; this seems unphysical, as we should not be seeing a negative path derivative of entropy. However, it is already established that entanglement and coherence of subsystems is invariant under ideal Quantum Reference Frame transformations (Cepollaro et al., 2024) [19]:
E_A + C_A = E_B + C_B, \quad \Delta E + \Delta C = 0
Additionally, Vopson and Lepadatu (2022) found what they coined "the Second Law of Infodynamics" [20]: under repeated observation and experimentation, across vastly different systems, they observed an informational entropy S_{inf} with exactly this property, in particular that S_{total} = S_{trad} + S_{info}, or written differently, \Delta S_{trad} = -\Delta S_{inf}. These two results, one would argue, exactly meet the condition set for \Delta D_1 = -\Delta D_2 as an entropic maximum.
Cepollaro et al.'s Theorem 1 demonstrates that for pure bipartite quantum states, the sum of a particular entanglement measure and its corresponding coherence measure remains invariant under ideal quantum reference frame (QRF) transformations. Although this result is derived for a simple scenario using specific measures, namely pure bipartite quantum states, its core message is that a unitary QRF change merely redistributes quantum resources without altering the overall quantum uncertainty of the system.
In our work, we invoke this theorem solely as an illustrative analogy rather than as a foundational assumption. We explicitly acknowledge its limitations, namely that it applies only to pure bipartite states and may not hold for more complex or mixed systems. Nevertheless, the underlying principle that total "quantumness" (or quantum uncertainty) is conserved under a change of reference frame is well grounded in the unitarity of quantum mechanics. This trade-off has been noticed in other areas of physics, such as the "second law of infodynamics", whereby a system's informational entropy decreases as its physical entropy increases. There, too, an information-centric conservation law arises, in which the total informational content of the system is preserved and transformed in a way that maintains this relationship.
Furthermore, as discussed in our Appendix A, we extend this intuition by considering the trade-off between local coherence and system–environment entanglement in dissipative settings. In this extended framework, even if local measures (such as coherence) appear to decay, the overall quantum uncertainty remains invariant when accounting for all parts of the system. While this generalization is speculative, it provides a plausible conceptual bridge from the restrictive conditions of Theorem 1 to the broader phenomena observed in our study.
In summary, although Cepollaro et al.'s result is limited in scope, its central insight, that total quantum resources are preserved under ideal QRF transformations, supports our view that decaying informational entropy does not imply a loss of overall quantumness, but rather a redistribution that is frame-independent.

3. Results

It should be clear that D_{KL}(P \| Q) = 0 occurs when our theoretical model matches observation, i.e. when an ideal physical model is known. D_{KL}(P_1 \| Q) is how much the ideal theoretical model is mismatched by our current model. D_{KL}(P \| P_1) tells us how much work is required to bring a universe following our best current model to match our observed universe. In the idealized case D_{KL}(P \| Q) = 0, it must follow that D(P_1 \| Q) = D(P \| P_1). This can be described as a decomposition into two separate but additive types of path dependence (e.g. geometric curvature and drag), but ultimately, knowing the specific underlying mechanism is irrelevant: the optimized feature is an energy deficit between the distribution expected from our models and the observed universe. Lastly, it is well known that Ricci flow follows as a Lie derivative with respect to a perturbed state. The linearized operator governing the evolution of h(t) has only smooth decaying modes, consistent with the work below. We attempt to reconcile this observational deficit by modeling it as a function of Ricci flow as a spacetime reparative process ensuring a smooth manifold.

3.1. Ricci Flow as Repair

The geometric evolution for a Riemannian metric, the unnormalized flow is given by:
\frac{\partial}{\partial t} g_{ij} = -2 \operatorname{Ric}_{ij}
We can examine the rate of "repair" compared to average flow; this is examined later as a function of temperature and density. Setting that aside for now, we know the DeTurck trick, whereby the weakly parabolic system (due to the freedom to change coordinates) gains a diffeomorphism term in the flow, leading to a strictly parabolic system:
\frac{\partial}{\partial t} g_{ij} = -2 \operatorname{Ric}_{ij} + \mathcal{L}_V g_{ij}
Here, with the Lie derivative taken with respect to a perturbed field V, the linearized operator governing the evolution of h(t) has only decaying modes. This ensures that any perturbation dies out over time at a smooth, exponential rate relative to neighbouring states [?]. In many cases the linearized analysis shows that each mode of the perturbation decays independently and exponentially as well. In the language of this framework, if h(t) can be decomposed into eigenmodes of the linearized operator, then each mode decays like:
e^{-\lambda t}
with λ > 0 as t approaches infinity and the metric returns (up to a diffeomorphism) to g 0 . It is important to note however that there may exist upper and lower bounds of decay rate. If this is the case this signature should relate to a rate of information transfer (perhaps quantum speed limit) and be extracted from examining signatures upon decomposition.
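The modewise decay claim can be illustrated with a toy linear model (the eigenvalues and initial amplitudes are illustrative assumptions, standing in for the spectrum of the linearized operator): each eigenmode decays as e^{-\lambda_k t}, the total perturbation norm decreases monotonically, and the slowest mode sets the late-time decay rate, consistent with the bounded-decay-rate remark above.

```python
import numpy as np

lambdas = np.array([0.5, 1.3, 2.0])   # hypothetical decay rates, all > 0
h0 = np.array([1.0, -0.4, 0.7])       # initial eigenmode amplitudes of h

def h(t):
    """Perturbation in the eigenbasis: each mode decays independently."""
    return h0 * np.exp(-lambdas * t)

t = np.linspace(0, 10, 101)
norms = np.linalg.norm(h(t[:, None]), axis=1)

# The perturbation norm decays monotonically towards g0 (up to diffeomorphism):
assert np.all(np.diff(norms) < 0)

# At late times the slowest mode (lambda = 0.5) dominates the decay rate:
late_rate = np.log(norms[-1] / norms[-2]) / (t[-1] - t[-2])
assert np.isclose(late_rate, -0.5, atol=0.01)
```

The gap between the smallest and largest \lambda is the kind of upper/lower decay-rate bound whose signature the text suggests extracting from decomposed observations.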

3.2. Black Holes

In order to investigate the "missing dynamics" between our observations and ideal models, we first consider small perturbations of a background metric, whose relaxation is governed by a linearized Ricci flow with linearized Ricci operator \mathcal{L}:
\frac{\partial h_{ij}}{\partial t} = \mathcal{L}\, h_{ij}
This gives a natural exponential decay such that:
h(t) \sim e^{-\lambda t}
Here "Ricci repair" refers to the natural relaxation process of spacetime geometry under Ricci flow, whereby perturbations do not immediately revert to the idealized state but instead decay exponentially over time. We know that black holes can be described in three separate ways consistent with the four laws of black hole mechanics [21]: first, in terms of their event horizon; second, in terms of the singularity; third, in terms of the gradient curvature of spacetime between regions. This paper argues that examining a black hole from only one of these frameworks is an insufficient description. As such, we will relate spacetime compression of states to the hypothesized upper rate limit of information transfer (quantum speed limit) as a function of an upper boundary on decay. By doing so we obtain an emergent boundary related to the Bousso or Bekenstein bound for a given observer's reference frame. Specifically, for an outside observer, the energy density approaches infinity at the singularity, but with maximum-entropy boundaries across a finite surface area dependent on the radius of the event horizon. Each of these entropic shells has its own energy distribution as given by the Helmholtz free energy. As the radius of a black hole is directly proportional to its mass, which is in turn proportional to the number of maximally defined states (atomic states), we can understand why smaller black holes evaporate faster if entropy flow dominates energy dissipation: when the observation radius r is comparable to R_s, higher-order multipoles lead to fields dissipating far faster, whereas for r \gg R_s the hole appears as a point source and the inverse-square law dominates.
As such, we have two reliable boundary conditions for entropy, particularly as it relates to curvature: the maximum entropy state, and the curvature state that blocks photon escape (i.e. the Bousso and Bekenstein bounds at the event horizon). Assuming a stable system, all free energy approaching the event horizon is drawn into the center of the black hole rapidly until maximum entropy density is reached (finite or infinite); if the slope of the gradient of curvature is continuous between these two points, the rate of black hole decay should directly relate to the differences in energy states between curvature steps. This is consistent with our expectation that Ricci flow is a potential description of geometric repair as exponential decay. For a given system we have:
S(n) = \frac{1}{C}\, S(n+1)
which can be expressed by the curve:
S(x) = S_0\, e^{x \ln C}
This "Ricci repair" mechanism, whereby a small deviation in the metric decays exponentially over a characteristic time scale, mirrors relaxation processes in thermodynamics and provides a natural explanation for the gradual repair of spacetime geometry. Below we investigate whether this is an appropriate explanation for where observation does not meet our current theories.
However, because the bounding values of curvature are predefined (the escape curvature of a photon and the maximum relative curvature between neighbouring states), the only varying parameter is the radius of the black hole event horizon. We can therefore write the change in field decay as moderated by a scaling factor k (itself a function of r):
C_k = k\, C
where C represents the maximum possible field decay. Again, we can use the von Neumann entropy S of a quantum state \rho, given by:
S(\rho) = -\operatorname{Tr}(\rho \log \rho)
If we consider two states then the amount of entropic work performed is related to the difference in their entropies and respective energies.
F(\rho) = E(\rho) - T\, S(\rho)
What is interesting here is the equivalence of T S(\rho) to energy, indicating that, given the relatively homogeneous temperature within a stable black hole, the change of temperature relative to energy scales as dT/dE \propto -1/E^2; therefore entropy changes are equivalent to changes in energy at a first-order approximation. This should not be too surprising, as a commonly defined macroscopic description of temperature, comparing two heat-transferring states at fixed (V, N), is:
T := \left( \frac{\partial U}{\partial S} \right)_{V,N}
In this, there are some specific consequences particularly as it relates to the thermodynamic limit for considering temperature in an information context. This will be examined in a later paper, so for now we can focus on the Work extracted which is related to the difference in free energies:
W = F(\rho_1) - F(\rho_2)
This can instead be looked at directly from the perspective that a change in entropy is a change in energy state. Specifically, decreasing the entropy increases free energy, which can then propagate away. Along the event horizon, very high entropy inhibits the release of photons, as this entropy dissipates away. In this case, even if E ( ρ ) within the event horizon (outside singularity) falls to zero, i.e. in the state where all mass/energy falls towards the center, there still exists a flow energy resulting from the flow of entropy. This should make sense as the entropy flowing away from the singularity is equal to the entropy flowing away from the event horizon. In this, the flow of entropy is what makes black holes evaporate as vacuum entropy density is significantly lower. This requires its own gradient resulting in the fluid-like nature of the surface of a black hole. We can look at the rate of volumetric contraction of the black hole as it relates to the loss rate. The ratio of the rate of entropy loss S as it relates to field decay with the respective volume change defined by the total entropy/energy within the sphere. Under constant temperature:
$\frac{T\frac{dS}{dt} + S\frac{dT}{dt}}{\frac{dV}{dt}} = \frac{T\frac{dS}{dt}}{\frac{dV}{dt}} = T\frac{dS}{dV}$
and this from the above construction is simply:
$T\frac{dS}{dV} = \frac{3Cr}{4\pi r^3} = \frac{3C}{4\pi r^2}$
Black hole thermodynamics was first formulated by Bekenstein (1973) [22] and quantum evaporation by Hawking (1975) [23]. The Schwarzschild radius is given by:
$R_s = \frac{2GM}{c^2}$
Which we can rearrange to convert back into mass:
$\frac{1}{R_s} = \frac{c^2}{2GM}$
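As a quick numerical sanity check (rounded SI constants; not part of the derivation), the Schwarzschild radius for one solar mass evaluates to roughly 3 km:

```python
# Numeric check of R_s = 2GM/c^2. Constants are rounded CODATA/IAU values;
# this is an illustrative sketch, not a result of the derivation above.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    """Horizon radius R_s = 2GM/c^2 in metres."""
    return 2.0 * G * mass_kg / c**2

r_s = schwarzschild_radius(M_sun)   # ~2.95e3 m for one solar mass
```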
This gives us the rate of entropy change as a ratio of total volume change (curvature change):
$T\frac{dS}{dV} = \frac{3Cc^4}{16\pi G^2 M^2}$
The total entropy of a black hole is given by the Bekenstein-Hawking formula and is proportional to the area of the event horizon:
$S = \frac{k_B c^3 A}{4\hbar G}$
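Numerically, with rounded SI constants (a sketch only, taking the horizon area as $A = 4\pi R_s^2$), the horizon entropy of a solar-mass black hole is enormous:

```python
import math

# Bekenstein-Hawking entropy S = k_B c^3 A / (4 hbar G) for a Schwarzschild
# black hole; constants are rounded SI values (illustrative sketch only).
G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23
M_sun = 1.989e30

def bh_entropy(mass_kg: float) -> float:
    r_s = 2.0 * G * mass_kg / c**2        # Schwarzschild radius
    area = 4.0 * math.pi * r_s**2         # horizon area A
    return k_B * c**3 * area / (4.0 * hbar * G)

S_sun = bh_entropy(M_sun)   # ~1.4e54 J/K for one solar mass
```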
For which we find the mass loss rate derived from the power equation:
$\frac{dM}{dt} \propto \frac{c^4}{G^2 M^2}$
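As an aside (using the standard Hawking numerical coefficient, which the text above keeps only as a proportionality), integrating this rate gives the familiar cubic lifetime scaling:

```python
import math

# Integrating dM/dt = -hbar c^4 / (15360 pi G^2 M^2) (standard Hawking rate;
# the text above keeps only the proportionality) gives the lifetime
# t_evap = 5120 pi G^2 M^3 / (hbar c^4), i.e. lifetime scales as M^3.
G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34

def evaporation_time(mass_kg: float) -> float:
    return 5120.0 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

# Doubling the mass multiplies the lifetime by 2^3 = 8:
ratio = evaporation_time(2.0) / evaporation_time(1.0)   # -> 8.0
```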
And we can factor out the mass loss:
$T\frac{dS}{dV} = \frac{dM}{dt}\cdot\frac{3C}{16\pi G}$
Since the rate of entropy loss is the same as the rate of mass loss (energy/mass equivalence, see Appendix A) we now work towards a cosmological constant for the maximum field decay rate between two states.
$C = \frac{16\pi G}{3}$
where $C_3 = V_3(r)/r^3 = 4\pi/3$ is the coefficient for a ball in 3 spatial dimensions. The generalized equation for the coefficient of an n-dimensional sphere (n-ball) with radius r is:
$C_n = \frac{V_n(r)}{r^n} = \frac{\pi^{n/2}}{\Gamma\left(\frac{n}{2}+1\right)}$
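This coefficient formula is easy to check numerically via the gamma function; the sketch below recovers $C_3 = 4\pi/3$ and $C_2 = \pi$:

```python
import math

# Unit-ball volume coefficient C_n = pi^(n/2) / Gamma(n/2 + 1), as in the
# n-ball formula above; C_3 should recover 4*pi/3 and C_2 the disc area pi.
def ball_coefficient(n: int) -> float:
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

C3 = ball_coefficient(3)            # 4*pi/3 ~ 4.18879
C2 = ball_coefficient(2)            # pi (area of the unit disc)
```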
So, for a spacetime with dimensions D = (1 + 3), we have:
$C = D\, C_3\, G$
This can be factored back into the exponential decay equation to get the mass decay rate due to entropy dilution.
  • Similarly, we can replace mass conversion here with energy to get:
$T\frac{dS}{dV} = \frac{dE_t}{dt}, \quad \text{bounded by } C_u\ \mathrm{m^3/s}$
where adjusting C to relativistic SI units:
$C_u = \frac{C}{c^2} = \frac{D\, C_3\, G}{c^2}$
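Evaluating this bound with rounded SI constants (a numerical sketch only; no claim is checked here beyond the arithmetic):

```python
import math

# Numeric value of C = 16*pi*G/3 and the adjusted bound C_u = C/c^2 in SI
# units; constants are rounded values (illustrative only).
G, c = 6.674e-11, 2.998e8
C = 16.0 * math.pi * G / 3.0        # ~1.12e-9
C_u = C / c**2                      # ~1.24e-26
```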
This is expressed in terms of SI units ($\mathrm{m^3/s}$). Specifically, it appears to be a maximum expansion rate on the order of one cubic Planck length per Planck time, which is a startling, if not intuitive, result. Additionally, evaluating time as proper time (τ), we can rearrange the equation to the form:
$\frac{dt}{d\tau}\, T\frac{dS}{dE_t} = \gamma\, T\frac{dS}{dE_t} = \frac{dV}{d\tau}$
The change of the expansion scalar is a measure of relative relaxation over time with respect to the change of entropy relative to the energy of the system; more concretely:
$\frac{dV}{d\tau} = \theta V$
with $C_e$ as the maximum energy relaxation relative to the observer, i.e.
  • Formulated relativistically with the Lorentz factor:
$\gamma_i\, T\frac{dS}{dE} = \frac{dV(\gamma_{i,j,k})}{d\tau}$
or more simply in a truly covariant solution:
$T\frac{dS}{dE} = \frac{dV}{d\tau}$
What we get from integration as a result is:
$\frac{TS}{E_t} = \tilde{V}(\tau) = \frac{V(\tau)}{V_0} = e^{\int_0^\tau \theta(\tau')\, d\tau'}$
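The integrated relation can be illustrated numerically; the sketch below (with a hypothetical θ profile) evaluates $V(\tau)/V_0$ as the exponential of a quadrature of θ:

```python
import math

# V(tau)/V0 = exp( integral of theta over [0, tau] ), evaluated by a simple
# trapezoid rule. With a constant theta this reproduces e^(theta*tau) exactly.
def volume_ratio(theta, tau, steps=10000):
    """exp of the trapezoid-rule integral of theta(t) over [0, tau]."""
    dt = tau / steps
    ts = [i * dt for i in range(steps + 1)]
    integral = sum((theta(ts[i]) + theta(ts[i + 1])) * dt / 2 for i in range(steps))
    return math.exp(integral)

# Hypothetical constant expansion scalar theta = 0.5 over tau = 2:
r = volume_ratio(lambda t: 0.5, 2.0)    # expect e^(0.5*2) = e
```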
One might note, particularly when free propagating energy can be observed, that:
$\frac{TS}{E_t} = 1 - \frac{E_f}{E_t}$
which relates to the neighboring relaxation rate that is bounded by C μ
$S(x) = S_0\, e^{x \ln C_e}$
It is important to note however that this Lorentz factor does cancel out, given that the scaling factor θ is measured with its own Lorentz factor, giving us a truly covariant solution. This is interesting because S and E can both be described in terms of state functions which can be expanded upon from the completed work in Section 2.3:
$\frac{S}{E_t} = \frac{-\mathrm{Tr}(\rho \log \rho)}{\mathrm{Tr}(\rho \hat{H})}$
And if ρ is thermal, then $\rho = e^{-\beta H}/Z$ and $\lambda_i = e^{-\beta E_i}/Z$, where $E_i$ is the energy of the i-th eigenstate of H, and:
$T = \frac{1}{k_B \beta}$
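These thermal-state quantities can be made concrete with a toy spectrum (illustrative eigenvalues, units with $k_B = 1$; not tied to any black hole):

```python
import math

# Thermal (Gibbs) populations lambda_i = exp(-beta*E_i)/Z for a diagonal H,
# with von Neumann entropy S = -sum(lambda_i * log(lambda_i)) and mean
# energy E = sum(lambda_i * E_i). Illustrative toy spectrum, k_B = 1.
beta = 2.0
energies = [0.0, 1.0, 2.0]                      # eigenvalues E_i of H

weights = [math.exp(-beta * e) for e in energies]
Z = sum(weights)                                # partition function
lam = [w / Z for w in weights]                  # thermal populations

S = -sum(l * math.log(l) for l in lam)          # entropy (nats)
E = sum(l * e for l, e in zip(lam, energies))   # mean energy Tr(rho H)
T = 1.0 / beta                                  # temperature (k_B = 1)
```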
Or in other words, the change in symmetry compression is relative to the observer. Specifically, we can see dE as a change to the states’ eigenvalues and dS as a change to the states’ distribution, particularly as it relates to normalization. This implies that a component of curvature relaxation directly relates to the emergence of symmetries in the state function path. Again, it is obvious how this relates back to signals processing theory and in particular data modulation.
This should make intuitive sense: we can view entropy as a measure of the concentration of energy on a manifold. Just like weight on an elastic surface, deformity is a function of the concentration of weight, not simply the total amount of weight on the surface. Evenly distributed weights may not break the threshold where surface tension is exceeded, so no deformity exists; conversely, if all the weight were concentrated at a single point, the surface would likely tear. Temperature is a measure of the mean velocity of these particles, which explains why rapid translations across a surface result in a storing of energy if the geometry is not instantaneously repaired back to its idealized state along the particles' path. It also gives a somewhat intuitive description of why particles with mass cannot pass the speed of light: their forward deformation of spacetime would require spacetime to deform faster than the maximum limit. By adding momentum (via the Lorentz factor) we are pre-deforming spacetime in front of us, through dissipation, to ensure this maximum is not reached locally. Massless particles do not exceed the tensile capacity of spacetime, so they are free to transmit at maximum speed.

3.3. Black Hole Viscosity

Finally, the slope of the entropy flow, determined by the radius of the black hole, can be related to fluid dynamics, where back pressure is directly related to flow in a classical system and is characterized by the slope of the fluid height profile:
$P(x) = \rho g (h_0 - S x)$
In this sense, it takes more work to change the flow of a system with high back pressure. This relates directly to viscosity along the event horizon. A slower entropy decay due to high back pressure would directly appear as a fluid with high viscosity. This directly matches the observed phenomena of fluid-like black hole boundaries. Further calculation is required to get a purely quantum entropy description for this back pressure.

3.4. Expansion Scalar Examination

The expansion scalar θ is used to denote fluid-like flow for an observer. It is defined as the covariant divergence of the four-velocity field:
$\theta = \nabla_\mu u^\mu$
where $\nabla_\mu$ is the covariant derivative associated with the spacetime metric $g_{\mu\nu}$ and $u^\mu$ is the four-velocity field of observers or fluid elements. This expansion scalar is interpreted as expanding or diverging worldlines when $\theta > 0$, contracting or convergent worldlines when $\theta < 0$, and static when $\theta = 0$. Alternatively, we find:
$\frac{d\theta}{d\tau} = u^\mu \nabla_\mu \theta = -\frac{1}{3}\theta^2 - \sigma_{\mu\nu}\sigma^{\mu\nu} + \omega_{\mu\nu}\omega^{\mu\nu} - R_{\mu\nu}u^\mu u^\nu$
This scalar is regularly defined via an infinitesimal volume element dV moving with the flow, measured in the observers' proper time (see Raychaudhuri, 1955 [24]; Hawking and Ellis, 1973 [25]). The dramatically simplified version for the reader is:
$\frac{d\theta}{d\tau} = -\frac{1}{3}\theta^2 + \Lambda$
In other words, θ is the fractional rate of volume change per unit of proper time. For homogeneous and isotropic universes, such as those described by the Friedmann-Lemaître-Robertson-Walker (FLRW) metric, the expansion scalar is related to the Hubble parameter:
$\theta = 3H(t)$
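For reference, converting a rounded present-day Hubble value (an illustrative $H_0 \approx 70$ km/s/Mpc, not a result of this paper) gives the corresponding expansion scalar:

```python
# Converting a Hubble parameter to the expansion scalar theta = 3*H(t).
# H0 ~ 70 km/s/Mpc is a rounded observational value (illustrative only).
H0_km_s_Mpc = 70.0
Mpc_in_km = 3.086e19                  # one megaparsec in kilometres
H0 = H0_km_s_Mpc / Mpc_in_km          # in s^-1, ~2.27e-18
theta_today = 3.0 * H0                # ~6.8e-18 s^-1
```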
The behavior of θ is crucial in the Penrose-Hawking singularity theorems, with highly negative values leading to singularities; this highlights that the derived result acts in congruence with the above formulation. Further, to examine tidal forces and their impact on the expansion, shear, and rotation of a congruence, we can examine the evolution of the expansion scalar along the flow as governed by the Raychaudhuri equation:
$\frac{d\theta}{d\tau} = -\frac{1}{3}\theta^2 - \sigma_{\mu\nu}\sigma^{\mu\nu} + \omega_{\mu\nu}\omega^{\mu\nu} - R_{\mu\nu}u^\mu u^\nu + \nabla_\mu a^\mu$
Implying a deep thermodynamic nature of gravity [26]. The above formulation should give ample evidence of the relationship between entropy, energy, and the expansion scalar. In particular, it should become apparent to the reader how the expansion scalar is directly related to dark energy, and how the cosmological constant Λ contributes to the Ricci tensor $R_{\mu\nu}$ such that:
$R_{\mu\nu} = 4\pi G\, g_{\mu\nu}(\rho - 3p) + \Lambda g_{\mu\nu}$
From the first Friedman Equation (energy density equation) we can also see:
$H(t)^2 = \frac{8\pi G}{3}\rho(t) - \frac{k}{a(t)^2} + \frac{\Lambda}{3}$
and the second Friedmann equation (acceleration equation), in which the Λ term drives the observed accelerated expansion:
$\frac{\ddot{a}(t)}{a(t)} = -\frac{4\pi G}{3}\left(\rho(t) + 3p(t)\right) + \frac{\Lambda}{3}$
It is also important to note that dark energy has equation of state $w = -1$, meaning it has negative pressure; pressure enters the dynamics with three times the weight of energy density (hence the $3p(t)$ term).
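The sign structure can be checked directly; a minimal sketch (rounded G, an arbitrary illustrative density) evaluates the acceleration equation for matter ($w = 0$) versus dark energy ($w = -1$):

```python
import math

# Acceleration equation a''/a = -(4 pi G/3)(rho + 3p) + Lambda/3. With the
# dark-energy equation of state p = w*rho and w = -1, (rho + 3p) = -2*rho,
# so even with Lambda = 0 such a fluid accelerates the expansion.
G = 6.674e-11

def acceleration_term(rho, w, Lam=0.0):
    p = w * rho                 # equation of state p = w * rho
    return -4.0 * math.pi * G / 3.0 * (rho + 3.0 * p) + Lam / 3.0

dust = acceleration_term(rho=1e-26, w=0.0)    # matter: decelerates (< 0)
dark = acceleration_term(rho=1e-26, w=-1.0)   # w = -1: accelerates (> 0)
```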
If the overall system's expansion scalar approaches a constant value, this reflects a state of continuous accelerated expansion, previously modeled as de Sitter expansion.
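A short numerical sketch (illustrative Λ, arbitrary units) of the simplified evolution $d\theta/d\tau = -\theta^2/3 + \Lambda$ shows θ relaxing to exactly such a constant de Sitter value, $\sqrt{3\Lambda}$:

```python
import math

# Euler integration of d(theta)/d(tau) = -theta^2/3 + Lambda: theta relaxes
# to the de Sitter fixed point sqrt(3*Lambda). Lambda and theta0 are
# arbitrary illustrative values in natural units.
def evolve_theta(theta0, Lam, tau_max, dt=1e-3):
    theta, t = theta0, 0.0
    while t < tau_max:
        theta += (-theta**2 / 3.0 + Lam) * dt
        t += dt
    return theta

Lam = 0.3
theta_inf = evolve_theta(theta0=5.0, Lam=Lam, tau_max=50.0)
fixed_point = math.sqrt(3.0 * Lam)     # ~0.9487, the de Sitter value
```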

3.5. Special Conditions

This derivation results in some obvious special conditions, primarily when a system has constant entropy and energy i.e. conservation of energy and entropy. In this case, we get:
$d\tau \propto dV, \quad \text{where } \theta \text{ is bounded by } C_e$
This should make intuitive sense: as time increases, the total volume enclosed in the light cone of all interactions observed within it (system evolution) increases as well. Additionally, we have another special case that is potentially worth investigating:
$dE \propto dS\, d\tau, \quad \text{where } \theta \text{ is bounded by } C_e$
Here a local change of entropy (a perturbation of a field) occurs with discrete time steps. This describes a system of two overlapping light cones that are contained in another larger "older" light cone. As time increases, the entropy of the system increases; as such, the change of energy contained within curvature also increases. Spacetime contraction requires some quanta of energy; for propagating photons this would look like a redshift as wavelength increases. It is important to note here that this phenomenon would relate directly to the gradient of neighboring states' entropy, as a photon's energy would be transferred into the vacuum state if the two states' normalization is not congruent (i.e. the scalar deformation), consistent with a unitary interaction. On a universal scale, this should on average appear as a linear relationship with distance traveled, with only local deformations resulting from regions with highly similar entropic states or regions where photons gain energy from vacuum states with higher potential. Unlike other "tired light" mechanisms, since this process is based on the relative energy or entropy of the neighboring states that a particle transits through, rather than the particle's energy itself, it should not be wavelength-dependent. As such, this process should not exhibit blurring, which is a categorical problem with other "light scattering" or "tired light" models.

3.6. Temperature Regulation and Dark Matter

Another consequence of this formulation is that temperature evaluated between two points, $T = U(x)/S(x)$, is a macro, statistical description that is valid under certain assumptions. T, the average energy transmission through spacetime (i.e. the average momentum of particles), may mediate the flow of the expansion scalar.
In systems with little or no relative free energy, using von Neumann entropy, one can determine from the eigenvalue populations in the density matrix that:
$T = \frac{1}{k_B \beta}, \qquad \frac{dt}{d\tau}\, T\frac{dS}{dE} = \frac{d\theta}{d\tau} = \frac{dt}{d\tau}\, T\, \frac{d\left(-\mathrm{Tr}(\rho \log \rho)\right)}{d\, \mathrm{Tr}(\rho \hat{H})}$
and:
$\rho = \sum_{i=1}^{n} \lambda_i\, |v_i\rangle\langle v_i|$
so if ρ is thermal, $\rho = e^{-\beta H}/Z$ and $\lambda_i = e^{-\beta E_i}/Z$, where $E_i$ is the energy of the i-th eigenstate of H.
  • In other words, a higher average interaction frequency corresponds to a higher rate of unitary interactions, which affects the rate at which a system's entropy is updated. Another way of thinking about this is as the degree to which a system's internal movement resists curvature relaxation. Intuitively this makes sense: if we repetitively block flow in a river downstream, the geometry of the flow retains deformation proportional to the frequency of blockage and the rate of flow repair. Looking back to Sections 2.2 and 2.3, a series of unitary operations occurs when path evaluation (curvature evaluation) occurs, in other words, when a particle transits through a region. Any transiting particle, such as a photon emitted from a source, would cause interactions resulting in the updating of the state's normalization, and therefore the entropy and the curvature of spacetime. Conversely, in very low-temperature settings, fewer and lower-energy particles transit through the medium, resulting in less impactful changes to curvature. This may give us insight into why dark matter halos are predominantly viewed at the edge of galaxies (see, e.g., Bullock and Boylan-Kolchin, 2017 [27]), as dense regions around star formation tend to have higher temperatures and higher average flux density. As such, we can view the above boundary condition as a measure of the proportion of the free energy and locked entropic energy within a system, in other words:
$T \propto \frac{dE_f}{dE_l}$
In other words, the higher the temperature, the more dominant free energy will be within the system compared to locked energy, i.e. entropically defined energy. A direct examination of this ratio will be conducted in a later paper. Until then the reader can ponder if the velocity of the leading edge of gravitational waves, and thus gravitational energy density can be directly affected by temperature as it acts on gravitational wave flux density.

4. Discussion

The relationship between quantized spacetime and entropy should be clear. Entropic work is performed by the observation or translation of any particle. For simple particles without complex spin symmetries, this time evolution is forced into gradient dispersion as a direct consequence of the projection of a particle from its prior state location influencing its current state location. Entropy is further increased, requiring more work to be performed, when the neighboring state is dissimilar. Since curvature increases with increased energy, this provides a mechanism for the experience of smooth spacetime curvature as spacetime contracts or expands to become more homogeneous with its surrounding states. It is important to note here that a relaxing volume necessitates that its neighboring volume contract, as a particle with a given energy is essentially annihilated and then created in the new volume because of the field propagation. If contraction and expansion of a volume occur non-linearly (such as in General Relativity), a macro-observer in a stable system would observe a change of relative volumes dramatically smaller than the lower bound of the non-zero minimum length. This would make observing this mechanism possible only under extreme circumstances where the gravitational field is violently disturbed. Also note that under this formulation gravitational fields are the combinatory field of all energies in all existing fields; I find no reason to assume that this entropic process prefers energy propagation in any particular mode.
This framework gives us a time evolution description for curvature and a relativistic energy density which relates directly to entropy and increasing system symmetries. This description translates directly to expected results for black holes and dark energy. In particular, we expect dark matter density within the halo to fall off proportional to the cube of radius at large distances.
Numerical simulations and observational studies consistently indicate that dark matter halos exhibit a characteristic two-regime density profile. In the inner regions of galaxies, dark matter is expected to follow a shallow cusp with a density scaling roughly as $\rho(r) \propto 1/r$, as predicted by the Navarro–Frenk–White (NFW) profile [28]. Observational evidence from rotation curve analyses, such as those of M33, supports this central cusp behavior [29], while weak lensing measurements further confirm that the inner mass distribution is well described by such a profile [30]. High-resolution N-body simulations (including results from the Aquarius Project) demonstrate that halos transition from this $1/r$ behavior in their cores to a steeper $\rho(r) \propto 1/r^3$ decline in the outer regions, a change that is essential for ensuring a finite total halo mass [31,32]. Moreover, empirical models constructed using nonparametric techniques have further validated that the outer halo density follows this steep decline [33]. This steep decline is exactly what we would expect from a Ricci flow decay mechanism based on the ratio of free propagating energy to total energy, i.e. $1 - (E_f/E_t) = TS/E_t$. Additionally, we can view the expansion of our observable universe as an expansion of our light cone whereby low entropy states are being added to our relativistic worldview. By using theory which shows that entanglement information is inversely related to coherence information, also seen in information dynamics (Vopson and Lepadatu, 2022) [20], one can derive a relationship involving the gradient of entropy. Specifically, one can find an entropic relationship between entanglement and coherence expressed in the covariant and contravariant forms of a new metric tensor. Additionally, curvature can be described here as worldlines expressing harmonized modes of information carried in the real and complex components of the eigenvalues.
These scalar multiples relate directly to modes of force that correspond to the 2D plane delta curvature (as compared to Riemann Flat space). This geometric analysis reveals interesting results given the entropically defined time evolution boundary conditions.

4.1. Limitations

Several key limitations must be considered within this paper. First, in deriving the relationship between the change of entropy and energy we assumed a stable, non-changing temperature (hence, from the product rule, $S\, dT = 0$). This cannot be assumed for the vast majority of applications; however, for black holes $dT/dE \propto 1/E^2$, so it should not play a dominant role, particularly at a first-order approximation. Further work should be conducted to fully incorporate dT, as simplifications are highly likely. Secondly, we constructed this formulation at extreme boundaries near a singularity, where mass is pulled towards the center. As such, we assumed that the entropic component of total energy was dominant for the region between the singularity and the event horizon. In systems with elevated levels of free energy, this component must be considered for its impact on curvature deformation according to the stress-energy tensor.
More significant is the potential for an alternative decomposition which may better "fit" the distribution divergences. While this author attempted to ground the decomposition in sound logic and fundamental principles (least action, geometric interpretation), one cannot ignore the fact that many of these ideas are derived from approximation and best-fit assumption.

5. Conclusions

In conclusion, our investigation challenges the traditional path-independent assumptions of tensor contraction in spacetime, revealing that significant information is lost in this simplification. The evidence points to the necessity of path-dependent analyses to account for the entropic dynamics inherent in spacetime, particularly at quantum scales. This realization opens new avenues for research, particularly in the examination of the Riemann tensor with an emphasis on preserving information and embracing the entropic nature of spacetime evolution. Future work should explore these entropic path-dependent processes in greater detail, aiming to integrate them with existing theoretical frameworks to enhance our understanding of spacetime dynamics. This approach not only promises to refine our grasp of General Relativity but also bridges the gap between classical and quantum physics by acknowledging the fundamental role of information entropy in shaping the universe.

Funding

The author’s research is financially supported by Department of National Defence, Veterans Affairs Canada.

Acknowledgments

This paper would not have been possible without the exceptional support of my many friends and family. In particular, I would like to thank my brother Jonathan Gauvin for his persistent encouragement, feedback, and insight. Additionally, I would also like to thank Chris Nantau for providing editorial and grammatical support. Lastly, to Akeem Warren and all those whose service became the ultimate sacrifice. You were better than we deserved. Lest we forget.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

For a net divergence equal to zero, this has been previously shown to be an entropic maximum:
$\Delta D_{KL}(P,Q) = D_{KL}(P\,\|\,Q) - D_{KL}(Q\,\|\,P) = 0$
Under some perturbation, retaining conservation of energy (total distribution) we can decompose this to:
$\Delta D_{KL}(P,Q) = \Delta D_{KL}(R,Q) + \Delta D_{KL}(P,R) + \Delta\delta = 0$
Where δ is simply the interference between these decompositions:
$\delta = \sum_x \left[P(x) - R(x)\right] \ln\frac{R(x)}{Q(x)}$
and $x \in \Gamma$ for a path distribution formulation. We have from Cepollaro et al. (2024) [19] that the sum of entanglement and coherence is invariant under QRF transformation:
$\delta(|\Psi_x\rangle) = E + C$
In this, we can state that some mixed system's total "quantumness" is also invariant under QRF transformation. If the system under evaluation is classically dominated, i.e. there is little noise, interference, or quantum fluctuation, then $\Delta\delta = \eta \to 0$ and we can state:
$\eta_1\, \Delta D_{KL}(R,Q) = \eta_2\, \Delta D_{KL}(P,R)$
which gives us the analog that can be integrated with the solution of decreasing informational entropy found by Vopson and Lepadatu (2022) [20]:
$C(\rho) = S(\rho_{diag}) - S(\rho), \qquad E = S(\rho_{diag}) = -\mathrm{Tr}(\rho_{diag}\log\rho_{diag})$
and the quantum Fisher information (QFI) of states, measuring interference or phase sensitivity:
$F_Q = 4\left(\langle \partial_\theta\psi | \partial_\theta\psi\rangle - |\langle \psi | \partial_\theta\psi\rangle|^2\right)$
With a higher $F_Q$, quantum mechanics is more dominant. Taking this, we can simply reframe $\eta_{1,2}$ as the uncertainty of each affine connection compared to the Levi-Civita connection, which is the connection for purely classical manifolds, i.e. maximum entropy where $\Delta D_{KL} = 0$. So:
$\Delta D_{KL}(R,Q) = \Delta D_{KL}(P,R)\left(\frac{\eta_{pr}}{\eta_{rq}}\right)$
In other words, these two distributions should be equal up to some quantum correction term where for a classically defined system:
$\left(\frac{\eta_{pr}}{\eta_{rq}}\right) \to 1$
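The chain decomposition behind these identities can be verified numerically; the sketch below (with arbitrary illustrative distributions) checks that $D(P\|Q) = D(P\|R) + D(R\|Q) + \delta$ for the δ defined above, which antisymmetrized over the two orderings yields the $\Delta D_{KL}$ identity:

```python
import math
import random

# Numerical check of the decomposition: for discrete distributions P, Q, R,
#   D(P||Q) = D(P||R) + D(R||Q) + delta,
# with delta = sum_x (P(x) - R(x)) * ln(R(x)/Q(x)). Distributions are
# random illustrative examples, strictly positive so all logs are defined.
random.seed(0)

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

def dkl(a, b):
    return sum(p * math.log(p / q) for p, q in zip(a, b))

P = normalize([random.random() + 0.1 for _ in range(6)])
Q = normalize([random.random() + 0.1 for _ in range(6)])
R = normalize([random.random() + 0.1 for _ in range(6)])

delta = sum((p - r) * math.log(r / q) for p, r, q in zip(P, R, Q))
lhs = dkl(P, Q)
rhs = dkl(P, R) + dkl(R, Q) + delta    # equals lhs by the chain identity
```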
In this application, we have applied two different distribution expectations, one from the Hawking radiation and the other derived from an expectation of Ricci flow relaxation, which has a non-instantaneous relaxation time, i.e.:
$\frac{d(TS)}{dV} = \frac{dM}{dt}\left(\frac{3C}{16\pi G}\right)$
giving us the $\eta_1/\eta_2$ such that:
$C = \frac{16\pi G}{3}$
or when factored for Energy,
$C_\mu = \frac{16\pi G}{3c^2}$

References

  1. Wang, Q.A.; Tsobnang, F.; Bangoup, S.; Dzangue, F.; Jeatsa, A.; Le Méhauté, A. Stochastic action principle and maximum entropy. Chaos, Solitons & Fractals 2009, 40, 2550–2556. [Google Scholar] [CrossRef]
  2. Matsueda, H. Emergent general relativity from Fisher information metric. arXiv 2013, arXiv:1310.1831. [Google Scholar] [CrossRef]
  3. Yahalom, A. Dirac equation and Fisher information. Entropy 2024, 26, 971. [Google Scholar] [CrossRef] [PubMed]
  4. Jaynes, E.T. Information theory and statistical mechanics. Physical Review 1957, 106, 620–630. [Google Scholar] [CrossRef]
  5. Kaila, V.R.I.; Annila, A. Natural selection for least action. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 2008, 464, 3055–3070. [Google Scholar] [CrossRef]
  6. Pressé, S.; Ghosh, K.; Lee, J.; Dill, K.A. Principles of maximum entropy and maximum caliber in statistical physics. Reviews of Modern Physics 2013, 85, 1115–1141. [Google Scholar] [CrossRef]
  7. Caticha, A. Entropic dynamics, time, and quantum theory. Journal of Physics A: Mathematical and Theoretical 2011, 44, 225303. [Google Scholar] [CrossRef]
  8. Reginatto, M. Derivation of the equations of nonrelativistic quantum mechanics using the principle of minimum Fisher information. Physical Review A 1998, 58, 1775–1778. [Google Scholar] [CrossRef]
  9. Friedan, D. Nonlinear models in 2 + ε dimensions. Annals of Physics 1985, 163, 318–419. [Google Scholar] [CrossRef]
  10. Cover, T.M.; Thomas, J.A. Elements of information theory, 2nd ed.; Wiley-Interscience, 2006. [Google Scholar] [CrossRef]
  11. Kawai, R.; Parrondo, J.M.R.; Van den Broeck, C. Dissipation: The phase-space perspective. Physical Review Letters 2007, 98, 080602. [Google Scholar] [CrossRef]
  12. Hatano, T.; Sasa, S.-i. Steady-state thermodynamics of Langevin systems. Physical Review Letters 2001, 86, 3463–3466. [Google Scholar] [CrossRef] [PubMed]
  13. Mardia, K.V.; Jupp, P.E. Directional Statistics; Wiley, 2009; ISBN 978-0-470-31781-5. [Google Scholar]
  14. Banerjee, A.; Merugu, S.; Dhillon, I.S.; Ghosh, J. Clustering with Bregman Divergences. Journal of Machine Learning Research 2005, 6, 1705–1749. [Google Scholar]
  15. Kass, R.E.; Raftery, A.E. Bayes factors. Journal of the American Statistical Association 1995, 90, 773–795. [Google Scholar] [CrossRef]
  16. Roldán, É.; Parrondo, J.M.R. Entropy production and Kullback–Leibler divergence between stationary trajectories of discrete systems. Physical Review E 2012, 85, 031129. [Google Scholar] [CrossRef]
  17. Seifert, U. Stochastic thermodynamics, fluctuation theorems and molecular machines. Reports on Progress in Physics 2012, 75, 126001. [Google Scholar] [CrossRef]
  18. Sivak, D.A.; Crooks, G.E. Thermodynamic metrics and optimal paths. Physical Review Letters 2012, 108, 190602. [Google Scholar] [CrossRef]
  19. Cepollaro, C.; Akil, A.; Cieśliński, P.; de la Hamette, A.-C.; Brukner, Č. The sum of entanglement and subsystem coherence is invariant under quantum reference frame transformations. arXiv 2024, arXiv:2406.19448. [Google Scholar] [CrossRef]
  20. Vopson, M.M.; Lepadatu, S. Second law of information dynamics. AIP Advances 2022, 12, 075310. [Google Scholar] [CrossRef]
  21. Bardeen, J.M.; Carter, B.; Hawking, S.W. The four laws of black hole mechanics. Communications in Mathematical Physics 1973, 31, 161–170. [Google Scholar] [CrossRef]
  22. Bekenstein, J.D. Black holes and entropy. Physical Review D 1973, 7, 2333–2346. [Google Scholar] [CrossRef]
  23. Hawking, S.W. Particle creation by black holes. Communications in Mathematical Physics 1975, 43, 199–220. [Google Scholar] [CrossRef]
  24. Raychaudhuri, A.K. Relativistic cosmology. I. Physical Review 1955, 98, 1123–1126. [Google Scholar] [CrossRef]
  25. Hawking, S.W.; Ellis, G.F.R. The large scale structure of space-time; Cambridge University Press, 1973. [Google Scholar] [CrossRef]
  26. Jacobson, T. Thermodynamics of spacetime: The Einstein equation of state. Physical Review Letters 1995, 75, 1260–1263. [Google Scholar] [CrossRef] [PubMed]
  27. Bullock, J.S.; Boylan-Kolchin, M. Small-scale challenges to the ΛCDM paradigm. Annual Review of Astronomy and Astrophysics 2017, 55, 343–387. [Google Scholar] [CrossRef]
  28. Navarro, J.F.; Frenk, C.S.; White, S.D.M. A Universal Density Profile from Hierarchical Clustering. The Astrophysical Journal 1997, 490, 493–508. [Google Scholar] [CrossRef]
  29. Corbelli, E.; Salucci, P. The Extended Rotation Curve and the Dark Matter Halo of M33. Monthly Notices of the Royal Astronomical Society 2000, 311, 441–447. [Google Scholar] [CrossRef]
  30. Mandelbaum, R.; Seljak, U.; Cool, R.J.; Blanton, M.R.; Hirata, C.M.; Brinkmann, J. Density profiles of galaxy groups and clusters from SDSS weak lensing. Monthly Notices of the Royal Astronomical Society 2006, 372, 758–776. [Google Scholar] [CrossRef]
  31. Springel, V.; Wang, J.; Vogelsberger, M.; Ludlow, A.; Jenkins, A.; Helmi, A.; Navarro, J.F.; Frenk, C.S.; White, S.D.M. The Aquarius Project: the subhaloes of galactic haloes. Monthly Notices of the Royal Astronomical Society 2008, 391, 1685–1711. [Google Scholar] [CrossRef]
  32. Gao, L.; Navarro, J.F.; Cole, S.; Frenk, C.S.; White, S.D.M.; Springel, V.; Jenkins, A.; Neto, A.F. The redshift dependence of the structure of massive ΛCDM haloes. Monthly Notices of the Royal Astronomical Society 2008, 387, 536–544. [Google Scholar] [CrossRef]
  33. Merritt, D.; Graham, A.W.; Moore, B.; Diemand, J.; Terzić, B. Empirical Models for Dark Matter Halos. I. Nonparametric Construction of Density Profiles and Comparison with Parametric Models. The Astronomical Journal 2006, 132, 2685–2700. [Google Scholar] [CrossRef]