1. Introduction
In classical general relativity, the cosmological constant, $\Lambda$, added to the Einstein equations, governs the Universe’s accelerated expansion. However, the equivalence principle of general relativity requires that every form of energy gravitate similarly. Therefore, according to this principle, the enormous energy of the quantum vacuum fluctuations must produce a large gravitational effect. Moreover, the virtual particles contributing to the vacuum energy density, $\rho_{\rm vac}$, should alter the value of the cosmological constant [1,2]. Although we do not know how to compute $\rho_{\rm vac}$ precisely, quantum field theory (QFT) allows one to estimate its value. Unfortunately, the estimates disagree with observational data by an enormous factor. This is the worst prediction of theoretical physics, also known as the cosmological constant problem (for a review, see [3,4,5,6,7,8,9,10,11,12,13]). There is no generally recognized explanation for this discrepancy, although many papers have been written on it.
In the middle of the last century, Pauli assumed that the vacuum energies of bosons and fermions might compensate for each other [14]. This assumption is based on the fact that the vacuum energies of fermions and bosons have opposite signs: the energy of bosons is positive, while that of fermions is negative. Later, Zeldovich developed this approach to link the vacuum energy to the cosmological constant. Instead of eliminating the divergences through a boson-fermion cancellation, he suggested a covariant Pauli-Villars regularization yielding a finite residual vacuum energy and a negative pressure corresponding to a cosmological constant [1,2].
Recently, the approach based on the Pauli-Zeldovich cancellation of the vacuum energy divergences has been revived to attack the cosmological constant problem by combining both ideas [15,16,17]. Pauli’s suggestion was used to cancel all the ultraviolet divergences in the vacuum energy, while Zeldovich’s approach was used to prove that the remaining finite part of the vacuum energy yields an effective cosmological constant. Several simple toy models, containing particles with equal masses and spins 0 and 1/2, illustrated the method, wherein the interactions can be quite non-trivial. However, the problem is still far from its resolution.
The standard formulation of the cosmological constant problem is problematic since the spacetime and the vacuum energy density are highly inhomogeneous and fluctuate wildly at Planck scales. The importance of quantum fluctuations of the spacetime topology at small scales has been emphasized by many authors (see, for instance, [18,19,20,21,22,23,24]). Assuming that vacuum fluctuations at the Planck length scale might generate a cosmological constant requires quantum gravity [3,5,6,8,9,10,11,12]. The search for a quantum theory of gravity faces the challenge of comprehending and describing the quantum nature of space and time [25,26,27]. One can distinguish two general strategies to achieve these goals [28,29]. The first strategy consists in quantizing a classical structure that is later recovered as a limit of the quantum theory. The second strategy assumes that the classical structures emerge from another, more fundamental, theory. In the second approach, a formulation of the quantum theory may require abandoning the use of continuum concepts a priori. This means that at the Planck scale, the standard concept of spacetime must be replaced by some discrete structure (see, for instance, [28,30,31,32,33,34,35,36,37,38]).
In [39,40,41,42,43,44,45], we proposed a new unified algebraic approach, based on nonassociative geometry, for describing both continuum and discrete spacetimes. In our model of spacetime, time is quantized, and a random/stochastic process governs the evolution of spacetime geometry. As a result, we obtain a partially ordered set of events with the spacetime geometry encoded in the nonassociative structure of spacetime [46]. Among advanced models that propose discreteness, two are related to our work: Causal Sets [47,48,49] and Causal Dynamical Triangulations [32,50,51,52,53,54,55,56]. These models are based on the hypothesis that spacetime is discrete and causality is a fundamental principle.
This paper addresses the derivation of the cosmological constant within the discrete spacetime model proposed in [45] by employing the Pauli-Zeldovich cancellation of the vacuum bosonic and fermionic degrees of freedom. We treat 3-dimensional space as a simplicial 3-complex (a complex network). Vertices (0-simplices) of the network are the atoms of spacetime, and it is assumed that they are fermions with spin one-half. Edges, or 1-simplices, are entangled particle states with spin 1. The 2-simplex is built from three vertices and is thus formed as an entangled state with total spin 3/2. The 3-simplex is the entangled state of four atoms with total spin 2. At large scales, a highly connected complex network is a coarse, discrete representation of a smooth spacetime.
The properties of spacetime are described by methods of statistical physics based on the informational Shannon-Gibbs entropy [57,58,59]. At high temperatures, the space is represented by a simplicial 0-complex (a disconnected discrete space). As the temperature of the network decreases, the process of triangulation begins with the formation of low-dimensional complexes and clusters. At the Planck temperature, $T_P$, the system experiences a phase transition; at low temperatures, $T < T_P$, the network becomes a simplicial 3-complex (a triangulated discrete space).
We show that the "foamy" structure, analogous to Wheeler’s "spacetime foam" [18], significantly contributes to the effective cosmological constant, $\Lambda_{\rm eff}$. The latter is determined by the universe’s Euler characteristic, $\chi$, as $\Lambda_{\rm eff} \sim l_P\,\chi/V$, where $l_P$ denotes the Planck length and V is the volume of the universe. This is our main result.
The paper is organized as follows. Sec. 2 discusses the statistical properties of complex undirected networks with a fixed number of vertices and a varying number of links, described by the grand canonical ensemble. As a particular application, we consider in detail simple (fermionic) graphs with only one edge allowed between any pair of vertices. In Sec. 3, we briefly introduce a discrete spacetime model based on nonassociative geometry. In Sec. 4, we introduce and explore in detail a spacetime model based on the fermionic network. In Sec. 5, we derive the cosmological constant and show that the Euler characteristic of the universe defines it. In Conclusion, we summarize our results and discuss possible generalizations of our approach.
Throughout the paper, we use natural units, setting $\hbar = c = k_B = 1$.
2. Statistical Description of Complex Networks
The statistical description of complex networks (or graphs) is based on the informational Shannon-Gibbs entropy [57,58,59]:
$$
S = -\sum_{G\in\mathcal{G}} P(G)\,\ln P(G),
$$
where $P(G)$ is the probability of obtaining a graph $G$ belonging to the ensemble of graphs $\mathcal{G}$.
For an undirected network with a fixed number of vertices and a varying number of links, the probability of obtaining a graph $G$ can be written as [57,60,61,62]
$$
P(G) = \frac{1}{Z}\,e^{-\beta\left(H(G)-\mu L(G)\right)},
$$
where $Z$ is the partition function, $\beta$ is the inverse network temperature, $\mu$ is the chemical potential, and $L(G)$ is the number of links in the graph $G$. The adjacency matrix, $a_{ij}$, takes the value 1 or 0 in the $(i,j)$ entry for each existing or non-existing link between the pair of nodes $(i,j)$. The network connectivity is characterized by the connection probability $p_{ij}$, i.e., the probability that the pair of nodes $(i,j)$ is connected. This probability is equivalent to the expected number of edges between vertices $i$ and $j$, namely, $p_{ij} = \langle a_{ij}\rangle$.
To obtain the grand potential, $\Omega$, which we will refer to as the Landau free energy, we use the relation $\Omega = -\beta^{-1}\ln Z$. Next, one can recover the Helmholtz free energy $F$, internal energy $E$, and entropy $S$ using the standard thermodynamic relations
$$
F = \Omega + \mu\langle L\rangle, \qquad S = \beta^{2}\,\frac{\partial\Omega}{\partial\beta}, \qquad E = F + \beta^{-1}S.
$$
Having the Landau free energy, one can find the expected number of links as $\langle L\rangle = -\partial\Omega/\partial\mu$.
Let us assign the "energy" $\varepsilon_{ij}$ to each edge $(i,j)$. Then the graph Hamiltonian can be written as $H(G) = \sum_{i<j}\varepsilon_{ij}\,a_{ij}$, and the partition function is given by [61]
$$
Z = \sum_{G} e^{-\beta\left(H(G)-\mu L(G)\right)} = \sum_{G}\prod_{i<j} e^{-\beta(\varepsilon_{ij}-\mu)\,a_{ij}}.
$$
Using the partition function, one can obtain the connection probability of the existing link between nodes $i$ and $j$ as a derivative of the partition function [63,64,65,66]:
$$
p_{ij} = \langle a_{ij}\rangle = -\frac{1}{\beta}\,\frac{\partial\ln Z}{\partial\varepsilon_{ij}}.
$$
2.1. Fermionic Graphs
Consider a set of undirected graphs with only one edge allowed between any pair of vertices (so-called fermionic graphs). The computation of the partition function yields
$$
Z = \prod_{i<j}\left(1 + e^{-\beta(\varepsilon_{ij}-\mu)}\right),
$$
where the product is over the ordered pairs $(i,j)$ with $i<j$. Employing Eq. (6), we obtain the Fermi-Dirac distribution [57]:
$$
p_{ij} = \frac{1}{e^{\beta(\varepsilon_{ij}-\mu)}+1}.
$$
Using the relation $\Omega = -\beta^{-1}\ln Z$, we obtain
$$
\Omega = -\frac{1}{\beta}\sum_{i<j}\ln\left(1 + e^{-\beta(\varepsilon_{ij}-\mu)}\right).
$$
Having the Landau free energy, one can find the expected number of links as $\langle L\rangle = -\partial\Omega/\partial\mu$. The computation yields
$$
\langle L\rangle = \sum_{i<j}\frac{1}{e^{\beta(\varepsilon_{ij}-\mu)}+1} = \sum_{i<j} p_{ij}.
$$
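As a quick numerical sanity check of the relations above, the following minimal Python sketch builds a small fermionic graph with randomly chosen, purely illustrative edge energies, computes the Fermi-Dirac connection probabilities and the Landau free energy as reconstructed above, and verifies that $\langle L\rangle = -\partial\Omega/\partial\mu$ agrees with $\sum_{i<j}p_{ij}$. It is not the authors' code; all parameter values are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative check of the fermionic-graph ensemble formulas quoted above:
# p_ij = 1/(exp(beta*(eps_ij - mu)) + 1) and
# Omega = -(1/beta) * sum_{i<j} log(1 + exp(-beta*(eps_ij - mu))).
# The edge energies eps_ij are random numbers chosen only for illustration.

n, beta, mu = 50, 2.0, 0.5
eps = rng.uniform(0.0, 1.0, size=(n, n))
mask = np.triu(np.ones((n, n), dtype=bool), k=1)   # one energy per pair i < j

def landau_free_energy(mu):
    return -np.sum(np.log1p(np.exp(-beta * (eps[mask] - mu)))) / beta

# Fermi-Dirac connection probabilities and the expected number of links
p = 1.0 / (np.exp(beta * (eps[mask] - mu)) + 1.0)
L_direct = p.sum()

# <L> = -dOmega/dmu, evaluated with a central finite difference
h = 1e-5
L_from_omega = -(landau_free_energy(mu + h) - landau_free_energy(mu - h)) / (2 * h)

print(f"<L> from sum of p_ij : {L_direct:.4f}")
print(f"<L> from -dOmega/dmu : {L_from_omega:.4f}")   # the two values should agree
```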
3. Building Discrete Spacetime
Our approach is based on nonassociative geometry, the statistical description of complex networks, and the following assumptions [41,45,46,67]:
Vertices or nodes of the network are "atoms" of spacetime.
The distance between two neighboring atoms cannot be less than the fundamental length.
Spacetime geometry is encoded in the nonassociative structure of the network.
The interaction between atoms of spacetime, being nonlocal, defines the spacetime geometry.
Time is quantized, and the evolution of spacetime geometry is governed by a random/stochastic process.
The spacetime dimension is a dynamical variable.
We treat a discrete space as a simplicial 3-complex (see
Figure 1). Vertices (0-simplices) of the network are the atoms of spacetime, and it is assumed that they are fermions with spin one-half. Edges, or 1-simplices, are entangled particle states with spin 1. We assign a spin to the face of a 2-simplex, where spin up/down corresponds to the positive/negative curvature of the 2-simplex. The 2-simplex is built from three entangled atoms; its total spin is 3/2. The 3-simplex has four atoms and is considered as an entangled state with spin 2.
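To make the assignment of spins to simplices concrete, the short Python sketch below (illustrative only, not part of the model's formalism) enumerates the sub-simplices of a simplicial 3-complex, given as a list of tetrahedra with hypothetical vertex labels, and assigns to each k-simplex built from k+1 atoms the total spin (k+1)/2 stated above.

```python
from itertools import combinations

def sub_simplices(tetrahedra):
    """Collect all k-simplices (k = 0..3) of a complex given as a list of
    3-simplices, each a 4-tuple of vertex labels ('atoms' of spacetime)."""
    simplices = {k: set() for k in range(4)}
    for tet in tetrahedra:
        for k in range(4):                       # a k-simplex has k + 1 vertices
            simplices[k].update(combinations(sorted(tet), k + 1))
    return simplices

def total_spin(simplex):
    """A k-simplex is an entangled state of (k + 1) spin-1/2 atoms with
    maximal total spin (k + 1)/2, as assumed in the text."""
    return len(simplex) / 2

# A single 3-simplex (four entangled atoms):
complex_3 = sub_simplices([(0, 1, 2, 3)])
for k, faces in sorted(complex_3.items()):
    example = next(iter(faces))
    print(f"{k}-simplices: {len(faces)}, total spin of each: {total_spin(example)}")
# Expected output: 4 vertices (spin 0.5), 6 edges (spin 1.0),
# 4 triangular faces (spin 1.5), 1 tetrahedron (spin 2.0).
```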
The curvature of 2-simplices is associated with their faces and described by an elementary holonomy. This contrasts with the Regge model, where curvature resides at vertices [68,69,70]. The elementary holonomy of a 2-simplex is determined by its area and curvature, with the upper/lower sign corresponding to positive/negative curvature, respectively; the null curvature is recovered in the corresponding limit [67].
As an example, consider a homogeneous network with positive curvature. This network can be mapped onto a two-dimensional sphere of radius R, which we treat as a discrete toy model of a two-dimensional universe (see Figure 2). Left: the network is wholly disconnected, and the space is represented by a simplicial 0-complex. Middle: a partially triangulated space, described by a simplicial 2-complex; together with isolated 0-simplices and 1-simplices, one can observe the formation of two-dimensional clusters connected by simplicial 1-complexes. Right: the network is completely connected, corresponding to a triangulated space.
4. Spacetime as a Complex Network
We consider a simplified version of the Hamiltonian proposed in [45],
where J is a coupling constant and the elementary holonomy of a 2-simplex is associated with the triplet of sites forming a left triangle with respect to the corresponding edge. The first term in (11) describes the contribution of the quantum bosonic vacuum fluctuations and curvature, and the second one represents the quantum fermionic vacuum fluctuations. The summations run over 1-simplices and 2-simplices, respectively.
Within the mean-field approximation, the Hamiltonian (
11) is replaced by the effective Hamiltonian
where the effective energy is given by
The equilibrium state of the system is described by the generalized Fermi-Dirac distribution for the connection probability (
8),
Employing Eq. (
14), one can rewrite (
13) in the equivalent form
We assume that the network is highly connected at low temperatures and has low connectance at high temperatures:
This implies that the space is represented by a simplicial 0-complex at high temperatures (a completely disconnected discrete space) and becomes a simplicial 3-complex (a triangulated space) at low temperatures.
In the limit of low/high temperatures, the energy of the system is given by
where
is the total number of edges (1-simplices) of the network,
denotes the number of 2-simplices in a totally triangulated space,
, and
Here
is the average area of 2-simplices. The dimensionless parameter
defines the average curvature of the network. In particular,
describes a network with null average spatial curvature. By employing Eq. (
17), in the limit of low temperature, we obtain the vacuum energy density as
where
V is the volume of the universe.
4.1. Toy Homogeneous Model
In what follows, we consider the case of a homogeneous network, assuming that
. With this assumption, the Hamiltonian (
11) can be recast as follows:
We assume that each 2-simplex is represented by an equilateral triangle with edges of length
so that
. Using this relation, we obtain
Within the mean-field approximation, the Hamiltonian (
20) is replaced by the effective Hamiltonian
where
is the total number of 1-simplices,
is the total number of 2-simplices of the network, and
is the effective energy. The equilibrium state of the system is described by the Fermi-Dirac distribution,
Substituting
, one can rewrite (
23) as
In the limit of low/high temperatures, the energy of the system is given by
Eq. (24) imposes the following constraints on the chemical potential:
Thus, for
we obtain a triangulated spacetime and for
the spacetime becomes disconnected.
For illustrative purposes, we choose the dependence of the chemical potential on the temperature as
. The typical behavior of the connection probability is shown in
Figure 3. The graph is depicted for the choice of parameters:
,
and
, where $T_P$
is the Planck temperature. One can see that the system experiences a phase transition at the critical temperature
, forming a fermionic condensate with maximally entangled states. This transition is rapid, and the universe becomes a wholly triangulated space at the temperature
, where
is the temperature at which the Grand Unification epoch begins.
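The rapid change of the connection probability with temperature can be visualized with a minimal numerical sketch. Since the explicit form of the chemical potential and the parameter values used for Figure 3 are not reproduced in the text above, the linear form and all numbers below are hypothetical choices, made only to illustrate how a Fermi-Dirac connection probability switches from a sparsely connected to an almost fully connected (triangulated) network near an assumed critical temperature.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative temperature dependence of the connection probability
# p(T) = 1/(exp((eps - mu(T))/T) + 1) for a homogeneous network.
# The concrete mu(T) and parameters of the paper are not given here,
# so the linear form and the numbers below are purely hypothetical.

T = np.linspace(0.05, 2.0, 400)            # temperature in Planck units (T_P = 1)
eps, T_c, a = 1.0, 0.85, 10.0              # assumed edge energy, critical T, slope

mu = eps - a * (T - T_c)                   # assumed chemical potential: mu > eps below T_c
p = 1.0 / (np.exp((eps - mu) / T) + 1.0)   # Fermi-Dirac connection probability

plt.plot(T, p)
plt.axvline(T_c, linestyle="--", label="assumed critical temperature")
plt.xlabel("T / T_P")
plt.ylabel("connection probability p")
plt.legend()
plt.show()
# Below T_c the network is almost fully connected (p -> 1, triangulated space);
# above T_c the connectance drops, mimicking the disconnected phase.
```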
4.2. Spacetime Evolution
In our approach, space and time are quantized in Planck units, and a random/stochastic process governs spacetime evolution. The cosmological scale factor,
, is defined by a random walk with a reflecting barrier
at
and a step
, where
is the Planck length. After
n steps, the average cosmological scale factor grows as [
46]
where
is the scale factor related to the initial 3-simplex, which defines the minimal size of the universe. The initial singularity is absent, and the universe starts from a micro-universe formed by a single 3-simplex.
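A reflected random walk of this kind is straightforward to simulate. The sketch below is illustrative rather than the paper's own computation: it sets the Planck length and the initial scale factor to unity, places the reflecting barrier at the initial value, and checks that the average scale factor grows diffusively, i.e., proportionally to the square root of the number of steps.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative simulation of the scale-factor random walk described above:
# steps of size l_P (set to 1) with a reflecting barrier at the initial value
# a_0 (set to 1), so the walk cannot go below the size of the initial 3-simplex.

def average_scale_factor(n_steps, n_walkers=20000, a0=1.0, step=1.0):
    a = np.full(n_walkers, a0)
    for _ in range(n_steps):
        a += step * rng.choice([-1.0, 1.0], size=n_walkers)
        a = np.where(a < a0, 2 * a0 - a, a)        # reflect at the barrier a_0
    return a.mean()

for n in (100, 400, 1600):
    print(n, average_scale_factor(n) / np.sqrt(n))
# The ratio <a_n>/sqrt(n) tends to a constant: the average scale factor
# grows like the square root of the number of (Planck-time) steps.
```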
Numerical simulations show that the probability
of creating a 3-simplex corresponds to the temperature
(see
Figure 3). Therefore, considering
as the starting point of the universe’s evolution is reasonable. After
steps in Planck units of time, which corresponds to pre-inflationary time,
(time of the beginning of inflation) and temperature
, the cosmological scale factor would be
. Comparison with the pre-inflationary universe’s scale factor,
, estimated within the
$\Lambda$CDM model as
, yields a good agreement with our prediction.
5. The Cosmological Constant Problem
Writing the Einstein equations as
$$
R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R = 8\pi G\,T_{\mu\nu} - \Lambda g_{\mu\nu},
$$
one can observe that a space with a non-zero cosmological constant produces the same gravitational field as matter with mass density $\rho_\Lambda = \Lambda/(8\pi G)$ and pressure $p_\Lambda = -\rho_\Lambda$. Thus, one can speak about the energy density of the vacuum and its pressure.
According to the equivalence principle of general relativity, the energy of the quantum vacuum fluctuations must produce a gravitational field, and the semiclassical Einstein equations describe their contribution:
$$
R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R + \Lambda g_{\mu\nu} = 8\pi G\left(T_{\mu\nu} + \langle T_{\mu\nu}\rangle_{\rm vac}\right),
$$
where $\langle T_{\mu\nu}\rangle_{\rm vac}$ is the expectation value of the quantum vacuum energy-momentum tensor.
Lorentz invariance requires that $\langle T_{\mu\nu}\rangle_{\rm vac}$ takes the form
$$
\langle T_{\mu\nu}\rangle_{\rm vac} = -\rho_{\rm vac}\,g_{\mu\nu},
$$
where $\rho_{\rm vac}$ is the expectation value of the energy density of the matter fields in the vacuum state. Employing (29), one can recast Eq. (27) as
$$
R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R + \Lambda_{\rm eff}\,g_{\mu\nu} = 8\pi G\,T_{\mu\nu},
$$
where $\Lambda_{\rm eff} = \Lambda + 8\pi G\rho_{\rm vac}$ is the effective (observable) cosmological constant.
In the standard QFT, the vacuum energy density is estimated as $\rho_{\rm vac} \sim E_P^4$, where $E_P$ is the Planck energy. The current Hubble expansion rate yields an upper bound for $\Lambda_{\rm eff}$ of the order of $H_0^2$ [71], where $H_0$ is the Hubble constant describing the expansion of the universe. Thus, one has a huge disagreement between the observed value of the cosmological constant and the theoretical prediction of its value resulting from quantum field theory.
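The size of this disagreement can be made explicit with a back-of-the-envelope estimate. The numbers below are standard order-of-magnitude values (the Planck energy and the observed dark-energy scale), quoted from common knowledge rather than from the references, and serve only to illustrate the scale of the mismatch.

```python
# Naive QFT estimate of the vacuum energy density versus the observed value,
# in natural units (hbar = c = 1). Order-of-magnitude inputs only.

E_planck = 1.22e19          # Planck energy, in GeV
E_dark   = 2.3e-12          # observed dark-energy scale, about 2.3 meV, in GeV

rho_qft = E_planck**4       # naive QFT estimate: rho_vac ~ E_P^4
rho_obs = E_dark**4         # observed vacuum energy density ~ (2.3 meV)^4

print(f"QFT estimate     : {rho_qft:.2e} GeV^4")
print(f"Observed value   : {rho_obs:.2e} GeV^4")
print(f"Mismatch (ratio) : {rho_qft / rho_obs:.1e}")   # roughly 10^122 - 10^123
```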
The general form of the vacuum energy density, including the contributions of different fermionic and bosonic fields, can be written as a sum of terms that depend on the mass M of the corresponding field and on the ultraviolet energy cut-off [72,73,74]. In particular, if the vacuum is completely homogeneous and static, all the parameters vanish, corresponding to a zero cosmological constant. In the presence of an interface between two different vacua, the energy density can depend not only on the mass M but also on the mass of the quantum field in the neighboring vacuum [74].
The standard computation of the cosmological constant assumes that the spacetime is homogeneous and isotropic, and that the theory is Lorentz invariant. These assumptions are reasonable at the cosmological scale but questionable at small (Planck) scales [71,75,76]. In addition, QFT cannot be applied at Planckian scales since spacetime is discrete. Instead of QFT, we use the statistical description of spacetime, treating the latter as a complex network. Following the same approach as in QFT, we define the effective cosmological constant as
where
is the energy density of the vacuum state, defined by the contribution of all bosonic and fermionic fields of the simplicial manifold. If we take into account the contributions from all pieces of the simplicial complex, we obtain
where
is the vacuum energy of
an i-simplex. In the limit of
and assuming
, we get
where
is the number of
i-simplices in the network, and
is the Planck mass defining the ultraviolet cut-off,
. Employing Eq.(33), we obtain
where $\chi = N_0 - N_1 + N_2 - N_3$ is the Euler characteristic of the simplicial complex [77]. Thus, a non-vanishing cosmological constant implies a non-trivial topology of spacetime.
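As a concrete illustration of the combinatorics behind this statement, the short sketch below computes $\chi = N_0 - N_1 + N_2 - N_3$ for two simple simplicial 3-complexes: a single solid 3-simplex and the boundary complex of a 4-simplex (a triangulation of the 3-sphere). The vertex labels are, of course, hypothetical.

```python
from itertools import combinations

def euler_characteristic(tetrahedra):
    """chi = N_0 - N_1 + N_2 - N_3 for a simplicial 3-complex given as a list
    of 3-simplices (4-tuples of vertex labels); shared faces are counted once."""
    counts = []
    for k in range(4):
        faces = {frozenset(c) for tet in tetrahedra
                 for c in combinations(tet, k + 1)}
        counts.append(len(faces))
    n0, n1, n2, n3 = counts
    return n0 - n1 + n2 - n3

# A single solid 3-simplex: chi = 4 - 6 + 4 - 1 = 1.
print(euler_characteristic([(0, 1, 2, 3)]))

# The boundary complex of a 4-simplex, i.e. all five tetrahedra on five
# vertices, triangulates the 3-sphere: chi = 5 - 10 + 10 - 5 = 0.
print(euler_characteristic(list(combinations(range(5), 4))))
```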
To estimate the Euler characteristic, $\chi$, we use Eq. (36), writing
The real size of the universe is unknown. For an estimate, we take
V as the volume of the observable universe,
. This yields
. Taking
, we get
. This huge number means that the topology of spacetime is highly non-trivial due to vacuum fluctuations. The fluctuations lead to the formation of a foamy structure, a version of Wheeler’s "spacetime foam" [
18].
6. Conclusion
We have shown how complex networks with hidden geometry lead to the emergence of discrete spacetime from entanglement. This phenomenon can be treated as a first-order phase transition from the free fermionic gas to the fermionic condensate formed by maximally entangled states of spacetime’s atoms. This transition and the formation of our universe occurred before the Grand Unification Epoch.
In our approach, spacetime is treated as an evolving complex network, and a random/stochastic process governs its evolution. The irreversibility of the evolution is due to information loss and the existence of the fundamental Planck length. In the pre-inflation epoch, the cosmological scale factor grows as the square root of time, which agrees with the behavior of the cosmological scale factor in a radiation-dominated era.
We point out that the standard formulation of the cosmological constant problem is problematic since the spacetime and the vacuum energy density are highly inhomogeneous and fluctuate wildly at Planck scales. We show that the "foamy" structure, analogous to Wheeler’s "spacetime foam" [18], significantly contributes to the effective cosmological constant defined by the topology of spacetime. Explicitly, the effective cosmological constant, $\Lambda_{\rm eff}$, is given by $\Lambda_{\rm eff} \sim l_P\,\chi/V$, where $\chi$ is the Euler characteristic of the simplicial complex and V is its volume. Taking V as the volume of the observable universe, we find an enormous value of $\chi$. This implies that the topology of spacetime is highly non-trivial due to vacuum fluctuations. These fluctuations lead to the formation of a foamy structure that may consist of micro-universes connected by Einstein-Rosen bridges with wormhole topology.
Our approach opens an avenue to solving "The Worst Theoretical Prediction in the History of Physics". The proposed solution may be considered a self-tuning solution to the cosmological constant problem, free from the fine-tuning problem associated with the cosmological constant.