The alert reader must have anticipated the main result of the previous section, namely, that consists of freely moving particles. By linearity, particles can move through one another uninterrupted; if so, they are noninteracting particles, which had better move along straight paths. Enabling their mutual interaction therefore requires some form of nonlinearity, either in the coarsener, , or in the scaling part. Further recalling our commitment to general covariance as a precondition for any fundamental physical theory, nonlinearity is inevitably and, in a sense, uniquely forced upon us. A nonlinear model also supports a plurality of particles of different sizes, unlike the common in a linear theory. This frees , ultimately estimated at km, to play a role at astrophysical scales.
Relation (
19) is formally equivalent to Maxwell’s equations with
sourcing
’s wave equation. However,
is not an independent object as in classical electrodynamics but a marker of the locus of privileged points at which the Maxwell coarsener does not annihilate
; distinct
’s differing by some
therefore have identical
’s. For
and
to mimic those of classical electrodynamics,
must also be localized along curved worldlines traced by solutions of the Lorentz force equation in
(which, as already shown in the scalar case, necessitates a nonlinear scale flow). And just as in the scalar-particle case, where higher-order cumulants (
) are ‘awakened’ by its center’s nonuniform motion, deforming its stationary shape, so does the
“adjunct” (in the jargon of action-at-a-distance electrodynamics) to each such
get deformed. Due to the extended nature of an
A-particle, and unlike in
models
3, these deformations at
are
not encoded in the local motion of its center at time
t, but rather in its motion at retarded and advanced times,
(assuming flat spacetime for simplicity). However, associating such temporal incongruity with ‘radiation’ can be misleading, as it normally implies the freedom to add any homogeneous solution of Maxwell’s equations to
which is clearly nonsensical from our perspective. Consequently, the retarded solution cannot be imposed on
and in general,
contains a mixture of both advanced and retarded parts, which varies across spacetime. The so-called radiation arrow of time manifested in every macroscopic phenomenon must therefore receive an alternative explanation (see
Section 3.5.2).
Now, why should
be confined to the neighborhood of a worldline? As already seen in the linear, time-dependent case, a scale-flow such as (
16) suffers from instability in both
s-directions: local variations of
along space-like/time-like directions which are not annihilated by the coarsener are rapidly amplified in the
/
direction, respectively. If we therefore examine the scale flow of
inside a ‘lab’ of dimension much smaller than
, centered at the origin without loss of generality, then the coarsener would completely dominate the flow of
whose scale of variation is much smaller than
(unlike (
18)!) leading to its rapid divergence, unless it is
almost annihilated by
. An exception to this rule could take place around privileged points, where the scaling field grows to such large magnitude as to allow the scaling piece to counter a large coarsener contribution. This is where
is focused and, as shown in
Section 3.1.2 below, it is only around world-lines marking the center of an extended particle, where
peaks, that
can indeed grow to such large magnitude. “Almost" is emphasized above because it is precisely the fact that, at distances from
that are much smaller than
, the action of the coarsener is only reduced to the order of that of the scaling piece, which gives rise to deviations from classical physics. This will be a recurrent theme in the rest of the paper.
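The amplification argument above can be caricatured with a toy linear flow. The following is an assumed illustration only, not the paper's actual coarsener: a Fourier mode of wavenumber k that is not annihilated by a Laplacian-type coarsener grows like exp(k²s), so fine-grained variations diverge rapidly in scale while annihilated modes are untouched.

```python
import math

# Toy caricature (assumed form, not the paper's coarsener): a Fourier mode
# of wavenumber k obeys du/ds = k**2 * u under a Laplacian-type coarsener,
# so any variation not annihilated by it (k != 0) grows like exp(k**2 * s).
def amplified(u0, k, s):
    return u0 * math.exp(k**2 * s)

rough = amplified(1.0, k=10.0, s=0.5)   # fine-grained, non-annihilated mode
smooth = amplified(1.0, k=0.0, s=0.5)   # mode annihilated by the coarsener
# The rough mode is amplified by a factor exp(50); the smooth one is inert.
```
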
3.2. The Motion of Matter Lumps in a Weak Gravitational Field
Equation (
51) prescribes the scale flow of the metric (in the Newtonian approximation), given a set,
, of worldlines associated with matter lumps. To determine this set, an equation for each
, given
, is obtained in this section. This is done by analyzing the scale flow of the first moment of
associated with a general matter lump, using (
22). An obstacle to doing so comes from the fact that,
now incorporates both gravitational and non gravitational interactions in a convoluted way, as the existence of gravitating matter depends on it being composed of charged matter. In order to isolate the effect of gravity on
, we first analyze the motion of a body in the absence of gravity, i.e.,
,
with
the Minkowskian coordinates and
the structural component from the previous sections. To this end, a better understanding of the scale flow (
22) of
is needed. Using (
17) and (
19) plus some algebra the scaling piece in (
22) reads
with
(where
is of course only defined up to a divergence free piece). The first two terms in (
52) are the familiar
conversion, to which a ‘matter vector’,
, is added. In the absence of gravity the commutator on the l.h.s. of (
22) vanishes. Combined, we get
Note that, since
, taking the divergence of (
54) necessitates
, which is indeed the case by virtue of the antisymmetry of
. Defining
and
the (time-conserved) electric and ‘matter’ charges, respectively, and integrating (
54) over three-space implies
i.e., electric charge is conserved in scale if and only if the matter charge vanishes. That the latter is identically true follows from the divergence form (
53) of
and
. This result readily generalizes to curved spacetime
4 for an
s-independent metric, by virtue of the (covariant) divergence of the r.h.s. of (
22) vanishing, implying that the charge of a matter-lump is conserved in scale whenever spacetime is approximately
s-independent around
some point on its world-line; for then
where the second implication follows from charge conservation in time. Note that, unlike in the linear model of
Section 2, identical charge conservation holds true whether or not
corresponds to a member in
, but this does not seem to carry over to the
s-dependent-metric case, which involves the delicately choreographed, regularity preserving scale-flows of both
(hence also
) and
However, since gravity is assumed to play a negligible role in the structure of charged matter, such
s-dependence is inconsequential to members in
. Alternatively, if as in the linear model, zooming into matter must reveal only more and more copies of the same charge-quantized (fixed-point) particles, then the issue of identical charge conservation even for
becomes moot.
Next, multiplying (
54) by
and integrating over a ball,
B, containing a body of charge
q, results in
where
is an object’s ‘center-of-charge’ (c.o.c.). Above and in the rest of this section, the charge of a body, assumed nonzero for simplicity, is only used as a convenient tracer of matter. Now, as
can be both positive and negative, the c.o.c. is not necessarily confined to the support of
, as with positive distributions. Nonetheless, since
,
does follow the particle up to some constant displacement, reflecting to a large extent the arbitrariness in defining the exact position of an extended body, and assumed much smaller than any competing length.
At the (sub-)atomic level,
would be rapidly fluctuating in time, endowing
with a ‘jitter motion’ component. To remove it, (
55) is convolved with a normalized symmetric kernel of macroscopic extent
T. Defining
the form of (
55) is retained for
(with
), except for a correction
coming from the time-scaling term. If
is slowly varying over
T, this correction is at most on the order of
, negligibly renormalizing the
term for
For economy of notation, then, the ‘bar’ is dropped henceforth from all quantities.
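The jitter-removal step can be illustrated numerically. This is a sketch on a hypothetical trajectory (the kernel here is Gaussian, but any normalized symmetric kernel of extent T serves): convolving a rapidly fluctuating path with such a kernel leaves the slow drift essentially untouched while suppressing the jitter.

```python
import numpy as np

# Sketch of the smoothing step: a hypothetical center-of-charge path with
# slow drift plus rapid 'jitter', convolved with a normalized symmetric
# (here Gaussian) kernel of macroscopic temporal extent T.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 2001)
dt = t[1] - t[0]

x_slow = 0.3 * t                                   # slow macroscopic drift
x = x_slow + 0.05 * np.sin(400.0 * t) + 0.01 * rng.standard_normal(t.size)

T = 0.5                                            # macroscopic kernel extent
s = np.linspace(-3.0 * T, 3.0 * T, 601)            # odd length: centered kernel
k = np.exp(-0.5 * (s / (T / 3.0)) ** 2)
k /= k.sum()                                       # normalization

x_bar = np.convolve(x, k, mode="same")

# Away from the edges of the window, the smoothed path tracks the drift.
interior = slice(350, -350)
err = np.max(np.abs(x_bar[interior] - x_slow[interior]))
```
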
The integral in (
55) is the first moment of a distribution,
, whose zeroth moment vanishes, which is therefore invariant under
—as is expected of a ‘force term’, competing with the
, acceleration term. Buried in it are presumably all forms of non gravitational interactions preventing
for some constant
and
, from being a solution of (
55) in the absence of gravity (in particular, elements of the Lorentz force can explicitly be excavated from it, but this route shall not be explored here because the Lorentz force is implicit in the basic tenets of classical electrodynamics and, moreover, as there is only a single, global
A-field, its decomposition into
, an adjunct ‘self-field’, and an external field, is far from being obvious). Accordingly, this term is ignored for an isolated body, for which all forces are internal and necessarily integrate to zero (this can be verified explicitly, e.g., for a spherically symmetric
). Thus (
55) becomes (
14), and as proved in that case, solutions for
must all be straight, non tachyonic worldlines
.
The effect of gravity on those straight worldlines is derived by including a weak field in the flow of the first-moment projection of (
22). Recalling from
Section 3.1.2 that weak gravity has no effect on the scaling field, and since gravity is assumed to play a negligible role in the structure of matter, the way this field enters the flat spacetime analysis is by ‘dotting the commas’ in partial derivatives. To this end the Newtonian metric (
40) is substituted into (
22), which is then multiplied by
, where
is the charge of the matter lump, conserved in both time and scale, and integrated over
B, assuming
is approximately constant over the extent of the lump. A straightforward calculation to first order in
, incorporating
, gives
Ignoring
corrections to the isotropic coarsener, the net effect of the potential in the Newtonian approximation is to render the coarsener anisotropic through its gradient, with an added relativistic correction in the form of the last term on the r.h.s. of (
57). At non-relativistic velocities the double time-derivative piece equals
, where
. The
correction turns out
at large scales and completely negligible at small ones, hence it is ignored. In the second term on the r.h.s. of (
57) the
cancels the
factor multiplying it, which is then integrated by parts. To
accuracy the result is
. Using the continuity equation for
, integration by parts of the last term in (
57) yields a relativistic,
term, neglected in the Newtonian approximation. The modification to the scaling piece (
52) introduces an
correction to the
term in (
55) which is neglected in the Newtonian approximation. Since
, the contribution of the curvature term vanishes, as does that of the covariant
term by our assumption that the lump would otherwise be freely moving. Moving to the contribution of the two terms on the l.h.s. of (
22), the commutator is evaluated using
(viz., ordinary derivatives can replace covariant ones) in the definition (
19),
To
accuracy its first moment projection gives
. Combined with the contribution of the
term their sum is the expected
.
Combining all pieces, the first moment projection of (
22) reads
This equation is just (
14) with an extra ‘force-term’ on its r.h.s. which could salvage a non uniformly moving solution,
, from the catastrophic fate at
suffered by its linear counterpart.
At sufficiently large scales,
, when all relevant masses contributing to
occupy a small ball of radius
centered at the origin of scaling without loss of generality (
can similarly be assumed) the scaling part on the r.h.s of (
58) becomes negligible compared to both the force and acceleration terms, which, if anything, benefit from such crowdedness. It follows that each
would grow—extremely rapidly as we show next—with increasing
even when the weak-field approximation is still valid, implying that the underlying
is not in
. The only way to keep the scale evolution of
under control is for the force and acceleration terms to
almost cancel one another but not quite, which is critically important; it is the fact that the sum of these two terms, both originating from the coarsener, remains on the order of the scaling term, which is responsible for a nontrivial, non pure scaling
. This means that each worldline converges at large scales to that satisfying Newton’s equation
At small scales the opposite is true. The scaling part dominates and any
scaling path, i.e.,
is well behaved. Combined: at large scales
is determined, gradually transitioning at small scales to a purely scaling form.
The large scale asymptotic Newtonian motion (
59) implies that well behaved solutions of (
58) for a system of multiple, gravitationally interacting, scale-independent masses
(p a particle index) must take the form
where
are Newtonian paths of interacting point-masses
. The scaling form (
60) is an exact symmetry of Newtonian gravity. Plugging it into (
58) results in
violating the equality, coming from the
and
terms, which vanishes for
. Propagating such an asymptotically Newtonian solution to small scales in the stable direction of the flow extends that exact solution to any
, thus providing a constructive algorithm for generating well behaved solutions for (
58).
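The stable/unstable asymmetry exploited by this constructive algorithm can be mimicked with a one-dimensional toy flow. This is an assumed stand-in, not the paper's equation (58): a mismatch from the asymptotic trajectory shrinks when integrated toward small s and is amplified when integrated toward large s.

```python
# Toy one-dimensional stand-in for the flow's stable/unstable directions
# (an assumption for illustration; not the paper's eq. (58)):
# du/ds = (u - u_star)/s, whose exact solution has the mismatch
# (u - u_star) proportional to s.
u_star = 1.0                       # hypothetical asymptotic (Newtonian) value

# Stable direction: integrate from large s down to small s.
s, u = 100.0, u_star + 1e-3        # start near the asymptote, slightly off
while s > 1.0:
    u += -0.01 * (u - u_star) / s  # Euler step with ds = -0.01
    s -= 0.01
err_small_s = abs(u - u_star)      # mismatch contracted roughly 100-fold

# Unstable direction: the same mismatch integrated toward large s grows.
s, u = 1.0, u_star + 1e-3
while s < 100.0:
    u += 0.01 * (u - u_star) / s
    s += 0.01
err_large_s = abs(u - u_star)      # mismatch amplified roughly 100-fold
```
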
From the scale flow (
58) of individual bodies one can deduce the flow of collective attributes of a composite system. One such example is a loosely bound system, e.g., a wide binary, moving in a strong external field. Neglecting (external) tidal forces on such a binary, the counterpart of (
58) for the relative position vector
is independent of that external field, i.e., the equivalence principle is respected (and no external field effect as in MOND). The contributions of the coarsener and scaling pieces to that flow may then be comparable, with highly non Keplerian solutions at
for large eccentricities. Another example is the scale flow of the center-of-mass of a system, which can readily be shown to be that of a free particle. By our previous result, its motion must be uniform, with a scale-invariant velocity.
Deriving a manifestly covariant generalization of (
58) is certainly a worthwhile exercise. However, in a weak field the result could only be
with
some scalar parameterization of the worldline traced by
, and
the gravitational part of the scaling field making
, well-approximated by
in Minkowski coordinates. Above,
are the Christoffel symbols associated with
, i.e., the analytic continuation of the metric, seen as a function of Newton’s constant, to
. Recalling from
Section 3.1.3 that the fixed-point
is a solution of the standard EFE analytically continued to
,
in that case is therefore just the Christoffel symbol associated with standard solutions of the EFE. The previous, Newtonian approximation is a special case of this, where
contains a factor of
G. Note however that the path of a particle in our model is a covariantly defined object irrespective of the analytic properties of
. Resorting to analyticity simply provides a constructive tool for finding such paths whenever
is analytic in
G. In such cases, the covariant counterpart of (
59) becomes the standard geodesic equation of GR which gives great confidence that this is also the case for non-analytic
.
The reasons for trusting (
61) are the following. It is manifestly scale- and general-covariant, as is our model; it is
-shift invariant, i.e.,
, parameterizing the same, scale dependent world-line, also solves (
61) for any
(in
Section 3.2.2 it is further proved that
retains the meaning of proper-time at large scales). At nonrelativistic velocities in a Minkowskian background,
solves (
61) which, when substituted into the
i-components of (
61), recovers (
58); The scaling regime ansatz,
, solves
, i.e., each point on the world-line traced by
, indexed by a fixed
, flows along integral curves of the scaling field—as must be the case when the coarsener is negligible; It only involves local properties of
and
, i.e., their first two derivatives, which must also be a property of a covariant derivation, as is elucidated by the non-relativistic case. Thus (
61) is the only candidate up to covariant, higher derivatives terms involving
and
, or nonlinear terms in their first or second derivatives, all becoming negligible in weak fields and at small accelerations.
3.2.1. Application: Rotation Curves of Disc Galaxies
As a simple application of (
58), let us calculate the
rotation curve,
, of a scale-invariant mass,
M, located at the origin, as it appears to an astronomer of native scale
. Above,
r is the distance to the origin of a test mass orbiting
M in circles at velocity
v. Since
in (
58) is time-independent, the time-dependence of
can only be through the combination
for some function
. Looking for a circular motion solution in the
plane,
and equating coefficients of
and
for each component, the system (
58) reduces to two first-order ODEs for
and
. The equation for
readily integrates to
for some integration constant
, and for
r it reads
Solutions of (
62) with
as initial condition are all pathological for
except for a suitably tuned
, for which
in that limit (
Figure 1); the map
is invertible. We note in advance that, for a mistuned
, the pathological fate of
is determined well before the weak field approximation breaks down due to the
term, and before the neglected relativistic and self-force terms become important; those would not, in any case, tame a rogue solution.
It follows that there is no need to complicate our hitherto simple analysis in order to conclude that is a necessary condition for r to correspond to .
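The tuning of the integration constant can be illustrated with a toy shooting problem. This is an assumed stand-in, since equation (62) is not reproduced here: dy/ds = y − e^(−s) has the general solution y = Ce^s + e^(−s)/2, which stays bounded only for the single tuned initial value y(0) = 1/2; bisection on the initial condition locates that value from the sign of the eventual blow-up.

```python
import math

# Toy shooting problem standing in for the tuning of the integration
# constant (assumed example; not the paper's eq. (62)): dy/ds = y - exp(-s)
# has general solution y = C*exp(s) + exp(-s)/2, bounded only for C = 0,
# i.e. for the single tuned initial value y(0) = 1/2.

def diverges_positive(y0, s_max=20.0, ds=1e-3):
    """Euler-integrate and report the sign of the eventual blow-up."""
    y, s = y0, 0.0
    while s < s_max and abs(y) < 1e6:
        y += ds * (y - math.exp(-s))
        s += ds
    return y > 0.0

# Bisection on the initial condition: the tuned value is the separatrix
# between solutions escaping to +inf and those escaping to -inf.
lo, hi = 0.0, 1.0
for _ in range(30):
    mid = 0.5 * (lo + hi)
    if diverges_positive(mid):
        hi = mid
    else:
        lo = mid
y0_tuned = 0.5 * (lo + hi)        # converges to the tuned value 1/2
```
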
Solutions of (
62) which are well-behaved for
admit a relatively simple analytic form. Reinstating
c and defining
the result is
having the following power law asymptotic forms
with the corresponding asymptotic circular velocity,
With these asymptotic forms the reader can verify that, in the large
regime, (
59), which in this case takes the form:
is indeed satisfied for any
, and that (
64) has the scaling form (
60). Finally, for a scale-dependent mass in (
62), an
is obtained by the large-scale regularity condition which is not of the form
. This results in a rotation curve
which is not flat at large
, and an
which, depending on the form of
, may not even converge to zero at large
.
Moving to a realistic representation of a disc galaxy, the insight behind (
60) lends itself to the following algorithm for finding its rotation curve, namely
1. Start with a guess for the mass distribution of a galaxy at some large enough scale, , such that the motion of its constituents is nearly Newtonian.
2. Let this Newtonian system flow via (58), (51) to —no divergence problem in this, the stable direction of the flow—comparing the resultant mass distribution and its velocity field at with those observed.
3. Repeat step 1 with an improved guess based on the results of step 2, so as to minimize the discrepancy.
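The steps above can be sketched as a fixed-point loop. The flow function below is a deliberately trivial, hypothetical placeholder (the actual flow, equations (58)/(51), is not reproduced here), included only so the skeleton is runnable.

```python
import numpy as np

# Skeleton of the three-step fitting loop. 'flow_to_observable_scale' is a
# hypothetical placeholder for integrating (58)/(51) from the Newtonian
# scale down to the astronomer's scale; here it is a fixed linear map.
def flow_to_observable_scale(mass_profile):
    return 0.8 * mass_profile          # placeholder stand-in for the flow

observed = np.array([1.0, 0.7, 0.4, 0.2])   # mock observed mass profile
guess = np.ones_like(observed)              # step 1: large-scale guess

for _ in range(100):                        # steps 2-3, iterated
    predicted = flow_to_observable_scale(guess)
    residual = predicted - observed         # step 2: compare
    if np.max(np.abs(residual)) < 1e-12:
        break
    guess -= residual / 0.8                 # step 3: improved guess
```
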
This algorithm for finding the rotation curve, although conceptually straightforward, is numerically challenging and will be attempted elsewhere. However, much can be inferred from it without actually running the code. Mass tracers lying at the outskirts of a disc galaxy experience almost the same,
potential, where
M is the galactic mass, independently of
. This is clearly so at
, as higher order multipoles of the disc are negligible far away from the galactic center, but also at larger
, as all masses comprising the disc converge towards the center, albeit at different paces. The analytic solution (
64) can therefore be used to a good approximation for such tracers
5, implying the following power-law relation between the asymptotic velocity,
, of a galaxy’s rotation curve and its mass,
M,
Such an empirical power law, relating
M and
, is known as the
Baryonic Tully-Fisher Relation (BTFR), and is the subject of much controversy. There is no consensus regarding the consistency of observations with a zero intrinsic scatter, nor is there agreement about the value of the slope—3 in our case—when plotting
vs.
. Some groups [
2] see a slope
while others [
2] insist it is closer to 4 (both ‘high-quality data’ representatives, using primary distance indicators). While some of the discrepancy in slope estimates can be attributed to selection bias and to different estimates of the galactic mass, the most important factor is the inclusion of relatively low-mass galaxies in the latter. When restricting the mass to lie above
, almost all studies support a slope close to 3. The recent study [
2], which includes some new, super-heavy galaxies, found a slope
and a
-axis intercept of
for the massive part of the graph. Since the optimization method used in finding those two parameters is somewhat arbitrary, imposing a slope of 3 and fitting for the best intercept is not a crime against statistics. By inspection this gives an intercept of
, consistent with [
2], which by (
67) corresponds to
to within a factor of 2.
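In log-log space, imposing the slope and fitting only the intercept reduces to averaging the residuals. The following sketch uses mock numbers (the survey's actual values are not reproduced here):

```python
import numpy as np

# Fixed-slope fit on mock BTFR data (hypothetical numbers, for illustration
# only): with the log-log slope pinned at 3, the least-squares intercept is
# simply the mean residual.
rng = np.random.default_rng(1)
log_v = np.linspace(2.0, 2.6, 30)             # log10 of asymptotic velocity
true_intercept = 2.3                          # assumed, not the paper's value
log_M = 3.0 * log_v + true_intercept + 0.05 * rng.standard_normal(log_v.size)

intercept = np.mean(log_M - 3.0 * log_v)      # best intercept at fixed slope
```
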
With an estimate of
at hand, yet another prediction of our model can be put to the test, pertaining to the radius at which the rotation curve transitions to its flat part. The form (
64) of
implies that the transition from the scaling to the coarsening regime occurs at
. At that scale the radius assumes a value
, which is also the radius at which the force term equals the scaling term. Using standard units where velocities are given in km/s and distances in kpc, gives
. Now, in galaxies with a well-localized center—a combination of a massive bulge and (exponential) disc—most of the mass is found within a radius
(to the right of the Newtonian curve’s maximum). Approximating the potential at
by
, the transition of the rotation curve from scaling to coarsening, with its signature rise from a flat part seen in
Figure 2, is expected to show at
, followed by a convergence to the galaxy-specific Newtonian curve. This is corroborated in all cases—e.g. galaxies NGC2841, NGC3198, NGC2903, NGC6503, UGC02953, UGC05721, UGC08490... in fig.12 of [
2].
The above sanity checks indicate that the rotation curve predicted by our model cannot fall too far from that observed, at least for massive galaxies; it is guaranteed to coincide with the Newtonian curve near the galactic center, depart from it approximately where observed, and eventually flatten at the right value. However, these checks do not apply to diffuse, typically gas-dominated galaxies, several orders of magnitude lighter. More urgently, a slope
is difficult to reconcile with [
2] which finds a slope
when such diffuse galaxies are included in the sample. Below we therefore point to two features of the proposed model which could explain the discrepancy. First, our model predicts that, insofar as the enclosed mass does not grow much beyond the radius of the last velocity tracer
6,
attributed in [
2] to most such galaxies would turn out to be an overestimation should their rotation curves be
significantly extended beyond the handful of data points of the flat portion. By the algorithm described above, the rotation-curve solution is Newtonian at
by construction, having a
tail past the maximum, whose
rightmost part ultimately evolves into the flat segment at
. A major difference between the flows to
of massive and diffuse galaxies’ rotation curves stems from the fact that the hypothetical Newtonian curve at
—that which is based on baryonic matter only—is rising/leveling at the point of the outermost velocity tracer in the diffuse galaxies of [
2]. It is therefore certain that this tracer was at the rising part/maximum of the
curve, rather than on its
tail as in massive galaxies. This means that the short, flat segment of a diffuse galaxy’s rotation curve is spurious, corresponding to the short flat region that a massive galaxy’s rotation curve exhibits at its maximum, a feature seen in most such galaxies near the maximum of the hypothetical Newtonian curve.
A second possible contributor to the slope discrepancy, which would further imply an intrinsic scatter around a straight BTFR, involves a hitherto ignored transparent component of the energy-momentum tensor. As emphasized throughout the paper, the
A-field away from a non-uniformly moving particle (almost solving Maxwell’s equations in vacuum) necessarily involves both advanced and retarded radiation. Thus even matter at absolute zero constantly ‘radiates’, with advanced fields compensating for (retarded) radiation loss, thereby facilitating zero-point motion of matter. The
A-field at spacetime point
away from neutral matter is therefore rapidly fluctuating, contributed by all matter at the intersection of its worldline with the light-cone of
. We shall refer to it as the Zero Point Field (ZPF), a name borrowed from Stochastic Electrodynamics, although it does not represent the very same object. Being a radiation field, the ZPF envelops an isolated body with an electromagnetic energy ‘halo’, decaying as the inverse distance squared—which by itself is not integrable!—merging with other halos at large distance. Such ‘isothermal halos’ served as a basis for a ‘transparent matter’ model in a previous work by the author [
2], but in the current context its intensity likely needs to be much smaller to fit observations. Space therefore hosts a non-uniform ZPF peaking where matter is concentrated, in a way which is sensitive to both the type of matter and its density. This sensitivity may result both in an intrinsic scatter of the BTFR and in a systematic departure from the ZPF-free slope of 3 at lower mass. Indeed, in heavy galaxies, typically having a dominant massive center, the contribution of the halo to the enclosed mass at
is tiny. Beyond
orbiting masses transition to their scaling regime, minimally influenced by additional increase in the enclosed mass at
r. The situation is radically different in light, diffuse galaxies, where the ratio of
is much higher throughout the galaxy, and much more of the non-integrable tail of the halo contributes to the enclosed mass at the point where velocity tracers transition to their scaling regime (the same is true for the circumgalactic gaseous halo mentioned in footnote
6). This underestimation of the effective galactic mass, increasing with decreasing baryonic mass, would create an illusion of a BTFR slope greater than 3.
3.2.2. Other Probes of ‘Dark Matter’
Disc galaxies are a fortunate case in which the worldline of a body transitions from scaling to coarsening at a common scale along its entire worldline (albeit different scales for different bodies). They are also the only systems in which the velocity vector can be inferred solely from its projection on the line-of-sight (in idealized galaxies). In pressure supported systems, e.g., globular clusters, elliptical galaxies or galaxy clusters, neither is true. Some segments of a worldline could be deep in their scaling regime while others in the coarsening, rendering the analysis of their collective scale flow more difficult. Nonetheless, our solution scheme only requires that the worldlines of a bound system are deep in their coarsening regime at sufficiently large scale, where their fixed-
dynamics is well approximated using Newtonian gravity. Starting with such a Newtonian system at sufficiently large
, the integration of (
58) to small
is in its stable direction, hence not at risk of exploding for any initial choice of Newtonian paths. If the Newtonian system at
is chosen to be virialized, a ‘catalog’ of solutions of pressure supported systems extending to arbitrarily small
can be generated, and compared with line-of-sight velocity projections of actual systems. As remarked above, the transition from coarsening to scaling generally doesn’t take place at a common scale along the worldline of any single member of the system. However, if we assume that there exists a rough transition scale,
, for the system as a whole in the statistical sense, which is most reasonable in the case of galaxy clusters, then immediate progress can be made. Since in the scaling regime velocities are unaltered, the observed distribution of the line-of-sight velocity projections should remain approximately constant for
, that of a virialized system, viz., Gaussian of dispersion
. On the other hand, at
a virialized system of total mass
M satisfies
where
is the velocity dispersion, and
r is the radius of the system, which is just (
66) with
. On dimensional grounds it then follows that
would be the counterpart of
from (
67), implying
which is in rough agreement with observations. The proportionality constant cannot be exactly pinned down using such heuristic arguments, but its observed value is of the same order of magnitude as that implied by (
67).
Applying our model to gravitational lensing in the study of dark matter requires a better understanding of the nature of radiation. This is murky territory even in conventional physics, and an initial insight is discussed in the next section. To be sure, Maxwell’s equations in vacuum are satisfied away from
, although only ‘almost so’, as discussed in
Section 3. However, treating them as an initial value problem, following a wave-front from emitter to absorber is meaningless for two reasons. First, tiny,
local deviations from Maxwell’s equations could become significant when accumulated over distances on the order of
. Second, in the proposed model extended particles ‘bump into one another’ and their centers jolt as a result—some are said to emit radiation and others to absorb it—and an initial-value-problem formulation is, in general, ill-suited for describing such a process. Nonetheless, incoming light—call it a photon or a light ray—does possess an empirical direction when detected. In flat spacetime this could only be the spatial component of the null vector connecting emission and absorption events, as it is the only non arbitrary direction. A simple generalization to curved spacetime, involving multiple, freely falling observers, selects a path,
, everywhere satisfying the
light-cone condition . Every null geodesic satisfies the light-cone condition, but not the converse. In ordinary GR, the only non arbitrary path connecting emission and absorption events which respects the light-cone condition and locally depends on the metric and its first two derivatives is indeed a null geodesic. In our model, a solution of (
61) which is well behaved on all scales, further satisfying the light-cone condition at large scales is an appealing candidate: By our previous remarks it selects geodesics at large scales, but it still needs to be shown that (
61) preserves the light-cone condition at large scales. We shall not attempt to rigorously prove this here, but instead show that (
61) is consistent with this assumption. Indeed, denoting
, taking the covariant derivative
along
of both sides of the vector equation (
61) and multiplying the result by
, one gets
with
the coarsener piece. The easiest way to arrive at (
69) is to evaluate the equality of scalars resulting from the previous two steps in Gaussian coordinates, making use of the identity
Using (
27) and (
46) the last term in (
69) can be approximated by
, canceling with the term preceding it, plus a
correction for
s-dependent metric. By (
47), this correction cancels with an equal term coming from the
piece on the l.h.s., neglecting the radiative component,
(see
Section 3.4). Moving to the first piece on the r.h.s. of (
69), coming from the coarsener—it identically vanishes for any geodesic
—but recall that at finite
,
is not exactly a geodesic. From the two surviving terms it then follows that
at large scales (or at least very nearly so). At small enough scales—e.g. at distances away from a mass much greater than
—the geodesic term in (
69) becomes arbitrarily small for any
; note also that
could still vanish identically even when
doesn’t. Property (
70) then still holds insofar as the light-cone condition is inherited from large scale. However, it is currently unclear to the author whether that is the case
exactly, which, in and of itself, is not a necessary condition for the proposed candidate so long as no conflict with observations arises.
As a test for the above putative scale flow of light, consider the deflection angle of a light ray passing near a compact gravitating system of mass
M, which in GR is given by
where
R is the impact parameter of the ray
. When
is in its scaling regime, our model’s
remains constant,
. If the system is likewise in its scaling regime, (
68) implies that
, and its virial mass,
, similarly scales
, as does the impact parameter of
,
The conventional virial-theorem mass estimate of this
-dependent family of gravitating systems would then agree with that based on (conventional) gravitational lensing,
—which is the case in most observations pertaining to galaxy clusters—up to a constant, common to all members; recall that this entire family appears in the ‘catalog’ of
systems. Extending this family to large
, the two estimates will coincide by virtue of (
61) selecting null geodesics at large scale. It is therefore expected that this proportionality constant is close to 1 (proving this involves a calculation avoided thus far due to the non-uniform transition in scale from coarsening to scaling). Specifically, comparing (
71) with the Newtonian
at small
R, and
at large
R, the form (
67) of
suggests
.
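The conventional benchmarks entering this comparison can be made concrete. The sketch below evaluates only the textbook GR deflection angle, 4GM/(c²R), and its Newtonian half, at the solar limb; it does not implement the model's modified form (71), which the text compares against these.

```python
# Textbook GR light deflection, alpha = 4GM/(c^2 R), evaluated at the
# solar limb; the Newtonian (corpuscular) value is exactly half of it.
# This is only the conventional benchmark, not the model's formula (71).
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
M_sun = 1.989e30    # kg
R = 6.957e8         # m, impact parameter = solar radius

alpha_gr = 4 * G * M_sun / (c**2 * R)   # radians
alpha_newton = alpha_gr / 2

arcsec = alpha_gr * (180 / 3.141592653589793) * 3600
print(f"GR deflection at the solar limb: {arcsec:.2f} arcsec")  # ~1.75
```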
3.3. Quantum Mechanics as a Statistical Description of the Realistic Model
The basic tenets of classical electrodynamics (
19), (
30) and (
31), which must be satisfied at
any scale on consistency grounds (up to neglected curvature terms), also strongly constrain statistical properties of ensembles of members in
, and in particular constant-
sections thereof. In a previous paper by the author [
2] it was shown that these constraints could give rise to the familiar wave equations of QM, in which the wave function has no ontological significance, merely encoding certain statistical attributes of the ensemble via the various currents which can be constructed from it. It is through this statistical description that
ℏ presumably enters physics, and so does ‘spin’ (see below).
This somewhat non-committal language used to describe the relation between QM wave-equations and the basic tenets is for a reason. Most attempts to provide a realist (hidden variables) explanation of QM follow the path of statistical mechanics, starting with a single-system theory, then postulating a ‘reasonable’ ensemble of single-systems—a reasonable measure on the space of single-system solutions—which reproduces QM statistics. Ignoring the fact that no such endeavor has ever come close to fruition, it is rarely the case that the measure is ‘natural’ in any objective way, effectively
defining the statistical theory/measure (uniformity over the impact parameter in an ensemble representing a scattering experiment being an example of an objectively natural attribute of an ensemble). Even the ergodicity postulate, as its name suggests, is a postulate—external input. When sections of members in
are the single-systems, the very task of defining a measure on such a space, let alone a natural one, becomes hopeless. The alternative approach adopted in [
2] is to derive constraints on any statistical theory of single-systems respecting the basic tenets, showing that QM non-trivially satisfies them. QM then, like any measure on the space of single-system solutions, is
postulated rather than derived, and as such enjoys a fundamental status, on equal footing with the single-system theory. Nonetheless, the fact that the QM analysis of a system does not require knowledge of the system’s orbit makes it suspicious from our perspective. And since a quantitative QM description of any system but the simplest ones involves no less sorcery than math, that fundamental status is still pending confirmation (refutation?).
Of course, the basic tenets of classical electrodynamics are respected by all (sections of-) members of
, not only those associated with Dirac’s and Schrödinger’s equations. The focus in [
2] on ‘low energy phenomena’ is only due to the fact that certain simplifying assumptions involving the self-force can be justified in this case. In fact, the current realization of the basic tenets, involving fields only instead of interacting particles, is much closer in nature to the QFT statistical approach than to Schrödinger’s.
3.3.1. The Origin of Quantum Nonlocality
“Multiscale locality", built into the proposed formalism, readily dispels one of QM’s greatest mysteries—its apparent non-local nature. In a nutshell: Any two particles, however far apart at our native scale, are literally in contact at sufficiently large scale.
Two classic examples where this simple observation invalidates conventional objections to local-realist interpretations of QM are the following. The first is a particle’s ability to ‘remotely sense’ the status of the slit through which it does not pass, or the status of the arm of an interferometer not traversed by it (which could be a meter away). To explain both, one only needs to realize that for a giant physicist, a fixed-point particle is scattered from a target no larger than the particle itself, to which he would attribute some prosaic form-factor; at large enough the particle literally passes through both arms of the interferometer (and through none!). This global knowledge is necessarily manifested in the paths chosen by it at small . Of course, at even larger the particle might also pass through two remote towns, etc., so one must assume that the cumulative statistical signature of those infinitely larger scales is negligible. A crucial point to note, though, is that the basic tenets, which imply local energy-momentum conservation at laboratory scales, are satisfied at each separately. For this large- effect to manifest at , local energy-momentum conservation alone must not be enough to determine the particle’s path, which is always the case in experiments manifesting this type of nonlocality. Inside the crystal serving as mirror/beam-splitter in, e.g., a neutron interferometer, the neutron’s classical path (=paths of bulk-motion derived from energy-momentum conservation) is chaotic. Recalling that what is referred to as a neutron—its electric neutrality notwithstanding—only marks the center of an extended particle, and that the very decomposition of the A-field into particles is an approximation, even the most feeble influence of the A-field awakened by the neutron’s scattering, traveling through the other arm of the interferometer, could get amplified to a macroscopic effect.
This also provides an alternative, fixed-scale explanation for said ‘remote sensing’. In the double-slit experiment such amplification is facilitated by the huge distance of the screen from the slits compared with their mutual distance.
The second kind of nonlocality is demonstrated in Bell’s inequality violations. As with the first kind, the conflict with one’s classical intuition can be explained both at a fixed scale, or as a scale-flow effect. Starting with the former, and ever so slightly dumbing down his argument, Bell assumes that physical systems are small machines, with a definite state at any given time, propagating (deterministically or stochastically) according to definite rules. This generalizes classical mechanics, where the state is identified with a point in phase-space and the evolution rule with the Hamiltonian flow. However, even the worldlines of particles in our model, represented by sections of members in
, are not solutions of any (local) differential equation in time. Considering also the finite width of those worldlines, whose space-like slices Bell would regard as possibly encoding their ‘internal state’, it is clear that his modeling of a system is incompatible with our model; particles are not machines, let alone particle physicists. Spacetime ‘trees’ involved in Bell’s experiments—a trunk representing the two interacting particles, branching into two, single particle worldlines—must therefore be viewed as a single whole, with Bell’s inequality being inapplicable to the statistics derived from ‘forests’ of such trees.
This spacetime-tree view gives rise to a scale-flow argument explaining Bell’s inequality violations: The two branches of the tree shrink in length when moving to larger scale, eventually merging with the trunk and with one another. Thus the two detectors at the endpoints of the branches cannot be assumed to operate independently, as postulated by Bell.
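The classical bound that Bell's argument places on such 'machine' models can be illustrated numerically. The sketch below is a toy local-hidden-variable model (a hypothetical deterministic rule invented here for illustration, not anything from the text): its CHSH combination stays at or below 2 up to sampling noise, whereas QM predicts 2√2 for the same settings.

```python
# Toy local-hidden-variable model of a CHSH experiment: each member of an
# anti-correlated pair carries a shared random angle lam, and a detector
# with setting 'a' outputs sign(cos(a - lam)). Any such local model keeps
# the CHSH combination |S| at or below 2; QM predicts 2*sqrt(2) here.
import math
import random

random.seed(0)

def outcome(setting, lam):
    # Deterministic local response to the shared hidden variable.
    return 1 if math.cos(setting - lam) >= 0 else -1

def correlation(a, b, n=100_000):
    s = 0
    for _ in range(n):
        lam = random.uniform(0.0, 2.0 * math.pi)          # shared hidden variable
        s += outcome(a, lam) * outcome(b, lam + math.pi)  # anti-correlated partner
    return s / n

a, ap = 0.0, math.pi / 2              # Alice's two settings
b, bp = math.pi / 4, 3 * math.pi / 4  # Bob's two settings
S = correlation(a, b) - correlation(a, bp) + correlation(ap, b) + correlation(ap, bp)
print(f"CHSH value |S| = {abs(S):.3f} (classical bound 2, QM: 2.83)")
```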
3.3.2. Fractional Spin
Fractional spin is regarded as one of the hallmarks of quantum physics, having no classical analog, but according to [
2], much like
ℏ, it is yet another parameter—discrete rather than continuous—entering the statistical description of an ensemble. At the end of the day, the output of this statistical description is a mundane statement in
, e.g., the scattering cross-section in a Stern-Gerlach experiment, which can be rotated with
. Neither Bell’s- nor the Kochen-Specker theorems are therefore relevant in our case as the spin is not an attribute of a particle. For this reason the spin-0 particle from
Section 3.1.2 is a legitimate candidate for a fractional-spin particle, such as the proton, for its ‘spin measurement/polarization’ along some axis is by definition a dynamical happening, in which its extended world-current bends and twists, expands and contracts in a way compatible with, but not dictated by, the basic tenets. As stressed above, there is no natural measure on the space of such objects, and the appearance of two strips on Stern-Gerlach’s plate rather than one, or three, etc., need not have raised their eyebrows. Nonetheless, the proposed model does support spinning solutions, viz.
in the rest frame of the particle, and there is a case to be made that those are more likely candidates for particles normally attributed with a spin, integer or fractional.
3.3.3. Photons (or Illusion Thereof?)
Einstein invented the ‘photon’ in order to explain the apparent violation of energy conservation occurring when an electron is jolted at a constant energy from an illuminated plate even when the plate is placed far enough from the source, such that the time-integrated Poynting flux across it becomes smaller than the energy of the jolted electron. It is entirely possible that Einstein’s explanation could be realized in the proposed formalism, although the rest-frame analysis of a fixed-point particle from
Section 3.1.2 must obviously be modified for massless (neutral) particles, which might further require extending
and
to include distributions. Maxwell’s equations would then serve as the photonic counterpart of a massive-particle’s QM wave equation, describing the statistical aspects of ensembles of photons. Indeed, since in a ‘lab’ of dimension
individual photons (almost) satisfy the basic tenets of classical electrodynamics (and (
32)) for a chargeless current (i.e.,
), the construction from [
2] would result in Maxwell’s equations, with the associated
being the ensemble energy-momentum tensor. However, since the
A-field (almost) satisfies Maxwell’s equations regardless of it being a building block of photons, it is highly unlikely that photons exhaust all radiation-related phenomena. For example, is there any reason to think that a radio antenna transmits its signal via radio photons, rather than radio (
A-) waves? This suggests an alternative explanation for photon-related phenomena, which does not require actual, massless particles. Its gist is that, underlying the seeming puzzle motivating Einstein’s invention of the photon, is the assumption that an electron’s radiation field is entirely retarded which, as emphasized throughout the paper, cannot be the case for the
A-field. Advanced radiation converging on the electron could supply the energy necessary to jolt it, further facilitating violation of Bell’s inequality in entangled ‘photons’ experiments. This proposal, first appearing in [
2] and further developed in [
2], was, at the time, the only conceivable realist explanation of photon-related phenomena. In the proposed model, apparently capable of representing ‘light corpuscles’, it may very well be the wrong explanation. Photons would then be just ephemeral massless particles created in certain structural transitions of matter, then disappearing when detected. Note that these two processes are entirely mundane, merely representing relatively rapid changes in
and
at the endpoints of a photon’s (extended) worldline. Such unavoidable transient regions might result in an ever-so-slight smoothing of said distributions, which are otherwise excluded from
.
3.4. Vacuum Waves and the (Illusion of) Neutrinos
It was well known already to Einstein that the geodesic equation follows from local energy-momentum conservation under reasonable assumptions. Similarly, the Lorentz force equation follows quite generally from the basic tenets of classical electrodynamics (
31),(
30),(
19). Yet, by the results of
Section 3.2,
simply scales at small
, breaking away from its large scale geodesic/Lorentz motion. Note that self-force/radiative corrections, which are
in both cases (
a being the acceleration), become negligible at small scales compared with
This contradiction might seem to disappear by virtue of the fact that the energy-momentum tensor
is
not exactly conserved in (
28) but, referring to
Section 3.1.3 and specifically to (
48), the tensor
is identically conserved, and is still susceptible to the above contradiction. Below it is shown that radiative corrections associated with the unusual structure of
are capable of eliminating this problem, and that neutrino-related phenomena could be a manifestation of this hitherto ignored radiative component of the energy-momentum tensor, appearing in structural changes of matter.
To compute
one needs to solve, for the radiative component,
, of the scaling field, the second-order PDE (
32) at a fixed
, with the
term on the r.h.s. removed
and b.c.
at spatial infinity. This is a massless wave equation, not too dissimilar to Maxwell’s, and is therefore expected to participate in radiative energy-momentum transfer, but three features set it apart. First, the two terms on the l.h.s. of (
32) enter with the ‘wrong’ relative sign, spoiling gauge covariance. As a result an extra longitudinal mode exists, i.e.,
with
in flat spacetime for simplicity (which in the Maxwell case is a pure gauge), on top of the two transverse modes,
. Those two modes satisfy the Lorentz gauge condition
, hence (
72) becomes Maxwell’s equation in the Lorentz gauge. Second, unlike
,
is only linear in
, an impossibility for a Noether current. Finally, the source of (
72) has a vanishing monopole for a body in equilibrium. In fact, it is proportional to the expression on the r.h.s. of Poynting’s theorem (
30), which is the work done on a radiating system, vanishing for a system in equilibrium. It follows that
generated during the bulk motion of a body in equilibrium involves only longitudinal waves, and those can be derived from a scalar potential:
, satisfying
To model the monopole-less source on the r.h.s. associated with the motion of a point-particle, it is simpler to solve
for an appropriate
of length dimension 3, representing the constant dipole density distribution on the world line. The standard machinery of Liénard-Wiechert potentials can then be used to compute the integrated, retarded/advanced (proper-) rate of energy-momentum flux,
, of
across the boundary,
, of a ball of radius
cut from the
hyper-plane and centered at
, with the nonstandard result
where
,
. In the limit
integral (
74) becomes an instantaneous correction to the Lorentz/geodesic equation which cannot be ignored, being
(also note the absence of the infamous
term in the corresponding electromagnetic/gravitational treatment). Now, even assuming that the body goes “off-shell" at small
, i.e.,
, when
is constant and a fixed mixture of advanced and retarded waves is evaluated along
, the correction to the Lorentz/geodesic equation would be a local one, which would be impossible to reconcile with
being manifestly non-local at small
. Consistency therefore implies that a
-dependent mixture of retarded and advanced waves exists along the world-line of
, ‘tailored’ to the local form of
(energy-momentum conservation would be identically satisfied for any such mixture). Concretely, for on-shell motion,
would amount to a
-dependent suppression of a body’s effective inertial mass, explaining its excessively strong acceleration in a weak force field. It has been noted several times that a spacetime dependent mixture of retarded and advanced solutions is the rule in the case of the
A-field, and there is no apparent reason to suspect that this isn’t so in the case of the scaling field.
Moving to neutrinos. When a system undergoes structural transitions, as in
-decay, the monopole of the source on the r.h.s. of (
72) does not vanish, and its dipole may further rapidly change. Transverse radiation then also plays a role in energy-momentum balance. The radiative component of
is therefore a natural candidate for a ‘classical neutrino field’, whose relation to neutrino phenomena parallels that of the
A-field to photon phenomena. As with photons, it is a particle’s advanced
Z-field converging on it which supplies the energy-momentum necessary to jolt it, conventionally interpreted as the result of being struck by a neutrino. Similarly, hitherto ignored energy-momentum removed from a system undergoing structural changes by its retarded
Z-waves, is conventionally interpreted as the release of neutrinos. As pointed out in
Section 3.3.2 above, the (fractional) spin-
attributed to the neutrino, like the spin-1 of the photon, only labels the statistical description of phenomena involving such jolting of charged particles. As for the alleged nonzero mass of neutrinos contradicting (
72)—
“God is subtle but not malicious" was Einstein’s response to claims that further repetitions of the Michelson-Morley experiment did show a tiny directional dependence of the speed of light. This attitude is adopted here vis-à-vis the neutrino’s mass problem. All direct measurements based on time-of-flight are consistent with the neutrino being massless; the case for a massive neutrino relies entirely on indirect measurements and a speculative extension of the Standard Model.
3.5. Cosmology
Cosmological models are stories physicists entertain themselves with; they can’t truly know what happened billions of years ago, billions of light-years away, based on the meager data collected by telescopes (which covers of the electromagnetic spectrum and is taken from a single point in space). Moreover, in the context of the proposed model, the very ambition implied by the term “cosmology" is at odds with the humility demanded of a physicist, whose entire observable universe could be another physicist’s oven. On the other hand, astronomical observations associated with cosmology also serve as a laboratory for testing ‘terrestrial’ physical theories, e.g., atomic-, nuclear-, quantum-physics, and this would be particularly true in our case, where the large and the small are so intimately interdependent. When the most compelling cosmological story we can devise, fitting those observations, requires contrived adjustments to terrestrial-physics theories, confidence in those, including GR, should be shaken.
Reluctantly, then, a cosmological model is outlined below. Its purpose at this stage is not to challenge CDM in the usual arena of precision measurements, but to demonstrate how the novel ingredients of the proposed formalism could, perhaps, lead to a full-fledged cosmological model free of the aforementioned flaw.
3.5.1. A Newtonian Cosmological Model
As a warm-up exercise, we wish to solve the system (
58), (
51) for a spherical, uniform, expanding cloud of massive particles originating from the scaling center (without loss of generality). The path of a typical particle is described by
where
a constant vector. It is easily verified that the same homogeneous expanding cloud would appear to an observer fixed to any particle, not just the one at the origin. The mass density of the cloud depends on
a via
, retaining its uniformity at any time and scale if creation/annihilation of matter in scale is uniform across space. The gravitational force acting on a particle is given by
(the uniform vacuum energy is ignored as its contribution to the force can only vanish by symmetry) and (
58) gives a single, particle-independent equation for
a
with
etc.
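The particle-independence of this equation rests on the shell theorem for the uniform cloud: the matter exterior to a particle's radius exerts no net force on it, so the attraction is that of the interior mass alone, linear in the particle's position. A minimal numerical check of the theorem (an illustrative setup in units G = surface density = R = 1, not the model's equation (76)):

```python
# Numerical check of the shell theorem underlying the uniform-cloud force:
# a thin spherical shell attracts an exterior point as a point mass, and
# exerts no net force on an interior point. Units: G = surface density = 1.
import math

def shell_force(d, R=1.0, n=100_000):
    """Axial gravitational force at distance d from the center of a thin
    shell of radius R and unit surface density (midpoint rule in the
    polar angle)."""
    total, dth = 0.0, math.pi / n
    for i in range(n):
        th = (i + 0.5) * dth
        num = d - R * math.cos(th)
        den = (d * d + R * R - 2 * d * R * math.cos(th)) ** 1.5
        total += 2 * math.pi * R * R * math.sin(th) * num / den * dth
    return total

inside = shell_force(0.5)    # interior point: force ~ 0
outside = shell_force(2.0)   # exterior point: ~ M/d^2 = 4*pi/4 = pi
print(inside, outside)
```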
Two types of solutions for (
76) which are well behaved at all scales should be distinguished: Bounded and unbounded. In the former
is identically zero at
and a
-dependent ‘big-crunch’ time,
. By our previous remarks, at large scale the coarsening terms—those multiplied by
on the r.h.s. of (
76)—dominate the flow and must almost cancel each other or else
a would rapidly blow up with increasing
. The resulting necessary condition for a regular
on all scales is a
-dependent o.d.e. in time, which is simply the time derivative of the (first) Friedmann equation for non-relativistic matter
The
k above disappears as a result of this derivative, meaning that it resurfaces as a second integration constant of any magnitude—not just
. Denoting
bounded solutions in which mass is conserved in time are therefore described by some flow in
parameter space for which
shrinks to zero for
. For example, as
k in (
77) plays the role of minus twice the total energy of the explosion per unit mass, for a scale independent
,
monotonically increases with increasing
.
Given a solution of (
77) at large enough
one can then integrate (
76) in its stable, small
direction, where the scaling piece becomes important, but due to the
constraint, some parts of a solution remain deep in their coarsening regime. The same is true for unbounded solutions, but in this case there is no
to start from, rendering the task of finding solutions more difficult; instead of b.c.
for
, we have
for some initial time,
, and the large-
t asymptotic
for some
. Note the consistency with
for some constant
C, which is an exact solution for the
-free (
76) (and its only solution not wildly diverging in magnitude at large
t). One exception to the difficulty of finding open solutions (applicable also to closed solutions) is a scaling solution,
, where
is an exact solution of (
77) with
and
. Note that the asymptotic b.c. is automatically satisfied for
. Another is the scale invariant solution of (
76), integrated backwards from
to
, implicitly defining
(integrating forward from
leads to nonphysical solutions).
The Newtonian-cloud model, while mostly pedagogical, nonetheless captures a way—perhaps the only way—in which cosmology is to be viewed within the proposed framework: It does not pertain to The Universe but rather to a universe—an expanding cloud as perceived by a dwarf amidst it. A relative giant, slicing the cloud’s orbit at a much larger , might classify the corresponding section as, e.g., the expanding phase of a Cepheid/red-giant, or a runaway supernova. An even mightier giant may see a decaying radioactive atom. Of course, matter must disappear in such a flow to larger and larger scales—a phenomenon already encountered in the linear case which is further discussed below. The rate (in scale) at which this takes place in the above models must be compatible with our analysis of galaxies, where mass was assumed conserved in scale. This would be true for a small enough global rate, or if, around our native scale, mass annihilation takes place primarily outside galaxies (commencing in a galaxy only after scale flow has compressed it to an object currently not identified as a galaxy).
Suppose now for concreteness that a giant’s section is an expanding star. The dwarf’s entire observable universe would in this case correspond to a small sphere, non-concentrically cut from the star. The hot thermal radiation inside that sphere at
, after flowing with (
16) to
, would be much cooler, much less intense, and much more uniform, except for a small dipole term pointing towards the star’s center, approximately proportional to the star’s temperature gradient at the sphere, multiplied by the sphere’s diameter. Similarly for the matter distribution at
, only in this case the distribution of accumulated matter created during the flow is expected to decrease in uniformity if new matter is created close to existing matter. Thus the distribution of matter at
is proportional to the density at
only when smoothed over a large enough ball, whose radius corresponds to a distance at
much larger than the scale of density fluctuations. This would elegantly explain the so-called dipole problem [
9,
10]—the near perfect alignment of the CMB dipole with the dipole deduced from matter distribution, but with over
discrepancy in magnitude; indeed, the density and temperature inside a star typically have co-linear, inward-pointing gradients which differ in magnitude. Note that a uniform cloud ansatz is inconsistent with the existence of such a dipole discrepancy and should therefore be taken as a convenient approximation only, rendering the entire program of precision cosmology futile. The horizon problem of pre-inflation cosmology is also trivially explained away by such an orbit view of the CMB. Similarly, the tiny but well-resolved deviations from an isotropic CMB (after correcting for the dipole term) might be due to acoustic waves inside the star.
Returning to the scale-flow of interpolating between ‘a universe’ and a star, and recalling that stands for a spacetime phenomenon as represented by a physicist of native scale s, a natural question to ask is: What would this physicist’s lab notes be? A primary anchor facilitating this sort of note-sharing among physicists of different scales is a fixed-point particle, setting both length and mass standard gauges. We can only speculate at this stage what those are, but the fact that the mass of macroscopic matter must be approximately scale invariant—or else rotation curves would not flatten asymptotically—makes atomic nuclei, where most of the mass is concentrated, primary candidates. Note that in the proposed formalism the elementarity of a particle is an ill-defined concept, and the entire program of reductionism must be abandoned. For if zooming into a particle were to ‘reveal its structure’, even a fixed-point would comprise infinitely many copies of itself as part of its attraction basin.
If nuclei approximately retain their size under scale-flow to large , while macroscopic molecular matter shrinks, then some aspects of spacetime physics (at a fixed-scale section) must change. Instinctively, one would attribute the change to an RG flow in the parameter space of spacetime theories, e.g., the Yukawa couplings of the Standard Model of particle physics, primarily that of the electron. However, this explanation runs counter to the view advocated in this paper, that (spacetime) sections should always be viewed in the context of their (scale) orbit; if the proposed model is valid, then the whole of spacetime physics is, at best, a useful approximation with a limited scope. Moreover, an RG flow in parameter space cannot fully capture the complexity involved in such a flow, where, e.g., matter could annihilate in scale (subject to charge conservation); ‘electrons’ inside matter, which in our model simply designate the A-field in between nuclei—the same A-field peaking at the location of nuclei—‘merging’ with those nuclei (electron capture?); atomic lattices, whose size is governed by the electronic Bohr radius , might initially scale, but ultimately change structure. At sufficiently large an entire star or even a galaxy would condense into a fixed-point—perhaps a mundane proton, or some more exotic black-hole-like fixed-point which cannot involve a singularity by definition. Finally, we note that, by definition, the self-representation of that scaled physicist slicing at his native scale s is isomorphic to ours, viz., he reports being made of the same organic molecules as we are made of, which are generically different from those he observes, e.g., in the intergalactic medium.
So either actual physicists (as opposed to hypothetical ones, serving as instruments to explain the mathematical flow of ) do not exist in a continuum of native scales, but only at those (infinitely many) scales at which hydrogen atoms come in one and the same size; or else they do, in which case we, human astronomers, should start looking around us for odd-looking spectra, which could easily be mistaken for Doppler/gravitational shifts.
3.5.2. Relativistic Cosmology
In order to generalize the Newtonian-cloud universe to relativistic velocities, while retaining the properties of no privileged location and statistical homogeneity, it is convenient to transfer the expansion from the paths of the particles to a maximally symmetric metric—a procedure facilitated by the general covariance of the proposed formalism. Formally, this corresponds to an ‘infinite cloud’ which is a good approximation whenever the size of the cloud and the distance of the observer from its edge are both much greater than
and
. Alternatively, the cosmological principle could be postulated as an axiom. For clarity, the spatially flat (
), maximally symmetric space, with metric
is considered first, for which the only non-vanishing Christoffel symbols are
The gravitational part,
, of the scaling field, appropriate for the description of a universe which is electrically neutral on large enough scales, i.e.,
, is given by solutions of (
33) which, for the metric (
78), reads
However, the generally covariant boundary condition (
34) “far away from matter" is not applicable here. Instead,
is required to be compatible with the (maximal) symmetry of space—its Lie derivative along any Killing field of space must vanish.
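The Christoffel symbols quoted for the metric (78) are the standard spatially flat FRW ones and can be verified symbolically. A minimal sketch, assuming the usual form ds² = −dt² + a(t)²(dx² + dy² + dz²), computes them directly from the definition with SymPy:

```python
# Christoffel symbols of ds^2 = -dt^2 + a(t)^2 (dx^2 + dy^2 + dz^2),
# computed directly from the metric (the standard flat-FRW result).
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
a = sp.Function('a')(t)
coords = [t, x, y, z]

g = sp.diag(-1, a**2, a**2, a**2)   # metric of the assumed form (78)
ginv = g.inv()

def Gamma(l, m, n):
    """Gamma^l_{mn} = (1/2) g^{ls} (d_m g_{ns} + d_n g_{ms} - d_s g_{mn})."""
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[l, s]
        * (sp.diff(g[n, s], coords[m]) + sp.diff(g[m, s], coords[n])
           - sp.diff(g[m, n], coords[s]))
        for s in range(4)))

print(Gamma(0, 1, 1))   # a * da/dt
print(Gamma(1, 0, 1))   # (da/dt) / a
```

All other components vanish apart from the spatial counterparts of the two printed above, reproducing the flat-FRW result.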
The general form of
consistent with the metric (
78) is
Spatial scaling is taken care of by the metric, hence the vanishing
. This implies that,
in cosmic coordinates, the size of a gravitationally bound system whose outermost matter is deep in its scaling regime, e.g., a galaxy with a flat r.c., also scales as
a, rather than
in Minkowskian coordinates.
Inserting (
81) into (
80) results in a single equation
Importantly,
is a solution when
a is of either scaling form,
resp. This exposes the fact that, in the generally covariant setting, the scale-direction of giants could be either
(equiv.
) or
, depending on the coordinate system and its associated solution for the scaling field. As soon becomes apparent, compatibility with the Newtonian model selects a negative
—which at any fixed
contains two, free,
dependent integration constants, referred to below—and a
direction of giants.
An important issue which must be addressed before proceeding concerns the ontological status of the energy-momentum tensor. In GR, sourcing the Einstein tensor is a phenomenological device, equally valid when applied to the hot plasma inside a star, or to the ‘cosmic fluid’. In contrast,
and the scaling field from which
is derived, both enter (
24) as fundamental quantities, on equal footing with
. To make progress, this fundamental status must be relaxed, and the following way seems reasonable: The fundamental scaling field is written
, with
the above, coarse grained gravitational part, and
the field inside matter. The space averages of the fundamental
and
(derived from
) are written at
,
. That such a coarse-grained pseudo-tensor, respecting the symmetries of the coarse-grained metric (
78), has the perfect-fluid form can easily be shown.
Plugging
thus defined and (
78) into the metric flow (
24), results in space-space and time-time components given, respectively, by
with
and
p incorporating
and
while the remaining terms are entirely due to
. Another equation which can be extracted from those two, or directly from (
28), in conjunction with (
82), is energy-momentum conservation in time
Only two of the above three equations are independent due to the Bianchi identity and (
82).
Equations (83) and (84) can be combined to

Remembering that paths of co-moving masses can be deduced by analytically continuing solutions of (24), , and solving (61) in the resultant metric (which for the metric (78) gives: , with a constant), we might as well solve (86) directly for . In accordance with one should also change (or ), for in (86) to be the direction of giants. The result is an equation which, for , is quite similar to (76), only with multiplying the coarsening piece (due to the different scale-flows involved) and a dark-energy term resulting from splitting , such that for the scaling piece in (76) is recovered.
With the above modifications in mind, (84) becomes

which is the first Friedmann equation with an extra term mimicking dark energy. Since (88) is satisfied (also) at , reasonably assuming that is on the order of the current baryonic density based on direct ‘count’, , most likely a lower bound, and based on local measurements (validated below), an estimate km is obtained, hence , i.e., the term in (88) mimics dark energy which is currently positive.
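The qualitative effect of such an extra term can be illustrated with a minimal sketch, assuming a generic first Friedmann equation with a matter term plus a constant dark-energy-like term; `H0`, `Omega_m`, and `Omega_de` are hypothetical placeholder values, not the paper's estimates:

```python
import math

# A minimal sketch (parameters hypothetical, not the paper's): the first
# Friedmann equation with a matter term plus a constant term,
#     (da/dt / a)^2 = H0^2 * (Omega_m / a**3 + Omega_de),
# integrated with a forward-Euler step.  A constant positive Omega_de makes
# the expansion decelerate early (matter-dominated) and accelerate late.
H0 = 1.0          # Hubble rate, arbitrary units
Omega_m = 0.3     # matter density parameter (illustrative)
Omega_de = 0.7    # constant dark-energy-like term (illustrative)

def hubble(a):
    return H0 * math.sqrt(Omega_m / a ** 3 + Omega_de)

dt = 1e-4
a = 0.1
adot_diffs = []               # successive changes of the expansion rate da/dt
prev_adot = a * hubble(a)
while a < 2.0:
    adot = a * hubble(a)
    adot_diffs.append(adot - prev_adot)
    prev_adot = adot
    a += adot * dt

early_deceleration = adot_diffs[1] < 0    # matter term dominates at small a
late_acceleration = adot_diffs[-1] > 0    # constant term dominates at large a
```

The crossover from deceleration to acceleration occurs where the matter and constant terms are comparable, mirroring the standard ΛCDM behaviour that the extra term is said to mimic.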
Let us summarize the computational task of finding a solution for the relativistic cosmological model. The single scale-flow equation is (87), for a, whose solutions must be positive and not wildly diverging at large . Equations (88) and (82), with , act as constraints which, for a given a and , couple and at any (fixed- ) section. The propagation of a in scale depends on p which, as in a standard Friedmann model, requires extra physical input regarding the nature of the energy-momentum tensor, e.g., an equation-of-state relating p and . Since both and p represent some large-volume average of , removed of ’s ‘dark’ component, the contribution from inside matter (where ), denoted , can be assumed to be that of non-relativistic (“cold”) matter, i.e., . Outside matter the A-field is nearly a vacuum solution of Maxwell’s equations with an associated traceless , contributing and to the total and p. If we proceed as usual, identifying with the energy of retarded radiation emitted by matter, observations would then imply in the current epoch. However, also incorporates the ZPF, which could potentially even outweigh . The contribution of the ZPF, being an ‘extension’ of matter outside the support of its , although having a distinct dependence, is not an independent component. Properly modeling the combined matter-ZPF fluid, e.g., as interacting fluids or using some exotic equation-of-state, will be attempted elsewhere.
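The two equations of state invoked above can be written explicitly (the subscripts m and A, for the matter interior and the A-field exterior, are labels introduced here): cold matter is pressureless, while tracelessness of the electromagnetic energy-momentum tensor fixes the radiation-like relation:

```latex
p_m = 0, \qquad T^{\mu}{}_{\mu} = 0 \;\Rightarrow\; p_A = \tfrac{1}{3}\,\rho_A .
```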
Returning to the fixed- constraint, (88) and (82), we first note that in the latter describes the motion of a damped harmonic oscillator with a negative spring coefficient and a force term (whose sign and magnitude depend on ). A general negative solution has a single local maximum at (by ). Since at a fixed , (82) is second order in time, only one of its two integration constants is fixed by (88) evaluated at . The second one can then be used to further tune in matching the observed, current acceleration, via

Above, is the CDM cold-matter density estimate based on supernova- and transverse-BAO-distance observations, and is determined by (28) (evaluated at ). Thus, the two integration constants of (82) can conspire to result in an illusion of both a positive cosmological constant and a cold dark matter addition to (even if ).
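Since the coefficients of (82) are not reproduced above, the claimed single-maximum behaviour can only be sketched with hypothetical values; `k`, `gamma`, and the initial state below are illustrative, not the paper's:

```python
# A minimal numerical sketch (coefficients and initial data hypothetical):
# a damped oscillator with a *negative* spring coefficient,
#     x'' = k*x - gamma*x'   with k > 0,
# so the 'spring' repels rather than restores.  A solution that stays
# negative rises to a single local maximum and then runs off to -infinity.
k, gamma = 1.0, 0.5
x, v = -1.0, 1.0           # negative initial position, positive velocity
dt = 1e-3
xs = []
for _ in range(5000):      # integrate up to t = 5 with forward Euler
    xs.append(x)
    acc = k * x - gamma * v  # negative-spring force plus damping
    x += v * dt
    v += acc * dt

# count local maxima: interior points at least as high as both neighbours
n_maxima = sum(
    1 for i in range(1, len(xs) - 1)
    if xs[i - 1] < xs[i] >= xs[i + 1]
)
```

The general solution is a sum of two real exponentials (one growing, one decaying), which is why a negative solution can turn around at most once.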
Moving to the early universe, or star-phase of the explosion, at , the term in (88) switches sign and rapidly decreases with decreasing t, countering the opposite trend in , dramatically slowing the shrinkage of , and likely eliminating the horizon problem plaguing a generic Friedmann model. The precise outcome of such a battle of divergences depends on the details of a solution, but a natural, physically motivated scenario follows from the fact that , and is a solution of the system (82), (88) when the two constants are chosen so that the r.h.s. of (88) vanishes. Namely, the growth of eventually ‘catches up’ with that of , meaning that there is no big bang in the remote past, just a static universe/star. During that epoch, a perturbative analysis of , p, and can be performed. A notable departure from the standard such analysis is the appearance of ‘vacuum waves’, perturbative solutions of (36), with an associated masquerading as dark matter of some sort.
Relating cosmological observations to entails extra steps which are different in the proposed formalism, and are therefore expected to yield different relations. Remarkably, this isn’t so in most cases. Consider, e.g., the redshift. To calculate the redshift of a distant, comoving object at , two adjacent, time-ordered points along its worldline are to be matched with two similar points for earth at . The matching is done by finding two solutions of (61) which are well behaved on all scales, satisfying the light-cone condition, connecting the corresponding points at . For the metric (78) and scaling field (81), the equation for (denoting ) and of each path becomes

subject to the light-cone condition. The two adjacent solutions at , indexed by (earlier) and , trace trajectories , and , , and the redshift is calculated from the equality as . Now, on the two non-overlapping parts of their supports, (almost) satisfy the light-cone condition which, for the highly symmetric metric (78), implies . Assuming that a changes very little over , the difference between the two integrals in (91) coming from those end parts is

which would give the standard expression for the redshift in terms of a. However, there is in addition an contribution from the overlap (rounding the boundary points for clarity, which is legitimate to leading order), as the two solutions of (89), (90) see slightly different Hubble parameters, on the order of , and a slightly different scaling field in (90), . Nonetheless, since vanishes at its endpoints, contribution (93) also vanishes. Note that only the light-cone condition entered the above analysis, rather than the explicit, conjectured (89) and (90). Similarly for the angular diameter distance and the luminosity distance, the latter further requiring exact conservation of , which holds also in our model.
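For reference, the standard FLRW expression recovered by the above analysis is (with t_em and t_obs denoting the emission and observation times):

```latex
1 + z \;=\; \frac{a(t_{\mathrm{obs}})}{a(t_{\mathrm{em}})} .
```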
Finally, the flatness problem of pre-inflation cosmology is elegantly dismissed as follows. First, generalizing the relativistic model to a curved-space FLRW metric is straightforward, and the Friedmann equation (88) receives a addition to its r.h.s. Denoting the ratio between its k and r.h.s. terms , the flatness problem can be stated as the “unrealistic” fine-tuning of to near mathematical zero at early times, needed to bring its current, observed value to zero within measurement uncertainties. For example, if , with encoding the creation/annihilation of matter in scale flow, then

which, at a fixed , grows by many orders of magnitude over the history of the universe. However, in our formalism the universe is not a machine, propagating in time its state at an earlier time, as previously explained in the context of Bell’s theorem; Friedmann’s equation (88) enters the relativistic model as a constraint, not an evolution rule, and a cosmological solution is just what emerges out of the set of all constraints. Moreover, even when seen as an evolution rule, (88) may lead to the following counter-argument: at a fixed time, a reasonable interpolating between a star and a ‘universe’ would counter the growth of a in the direction (i.e., the scale-rate of density growth due to matter creation is greater than the third root of its geometric depletion rate). Thus, unless is fantastically large close to the scale, , at which a giant’s section corresponds to a star—and why should it be?— to within measurement uncertainties is perfectly “realistic”.
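For contrast, the standard fine-tuning arithmetic being dismissed can be sketched in a couple of lines (numbers illustrative, assuming the usual matter-era scaling of the curvature ratio with the scale factor):

```python
# Arithmetic sketch of the standard fine-tuning argument (numbers
# illustrative): in a matter-dominated Friedmann model, rho ~ a**-3, so the
# curvature ratio Omega_k ~ k / (rho * a**2) grows linearly with the scale
# factor a.
a_early = 1e-6            # hypothetical early-epoch scale factor (a = 1 today)
omega_k_today = 0.01      # rough observational bound on curvature today

omega_k_early = omega_k_today * a_early   # required early value (Omega_k ~ a)
growth_factor = omega_k_today / omega_k_early
```

With these numbers the early curvature ratio must be tuned six orders of magnitude below its present bound, which is the "unrealistic" fine-tuning the constraint-based reading of (88) avoids.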