1. Introduction
Light, as a fundamental physical phenomenon, has been at the center of our understanding of the universe for more than three centuries. The nature of light has been constantly re-conceptualized with each new revolution in physics, yet a fundamental question remains open: is light truly a three-dimensional phenomenon, similar to other physical objects in our world, or is its nature fundamentally different?
Modern physics has established that the photon, the quantum of the electromagnetic field, possesses zero rest mass. Experimental constraints place the upper limit on the photon mass at roughly 10⁻⁵⁴ kg, consistent with exact masslessness. However, the masslessness of the photon creates a profound contradiction with the assumption that light phenomena are three-dimensional. A massless particle cannot have finite statistical moments of its distribution, a fundamental mathematical requirement that is inconsistent with the Gaussian character of distribution expected for three-dimensional phenomena.
This contradiction points to the need to revise our basic ideas about the nature of light and space. This paper proposes a radically new view of these fundamental concepts, based on two principles that will be formulated in the next section.
For readers interested in the historical context of the development of ideas about light, space, and time, Appendix A presents a historical overview. It considers the contributions of key figures, including Hendrik Lorentz, Ludvig Lorenz, Augustin-Louis Cauchy, Henri Poincaré, and Albert Einstein, to the shaping of modern views on electromagnetic phenomena and the dimensionality of space and space-time.
The evolution and historical background of the concept of time is a separate, extensive topic, partially addressed in the appendices. It is important to note now that, surprisingly, there is no unified theory of time, clocks, or measurement. The question remains unresolved: there is no definition of, or consensus about, which parameter is used to differentiate the Lagrangians that are at the heart of dynamics. The uncertainty of time manifests itself in other areas of physics as well.
2. Fundamental Principles
To make some progress in understanding the essence of time, it is proposed to replace the direct question "What is time?" with two related and more precise questions.
The first question in different connotations can be phrased as:
"What is the mechanism of synchronization?"
"Why is it as we observe it, and not different?"
"Why is there a need for synchronization at all?"
"Why would we break the U-axiom without synchronization, laws of physics here and there would be different, the experimental method would not be useful, the very concept of universal laws of physics would not happen?"
The essence of the second question in different connotations:
"Why don’t we observe absolute synchronization?"
"What causes desynchronization?"
"How do this chaos (desynchronization) and order (synchronization) balance?"
"Why is it useful for us to introduce the dichotomy of synchronizer and desynchronizer as a concept?"
The sought answers should be as concise, economical, and universal as possible; should have maximum explanatory power and logical consistency (not leading to two or more logically mutually exclusive consequences); and should open new doors for research rather than close them, pushing toward new questions for theoretical and practical inquiry rather than mere prohibitions.
The sought answers should provide a clear zone of falsification and verification.
Without a clear answer to these questions, it is impossible to build a theory of measurements.
Here are this work’s answers to these questions:
Principle I: Electromagnetic phenomena are two-dimensional and follow the Cauchy distribution law.
Principle II: There exists a non-integer variable dimension of spaces.
These principles are sufficient.
3. Theoretical Foundation
3.1. Dimensional Nature of Electromagnetic Phenomena
It is traditionally assumed that space has exactly three dimensions, and all physical phenomena exist in this three-dimensional space. In some interpretations, they speak of 3+1 or 4-dimensional space, where the fourth dimension is time. However, upon closer examination, the dimensionality of various physical phenomena may differ from the usual three dimensions. By dimensionality D, we mean a parameter that determines how physical information scales with distance.
For electromagnetic phenomena, there are serious theoretical grounds to assume that their effective dimensionality is D = 2.0 (exactly). Let us consider the key arguments (a more detailed analysis of numerous arguments for the two-dimensional nature of electromagnetic phenomena is presented in [1]):
3.1.1. Wave Equation and Its Solutions
The wave equation in D-dimensional space has the form:

$$\frac{\partial^2 \psi}{\partial t^2} = c^2\,\nabla_D^2\,\psi,$$

where $\nabla_D^2$ is the Laplacian in D spatial dimensions. This equation demonstrates qualitatively different behavior exactly at D = 2.0, where solutions maintain their shape without geometric dispersion:
At any dimensionality above or below 2.0, waves inevitably distort. At D > 2.0, waves geometrically dissipate as they propagate, with amplitude decreasing as a power of the distance. At D < 2.0, waves experience a form of "anti-dispersion", with amplitude increasing with distance. Only at exactly D = 2.0 do waves maintain perfect coherence and shape: a property observed in electromagnetic waves at astronomical distances.
3.1.2. Green’s Function for the Wave Equation
The Green's function for the wave equation undergoes a critical phase transition exactly at D = 2.0:

$$G(r) \propto \begin{cases} r^{\,2-D}, & D \neq 2.0, \\ \ln r, & D = 2.0. \end{cases}$$

For D > 2.0 this is a power-law decay, while for D < 2.0 it is a power-law growth. The case D = 2.0 represents the exact boundary between these two fundamentally different regimes: the logarithmic potential at exactly D = 2.0 is a critical point in the theory of wave propagation.
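This limiting behavior is easy to check numerically. The following sketch (our illustration, not part of the original analysis) evaluates the normalized radial profile $G_D(r) = (r^{2-D} - 1)/(2 - D)$, whose limit as D → 2 is exactly ln r:

```python
import numpy as np

def radial_green(r, D):
    """Normalized radial Green's function profile in dimension D:
    (r**(2-D) - 1) / (2 - D) for D != 2; its limit at D = 2 is ln(r)."""
    if abs(D - 2.0) < 1e-12:
        return np.log(r)
    return (r ** (2.0 - D) - 1.0) / (2.0 - D)

r = 10.0
for D in (2.5, 2.1, 2.01, 2.0, 1.99, 1.9, 1.5):
    print(f"D = {D:<4}: G({r:g}) = {radial_green(r, D):+.4f}")
# Values lie below ln(10) = 2.3026 for D > 2 (power-law decay regime),
# above it for D < 2 (power-law growth regime), and converge to the
# logarithm from both sides as D -> 2: the critical point described above.
```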
3.1.3. Limitations on Parameter Measurement
A profound, but rarely discussed property of electromagnetic waves is the fundamental limitation on the number of independent parameters that can be simultaneously measured. Despite centuries of experimental work with electromagnetic phenomena, no experiment has ever been able to successfully measure more than two independent parameters of a light wave simultaneously.
This limitation is not technological, but fundamental. For a truly three-dimensional wave, we should be able to extract three independent parameters corresponding to the three spatial dimensions. However, electromagnetic waves consistently behave as if they possess only two degrees of freedom—exactly what we would expect from a fundamentally two-dimensional phenomenon.
3.2. Connection Between Dimension D=2 and the Cauchy Distribution
The connection between exact two-dimensionality (D=2) and the Cauchy distribution is not coincidental but reflects deep mathematical and physical patterns.
For the wave equation in two-dimensional space, the Green's function, describing the response to a point source, has a logarithmic form:

$$G(r) = -\frac{1}{2\pi}\ln\frac{r}{r_0},$$

where $r_0$ is a normalization constant. The gradient of this function, corresponding to the electric field from a point charge in 2D, is proportional to:

$$E(r) \propto \frac{1}{r}.$$

The wave intensity is proportional to the square of the electric field:

$$I(r) \propto E^2(r) \propto \frac{1}{r^2}.$$

This asymptotic behavior I ∝ 1/r² is characteristic of the Cauchy distribution. In a one-dimensional cross-section of a two-dimensional wave (which corresponds to measuring intensity along a line), the intensity distribution will have the form:

$$I(x) \propto \frac{\gamma}{x^2 + \gamma^2},$$

where γ is a scale parameter; this exactly corresponds to the Cauchy distribution.
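The step from I(r) ∝ 1/r² to the profile I(x) can be made explicit. Assuming the measurement line passes at a distance γ from the point source (this geometric reading of the scale parameter γ is our illustrative assumption, not stated in the text), one obtains:

$$E(r) = -\frac{\mathrm{d}G}{\mathrm{d}r} = \frac{1}{2\pi r}, \qquad I(r) \propto E^2(r) \propto \frac{1}{r^2}, \qquad r^2 = x^2 + \gamma^2 \;\Longrightarrow\; I(x) \propto \frac{1}{x^2 + \gamma^2},$$

which is the unnormalized Cauchy (Lorentzian) profile.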
3.2.1. Lorentz Invariance and Uniqueness of the Cauchy Distribution
From the perspective of probability theory, the Cauchy distribution is the only candidate for describing massless fields for several fundamental reasons:
1. Exceptional Lorentz Invariance: The Cauchy distribution is the only probability distribution that preserves its form under Lorentz transformations. Under scaling and shifting (x → ax + b), the Cauchy distribution transforms into a Cauchy distribution with transformed parameters (see the numerical sketch at the end of this subsection). This invariance with respect to fractional linear transformations is the mathematical expression of invariance with respect to Lorentz transformations.
2. Unique Connection with Masslessness: Among all stable distributions, only the Cauchy distribution has infinite moments of all orders. This mathematical property is directly related to the massless nature of the photon—any other distribution having finite moments is incompatible with exact masslessness.
3. Conformal Invariance: Massless quantum fields possess conformal invariance—symmetry with respect to scale transformations that preserve angles. In statistical representation, the Cauchy distribution is the only distribution that preserves conformal invariance.
Thus, the Cauchy distribution is not an arbitrary choice from among many possible candidates, but the only distribution that satisfies the necessary mathematical and physical requirements for describing massless fields within the framework of Lorentz-invariant theory, which has very strong experimental support.
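The closure of the Cauchy family under scaling and shifting, the elementary part of point 1 above, can be verified directly. A minimal sketch (our illustration; the parameter values are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# X ~ Cauchy(loc = x0, scale = gamma)
x0, gamma = 1.0, 2.0
x = stats.cauchy.rvs(loc=x0, scale=gamma, size=200_000, random_state=rng)

# Affine map y = a*x + b: the Cauchy family is closed under it,
# with loc -> a*x0 + b and scale -> |a|*gamma.
a, b = -3.0, 5.0
y = a * x + b

ks = stats.kstest(y, stats.cauchy(loc=a * x0 + b, scale=abs(a) * gamma).cdf)
print(f"KS statistic = {ks.statistic:.4f}, p-value = {ks.pvalue:.3f}")
# A large p-value: the transformed sample is consistent with
# Cauchy(loc = 2.0, scale = 6.0), as the closure property predicts.
```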
3.2.2. Manifestations of the Cauchy Distribution in Quantum Physics
Notably, the Cauchy distribution arises in many resonant phenomena of quantum physics. At the quantum level, the interaction between photons and charged particles has a deeply resonant nature:
1. In quantum electrodynamics (QED), the interaction between light and matter is carried out through the exchange of virtual photons. Mathematically, the photon propagator has the form D(q) ∝ 1/(q² + iε), where ε is an infinitesimal positive quantity that determines the causal structure of the theory. In coordinate representation, this leads to a power-law decay of the potential, characteristic of the Cauchy distribution. This structure of the propagator, with a pole in the complex plane, is a direct mathematical consequence of the masslessness of the photon.
2. Spectral lines of atomic transitions have a natural broadening, the shape of which is described by the Lorentz (Cauchy) distribution. This is a direct consequence of the uncertainty principle and the finite lifetime of excited states.
3. In scattering theory, the resonant cross-section as a function of energy has the form of the Breit-Wigner distribution, which is essentially a Cauchy distribution with a physical interpretation of its parameters (checked numerically below).
This universality indicates that the Cauchy distribution is not just a mathematical convenience, but reflects the fundamental nature of massless quantum fields.
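The identification of the Breit-Wigner curve with the Cauchy density (point 3 above) is an exact algebraic identity, not an approximation; the sketch below (our illustration, with arbitrary resonance parameters) confirms it numerically:

```python
import numpy as np
from scipy import stats

# Breit-Wigner resonance profile with peak E0 and full width Gamma:
#   BW(E) = (Gamma/2) / (pi * ((E - E0)**2 + (Gamma/2)**2))
# Normalized over energy, this is exactly the Cauchy (Lorentz) pdf
# with loc = E0 and scale = Gamma/2.
E0, Gamma = 10.0, 0.5
E = np.linspace(5.0, 15.0, 1001)

bw = (Gamma / 2) / (np.pi * ((E - E0) ** 2 + (Gamma / 2) ** 2))
cauchy_pdf = stats.cauchy.pdf(E, loc=E0, scale=Gamma / 2)

print(np.max(np.abs(bw - cauchy_pdf)))  # ~1e-16: the curves coincide
```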
Thus, if light (more strictly, electromagnetic phenomena; "light" is used here merely for clarity) is indeed a two-dimensional phenomenon, its intensity in the shadow of a thin object should follow the Cauchy distribution (more precisely, the half-Cauchy distribution, since shadow intensity is defined only over the positive region).
3.3. Relationship Between the sinc² Function and the Cauchy Distribution
In classical diffraction theory, the sinc² function is often used to describe the intensity of the diffraction pattern from a rectangular slit in the Fraunhofer approximation. However, this function, convenient for mathematical analysis, can be viewed more as a "mathematical crutch" rather than a reflection of fundamental physical reality.
3.3.1. Origin of the sinc² Function in Diffraction Theory
The sinc² function appears in diffraction theory as a result of the Fourier transform of a rectangular function describing the transmission of light through a rectangular slit:

$$I(\theta) = I_0\,\mathrm{sinc}^2\!\left(\frac{\pi a \sin\theta}{\lambda}\right), \qquad \mathrm{sinc}(u) = \frac{\sin u}{u},$$

where a is the slit width, λ is the light wavelength, and θ is the diffraction angle.
This mathematically elegant solution, however, is based on several idealizations:
Assumption of an ideal plane wave
Perfectly rectangular slit with sharp edges
Far field (Fraunhofer approximation)
Any deviation from these conditions (which is inevitable in reality) leads to deviations from the sinc² profile.
3.3.2. Asymptotic Behavior of sinc² and the Cauchy Distribution
Despite their apparent difference, the sinc² function and the Cauchy distribution have the same asymptotic behavior for large values of the argument. For large x:

$$\mathrm{sinc}^2(x) = \frac{\sin^2 x}{x^2} \sim \frac{1}{x^2} \quad \text{(as an envelope)},$$

which coincides with the asymptotic of the Cauchy distribution:

$$f(x) \sim \frac{\gamma}{\pi x^2}.$$
This coincidence is not accidental and indicates a deeper connection between these functions.
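This shared 1/x² tail can be checked numerically. The sketch below (ours) averages sinc² over a window to smooth its oscillations and compares the scaled tails of both functions:

```python
import numpy as np

gamma = 1.0
sinc2 = lambda x: (np.sin(x) / x) ** 2
cauchy = lambda x: gamma / (np.pi * (x ** 2 + gamma ** 2))

def window_average(f, center, width=50.0, n=20001):
    """Mean of f over [center - width/2, center + width/2]."""
    x = np.linspace(center - width / 2, center + width / 2, n)
    return np.mean(f(x))

for x0 in (1e2, 1e3, 1e4):
    s = window_average(sinc2, x0)
    print(f"x ~ {x0:8.0f}:  x^2 * <sinc^2> = {x0**2 * s:.4f},"
          f"  x^2 * cauchy = {x0**2 * cauchy(x0):.4f}")
# x^2 * <sinc^2> -> 1/2 (the oscillation-averaged envelope), while
# x^2 * cauchy -> gamma/pi ~ 0.3183: both tails decay as 1/x^2.
```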
3.4. Nature of Mass
The concept of the two-dimensional nature of electromagnetic phenomena and the Cauchy distribution allows us to take a fresh look at the fundamental nature of mass. Traditionally, mass is viewed as an inherent property of matter, but upon deeper analysis, surprising patterns related to dimensionality are revealed.
3.4.1. Absence of Mass at D=2
Remarkably, at an effective space dimension of exactly D=2.0, mass as a phenomenon simply cannot exist. This is not a mere coincidence, but a consequence of deep mathematical and physical regularities:
1. In two-dimensional space, the Cauchy distribution is a natural statistical description, which does not have finite moments—a property directly related to masslessness.
2. For space with D=2.0, the Green’s function acquires a logarithmic character, creating a critical point at which massive solutions are impossible.
3. Only when deviating from D=2.0 (both upward and downward) does the possibility of massive particles and fields arise.
This fundamental feature explains why the electromagnetic field (with effective dimensionality D=2.0) is strictly massless—not because of some random properties, but due to the impossibility of the very concept of mass in a two-dimensional context.
3.4.2. Interpretation of the Relations E = mc² and E = ℏω
The classical relations between energy, mass, and frequency acquire new meaning in the context of variable dimension theory:
1. The relation E = mc² connects energy with mass through the square of the speed of light. The quadratic nature of this dependence is not accidental: c² indicates the fundamental two-dimensionality of the synchronization process carried out by light. In fact, this expression can be viewed as a measure of the energy required to synchronize a massive object with a two-dimensional electromagnetic field.
2. The relation E = ℏω connects energy with angular frequency through Planck's constant. In the context of our theory, ω represents intensity, speed (without temporal context), or a measure of synchronization; it is also true that the faster synchronization occurs, the higher the energy. Planck's constant ℏ acts as a fundamental measure of the minimally distinguishable discrepancy between two-dimensional and three-dimensional descriptions, fixing the threshold value of the minimally detectable informational misalignment, which can be perceived as discretization.
These two formulas can be combined to obtain the relation ℏω = mc², which can be rewritten as:

$$m = \frac{\hbar\omega}{c^2}.$$
In this expression, mass appears not as a fundamental property, but as a measure of informational misalignment between two-dimensional (electromagnetic) and non-two-dimensional (material) aspects of reality, normalized by the square of the speed of light.
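For scale, a worked example (our illustration, taking the relation m = ℏω/c² at face value): for green light with wavelength 500 nm the formula gives about 4.4 × 10⁻³⁶ kg, a number that, in the reading above, quantifies informational misalignment rather than a literal rest mass.

```python
import math

hbar = 1.054_571_817e-34   # J*s (CODATA value)
c = 2.997_924_58e8         # m/s (exact)

lam = 500e-9                     # wavelength of green light, m
omega = 2 * math.pi * c / lam    # angular frequency, rad/s
m = hbar * omega / c ** 2        # the text's m = hbar*omega / c^2

print(f"omega = {omega:.3e} rad/s")  # ~3.77e15 rad/s
print(f"m     = {m:.3e} kg")         # ~4.4e-36 kg
```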
3.4.3. Origin of Mass as a Dimensional Effect
Combining the ideas presented above, we arrive at a new interpretation of the nature of mass:
1. Mass arises exclusively as a dimensional effect—the result of interaction between spaces of different effective dimensionality.
2. For phenomena with effective dimensionality exactly D=2.0 (such as the electromagnetic field), mass is impossible for fundamental mathematical reasons.
3. For phenomena with effective dimensionality D ≠ 2.0, mass is a measure of informational misalignment with the two-dimensional structure of the space of electromagnetic interactions.
4. Planck’s constant ℏ fixes the minimum value of this dimensional discrepancy that can be registered in an experiment.
This concept radically changes our understanding of mass, transforming it from a fundamental property of matter into an emergent phenomenon arising at the boundary between spaces of different dimensionality.
4. Zones of Falsification and Verification
One of the possible zones of verification and falsification, experimentally accessible now and, importantly, inexpensive, is the detailed examination of the shadow cast by a super-thin object.
The essence of the question can be easily represented as a reverse slit experiment (instead of studying light passing through a slit, we study the shadow from a single thin object). According to the proposed theory, the spatial distribution of light intensity in the shadow region should demonstrate a slower decay, characteristic of the Cauchy distribution (with "heavy tails" decaying as 1/x²), than what is predicted by the standard diffraction model with the sinc² function. Although asymptotically, at large distances, the sinc² function also decays as 1/x², the detailed structure of the transition from shadow to light and the behavior in the intermediate region should be better described by a pure Cauchy distribution (or half-Cauchy for certain experimental geometries).
For conducting such an experiment, it may be preferable to use X-ray radiation instead of visible light, as this will allow working with objects of smaller sizes and achieve better spatial resolution. Modern laboratories equipped with precision detectors with high dynamic range are fully capable of implementing such an experiment and reliably distinguishing these subtle features of intensity distribution.
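To illustrate how such a comparison might proceed, here is a minimal analysis sketch. It is entirely hypothetical: the model functions, parameter values, and synthetic stand-in data are ours, not a protocol from this work; real detector readings would replace the synthetic array.

```python
import numpy as np
from scipy.optimize import curve_fit

def half_cauchy(x, A, gamma):
    """Cauchy-family shadow profile proposed by the theory (x >= 0)."""
    return A * gamma ** 2 / (x ** 2 + gamma ** 2)

def sinc2(x, A, k):
    """Standard Fraunhofer sinc^2 envelope; np.sinc(t) = sin(pi t)/(pi t)."""
    return A * np.sinc(k * x / np.pi) ** 2

x = np.linspace(0.1, 50.0, 500)   # position across the shadow, arb. units
rng = np.random.default_rng(1)
data = half_cauchy(x, 1.0, 2.0) * rng.normal(1.0, 0.02, x.size)  # stand-in

for name, model, p0 in [("half-Cauchy", half_cauchy, (1.0, 1.0)),
                        ("sinc^2", sinc2, (1.0, 1.0))]:
    popt, _ = curve_fit(model, x, data, p0=p0, maxfev=10_000)
    rss = np.sum((data - model(x, *popt)) ** 2)
    print(f"{name:12s} residual sum of squares: {rss:.3e}")
# With real data, the lower-residual model (and the tail behavior of the
# residuals) would discriminate between the two hypotheses.
```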
It is separately important to note that an actual observation of a form different from the Cauchy family, with a high degree of statistical significance and reliable exclusion of external noise, would reveal a huge tension in modern physics and pose a crucial question: "Why do we record the phenomenon of Lorentz invariance with high reliability in a host of independent experiments?" In this sense, even if reality yields a non-Cauchy form (which would seriously undermine the theory of this work), the experiment is win-win: a negative result would also be fundamentally useful, and the direct costs of asking reality this question are relatively small.
4.1. Comparison with Existing Experimental Data on Diffraction
Our proposed experiment should be considered in the context of existing high-precision measurements of diffraction patterns on thin obstacles. In the last 15 years, a number of significant experiments have been conducted that indirectly confirm the heavy-tailed character of light intensity distribution.
4.1.1. Experiments with Diffraction on a Single Edge
Experiments on diffraction on a half-plane (single edge) have allowed high-precision measurements of the light intensity profile behind an obstacle. For example, Ganci's studies (2010) [2] were aimed at verifying Sommerfeld's rigorous solution for diffraction on a half-plane. These experiments confirmed the theoretical predictions, including the phase shift of the wave diffracted at the edge, which is a key property of the boundary diffraction wave theory.
Recently, Mishra and co-authors (2019) [3] published a study in Scientific Reports in which they used the built-in edge of a photodetector as a diffracting aperture for mapping fringe intensity. They observed a clear Fresnel diffraction pattern from the edge and even noted subtle effects (such as alternating fringe amplitudes due to slight edge curvature) in excellent agreement with wave theory.
These edge diffraction experiments provide quantitative data on intensity profiles across several orders of magnitude, and their results are consistent with classical models (for example, Fresnel integrals), confirming the nature of intensity distribution in the "tails" far from the geometric shadow.
4.1.2. Diffraction on a Thin Wire
Thin wires (or fibers) create diffraction patterns similar to those from a single slit (by Babinet's principle). A notable high-precision study was conducted by Ganci (2005) [4], who investigated Fraunhofer diffraction on a thin wire both theoretically and experimentally. This work measured the intensity profile behind a stretched wire using a laser and analyzed it statistically.
Importantly, it showed that assuming an ideal plane wave illumination produces a characteristic intensity profile with pronounced heavy tails (side lobes), but real deviations (such as a Gaussian laser beam profile) can cause systematic differences from the ideal pattern. In fact, Ganci demonstrated that naive application of Babinet’s principle can be erroneous if the incident beam is not perfectly collimated, leading to measured intensity distributions that differ in the far "tails".
This study provided intensity data spanning many diffraction orders and performed a careful statistical comparison with theory. The heavy-tailed nature of ideal wire diffraction (power-law decay of fringe intensity) was confirmed, while simultaneously highlighting how a Gaussian incident beam leads to faster decay in the wings than the ideal behavior.
4.1.3. Measurements with High Dynamic Range
To directly measure diffraction intensity over a range of 5-6 orders of magnitude, researchers have used high dynamic range detectors and multi-exposure methods. A striking example is the work of Shcherbakov et al. (2020) [5], who measured the Fraunhofer diffraction pattern up to the 16th diffraction fringe using a specialized LiF photoluminescent detector.
In their experiment at a synchrotron beamline, a 5 μm wide aperture (an approximation to a slit) was illuminated with soft X-rays, and the diffraction image was recorded with extremely high sensitivity, reaching a limiting dynamic range of about seven orders of magnitude in intensity. They were not only able to detect fringes extremely far from the center, but also to quantify the intensity decay: the dose in the central maximum exceeded that at the 16th fringe by about 7 orders of magnitude. The fringe spacing and intensity statistics were analyzed, confirming the expected envelope even at these extreme angles.
4.1.4. Analysis of Intensity Distribution: Gaussian and Heavy-Tailed Models
Several studies explicitly compare observed diffraction intensity profiles with various statistical distribution models (Gaussian and heavy-tailed). Typically, diffraction from apertures with sharp edges creates intensity distributions with heavy tails, while an aperture with a Gaussian profile creates a Gaussian decay with minor side lobes. This was highlighted in Ganci’s experiment with a thin wire: with plane-wave illumination, the cross-sectional intensity profile follows a heavy-tailed pattern (formally having long tails), while a Gaussian incident beam "softens" the edges and makes the wings closer to Gaussian decay.
Researchers have applied heavy-tailed probability distributions to model intensity values in diffraction patterns. For example, Alam (2025) [6] analyzed X-ray powder diffraction intensity datasets using distributions from the Cauchy family, demonstrating a superior fit for strong outliers in intensity. By treating intensity fluctuations as a heavy-tailed process, this work covered the statistical spread from the brightest Bragg peaks to the weak background, across many orders of magnitude. The study showed that half-Cauchy or log-Cauchy type distributions can model the intensity histogram much better than a Gaussian, which would significantly underestimate the frequency of large deviations.
5. Conclusion
5.1. Summary of Main Results and Their Significance
This paper presents two fundamental principles with revolutionary potential for understanding light, space, and time:
1. Electromagnetic phenomena are two-dimensional and follow the Cauchy distribution law.
2. There exists a non-integer variable dimension of spaces.
These principles form the basis for a new approach to understanding physical reality. The spatial dimension D=2.0 represents a special critical point where waves maintain coherence without geometric dispersion, the Green’s function undergoes a phase transition from power-law decay to logarithmic dependence, and the existence of mass becomes fundamentally impossible. These mathematical features precisely correspond to the observed properties of the electromagnetic field—masslessness, preservation of coherence over cosmological distances, and the universality of the Cauchy distribution in resonant phenomena.
The proposed "reverse slit experiment" will directly test the hypothesis about the distribution of light intensity in the shadow of a thin object. If it is confirmed that this distribution follows the Cauchy law, rather than the sinc² function (as predicted by standard diffraction theory), this will provide direct evidence for the special status of the Cauchy distribution for electromagnetic phenomena and, consequently, their two-dimensional nature.
The actual observation of a form different from the Cauchy family, with high statistical significance and reliable exclusion of external noise, will uncover a huge tension in modern physics and pose the fundamental question: "Why is the phenomenon of Lorentz invariance recorded with high reliability in multiple independent experiments?" In this sense, even if reality gives a non-Cauchy form (which would seriously undermine the presented theory), the experiment remains win-win, as a negative result would be equally fundamentally useful for physics, exposing deep contradictions in the modern understanding of the nature of light and interactions.
5.2. Ultraviolet Catastrophe and the Origin of Quantum Theory
The historical "ultraviolet catastrophe," which became a crisis of classical physics at the end of the 19th century following experiments with blackbody (cavity furnace) radiation, acquires a natural explanation within the framework of the proposed theory. The Cauchy distribution, characterizing two-dimensional electromagnetic phenomena, fundamentally does not have finite statistical moments of higher orders, which directly explains the divergence of energy at high frequencies.
Quantum physics in this concept is not a separate field with its unique laws, but naturally arises in spaces with dimensionality D < 2.0. When the effective dimensionality decreases below the critical boundary D=2.0, the statistical properties of distributions change radically, creating conditions for the emergence of quantum effects. These effects manifest as projections from lower-dimensional spaces (D < 2.0) into the three-dimensional world of observation, which explains their seeming paradoxical nature when described in terms of three-dimensional space.
5.3. Nature of Mass as a Dimensional Effect
The presented concept reveals a new understanding of the nature of mass. At the point D=2.0 (electromagnetic field), mass is fundamentally impossible due to the fundamental mathematical properties of two-dimensional spaces and the Cauchy distribution. When deviating from this critical dimensionality (both upward and downward), the possibility of massive states arises.
Mass in this interpretation appears not as a fundamental property of matter, but as a measure of informational misalignment between two-dimensional electromagnetic and non-two-dimensional material aspects of reality, normalized by the square of the speed of light. This explains the famous formula E = mc² as an expression of the energy needed to synchronize a massive object with a two-dimensional electromagnetic field.
This approach to understanding mass allows explaining the observed spectrum of elementary particle masses without the need to postulate a Higgs mechanism, presenting mass as a natural consequence of the dimensional properties of spaces in which various particles exist.
5.4. New Interpretation of Cosmological Redshift
One of the revolutionary implications of this theory concerns the interpretation of cosmological redshift. Instead of universe expansion, a fundamentally different explanation is proposed: redshift could be the result of light passing through areas with different effective dimensionality.
This interpretation represents a modern version of the "tired light" hypothesis, but with a specific physical mechanism for weakening photon energy. When light (D=2.0) passes through areas with a different effective dimensionality, the energy of photons is weakened in proportion to the dimensional difference, which is observed as redshift.
This explanation of redshift is consistent with the observed "redshift-distance" relationship, while completely eliminating the need for the hypothesis of an expanding universe with an initial singularity. This approach removes fundamental conceptual problems associated with the beginning and evolution of the universe, offering a model of a static universe with dimensional gradients.
5.5. Hypothesis of Grand Unification at High Energies
The theory of variable dimension of spaces opens a new path to the Grand Unification of fundamental interactions. At high energies, according to this concept, the effective dimensionality of all interactions should tend toward D=2.0—the point at which electromagnetic interaction naturally exists.
This prediction means that at sufficiently high energies, there should be a unification of all fundamental forces of nature not through the introduction of additional symmetries or particles, but through the natural convergence of their effective dimensionalities to the critical point D=2.0. In this regime, all interactions should exhibit properties characteristic of light—masslessness, universality of interaction force, and optimal information transfer.
This approach to Grand Unification does not require exotic additional dimensions or supersymmetric particles, offering a more elegant solution based on a single principle of dimensional flow.
5.6. Historical Perspective and Paradigm Transformation
From a historical perspective, the proposed experiment can be viewed as a natural stage in the evolution of ideas about the nature of light:
1. Newton’s Corpuscular Theory (17-18th centuries) viewed light as a stream of particles moving in straight lines.
2. Young and Fresnel’s Wave Theory (early 19th century) established the wave nature of light through observation of interference and diffraction.
3. Maxwell’s Electromagnetic Theory (second half of 19th century) unified electricity, magnetism, and optics.
4. Planck and Einstein’s Quantum Theory of Light (early 20th century) introduced the concept of light as a stream of energy quanta—photons.
5. Quantum Electrodynamics by Dirac, Feynman, and others (mid-20th century) created a theory of interaction between light and matter at the quantum level.
6. The Proposed Reverse Slit Experiment potentially establishes the fundamental dimensionality of electromagnetic phenomena and the Cauchy statistical distribution as their inherent property.
It is interesting to note that many historical debates about the nature of light—wave or particle, local or non-local phenomenon—may find resolution in the dimensional approach, where these seeming contradictions are explained as different aspects of the same phenomenon, perceived through projection from a space of different dimensionality.
5.7. Call for a Radical Rethinking of Our Ideas about the Nature of Light
The results of the proposed experiment, regardless of which distribution is confirmed—Cauchy or sinc² (default model)—will require a radical rethinking of ideas about the nature of light:
1. If the Cauchy distribution is confirmed, this will be direct evidence of the special statistical character of electromagnetic phenomena, consistent with their masslessness and exact Lorentz invariance. This will require a revision of many aspects of quantum mechanics, wave-particle duality, and possibly even the foundations of energy discreteness.
2. If the sinc² distribution is confirmed, a profound paradox will arise, requiring explanation: how can massless phenomena have characteristics that contradict their massless nature? This will create serious tension between experimentally verified Lorentz invariance and the observed spatial distribution of light.
In any case, it is necessary to overcome the conceptual inertia that automatically assumes the three-dimensionality of all physical phenomena. Perhaps different physical interactions have different effective dimensionality, and this is the key to their unification at a more fundamental level.
It is also important to note that a practical experiment, and a view through the prism of such principles, will in any case tell us something new about the essence of time. It must also be recognized that there is enormous inertia and an unspoken ban on investigating the essence of time.
Scientific progress is achieved not only through the accumulation of facts but also through bold conceptual breakthroughs that force us to rethink fundamental assumptions. The proposed experiment and the underlying theory of dimensional flow represent precisely such a potential breakthrough.
Affiliation and Acknowledgments
The author has no formal affiliation with scientific or educational institutions. This work was conducted independently, without external funding or institutional support.
I express my deep gratitude to Anna for her unwavering support, patience, and encouragement throughout the development of this research.
I would like to express special appreciation to the memory of my grandfather, Vasily, a physicist-thermodynamicist, who instilled in me an inexhaustible curiosity and taught me to ask fundamental questions about the nature of reality. His influence is directly reflected in my pursuit of new approaches to understanding the basic principles of the physical world.
Additionally, I express my appreciation for some non-public conversations, questions, and critiques, parts of which have been transformed into appendices and thought experiments in this work.
Appendix A Historical Perspective: From Lorentz and Lorenz to Modernity
The history of the development of our ideas about the nature of light, space, and time contains amazing parallels and unexpected connections between ideas from different eras. This appendix presents a historical overview of key figures and concepts that have led to modern views on the dimensionality of space and the statistical nature of electromagnetic phenomena.
Appendix A.1. Lorentz and Lorenz: Different Paths to the Nature of Light
The history of light physics is marked by the contributions of two outstanding scientists with nearly identical surnames, Hendrik Lorentz and Ludvig Lorenz, whose works laid the foundation of modern electrodynamics and optics but contained important elements that did not receive proper development in subsequent decades.
Appendix A.1.1. Hendrik Antoon Lorentz (1853-1928): Transformations and Ether
Hendrik Lorentz, a Dutch physicist and Nobel laureate of 1902, entered history primarily as the author of the famous transformations that became the mathematical basis of special relativity theory. However, his view of the transformations themselves differed substantially from the subsequent Einsteinian interpretation.
For Lorentz, his transformations represented not a fundamental property of space-time, but a mathematical description of the mechanism of interaction of moving bodies with the ether. In his "Theory of Electrons" (1904-1905), he developed a detailed model explaining the contraction of the length of moving bodies and the slowing down of processes occurring in them through physical changes caused by movement through the ether.
Notably, even after the wide recognition of Einstein’s special theory of relativity, Lorentz did not abandon the concept of ether, considering it a necessary substrate for the propagation of electromagnetic waves. In his 1913 lecture, he said: "In my opinion, there is still room for the ether, even if we accept Einstein’s theory."
Appendix A.1.2. Ludvig Valentin Lorenz (1829-1891): Electromagnetic Theory of Light and the Lorentz-Lorenz Formula
Ludvig Lorenz, a Danish physicist and mathematician, is significantly less known to the general public, although his contribution to electromagnetic theory was no less significant. Independently of Maxwell, he developed an electromagnetic theory of light, using a bold mathematical approach based on retarded potentials.
His main achievement is the Lorentz-Lorenz formula (discovered independently by Hendrik Lorentz), connecting the refractive index of a medium with its polarizability:

$$\frac{n^2 - 1}{n^2 + 2} = \frac{4\pi}{3}\,N\alpha,$$

where n is the refractive index, N is the number of molecules per unit volume, and α is the polarizability.
This formula assumes the existence of local fields within the medium that differ from the macroscopic electromagnetic field. Ludvig Lorenz understood that the interaction of light with matter occurs at the level of atomic and molecular oscillators, which anticipated the quantum theory of light-matter interaction.
Especially important is that Ludvig Lorenz treated the medium as a discrete ensemble of oscillators, unlike Maxwell's continuum models. This anticipated modern ideas about the quantum nature of light-matter interaction.
Appendix A.1.3. Commonality and Differences in the Approaches of Lorentz and Lorenz
The works of both scientists are united by a deep understanding of the need for a medium or structure for the propagation of electromagnetic waves, although they approached this question from different angles. Hendrik Lorentz insisted on the existence of the ether as a physical medium, while Ludvig Lorenz worked with mathematical models of discrete oscillators.
It is noteworthy that both scientists intuitively strove toward a description of the spatial structure in which electromagnetic radiation propagates, which can be viewed as a harbinger of modern ideas about the specific dimensionality of electromagnetic phenomena.
Appendix A.2. Augustin-Louis Cauchy (1789-1857): From Analysis of Infinitesimals to Distributions with “Heavy Tails”
Augustin-Louis Cauchy, an outstanding French mathematician, is known for his fundamental work in mathematical analysis, theory of differential equations, and theory of complex functions. However, his contribution to the development of statistical theory and, indirectly, to the understanding of the nature of wave processes, is often underestimated.
Appendix A.2.1. Discovery and Properties of the Cauchy Distribution
Cauchy discovered the distribution named after him while studying the limiting behavior of certain integrals. The Cauchy distribution, expressed by the formula:

$$f(x;\,x_0,\gamma) = \frac{1}{\pi}\,\frac{\gamma}{(x - x_0)^2 + \gamma^2},$$

where $x_0$ is the location parameter and γ is the scale parameter, possesses several unique properties:
It does not have finite statistical moments (no mean value, variance, or higher-order moments; see the numerical demonstration at the end of this subsection)
It is a stable distribution—the sum of independent random variables with a Cauchy distribution also has a Cauchy distribution
It has "heavy tails," decaying as 1/x² for large values of the argument
Cauchy viewed this distribution as a mathematical anomaly, contradicting intuitive ideas about probability distributions. He could not foresee that the distribution he discovered would become the key to understanding the nature of massless fields in 20th-21st century physics.
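The absence of a finite mean, the first property listed above, is easy to demonstrate numerically; in the sketch below (our illustration), running means of Gaussian samples settle down while Cauchy running means never do:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 1_000_000
steps = np.arange(1, n + 1)

samples = {
    "normal": rng.normal(size=n),                          # finite mean: LLN applies
    "cauchy": stats.cauchy.rvs(size=n, random_state=rng),  # no mean exists
}

for name, draws in samples.items():
    running_mean = np.cumsum(draws) / steps
    checkpoints = [f"{running_mean[k]:+.3f}" for k in (999, 99_999, 999_999)]
    print(name, checkpoints)
# The normal running mean tends to 0; the Cauchy one keeps jumping,
# because the average of n Cauchy variables is again Cauchy at full scale.
```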
Appendix A.2.2. Connection with Resonant Phenomena and Wave Processes
Already in the 19th century, it was established that the Cauchy distribution (also known as the Lorentz distribution in physics) describes the shape of resonance curves in oscillatory systems. However, the deep connection between this distribution and wave processes in spaces of different dimensionality was realized much later.
Appendix A.3. Henri Poincaré (1854-1912): Conventionalism and Transformation Groups
Henri Poincaré, an outstanding French mathematician, physicist, and philosopher of science, played a fundamental role in shaping both the mathematical apparatus and the philosophical foundations of modern physics.
Appendix A.3.1. Principle of Relativity and Lorentz Transformations
Poincaré was the first to formulate the principle of relativity in its modern understanding and obtained the complete Lorentz transformations before Einstein’s publication. In his work "On the Dynamics of the Electron" (1905), he showed that these transformations form a group, which he called the "Lorentz group."
Critically important, Poincaré viewed the mathematical group structure of transformations as defining the physical invariance of the laws of nature. This approach anticipated the modern understanding of the fundamental role of symmetries in physics.
Appendix A.3.2. Philosophical Conventionalism and the Geometry of Space
Poincaré adhered to a philosophical position known as conventionalism, according to which the choice of geometry for describing physical space is a matter of convention (agreement), not an empirical discovery.
In his book "Science and Hypothesis" (1902), he wrote: "One geometry cannot be more true than another; it can only be more convenient." This position suggests that the effective dimensionality of space for various physical phenomena may differ depending on the context, which is surprisingly consonant with modern ideas about the variable dimensionality of various fundamental interactions.
Appendix A.3.3. Poincaré and the Problem of the Scale Factor
Interestingly, it was Poincaré who set the scale coefficient in the Lorentz transformations equal to unity, guided by the mathematical requirement of group structure. This decision, mathematically elegant, possibly missed the physical depth of the full form of the transformations, as later argued by Verkhovsky.
Appendix A.4. Albert Einstein (1879-1955): Relativization of Time and Geometrization of Gravity
Albert Einstein’s contribution to the development of modern physics is difficult to overestimate. His revolutionary theories fundamentally changed our understanding of space, time, and gravity.
Appendix A.4.1. Special Theory of Relativity: Time as the Fourth Dimension
In his famous 1905 paper "On the Electrodynamics of Moving Bodies," Einstein made a revolutionary step, reconceptualizing the notion of time as a fourth dimension, equal to spatial coordinates. This reconceptualization led to the formation of the concept of four-dimensional space-time.
Einstein’s approach differed from Lorentz’s approach by rejecting the ether and postulating the fundamental nature of Lorentz transformations as a reflection of the properties of space-time, rather than the properties of material objects moving through the ether.
However, by accepting time as the fourth dimension, Einstein implicitly postulated a special role for dimensionality D=4 (three-dimensional space plus time) for all physical phenomena, which may not be entirely correct for electromagnetic phenomena if they indeed have a fundamental dimensionality of D=2.
Appendix A.4.2. General Theory of Relativity: Geometrization of Gravity
In the general theory of relativity (1915-1916), Einstein took an even more radical step, interpreting gravity as a manifestation of the curvature of four-dimensional space-time. This led to the replacement of the concept of force with a geometric concept—geodesic lines in curved space-time.
Einstein’s geometric approach showed how physical phenomena can be reconceptualized through the geometric properties of space with special dimensionality and structure. This anticipated modern attempts to geometrize all fundamental interactions, including electromagnetism, through the concept of effective dimensionality.
Appendix A.5. Hermann Minkowski (1864-1909): Space-Time and Light Cone
Hermann Minkowski, a German mathematician and former teacher of Einstein, created an elegant geometric formulation of special relativity theory by introducing the concept of unified space-time—"world" (Welt).
Appendix A.5.1. Minkowski Space and Invariant Interval
In his famous 1908 lecture "Space and Time," Minkowski presented a four-dimensional space with the metric:

$$ds^2 = c^2\,dt^2 - dx^2 - dy^2 - dz^2,$$

where ds is the invariant interval between events.
This metric defines the structure of Minkowski space, in which Lorentz transformations represent rotations in four-dimensional space-time. Minkowski showed that all the laws of special relativity theory can be elegantly expressed through this four-dimensional geometry.
Appendix A.5.2. Light Cone and Special Role of Light
Minkowski’s particularly important contribution was the concept of the light cone—a geometric structure that defines the causal structure of space-time. An event on the light cone of a certain point corresponds to a ray of light passing through that point.
Minkowski was the first to clearly realize that light plays a special, fundamental role in the structure of space-time. He wrote: "The entity that we now call ’ether’ may in the future be perceived as special states in space."
This intuition of Minkowski about the fundamental connection between light and the geometric structure of space-time anticipates modern ideas about light as a phenomenon defining the specific structure of space.
Appendix A.6. Gregorio Ricci-Curbastro (1853-1925): Tensor Analysis and Differential Geometry
Gregorio Ricci-Curbastro, an Italian mathematician, developed tensor calculus—the mathematical apparatus that became the basis of general relativity theory and modern differential geometry.
Appendix A.6.1. Tensor Analysis and Absolute Differential Calculus
Ricci developed what he called "absolute differential calculus"—a systematic theory of tensors that allowed formulating physical laws in a form invariant with respect to arbitrary coordinate transformations.
This work, published jointly with his student Tullio Levi-Civita in 1900, laid the mathematical foundation for the subsequent development of general relativity theory and other geometric theories of physics.
Appendix A.6.2. Spaces of Variable Curvature and Dimensionality
Ricci’s tensor analysis is naturally applicable to spaces of arbitrary dimensionality and curvature. This universality of the tool opened the way for exploring physical theories in spaces of various dimensionality and structure.
In the modern context, Ricci’s work can be viewed as creating a mathematical apparatus allowing correct formulation of physical laws in spaces of variable dimensionality, which is critically important for understanding the effective dimensionality of fundamental interactions.
Appendix A.7. Dimensionality and Its Perception in the History of Physics
The history of the development of ideas about dimensionality in physics represents an amazing evolution from intuitive three-dimensional space to a multitude of spaces of variable dimensionality and structure.
Appendix A.7.1. From Euclidean Three-Dimensional Space to n-Dimensional Manifolds
Euclidean geometry, based on Euclid’s axioms, silently assumed the three-dimensionality of physical space. This representation dominated science until the 19th century.
The development of non-Euclidean geometries (Lobachevsky, Bolyai, Riemann) in the 19th century opened the possibility of mathematical description of spaces with different curvature. In parallel, the concept of n-dimensional space developed in the works of Grassmann, Cayley, and others.
These mathematical developments were initially perceived as abstract constructions, having no direct relation to physical reality. However, they created the necessary conceptual and mathematical foundation for subsequent revolutions in physics.
Appendix A.7.2. Dimensionality in Quantum Field Theory and String Theory
The development of quantum field theory in the mid-20th century led to the realization of the importance of dimensional analysis in physics. Concepts of critical dimensionality, dimensional regularization, and anomalous dimensionality emerged.
In the 1970s-80s, string theory introduced the idea of microscopic dimensions curled up to Planck scales (compactification). According to this theory, our world could have 10, 11, or 26 dimensions, most of which are not observable due to their compactness.
These developments prepared the ground for the modern understanding of effective dimensionality as a dynamic parameter depending on the scale of observation and the type of interaction.
Appendix A.7.3. Fractal Dimensionality and Dimensional Flow
The concept of fractal (fractional) dimensionality, introduced by Benoit Mandelbrot in the 1970s, revolutionized our understanding of dimensionality, showing that it could be a non-integer number.
In recent decades, in some approaches to quantum gravity (causal dynamical triangulation, asymptotic safety), the concept of dimensional flow has emerged—the effective dimensionality of space-time can change with the scale of energy or distance.
These modern developments naturally lead to the hypothesis that different fundamental interactions may have different effective dimensionality, which in the case of electromagnetism could be exactly D=2.
Appendix A.8. Unfinished Revolution: Missed Opportunities in the History of Physics
Looking back at the history of the development of ideas about light, space, and time, one can notice several critical moments where scientific thought could have gone in a different direction, possibly leading more directly to the modern understanding of the two-dimensional nature of electromagnetic phenomena and the variable dimensionality of space.
Appendix A.8.1. Lorentz’s Ether as Space of Specific Structure
Hendrik Lorentz never abandoned the concept of ether, even after his recognition of the special theory of relativity. His intuitive conviction of the necessity of a special medium for the propagation of light can be viewed as a premonition that electromagnetic phenomena require a space of special structure, different from ordinary three-dimensional space of matter.
If this intuition had been developed in the direction of studying the effective dimensionality of the "ether," perhaps the two-dimensional nature of electromagnetic phenomena would have been discovered much earlier.
Appendix A.8.2. Poincaré’s Conventionalism and the Choice of Geometry
Poincaré’s philosophical conventionalism suggested that the choice of geometry for describing physical space is a matter of convention, not empirical discovery. This profound methodological principle could have led to a more flexible approach to the dimensionality of various physical phenomena.
However, historically, this philosophical position was not fully integrated into physical theories. Instead, after the works of Einstein and Minkowski, the four-dimensionality of space-time came to be perceived as physical reality, rather than as a convenient convention for describing certain phenomena.
Appendix A.8.3. Lost Scale Factor in Lorentz Transformations
As noted by Verkhovsky [7], the original formulations of the Lorentz transformations contained an additional scale factor, which was subsequently taken to be equal to unity.
This mathematical "normalization," performed by Poincaré and Einstein, could have missed an important physical aspect of the transformations. If the scale factor had been preserved and interpreted through the prism of the Doppler effect and the two-dimensionality of electromagnetic phenomena, perhaps this would have led to a more complete theory, naturally including the Cauchy distribution as a fundamental statistical description of light.
Appendix A.8.4. Cauchy Distribution as a Physical, Not Just Mathematical Phenomenon
Although the Cauchy distribution was known to mathematicians since the 19th century, its fundamental role in the physics of electromagnetic phenomena was not fully realized. The Cauchy distribution (or Lorentz distribution in physics) was used as a convenient approximation for describing resonant phenomena, but its connection with the masslessness of the photon and the two-dimensionality of electromagnetic phenomena was not established.
This omission led to the Gaussian distribution, more intuitively understandable and mathematically manageable, becoming a standard tool in physical models, even when it did not quite correspond to the massless nature of the studied phenomena.
Appendix A.9. Conclusion: Historical Perspective of the Modern Hypothesis
A consideration of the historical context of the development of the physics of light and concepts of space-time shows that the hypothesis about the two-dimensionality of electromagnetic phenomena and their description through the Cauchy distribution has deep historical roots. It is not an arbitrary innovation, but rather a synthesis and logical development of ideas present in the works of outstanding physicists and mathematicians of the past.
In fact, many key elements of the modern hypothesis were presented in one form or another in classical works: the special role of light in the structure of space-time (Minkowski), the need for a specific medium for the propagation of electromagnetic waves (H. Lorentz), the statistical description of resonant phenomena (Cauchy, L. Lorenz), the conventional nature of geometry (Poincaré), and the possibility of geometrization of physical interactions (Einstein, Ricci).
The current hypothesis about the two-dimensionality of electromagnetic phenomena and non-integer variable dimensionality of spaces can be viewed as a restoration of a lost line of development of physical thought and the completion of an unfinished revolution begun by Lorentz and Lorenz, Poincaré, Einstein, and other pioneers of modern physics.
Appendix B Life on a Jellyfish: A Thought Experiment about the Nature of Coordinates, Motion, and Time
In this thought experiment, we consider the fundamental limitations of our perception of space, motion, and time, illustrating them through the metaphor of life on a jellyfish.
Appendix B.1. Jellyfish vs Earth: Qualitative Difference in Coordinate Systems
Imagine two fundamentally different worlds:
Life on Earth
We are used to living on a beautiful, locally almost flat, solid earth. Our world is filled with many static landmarks on an almost flat surface. The Earth has a roughness of relief, making it easy to distinguish any "here" from "there." In addition, stable gravity creates a clear sense of "up" and "down," further fixing our coordinate system.
Life on a Jellyfish
Now imagine that instead, you live on the back of a huge jellyfish, floating in the ocean. This reality is radically different:
The surface of the jellyfish is not static and constantly fluctuates
The jellyfish moves by itself, and the movements are unpredictable and uneven
The surface of the jellyfish is homogeneous, without pronounced landmarks
"Gravity" constantly changes due to the movements of the jellyfish
In such conditions, building a stable coordinate system becomes fundamentally impossible.
Appendix B.2. Impossibility of Determining Rest and Motion
In the world of the jellyfish, the concepts of rest and motion lose clear meaning:
When you walk on the surface of the jellyfish, it is impossible to determine whether you are actually advancing relative to absolute space or whether the jellyfish itself is moving in the opposite direction
Perhaps when you "scratch" the surface of the jellyfish with your feet, it reacts by moving like a treadmill in the opposite direction
Without external landmarks, it is impossible to distinguish your own movement from the movement of the jellyfish
This situation is analogous to our real position in space: we are on the Earth, which rotates around its axis and around the Sun, the Solar System moves in the Galaxy, the Galaxy—in the Local Group, and so on. However, we do not directly feel these movements, perceiving only relative displacements and changes. And we have independent electromagnetic "beacons".
Appendix B.3. Gravity as a Local Gradient, Not an Absolute Value
Continuing the analogy with the jellyfish:
The raising and lowering of the jellyfish’s back will be perceived by you as unpredictable jumps in "gravity"
These changes will slightly warm you, creating a feeling of exposure to some force
However, you will perceive only the changes, the gradients of this "force," not its absolute value
This analogy illustrates an important principle: in reality, physics does not measure absolute values, such as "mass according to the Paris standard"—this is a simplification for schoolchildren. Physicists measure only gradients, transitions, changes in values.
We do not feel the gravitational attraction of the black hole at the center of the Galaxy, although it is enormous, because it acts on us uniformly. We feel only local gradients of the gravitational field.
Appendix B.4. Impossibility of the Shortest Path with a Changing Landscape
On the surface of a fluctuating jellyfish, the very concept of the "shortest path" loses meaning:
If the landscape is constantly changing, the geodesic line (shortest path) is also constantly changing
What was the shortest path a second ago may become a winding trajectory the next moment
Without a fixed coordinate system, it is not even possible to determine the direction of movement
This illustrates the fundamental problem of defining a "straight line" in a curved and dynamically changing space. General relativity faces a similar problem when defining geodesic lines in curved space-time.
Appendix B.5. Role of External Landmarks for Creating a Theory of Motion
In the jellyfish world, without external beacons, theories of dynamics and motion like those of the 17th–18th centuries would never emerge
If the entire surface of the jellyfish is visually homogeneous, and "here" is no different from "there," the coordinate system becomes arbitrary and unstable
Only the presence of external, "absolute" landmarks would allow creating a stable reference system
Similarly, in cosmology, distant galaxies and cosmic microwave background radiation serve as such external landmarks, allowing the determination of an "absolute" reference frame for studying the large-scale structure of the Universe.
Appendix B.6. Thought Experiment with Ship Cabins and Trains
Classic thought experiments with ship cabins and trains can be reconceptualized in the context of the jellyfish:
Imagine a cabin of a ship sailing on the back of a jellyfish, which itself is swimming in the ocean
In this case, even the inertiality of motion becomes indefinite: the cabin moves relative to the ship, the ship relative to the jellyfish, the jellyfish relative to the ocean
Experiments inside the cabin can determine neither the speed nor even the type and nature of the motion of the system as a whole
This thought experiment illustrates a deeper level of relativity than the classical Galilean or Einsteinian experiment, adding non-inertiality and instability to space itself.
Appendix B.7. Impossibility of an Imaginary Coordinate Grid
The attempt to create an imaginary coordinate grid will face fundamental obstacles:
All your measuring instruments (rulers, protractors) will deform along with the surface of the jellyfish
If the gradient (differential) of deformations is weak, this will create minimal deviations, and the imaginary grid will be almost stable
If the gradient is large and nonlinear, the very geometry of space will bend, and differently in different places
Without external landmarks, it is not even possible to understand that your coordinate grid is distorting
This illustrates the fundamental problem of measuring the curvature of space-time "from inside" that space-time itself. We can measure only relative curvatures, but not absolute "flatness" or "curvedness."
Appendix B.8. Time as a Consequence of Information Asymmetry
The most profound aspect of life on a jellyfish is the reconceptualization of the nature of time:
In conditions of constantly changing surface, when it is impossible to distinguish "here" from "there," the only structuring principle becomes information asymmetry
What we perceive as "time" is nothing other than a measure of the information asymmetry between what we already know (the past) and what we do not yet know (the future)
If the jellyfish completely stopped moving, and all processes on its surface stopped, "time" in our understanding would cease to exist
This thought experiment suggests reconceptualizing time not as a fundamental dimension, but as an emergent property arising from information asymmetry and unpredictability.
Appendix B.9. Connection with the Basic Principles of the Work
This thought experiment is directly connected to the fundamental principles presented in this paper:
1. Two-dimensionality of electromagnetic phenomena: Just as a jellyfish dweller is deprived of the ability to directly perceive the three-dimensionality of their world, so we cannot directly perceive the two-dimensionality of electromagnetic phenomena projected into our three-dimensional world.
2. Non-integer variable dimension of spaces: The oscillations and deformations of the jellyfish’s surface create an effective dimensionality that can locally change and take non-integer values, similar to how the effective dimensionality of physical interactions can change depending on the scale and nature of the interaction.
Life on a jellyfish is a metaphor for our position in the Universe: we inhabit a space whose properties we can measure only relatively, through impacts and changes. Absolute coordinates, absolute rest, absolute time do not exist—these are all constructions of our mind, created to structure experience in a world of fundamental uncertainty and variable dimensionality.
Appendix C Walk to a Tree and Flight to the Far Side of the Moon: A Thought Experiment about the Nature of Observation
This thought experiment allows us to vividly demonstrate the nature of observation through the projection of a three-dimensional world onto a two-dimensional surface, which is directly related to the hypothesis about the two-dimensional nature of electromagnetic phenomena discussed in the main text of the article.
Appendix C.1. Observation and Projective Nature of Perception
Imagine the following situation: you are in a deserted park early in the morning when there is no wind, and in the distance, you see a solitary tree. This perception has a strictly projective nature:
Light reflects from the tree and falls on the retina of your eyes, which is a two-dimensional surface.
Each eye receives a flat, two-dimensional image, which is then transmitted to the brain through electrical impulses.
At such a distance, the images from the two eyes are practically indistinguishable from each other.
At this moment, you cannot say with certainty what exactly you are seeing—a real three-dimensional tree or an artfully executed flat picture of a tree, conveniently positioned perpendicular to your line of sight. You lean toward the conclusion that it is a real tree, based only on your previous experience: you have seen many real trees and rarely encountered realistic flat images installed in parks.
Appendix C.2. Movement and Information Revelation
You decide to approach the tree. As you move, the following happens:
The image of the tree on the retina gradually increases.
Parallax appears—the slight differences between the images from the left and right eyes become more noticeable.
The brain begins to interpret these differences, creating a sense of depth and three-dimensionality.
This happens not immediately, but gradually, as you move.
You still see only one side of the tree—the one facing you. The back side of the tree remains invisible, hidden behind the trunk and crown. Your three-dimensional perception is partly formed from two slightly different two-dimensional projections, and partly built up by the brain based on experience and expectations.
Appendix C.3. Completeness of Perception and Fundamental Limitation
Finally, you approach the tree and can walk around it, examining it from all sides. Now you have no doubt that this is a real three-dimensional tree, not a flat image. However, even with direct proximity to the tree, there is a fundamental limitation to your perception:
You can never see all sides of the tree simultaneously.
At each moment in time, you have access only to a certain projection of the three-dimensional object onto the two-dimensional surface of your retina.
A complete representation of the tree is formed in your consciousness by integrating successive observations over time.
This fundamental limitation is a consequence of the projective nature of perception: the three-dimensional world is always perceived through a two-dimensional projection, and the completeness of perception is achieved only through a sequence of such projections over time.
Appendix C.4. Time as a Consequence of Information Asymmetry
Analyzing this experience, one can notice that your subjective sense of time is directly related to the information asymmetry arising from the projective nature of perception:
At each moment in time, you lose part of the information about the tree due to the impossibility of seeing all its sides simultaneously.
This loss of information is inevitable, even though both you and the tree are three-dimensional objects.
The sequential reception of different projections over time partially compensates for this loss.
If you did not move, the tree remained motionless, and no changes occurred around you, the image on your retina would remain static, and the subjective feeling of the flow of time could disappear.
Thus, the very flow of time in our perception can be viewed as a consequence of the projective mechanism of perception of the three-dimensional world through two-dimensional sensors.
Appendix C.5. Alternative Scenario: Flight to the Far Side of the Moon
The same principle can be illustrated with a more large-scale example. Imagine that instead of walking in the park, you are on a spacecraft approaching the Moon. From Earth, we always see only one side of the Moon—the visible part of the Moon represents a two-dimensional projection of a three-dimensional object onto an imaginary sphere of the celestial firmament.
As you approach the Moon, you begin to distinguish details of its relief, craters become three-dimensional thanks to the play of light and shadow. But you still see only the side facing you. Only by flying around the Moon will you be able to see its far side, which is never visible from Earth.
Even with the technical capability to fly around the Moon, you will never be able to see all its surfaces simultaneously. At each moment in time, part of the information about the Moon remains inaccessible for direct observation. A complete representation of the shape of the Moon can be constructed only by integrating a sequence of observations over time, with an inevitable loss of immediacy of perception.
Appendix C.6. Connection with the Main Research Topic
This thought experiment helps to better understand the fundamental nature of electromagnetic phenomena and their perception. Just as our perception of three-dimensional objects always occurs through a two-dimensional projection, light may have a fundamentally two-dimensional nature, which is projected into our three-dimensional space.
The information asymmetry that arises during projection from a space with one dimensionality into a space with another dimensionality may be the key to understanding many paradoxes of quantum mechanics, the nature of time, and fundamental interactions.
For a reader who finds it difficult to imagine a walk in a park with a solitary tree, it may be easier to visualize a flight to the Moon and flying around it, but the principle remains the same: our perception of the three-dimensional world is always limited by two-dimensional projections, and this fundamental information asymmetry has profound implications for our understanding of the nature of reality.
Appendix D Thought Experiment: Observation in Deep Space
Let’s consider another thought experiment that allows us to explore the limiting cases of perception and measurement in conditions of minimal information. This experiment demonstrates the fundamental limitations on the formation of physical laws with an insufficient number of measurable parameters.
Appendix D.1. Experimental Conditions
Imagine an observer located in deep space, where there are no visible objects except a single luminous point. The conditions of our experiment are as follows:
The observer has a special optical system allowing simultaneous vision in all directions (similar to spherical panoramic cameras or cameras with a "fisheye" lens).
The image of the entire sphere is projected onto a two-dimensional screen, creating a panoramic picture of the surrounding space.
On this screen, only complete darkness and a single luminous point are visible.
The point has constant brightness and color (does not flicker, does not change its characteristics).
The observer has some number of control levers (possibly controlling his own movement or other parameters), but there is no certainty about how exactly these levers affect the system.
From time to time, the position of the point on the screen changes (shifts within the two-dimensional projection).
Appendix D.2. Informational Limitation
In this situation, the observer faces extreme informational limitation:
The only measurable parameters are the two coordinates of the point on the two-dimensional screen.
There is no way to measure the distance to the point (since there are no landmarks for triangulation or other methods for determining depth).
There is no way to determine the absolute movement of the observer himself.
It is impossible to distinguish the observer’s movement from the movement of the observed point.
Appendix D.3. Fundamental Insufficiency of Parameters
Under these conditions, the observer faces the fundamental impossibility of deriving any physical laws, even if he has the ability to manipulate the levers and observe the results of these manipulations:
To formulate classical physical laws, at least six independent parameters are needed (for example, three spatial coordinates and three velocity components to describe the motion of a point in three-dimensional space).
In our case, there are only two parameters—coordinates x and y on the screen.
Even accounting for the change of these coordinates over time, the information is insufficient to reconstruct the complete three-dimensional picture of motion.
An interesting question: what is the minimum number of observable parameters necessary to build a working physical model? Is a minimum of six parameters required (for example, coordinates and color characteristics of two points), or is it possible to construct limited models with a smaller number of parameters? This question remains open for further research.
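To make the insufficiency concrete, the following minimal sketch (our illustration, not part of the original experiment) shows that the two screen coordinates cannot separate radial motion from rest: two very different three-dimensional trajectories of the luminous point produce identical screen tracks.

```python
import numpy as np

def screen_coords(positions):
    """Project 3D positions onto angular screen coordinates (azimuth, altitude)."""
    x, y, z = positions.T
    azimuth = np.arctan2(y, x)
    altitude = np.arcsin(z / np.linalg.norm(positions, axis=1))
    return np.column_stack([azimuth, altitude])

t = np.linspace(0.0, 1.0, 50)

# Trajectory A: a point at roughly unit distance, drifting slowly.
traj_a = np.column_stack([np.cos(0.3 * t), np.sin(0.3 * t), 0.1 * t])

# Trajectory B: the same directions, but five times farther away
# (equivalently, receding radially while drifting).
traj_b = 5.0 * traj_a

# Both trajectories produce exactly the same track on the 2D screen:
assert np.allclose(screen_coords(traj_a), screen_coords(traj_b))
print("Distinct 3D motions, one and the same screen track.")
```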
Appendix D.4. Role of Ordered Memory
Suppose that the observer has a notebook in which he can record the results of his observations in chronological order:
The presence of an ordered record of observations introduces the concept of time, but this time exists only as an order of records, not as an observable physical parameter.
The notebook represents a kind of "external memory," independent of the observer’s subjective perception.
This allows accumulating and analyzing data but does not solve the fundamental problem of insufficiency of measurable parameters.
The introduction of such ordered independent memory represents a kind of "hack" in the thought experiment, since in the real world, any system of recording or memory already presupposes the existence of more complex physical laws and parameters than those available in our minimalist scenario.
Appendix D.5. Connection with the Main Research Topic
This thought experiment is directly related to our study of the two-dimensional nature of light:
It demonstrates how the projection of three-dimensional reality onto a two-dimensional surface fundamentally limits the amount of available information.
It shows that with an insufficient number of measurable parameters, it is impossible to reconstruct the complete physical picture of the world.
It emphasizes the importance of multiplicity of measurements and parameters for constructing physical theories.
In the context of our main hypothesis, this experiment illustrates how the two-dimensionality of electromagnetic phenomena can lead to fundamental limitations on the observability and interpretation of physical processes. If light is indeed a two-dimensional phenomenon, this could explain some paradoxes and limitations we encounter when trying to fully describe it within the framework of three-dimensional space.
Appendix E Critical Remarks on SRT, Verkhovsky's Argument about the Lost Scale Factor, and a Possible Resolution
Despite the enormous success of the special theory of relativity (SRT) and its undeniable experimental verification, this theory has, throughout its history, been subject to various critical remarks. One of the most interesting and little-known critical arguments is Lev Verkhovsky's idea about the "lost scale factor" in Lorentz transformations [7].
Appendix E.1. Historical Context and Mathematical Basis of the Argument
Verkhovsky drew attention to the fact that the original formulations of the Lorentz transformations, developed by H. Lorentz, H. Poincaré, and A. Einstein, contained an additional factor $\ell(v)$, which was later set equal to unity. The original, more general Lorentz transformations had the form:
$$x' = \ell(v)\,\gamma\,(x - vt), \qquad t' = \ell(v)\,\gamma\left(t - \frac{vx}{c^2}\right), \qquad y' = \ell(v)\,y, \qquad z' = \ell(v)\,z,$$
where $\gamma = 1/\sqrt{1 - v^2/c^2}$ is the traditional Lorentz factor, and $\ell(v)$ is an additional scale coefficient.
All three founders of relativistic physics set $\ell = 1$, for various reasons:
Lorentz (1904) came to this conclusion while stating that the value of this coefficient should be established when "comprehending the essence of the phenomenon."
Poincaré (1905) argued that the transformations would form a mathematical group only with $\ell = 1$.
Einstein (1905) reasoned that from the conditions $\ell(v)\,\ell(-v) = 1$ and $\ell(v) = \ell(-v)$ (symmetry of space), it should follow that $\ell = 1$.
Appendix E.2. Verkhovsky’s Argument
Verkhovsky's central thesis is that the additional scale coefficient $\ell$ should not be identically equal to unity but should correspond to the Doppler factor:
$$\ell = D^{\pm 1}, \qquad D = \sqrt{\frac{1 + v/c}{1 - v/c}}.$$
Verkhovsky believes that it is the Doppler effect that determines the change of scales when transitioning to a moving reference frame. The key observation is that approach and recession are physically inequivalent from the perspective of the Doppler effect, so there are no grounds to require $\ell(v) = \ell(-v)$, as Einstein assumed.
According to Verkhovsky, the new Lorentz transformations, taking the scale factor into account, take the form:
$$x' = D^{\pm 1}\,\gamma\,(x - vt), \qquad t' = D^{\pm 1}\,\gamma\left(t - \frac{vx}{c^2}\right),$$
where the sign of the exponent takes into account whether the observer is approaching or moving away from the object.
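As a concrete numerical illustration (our example): for $\beta = v/c = 0.6$, the Doppler factor is $D = \sqrt{1.6/0.4} = 2$, so the scale coefficient is $\ell = 2$ on approach and $\ell = 1/2$ on recession, whereas standard SRT takes $\ell = 1$ in both cases.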
Appendix E.3. Physical Implications of Verkhovsky’s Theory
If we accept Verkhovsky’s argumentation, a number of fundamental changes in relativistic physics arise:
Elimination of the Twin Paradox: Clocks on a moving object can both slow down and speed up depending on the direction of movement relative to the observer. On a circular path, the effects compensate each other, and the twins remain the same age.
Change in Lorentz Contraction: The length of an object can both decrease and increase depending on the sign of the velocity: $L' = D^{\pm 1}\,L/\gamma$.
Euclidean Geometry of a Rotating Disk: Ehrenfest’s paradox is resolved, since for points on the circumference of a rotating disk, the effect of length contraction is compensated by the Doppler scale factor.
Absence of Gravitational Redshift: In Verkhovsky’s interpretation, the gravitational potential merely calibrates the scale, rather than affecting the energy of photons.
Simplification of General Theory of Relativity: Gravity could be described by a scalar field, rather than a tensor, which significantly simplifies the mathematical apparatus.
Appendix E.4. Critical Assessment and Possible Connection with the Two-Dimensional Nature of Light
Verkhovsky’s theory faces serious difficulties when compared with experimental data:
The Hafele-Keating experiment (1971) confirms standard SRT and does not agree with Verkhovsky’s predictions about the compensation of effects in circular motion.
The behavior of particles in accelerators corresponds to the standard relativistic relation $E = \gamma m c^2$.
Experiments on gravitational redshift, including the Pound-Rebka experiment, confirm the predictions of Einstein’s GRT.
The detection of gravitational waves confirms the tensor nature of gravity, not scalar, as Verkhovsky suggests.
However, there is an interesting possible connection between Verkhovsky's ideas and the hypothesis about the two-dimensional nature of light. If we assume that light is a fundamentally two-dimensional phenomenon and follows the Cauchy distribution, then the scale factor $\ell$ arises naturally.
The Cauchy distribution possesses a unique property of invariance under fractional linear (Möbius) transformations:
$$x \mapsto \frac{ax + b}{cx + d}, \qquad ad - bc \neq 0.$$
These transformations are mathematically equivalent to Lorentz transformations. When the parameters of the Cauchy distribution are transformed, a factor emerges that exactly corresponds to the Doppler factor:
$$D = \sqrt{\frac{1 + v/c}{1 - v/c}},$$
which leads to the scale factor $\ell = D^{\pm 1}$.
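This invariance is easy to check numerically. The sketch below (our illustration) uses McCullagh's complex parameterization $\theta = x_0 + i\gamma$, under which a real fractional linear transformation maps a Cauchy law to the Cauchy law with parameter $(a\theta + b)/(c\theta + d)$; the empirical median and half-interquartile range of the transformed samples recover exactly these predicted parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

a, b, c, d = 2.0, 1.0, 1.0, 3.0   # arbitrary real coefficients, ad - bc != 0
x0, gamma = 0.5, 1.5              # parameters of the initial Cauchy law

# Cauchy samples via the inverse CDF, then the fractional linear map.
x = x0 + gamma * np.tan(np.pi * (rng.random(2_000_000) - 0.5))
y = (a * x + b) / (c * x + d)

# McCullagh: the complex parameter transforms by the same Moebius map.
theta = (a * complex(x0, gamma) + b) / (c * complex(x0, gamma) + d)
x0_new, gamma_new = theta.real, abs(theta.imag)

# For a Cauchy law, the median is x0 and the half-IQR is gamma.
q1, q2, q3 = np.quantile(y, [0.25, 0.5, 0.75])
print(f"predicted: x0={x0_new:.4f}, gamma={gamma_new:.4f}")
print(f"empirical: x0={q2:.4f}, gamma={(q3 - q1) / 2:.4f}")
```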
Notably, if we abandon the assumption about the two-dimensional nature of light with the Cauchy distribution but keep the scale factor $\ell = D^{\pm 1}$, then Lorentz invariance is destroyed, and with it the principle of relativity becomes untenable. This means that Verkhovsky's theory can be consistent only with the simultaneous acceptance of three conditions:
Two-dimensional nature of light
Cauchy distribution for describing light phenomena
Scale factor $\ell = D^{\pm 1}$
Such a unified theory could offer an elegant alternative to the standard model, potentially solving some fundamental problems of modern physics, including the difficulties of unifying quantum mechanics and general relativity.
Appendix E.5. Conclusion
Verkhovsky’s argument about the "lost scale factor" represents one of the most fundamental critical views on SRT. Despite the inconsistency with modern experimental data, this idea opens interesting theoretical possibilities, especially in combination with the hypothesis about the two-dimensional nature of light and the Cauchy distribution.
Further research in this direction may include:
Development of a detailed mathematical model of the two-dimensional nature of light with the Cauchy distribution
Search for experimental tests capable of distinguishing between standard SRT and Verkhovsky’s model
Exploration of the possibilities of this approach for solving problems of quantum gravity
In any case, critical analysis of the fundamental foundations of physics remains an important source of new ideas and potential breakthroughs in our understanding of the Universe.
Appendix F Ontological Critique of SRT: Radovan’s View on the Nature of Time
In the context of critical analysis of the special theory of relativity, the works of Mario Radovan are of particular interest, especially his fundamental study "On the Nature of Time" [8]. Unlike Verkhovsky's mathematically oriented critique [7], Radovan offers an ontological analysis that calls into question the very philosophical foundations of relativity theory.
Appendix F.1. Ontological Structure of Reality According to Radovan
The basis of Radovan’s approach is a three-component ontological classification of everything that exists:
C1 — physical entities: stones, rivers, stars, elementary particles, and other material objects.
C2 — mental entities: pleasures, pains, thoughts, feelings, and other states of consciousness.
C3 — abstract entities: numbers, languages, mathematical structures, and conceptual systems.
Radovan's key thesis is that time is not a physical entity (C1), but belongs to abstract entities (C3), created by the human mind. In this context, he formulates the main postulate: "Time does not exist in the physical world: it does not flow, because it is an abstract entity created by the human mind" [8].
Appendix F.2. Change as Ontologically Primary Phenomenon
One of Radovan’s most significant positions is the thesis about the primacy of change in relation to time:
"Change is inherent in physical reality; change is also a fundamental dimension of human perception and understanding of this reality. Change does not need anything more fundamental than itself to explain itself: it simply is" [8].
Radovan argues that humans perceive change, not time. Time itself is created by the mind based on the experience of perceiving changes in physical reality. This thesis radically contradicts both the classical Newtonian concept of absolute time and the relativistic concept of space-time as a physical entity.
Appendix F.3. Critique of the Relativistic Concept of Time
Radovan subjects the relativistic interpretation of Lorentz formulas to the most acute criticism:
Mixing of Ontological Categories: "Discourse about the relativism of time is essentially a question of interpretation of formulas and their results, not a question of the formulas themselves... We argue that the relativistic interpretation of formulas (both STR and GTR) is inconsistent and mixes basic ontological categories" [8].
Logical Inconsistency: Radovan analyzes the twin paradox in detail, showing that the standard relativistic interpretation inevitably leads to a contradiction: "STR generates statements that contradict each other, which means its inconsistency" [8].
Unconvincingness of the Acceleration Argument: "The assertion that the turn of a spaceship has the power to influence not only the future but also the past is magic of the highest degree. This magical acceleration at the turn is the only thing that 'protects' the discourse about the relativity of time from falling into the abyss of inconsistency and meaningless discourse" [8].
Appendix F.4. Separation of Formulas and Their Interpretation
Radovan makes an important methodological distinction between formulas and their interpretation:
"Formulas can be empirically verified, but interpretations cannot be verified in such a direct way. A formula can be interpreted differently, and a correct formula can be interpreted incorrectly" [8].
This distinction is fundamental for understanding the status of relativity theory. Radovan acknowledges the empirical adequacy of Lorentz transformations but rejects their standard relativistic interpretation, considering it logically contradictory.
Appendix F.5. Critique of the Relativistic Interpretation of Time Dilation
Radovan’s critique of the relativistic interpretation of the time dilation effect is particularly significant:
"The fact that with increasing speed of muon movement, these processes slow down, and, consequently, their lifetime increases—that's all, and that's enough. To interpret this fact by stating that time flows more slowly for them sounds exciting, but leads to serious difficulties (contradictions) and does not seem (to me) particularly useful" [8].
Radovan offers an alternative interpretation: speed and gravity slow down not time, but physical processes. This approach allows explaining all empirical data without resorting to the contradictory concept of "time dilation."
Appendix F.6. Time as an Abstract Measure of Change
Radovan proposes to view time as an abstract tool for measuring changes:
"Time is an abstract measure of the quantity and intensity of change, expressed in terms of some chosen cyclical processes, such as the rotation of the earth around the sun and around its own axis, or the oscillations of certain atoms or other particles" [8].
This approach removes many paradoxes of relativity theory, as it considers time not as an objective physical entity that can "slow down" or "curve," but as an abstract measure, similar to a meter or kilogram.
Appendix F.7. Critique of the Block Universe Model
Radovan also criticizes the concept of the block universe, associated with the relativistic interpretation of space-time:
"The block model of the Universe possesses mystical charm and the attractiveness of a fairy tale, but it lacks the clarity, precision, and consistency of scientific discourse. This model is also in complete disagreement with our perception of reality, which seems to be constantly changing at all levels of observation" [8].
Instead of the block model, Radovan advocates presentism—a philosophical position according to which only the present really exists, the past no longer exists, and the future does not yet exist.
Appendix F.8. Metaphor of River and Shore
Of particular value for understanding Radovan’s concept is his metaphor of "river and shore":
"Physical reality is a river that flows: time is a measure of the quantity and intensity of this flow. Physical entities are not 'transported' by time: they change by their own nature; things change, they are not transported anywhere. People express the experience of change in terms of time as a linguistic means. Time is an abstract dimension onto which the human mind projects its experience of changing reality" [8].
In this metaphor, time acts not as a "river," but as a "shore"—an artificial coordinate relative to which we measure the flow of the "river of physical reality."
Appendix F.9. Influence of Radovan’s Approach on Understanding Verkhovsky’s Problem
Radovan's approach offers an interesting perspective for analyzing the problem of the "lost scale factor" raised by Verkhovsky [7]. If time is an abstract measure of change, created by the human mind, then the scale factor $\ell$ can be considered not as a coefficient describing "time dilation," but as a parameter characterizing the change in the rate of physical processes during relative motion.
In this context, the erroneous equating of $\ell$ to unity can be interpreted as an ontological error, consisting in attributing to time (an abstract entity, C3) physical properties (characteristic of entities of type C1). Such an interpretation allows combining Verkhovsky's mathematical analysis with Radovan's philosophical critique.
Appendix F.10. Conclusions from Radovan’s Analysis
Radovan’s critical analysis has several important implications for understanding the nature of time and evaluating relativity theory:
Time should be considered as an abstract measure of change, created by the human mind, not as a physical entity.
Change is ontologically and epistemologically primary in relation to time.
Empirical confirmations of the formulas of relativity theory do not prove the correctness of their standard interpretation.
The relativistic interpretation of Lorentz transformations is logically contradictory and mixes ontological categories.
The observed effects of "time dilation" should be interpreted as slowing down of physical processes, not as slowing down of time as such.
The block universe model represents a philosophically problematic interpretation, incompatible with our everyday experience.
Radovan’s critique represents not so much a refutation of the formal apparatus of relativity theory as an alternative philosophical interpretation of this apparatus, based on a clear separation of ontological categories. Such an approach can serve as a basis for resolving many paradoxes of relativity theory without abandoning its mathematical achievements.
Appendix G Goldfain Relation: Theoretical Justification and Information-Geometric Interpretation
Appendix G.1. Theoretical Justification of the Sum-of-Squares Relationship
In the work "Derivation of the Sum-of-Squares Relationship" [
9] and the fundamental monograph "Introduction to Fractional Field Theory" [
10], Ervin Goldfain presented and theoretically justified a fundamental relationship connecting the squares of masses of elementary particles with the square of the Fermi scale:
where:
, , and are the masses of the W-boson, Z-boson, and Higgs boson, respectively
is the sum of squares of masses of all fermions in the Standard Model
GeV is the vacuum expectation value of the Higgs field (Fermi scale)
This relationship has a deep theoretical justification and is not just an empirical observation. Goldfain derived it rigorously from the fractal structure of space-time, which manifests itself near the Fermi scale. Critically, the contributions of bosons and fermions are divided almost exactly in half:
$$m_W^2 + m_Z^2 + m_H^2 \approx \sum_f m_f^2 \approx \frac{v^2}{2}.$$
In his works, Goldfain mathematically proves that this relationship follows from the properties of the minimal fractal manifold with dimensionality $D = 4 - \varepsilon$, where $\varepsilon \ll 1$. According to his theory, the sum-of-squares relationship is a direct consequence of the geometric properties of the Hausdorff dimension of fractal space and is related to the multifractal structure of quantum fields near the electroweak scale.
The theoretical significance of this relationship is difficult to overestimate: it establishes an unexpected connection between the mass spectrum of elementary particles and the fundamental properties of space-time, offering an elegant solution to the hierarchy problem in the Standard Model.
Appendix G.2. High-Precision Numerical Verification
Verification of this relationship using modern experimental values of elementary particle masses demonstrates the striking accuracy of the theoretical prediction. When normalizing all masses relative to the electroweak scale $v = 246.22$ GeV, we get:
Appendix G.2.1. Contribution of Leptons
Electron: $m_e \approx 0.000511$ GeV, $(m_e/v)^2 \approx 4.3 \times 10^{-12}$
Muon: $m_\mu \approx 0.1057$ GeV, $(m_\mu/v)^2 \approx 1.8 \times 10^{-7}$
Tau-lepton: $m_\tau \approx 1.777$ GeV, $(m_\tau/v)^2 \approx 5.2 \times 10^{-5}$
Total contribution of leptons: $\approx 5.2 \times 10^{-5}$
Appendix G.2.2. Contribution of Quarks
u-quark: $m_u \approx 0.0022$ GeV, $(m_u/v)^2 \approx 8.0 \times 10^{-11}$
d-quark: $m_d \approx 0.0047$ GeV, $(m_d/v)^2 \approx 3.6 \times 10^{-10}$
s-quark: $m_s \approx 0.093$ GeV, $(m_s/v)^2 \approx 1.4 \times 10^{-7}$
c-quark: $m_c \approx 1.27$ GeV, $(m_c/v)^2 \approx 2.7 \times 10^{-5}$
b-quark: $m_b \approx 4.18$ GeV, $(m_b/v)^2 \approx 2.9 \times 10^{-4}$
t-quark: $m_t \approx 172.8$ GeV, $(m_t/v)^2 \approx 0.4925$
Total contribution of quarks: $\approx 0.4929$
Appendix G.2.3. Contribution of Gauge Bosons
W-boson: $m_W \approx 80.38$ GeV, $(m_W/v)^2 \approx 0.1066$
Z-boson: $m_Z \approx 91.19$ GeV, $(m_Z/v)^2 \approx 0.1372$
Total contribution of gauge bosons: $\approx 0.2437$
Appendix G.2.4. Contribution of the Higgs Boson
Higgs boson: $m_H \approx 125.25$ GeV, $(m_H/v)^2 \approx 0.2588$
Appendix G.2.5. Overall Sum
Sum of all contributions: $5.2 \times 10^{-5} + 0.4929 + 0.2437 + 0.2588 \approx 0.9954$
The obtained value is strikingly close to unity. Considering the experimental uncertainties in measuring the masses of the heaviest particles (t-quark, Higgs boson, Z- and W-bosons), the deviation from the exact value of 1.0 is within one standard deviation (1σ). In particle physics, the standard threshold for confirming a new effect is considered to be a deviation of 5σ, so Goldfain's relationship confirms, with high statistical reliability, the equality of the sum of the squares of the masses to the square of the electroweak scale. Such an exact coincidence cannot be explained by chance and requires a fundamental theoretical justification, which Goldfain provided within the framework of his theory of the minimal fractal manifold.
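The arithmetic above is easy to reproduce. In the sketch below (ours), the masses are rounded PDG-style values in GeV; they may differ slightly from those used by the author, which can move the total in the fourth decimal place.

```python
v = 246.22  # electroweak vacuum expectation value, GeV

masses_gev = {
    # leptons
    "e": 0.000511, "mu": 0.1057, "tau": 1.777,
    # quarks
    "u": 0.0022, "d": 0.0047, "s": 0.093,
    "c": 1.27, "b": 4.18, "t": 172.8,
    # bosons
    "W": 80.38, "Z": 91.19, "H": 125.25,
}

total = sum((m / v) ** 2 for m in masses_gev.values())
print(f"sum of (m/v)^2 = {total:.4f}")  # ~0.9954, close to 1 within mass errors
```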
Appendix G.3. Information-Geometric Interpretation
Within the framework of the information-geometric approach and variable dimension of spaces proposed in this work, Goldfain’s relationship can receive an alternative interpretation, complementing and expanding the original theoretical justification.
Appendix G.3.1. Fundamental Limitation of the D=2 Channel
The main premise of the information-geometric approach is that light exists exactly in D=2.0 dimensions. This is not an approximation, but an exact value, determined by the following factors:
The wave equation at D=2 supports perfect coherence of waves without geometric dispersion
The Green’s function for the wave equation undergoes a critical phase transition exactly at D=2.0
Light has exactly two independent polarization states regardless of the direction of propagation
D=2.0 represents the optimal configuration for transmitting information through space
Since all experimental observations of the physical world are mediated by light (D=2.0), this creates a fundamental limitation on our ability to distinguish and measure physical parameters.
Appendix G.3.2. Special Role of the Electroweak Scale
The electroweak scale $v \approx 246$ GeV represents a critical transition point within the concept of dimensional flow, for several reasons:
Dimensional Transition Point: It marks the scale where the electroweak symmetry SU(2)×U(1) is broken, separating the electromagnetic interaction (D=2.0) from the weak one. Within the concept of dimensional flow, this corresponds to a transition between different dimensional regimes.
Information Threshold: From the perspective of information geometry, $v$ can be interpreted as an energy threshold at which the D=2 light channel reaches the limit of its information capacity for distinguishing quantum states.
Symmetry Breaking Scale: Unlike other energy scales in physics, $v$ is determined by the process of spontaneous symmetry breaking in the Standard Model and is directly related to the Higgs mechanism that generates particle masses.
Observation Boundary: $v$ represents the boundary between well-studied physics (below $v$) and the largely unexplored region above it, where dimensional-flow effects may become more pronounced.
Natural Mass Unit: In the Standard Model, $v$ is the fundamental mass scale that determines all other particle masses through their Yukawa couplings to the Higgs field.
Appendix G.3.3. Squares of Masses and the Nature of the D=2 Channel
The appearance of the squares of masses ($m^2$) in the relationship has a deep connection with the D=2 nature of the electromagnetic channel:
Form of the Propagator: In quantum field theory, particle propagators contain terms of the form $1/(k^2 - m^2)$, making $m^2$ the natural parameter that determines how particles "manifest" in measurements.
Signature of the D=2 Channel: Squaring masses directly reflects the dimensionality of the light channel (D=2.0), through which all measurements are made.
Information-Theoretical Measure: In information geometry, elements of the Fisher matrix for mass parameters naturally appear as second derivatives (quadratic terms), reflecting how distinguishable different mass states are.
Scale Consistency: For dimensionless ratios in quantum field theory, masses usually appear squared to maintain dimensional consistency with energy-momentum terms.
Appendix G.4. Information-Theoretical Interpretation of the Relationship
Given the above considerations, the relationship $\sum_i m_i^2 = v^2$ can be interpreted as a fundamental information-theoretical limitation:
The total information-theoretical "distinguishability" of all elementary particles, when measured through the D=2 light channel, is limited by a fundamental limit determined by the electroweak scale.
In more concrete terms, each term $(m_i/v)^2$ represents an "information share," or contribution of a particle's distinguishability, to the total information content available through the D=2 channel at the electroweak scale. The fact that these contributions sum to a value statistically indistinguishable from 1 ($\approx 0.9954$, within 1σ) indicates that the D=2 channel at $v$ is completely "saturated" by the existing particles of the Standard Model.
Appendix G.5. Implications
This interpretation has several important implications:
Fundamental Limitation of Space for New Physics: Since the sum is statistically indistinguishable from 1, this indicates a fundamental information limitation: for new heavy particles with masses comparable to $v$, there is practically no "information space" left in the D=2 observation channel.
Fundamental Resolution of the Hierarchy Problem: Goldfain's relationship naturally explains why the Higgs mass cannot be much larger than $v$ without introducing new physics that would change the information-geometric structure of the observable world.
Natural Upper Bound: The relationship establishes a natural upper bound for the combined mass spectrum of elementary particles without the need for additional symmetries or fine-tuning.
Strict Testable Prediction: Any newly discovered particles must either have very small masses compared to $v$, or must be accompanied by a corresponding modification of the masses of existing particles in order to maintain the sum rule.
Information-Theoretical Foundation of Particle Physics: This indicates that the mass spectrum of elementary particles is fundamentally limited by information-theoretical principles related to the D=2 nature of the observation channel.
Appendix G.6. Conclusion
The relationship $\sum_i m_i^2 = v^2$, rigorously justified by Goldfain through the concept of the minimal fractal manifold, receives an additional natural interpretation in the information-geometric concept of dimensional flow. Instead of arising from small deviations from 4D space-time (as in Goldfain's approach), it can also be viewed as a direct consequence of the D=2 nature of light as the fundamental channel through which all physical measurements are made.
The statistically significant confirmation of this relationship by experimental data (agreement with the exact value 1.0 within 1σ) suggests that it reflects a deep truth about the information-theoretical structure underlying the Standard Model of particle physics.
In light of this interpretation, a fundamental question arises: is it rational to invest enormous resources in building new large hadron colliders? If the information space for new particles at the electroweak interaction scale and above is already fundamentally exhausted, as confirmed by Goldfain’s relationship with high statistical significance, shouldn’t we reconsider the strategy for searching for new physics?
Appendix H Mathematical Justification of the Connection Between Dimensionality and Statistical Distributions
Appendix H.1. Rigorous Proof of the Connection Between Masslessness and the Cauchy Distribution
Appendix H.1.1. Propagator of a Massless Field
The propagator of a massless scalar field in D-dimensional space in the momentum representation has the form:
$$G(\mathbf{k}) = \frac{1}{\mathbf{k}^2}.$$
The Fourier transform of this propagator to the coordinate representation gives:
$$G(\mathbf{r}) = \int \frac{d^D k}{(2\pi)^D}\,\frac{e^{i\mathbf{k}\cdot\mathbf{r}}}{\mathbf{k}^2}.$$
For the space-time propagator in $(D+1)$-dimensional space-time:
$$G(\mathbf{r}, t) = \int \frac{d^D k\,d\omega}{(2\pi)^{D+1}}\,\frac{e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)}}{\mathbf{k}^2 - \omega^2/c^2}.$$
Let us consider the equal-time spatial propagator at $t = 0$. The integral over $\omega$ can be calculated using the residue theorem, which returns us to the purely spatial integral written above. Switching to spherical coordinates in D-dimensional space and integrating over the angular variables:
$$G(r) \propto \frac{1}{r^{D/2-1}} \int_0^\infty dk\, k^{D/2-2}\, J_{D/2-1}(kr),$$
where $J_\nu$ is the Bessel function of the first kind.
This integral has the following solution:
$$G(r) = \frac{\Gamma(D/2 - 1)}{4\pi^{D/2}}\,\frac{1}{r^{D-2}}, \qquad D > 2.$$
Appendix H.1.2. Critical Dimension D=2
At $D = 2$, the propagator acquires a logarithmic dependence, $G(r) \propto -\ln(r/r_0)$, which corresponds to a phase transition point. The electric field, which is the gradient of the potential, has the form:
$$E(r) \propto -\frac{dG}{dr} \propto \frac{1}{r}.$$
The intensity of light is proportional to the square of the electric field:
$$I(r) \propto E^2(r) \propto \frac{1}{r^2}.$$
In two-dimensional space ($D = 2$), the intensity decreases as $1/r^2$, which in a one-dimensional cross-section corresponds to the Cauchy distribution:
$$I(x) \propto \frac{1}{x^2 + \gamma^2},$$
where $\gamma$ is the distance from the observation line to the source, and $x$ is the coordinate along that line (with the peak at $x_0 = 0$).
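A quick numerical check (our illustration): the $1/r^2$ intensity of a planar point source, sampled along a line, normalizes exactly to the Cauchy density.

```python
import numpy as np

gamma = 2.0                            # distance from the observation line to the source
x = np.linspace(-50.0, 50.0, 200_001)  # coordinate along the observation line
dx = x[1] - x[0]

intensity = 1.0 / (x**2 + gamma**2)           # 1/r^2 with r^2 = x^2 + gamma^2
profile = intensity / (intensity.sum() * dx)  # normalize to unit area

cauchy = gamma / (np.pi * (x**2 + gamma**2))  # Cauchy density with x0 = 0
print(np.max(np.abs(profile - cauchy)))       # small, limited only by tail truncation
```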
Appendix H.1.3. Absence of Finite Moments for the Cauchy Distribution
For the Cauchy distribution with probability density:
$$f(x) = \frac{1}{\pi}\,\frac{\gamma}{(x - x_0)^2 + \gamma^2},$$
the n-th moment is defined as:
$$\langle x^n \rangle = \int_{-\infty}^{\infty} x^n f(x)\,dx.$$
For $n \geq 1$, this integral diverges. To show this explicitly, let us consider the first moment (the mean value). Making the substitution $u = x - x_0$:
$$\langle x \rangle = \frac{1}{\pi}\int_{-\infty}^{\infty} (u + x_0)\,\frac{\gamma}{u^2 + \gamma^2}\,du = x_0 \cdot \frac{1}{\pi}\int_{-\infty}^{\infty} \frac{\gamma}{u^2 + \gamma^2}\,du + \frac{1}{\pi}\int_{-\infty}^{\infty} \frac{\gamma u}{u^2 + \gamma^2}\,du.$$
The first integral equals $x_0$, and the second integral diverges, since the integrand behaves as $\gamma/(\pi u)$ as $u \to \infty$. Similarly, divergence can be shown for moments of higher orders.
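The divergence of the first moment has a striking practical signature, shown in the sketch below (our illustration): the running mean of Cauchy samples never settles down. Indeed, by the stability property, the mean of n standard Cauchy variates is itself standard Cauchy, so averaging does not narrow the distribution at all.

```python
import numpy as np

rng = np.random.default_rng(1)

samples = np.tan(np.pi * (rng.random(1_000_000) - 0.5))  # standard Cauchy variates
running_mean = np.cumsum(samples) / np.arange(1, samples.size + 1)

for n in (10**2, 10**4, 10**6):
    print(f"n={n:>7}: running mean = {running_mean[n - 1]:+.3f}")
# The printed values keep jumping instead of converging, in contrast to the
# Gaussian case, where the sample mean shrinks toward mu as 1/sqrt(n).
```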
Appendix H.1.4. Connection with Masslessness
The masslessness of the photon is mathematically expressed in the fact that its propagator has a pole at zero momentum:
$$G(\mathbf{k}) = \frac{1}{\mathbf{k}^2} \to \infty \quad \text{as } \mathbf{k} \to 0.$$
This pole leads to power-law decay of the propagator in coordinate space, which does not allow for finite moments of the distribution. Any non-zero mass m modifies the propagator:
$$G(\mathbf{k}) = \frac{1}{\mathbf{k}^2 + m^2},$$
which leads to exponential decay at large distances:
$$G(r) \propto e^{-mr}, \qquad r \to \infty.$$
This exponential decay ensures the existence of all moments of the distribution, but is incompatible with the exact masslessness of the photon.
Appendix H.2. General Properties of Statistical Distributions for Massive and Massless Particles
Appendix H.2.1. Characteristic Functions of Distributions
The characteristic function of a distribution is defined as:
$$\varphi(t) = \langle e^{itX} \rangle = \int_{-\infty}^{\infty} e^{itx} f(x)\,dx.$$
For the Gaussian distribution:
$$\varphi(t) = \exp\left(i\mu t - \frac{\sigma^2 t^2}{2}\right).$$
For the Cauchy distribution:
$$\varphi(t) = \exp\left(i x_0 t - \gamma |t|\right).$$
Appendix H.2.2. Stable Distributions
Both distributions (Gaussian and Cauchy) belong to the class of stable distributions, which are characterized by the fact that the sum of independent random variables with such a distribution also has the same distribution (with changed parameters).
The characteristic function of a stable distribution has the form:
$$\varphi(t) = \exp\Big[\,i\delta t - |ct|^\alpha \big(1 - i\beta\,\mathrm{sgn}(t)\,\Phi\big)\Big], \qquad \Phi = \tan\frac{\pi\alpha}{2} \ \ (\alpha \neq 1),$$
where $\alpha \in (0, 2]$ is the stability index, $\beta$ is the asymmetry parameter, $c$ is the scale parameter, and $\delta$ is the shift parameter.
The Gaussian distribution corresponds to $\alpha = 2$, and the Cauchy distribution to $\alpha = 1$. For $\alpha < 2$, moments of order $\alpha$ and higher do not exist.
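These characteristic functions are easy to verify empirically; the sketch below (ours) compares the empirical characteristic function of Cauchy samples against $\exp(i x_0 t - \gamma |t|)$.

```python
import numpy as np

rng = np.random.default_rng(2)

x0, gamma = 1.0, 0.5
x = x0 + gamma * np.tan(np.pi * (rng.random(1_000_000) - 0.5))  # Cauchy samples

t = np.linspace(-4.0, 4.0, 9)
empirical = np.array([np.mean(np.exp(1j * ti * x)) for ti in t])
theoretical = np.exp(1j * x0 * t - gamma * np.abs(t))
print(np.max(np.abs(empirical - theoretical)))  # ~1e-3, pure sampling error
```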
Appendix H.2.3. Physical Interpretation of the Stability Index
The stability index $\alpha$ can be interpreted as an indicator related to the effective dimension D of the wave propagation space:
$$\alpha = D - 1.$$
For $D = 2$, we get $\alpha = 1$, which corresponds to the Cauchy distribution. For $D = 3$, we get $\alpha = 2$, which corresponds to the Gaussian distribution.
This relationship explains why massless particles in a space of dimension $D = 2$ must have the Cauchy distribution with non-existent moments.
Appendix I Information Misalignment and Electromagnetic Synchronizers
This appendix presents an alternative interpretation of fundamental physical phenomena, based on the representation of the electromagnetic field as a two-dimensional Cauchy distribution, performing the function of a synchronizer of interactions when information misalignment occurs.
Appendix I.1. Electromagnetic Field as a Two-Dimensional Cauchy Distribution
The central hypothesis is that the electromagnetic field (including light) can be considered as a fundamentally two-dimensional phenomenon, described by the Cauchy distribution:
$$f(x;\, x_0, \gamma) = \frac{1}{\pi}\,\frac{\gamma}{(x - x_0)^2 + \gamma^2},$$
where:
$x_0$ is the localization (position) parameter
$\gamma$ is the scale parameter, interpreted as a measure of information divergence
This distribution possesses unique properties:
It is naturally Lorentz-invariant without the need for artificial introduction of Lorentz transformations
It does not have finite mathematical expectation and variance
When two Cauchy distributions are convolved, the result is again a Cauchy distribution, with the parameters adding: $x_0 = x_{0,1} + x_{0,2}$ and $\gamma = \gamma_1 + \gamma_2$
The two-dimensionality of the electromagnetic field is a key assumption, without which the proposed concept does not work. This two-dimensionality allows explaining the preservation of image clarity when light propagates over cosmological distances, as well as a number of other observed phenomena.
Appendix I.2. Bayesian Nature of Electromagnetic Interaction
Instead of relying on the concept of time, we propose to consider electromagnetic interaction as a Bayesian process of information updating:
Appendix I.2.1. Prior Distribution and Information Misalignment
Each physical system forms a prior Cauchy distribution with parameters $(x_0, \gamma)$, which represents an "expectation" regarding the state of other systems:
When observed reality does not match the prediction, information misalignment occurs
EM interaction is activated only with significant misalignment, exceeding a threshold level
The greater the misalignment, the more intense the synchronization process
Appendix I.2.2. Synchronization Process
Synchronization represents a process of Bayesian updating of the parameters of the Cauchy distribution:
The localization parameter $x_0$ is updated in accordance with the observed shift
The scale parameter $\gamma$ is adjusted in accordance with the level of uncertainty
The goal is to minimize the information divergence between the prior and posterior distributions
It is critically important that electromagnetic interaction occurs only in the presence of significant information misalignment. In the absence of misalignment, there is no need for synchronization, and interaction is not activated.
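A minimal sketch of such a synchronization loop is given below (our illustration). The misalignment measure uses the known closed form for the Kullback-Leibler divergence between two Cauchy laws; the relaxation-style update rule and the activation threshold are hypothetical modeling choices, not prescriptions from the text.

```python
import numpy as np

def cauchy_kl(x1, g1, x2, g2):
    """Closed-form KL divergence between Cauchy(x1, g1) and Cauchy(x2, g2)."""
    return np.log(((g1 + g2) ** 2 + (x1 - x2) ** 2) / (4.0 * g1 * g2))

def synchronize(prior, observed, threshold=0.1, rate=0.5):
    """Move the prior toward the observed parameters, but only if misaligned."""
    (x1, g1), (x2, g2) = prior, observed
    if cauchy_kl(x1, g1, x2, g2) <= threshold:
        return prior  # no significant misalignment: interaction is not activated
    return (x1 + rate * (x2 - x1), g1 + rate * (g2 - g1))

prior, observed = (0.0, 1.0), (3.0, 2.0)
while (kl := cauchy_kl(*prior, *observed)) > 0.1:
    print(f"misalignment {kl:.3f}, prior {prior}")
    prior = synchronize(prior, observed)
print(f"synchronized: misalignment {cauchy_kl(*prior, *observed):.3f}, prior {prior}")
```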
Appendix I.3. Informational Interpretation of the Parameters of the Cauchy Distribution
The parameters of the Cauchy distribution have a deep informational interpretation:
Appendix I.3.1. Localization Parameter x₀
This parameter can be interpreted as the "median" of the distribution, determining its central tendency:
The position $x_0$ reflects the most probable value of the observed quantity
Change in $x_0$ encodes directed information about displacement (for example, manifesting as the Doppler effect)
Different observers may have different values of $x_0$ for the same interaction, which reflects the relativity of observation
Appendix I.3.2. Scale Parameter γ as a Measure of Information Divergence
The scale parameter $\gamma$ represents a fundamental measure of information divergence:
The larger $\gamma$, the more "blurred" the distribution becomes (the peak is lower, the tails are thicker)
This corresponds to a decrease in the accuracy with which the position parameter can be determined
An increase in $\gamma$ means an increase in the information entropy of the interaction
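This interpretation agrees with the closed-form differential entropy of the Cauchy distribution, $h = \ln(4\pi\gamma)$ (a standard result), which grows monotonically, though only logarithmically, with the scale parameter $\gamma$.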
Appendix I.4. Activation of Interaction with Information Misalignment
A key feature of the proposed concept: electromagnetic interaction is activated only in the presence of significant information misalignment:
Threshold Character: Small misalignments can be ignored (noise threshold), only significant misalignments trigger the synchronization process
Resource Economy: Nature is "economical"—there is no point in spending resources on synchronization if everything corresponds to expectations
Bidirectionality: Both interacting systems adjust their distributions, with the ultimate goal being the achievement of a coordinated Cauchy distribution
This mechanism explains why static states do not require constant synchronization—they lack information misalignment.
Appendix I.5. Duality of Information Encoding
Within our model, information can be encoded through both parameters of the Cauchy distribution:
Encoding through $x_0$: Directed changes (for example, the Doppler effect) are encoded predominantly through a shift of the position parameter
Encoding through $\gamma$: Changes in the degree of uncertainty are encoded through the scale parameter
For different types of physical processes, one or the other encoding mechanism may predominate:
Uniform Motion: Predominantly encoded through a shift of $x_0$, with the parameter $\gamma$ changing insignificantly
Acceleration: Significantly affects the parameter $\gamma$, increasing the information divergence
Temperature Change: Can affect both parameters, with the rate of temperature change determining the intensity of interaction
Appendix I.6. Elimination of Time as a Fundamental Concept
The proposed concept completely eliminates the need for time as a fundamental entity:
"Earlier/later" is replaced by "greater/lesser information misalignment"
"Duration" is replaced by "magnitude of distribution change"
"Flow of time" is replaced by "process of minimizing information misalignment"
What we perceive as time is merely a manifestation of the sequence of Bayesian updates when information misalignment occurs.
Appendix I.7. Alternative Interpretation of Cosmological Phenomena
This concept offers alternative explanations for a number of cosmological phenomena:
Appendix I.7.1. Redshift Without Universe Expansion
Redshift can be interpreted as the result of information misalignment accumulating when light passes through areas with different information structure:
When passing through such areas, the parameter $x_0$ is systematically shifted
This manifests as a change in the observed frequency of light
At the same time, the information content (image clarity) is preserved due to the two-dimensional nature of the EM field
Appendix I.7.2. Black Holes as Areas of Extreme Information Misalignment
In the context of this model, black holes can be interpreted as areas where information misalignment reaches extreme values:
In the vicinity of a black hole, the information divergence ($\gamma$) tends to infinity
This makes synchronization fundamentally impossible
A black hole disrupts the function of the EM field as a synchronizer, creating an area where Bayesian updating cannot be performed
This explanation does not require postulating an event horizon in the traditional sense.
Appendix I.7.3. Ultraviolet Catastrophe
The concept offers a natural resolution of the ultraviolet catastrophe through the information properties of the Cauchy distribution:
The Cauchy distribution does not have finite moments of higher orders, which naturally explains the divergence of energy at high frequencies
The two-dimensionality of the EM field limits information divergence in certain directions
This does not require artificial introduction of field quantization to resolve the catastrophe
Appendix I.8. Connection with Thermodynamics and Information Entropy
Of particular interest is the connection between information divergence and thermodynamic processes:
The scale parameter $\gamma$ is related to the information entropy of the system
Temperature changes cause information misalignment requiring synchronization
This explains why irreversible processes (causing misalignment), rather than equilibrium states, lead to an increase in entropy
This gives a new interpretation of the second law of thermodynamics:
Entropy increase is an increase in information divergence between interacting systems
Irreversibility of processes is related to the impossibility of completely eliminating information misalignment after its occurrence
In a state of thermodynamic equilibrium, information divergence reaches a maximum for the given conditions
Appendix I.9. Implications and Testable Predictions
This concept offers a number of testable predictions:
When passing through areas with variable information structure, light should experience a shift in $x_0$ but preserve its information content (clarity)
In conditions of extreme information misalignment (for example, during black hole collisions), anomalies in electromagnetic radiation should be observed
There should be an observable connection between the scale parameter and the magnitude of information misalignment
Appendix I.10. Informational "Compression" of Physics through the Cauchy Distribution
One of the most important implications of the proposed concept is the "compression" of an entire field of physics through the information properties of the Cauchy distribution:
The Cauchy distribution naturally includes Lorentz invariance without the need to postulate it
It unifies informational, statistical, and geometric approaches to electromagnetism
It gives a unified informational explanation for many disparate observed phenomena
It eliminates the need for the fundamental concept of time, replacing it with information misalignment
It must be emphasized that without the assumption of two-dimensionality of the electromagnetic field, this logic ceases to work. Two-dimensionality is a key condition that allows the Cauchy distribution to maintain its unique informational properties during transformations.
Appendix I.11. Conclusion
The presented concept offers a radically new view of the nature of electromagnetic interaction and fundamental physical phenomena through the prism of information theory and Bayesian updates. By considering the electromagnetic field as a two-dimensional Cauchy distribution, performing the function of an information synchronizer when misalignment occurs, we get an elegant explanation of many observed effects and paradoxes.
In this interpretation, the classical concept of time is completely eliminated, replaced by the more fundamental concept of information misalignment and processes of its minimization. What we perceive as the "flow of time" is merely a manifestation of the sequence of Bayesian updates occurring in response to information misalignment.
This concept not only offers alternative explanations for known phenomena, but also opens new horizons for exploring fundamental interconnections between informational, statistical, geometric, and physical aspects of our Universe, completely avoiding the "virus of time" as an artificially introduced concept.
References
1. Liashkov, M. The Information-Geometric Theory of Dimensional Flow: Explaining Quantum Phenomena, Mass, Dark Energy and Gravity without Spacetime. Preprints 2025.
2. Ganci, S. Fraunhofer Diffraction by a Thin Wire and Babinet's Principle. Optik 2010, 121, 993–996.
3. Mishra, N.; Subba-Reddy, C.V.; Singh, B.; et al. Characterisation of CMOS Sensors for X-ray Imaging and Development of Edge-Illumination Phase-Contrast Imaging Technique with CMOS Sensors. Scientific Reports 2019, 9, 3110.
4. Ganci, S. Fraunhofer Diffraction by a Thin Wire and Babinet's Principle. American Journal of Physics 2005, 73, 83–84.
5. Shcherbakov, P.; Blagov, A.; Butashin, A.; et al. Point Defect Spectroscopy of LiF via Registration of Soft X-rays by a Hybrid Pixel Detector. Journal of Synchrotron Radiation 2020, 27, 1304–1312.
6. Mohammad, F.; Alam, A. On Modeling X-ray Diffraction Intensity Using Heavy-Tailed Probability Distributions: A Comparative Study. Crystals 2025, 15, 188.
7. Verkhovsky, L.I. Memoir on the Theory of Relativity and Unified Field Theory. ResearchGate preprint 2020.
8. Radovan, M. On the Nature of Time. arXiv preprint 2015, arXiv:1509.01498, 1–25.
9. Goldfain, E. Derivation of the Sum-of-Squares Relationship. June 2019.
10. Goldfain, E. Introduction to Fractional Field Theory; Aracne Editrice: Rome, Italy, December 2015.
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).