1. Introduction
Statistical mechanics (SM), in the formulation developed by E.T. Jaynes [1,2], is founded on an entropy optimization principle. Specifically, the Boltzmann entropy is maximized under the constraint of a fixed average energy:

$$S = -\sum_i p_i \ln p_i, \qquad \sum_i p_i = 1, \qquad \sum_i p_i E_i = \bar{E}.$$
The Lagrange multiplier equation defining the optimization problem is:

$$\mathcal{L}[p] = -\sum_i p_i \ln p_i + \lambda\Big(1 - \sum_i p_i\Big) + \beta\Big(\bar{E} - \sum_i p_i E_i\Big),$$

where $\lambda$ and $\beta$ are Lagrange multipliers enforcing the normalization and average energy constraints, respectively. Solving this optimization problem for $p$ yields the Gibbs measure:

$$p_i = \frac{1}{Z} e^{-\beta E_i},$$

where $Z = \sum_i e^{-\beta E_i}$ is the partition function.
For comparison, quantum mechanics (QM) is not formulated as the solution to an optimization problem, but rather consists of a collection of axioms [3,4]:
State Space: Every physical system is associated with a complex Hilbert space, and its state is represented by a ray (an equivalence class of vectors differing by a non-zero scalar multiple) in this space.
Observables: Physical observables correspond to Hermitian (self-adjoint) operators acting on the Hilbert space.
Dynamics: The time evolution of a quantum system is governed by the Schrödinger equation, where the Hamiltonian operator represents the system’s total energy.
Measurement: Measuring an observable projects the system into an eigenstate of the corresponding operator, yielding one of its eigenvalues as the measurement result.
Probability Interpretation: The probability of obtaining a specific measurement outcome is given by the squared magnitude of the projection of the state vector onto the relevant eigenstate (Born rule).
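As a concrete illustration of the last axiom, here is a minimal numerical sketch of the Born rule (the state and observable are illustrative choices, not taken from the axioms themselves):

import numpy as np

# Born rule: P(a_k) = |<a_k|psi>|^2 for the eigenstates |a_k> of an observable.
# State: an equal superposition; observable: the computational (Pauli-Z) basis.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
eigenstates = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
probs = [abs(np.vdot(e, psi)) ** 2 for e in eigenstates]
print(probs)  # [0.5, 0.5] -- the probabilities sum to 1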
Physical theories have traditionally been constructed in two distinct ways. Some, like QM, are defined through a set of mathematical axioms that are first postulated and then verified against experiments. Others, like SM, emerge as solutions to optimization problems with experimentally verified constraints.
We propose to generalize the optimization methodology of E.T. Jaynes to encompass all of physics, aiming to derive a unified theory from a single optimization problem.
To that end, we introduce the following constraint:
Axiom 1 (Nature).
where $\mathbf{H}$ is an operator, $\overline{\operatorname{tr}\mathbf{H}}$ is the average of its trace, $p$ is the number of spatial dimensions (e.g. three in spacetime), and $w$ is an information density¹.
This constraint, as it replaces the scalar energy with the operator $\mathbf{H}$, extends E.T. Jaynes' optimization method to encompass the non-commutative observables and symmetry group generators required for fundamental physics.
We then construct an optimization problem:
Definition 1 (Physics).
Physics is the solution to:
where $t$ is the Lagrange multiplier² enforcing the natural constraint.
This definition constitutes our complete proposal for reformulating fundamental physics—no additional principles will be introduced. By replacing the Boltzmann entropy with the relative Shannon entropy, the optimization problem extends beyond thermodynamic variables to encompass any type of experiment. This generalization occurs because relative entropy captures the essence of any experiment: the relationship between a final measurement and its initial preparation. Finally, the natural constraint defines the domain of applicability of the theory.
The crucial insight is that because our formulation maintains complete generality in the structure of experiments while optimizing over all possible natural theories, the resulting solution holds true, by construction, for all realizable experiments within its domain. This approach reduces our reliance on postulating axioms through trial and error, and simplifies the foundations of physics.
Specifically, when we employ the natural constraint—the most permissive constraint for this problem (Section 2.2)—the solution attains its largest domain, yielding the Dirac theory in the language of spacetime algebra (STA). The resulting solution in fact yields a slight extension of standard quantum theory, in the sense that it enlarges the group of transformations preserving the probability density from unitary transformations to $\mathrm{Spin}^c(3,1)$ transformations. It is within this extension that a unified physics, where fundamental theories emerge naturally, is uniquely recovered: e.g. general relativity (acting on spacetime) and Yang–Mills theories (acting on internal spaces of spacetime).
As we will see in Section 2.1, Definition 1 automatically restricts the valid solutions to the specific case of 3+1 dimensions. In other dimensional configurations, various obstructions arise that make the solution violate the axioms of probability theory. The following table summarizes the geometric cases and their obstructions:
where $\mathrm{Cl}(p,q)$ designates the Clifford algebra of $p$ space dimensions and $q$ time dimensions.
We will demonstrate the obstructions in Section 2.1 and then investigate the unobstructed case in Section 2.2. These obstructions are desirable because they provide a mechanism for the observed dimensionality of our universe as the only configuration satisfying the probabilistic structure of the solution.
2. Results
Theorem 1.
The general solution of the optimization problem (Definition 1) is:
Proof. We solve Definition 1 by taking the variation of the Lagrange multiplier equation with respect to $w$. (To improve legibility, we will drop the explicit parametrization in the proof.)
Finally, restoring the explicit parametrization, we end with:
□
The steps of this proof are standard for solving an entropy optimization problem in statistical mechanics, and simply mimic the steps in the usual derivation of the Gibbs measure, as shown in Appendix A.
In Section 2.2, we will show that when $\mathbf{H}$ is taken to be the most permissive constraint in 3+1D—a covariant derivative over all possible transformations of the elements of STA—this solution entails the Hamiltonian form of the Dirac equation expressed in the language of STA.
2.1. Dimensional Obstructions
We begin by exploring the dimensional obstructions of this solution. We found that all geometric configurations except the 3+1D case are obstructed. By obstructed, we mean that the solution does not satisfy the non-negativity requirement of its interpretation as an information density (i.e. it would entail negative probabilities, non-real probabilities, or definability problems).
Let us now demonstrate the obstructions mentioned above.
Theorem 2 (Ill-defined probabilities). These Clifford algebras are isomorphic to direct sums of matrix algebras, rather than a single matrix algebra. Consequently, the determinant operation, as required by the solution form, is ill-defined or inapplicable in this context, making these algebras unsuitable.
Proof. These Clifford algebras are classified as follows:
The notion of determinant is ill-defined, as we are dealing with a direct sum of matrix algebras instead of a single matrix algebra. □
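For reference, a sketch of the standard classification of low-dimensional real Clifford algebras, using this paper's convention that the $p$ space generators square to $+1$ and the $q$ time generators to $-1$ (the direct-sum cases are the ones obstructed by Theorem 2):

$$\mathrm{Cl}(0,0)\cong\mathbb{R},\quad \mathrm{Cl}(1,0)\cong\mathbb{R}\oplus\mathbb{R},\quad \mathrm{Cl}(0,1)\cong\mathbb{C},\quad \mathrm{Cl}(2,0)\cong\operatorname{Mat}(2,\mathbb{R}),$$
$$\mathrm{Cl}(1,1)\cong\operatorname{Mat}(2,\mathbb{R}),\quad \mathrm{Cl}(0,2)\cong\mathbb{H},\quad \mathrm{Cl}(2,1)\cong\operatorname{Mat}(2,\mathbb{R})\oplus\operatorname{Mat}(2,\mathbb{R}),$$
$$\mathrm{Cl}(3,0)\cong\operatorname{Mat}(2,\mathbb{C}),\quad \mathrm{Cl}(0,3)\cong\mathbb{H}\oplus\mathbb{H},\quad \mathrm{Cl}(3,1)\cong\operatorname{Mat}(4,\mathbb{R}),\quad \mathrm{Cl}(1,3)\cong\operatorname{Mat}(2,\mathbb{H}).$$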
Theorem 3 (Non-real probabilities). The quantity resulting from the optimization procedure, when evaluated for the matrix representations of the Clifford algebras in this category, is either complex-valued or quaternion-valued, making them unsuitable.
Proof. These Clifford algebras are classified as follows:
Evaluating the function $w$ derived from the entropy maximization procedure for operators $\mathbf{H}$ associated with these algebras yields values in $\mathbb{C}$ (for Clifford algebras isomorphic to complex matrix algebras) or in $\mathbb{H}$ (for Clifford algebras isomorphic to quaternionic matrix algebras). Since $w$ must be real and non-negative, these configurations are obstructed. □
Theorem 4 (Negative probabilities). The even sub-algebra of this dimensional configuration allows for negative probabilities, making it unsuitable.
Proof. This category contains one dimensional configuration:
Theorem 5 (Non-definability). The optimization problem is not definable for these dimensional configurations.
Proof. This category contains five dimensional configurations:
- Definition 1 requires 1 time parameter for the Lagrange multiplier, and at least 1 space parameter for the integration measure. This configuration has neither.
- Definition 1 requires 1 time parameter for the Lagrange multiplier, and at least 1 space parameter for the integration measure. This configuration has the time parameter, but lacks a space parameter.
- Definition 1 requires 1 time parameter for the Lagrange multiplier, and at least 1 space parameter for the integration measure. This configuration has the space parameter, but lacks a time parameter.
- Definition 1 requires 1 time parameter for the Lagrange multiplier, and at least 1 space parameter for the integration measure. This configuration has two space parameters, but lacks the time parameter.
- Definition 1 requires 1 time parameter for the Lagrange multiplier, and at least 1 space parameter for the integration measure. This configuration has two space parameters, but has more time parameters than Lagrange multipliers.
□
In Appendix C, we provide two conjectures suggesting that 6D and all higher dimensions are also obstructed. We recommend reading the rest of this paper before Appendix C, as the conjectures rely on concepts introduced in the following sections.
2.2. Spinors + Dirac Equation
We will now investigate this solution in the context of $\mathrm{Cl}(3,1)$. We begin with a definition of the determinant for a multivector.
2.2.1. The Multivector Determinant
Our goal here will be to express the determinant of a real $4\times 4$ matrix as a multivector self-product. To achieve that, we begin by defining a general multivector of $\mathrm{Cl}(3,1)$:

$$u = a + \mathbf{x} + \mathbf{f} + \mathbf{v} + b\,I,$$

where $a$ is a scalar, $\mathbf{x}$ a vector, $\mathbf{f}$ a bivector, $\mathbf{v}$ a pseudo-vector, and $b\,I$ a pseudo-scalar. Explicitly,

$$\begin{aligned} u ={}& a + t\,e_0 + x\,e_1 + y\,e_2 + z\,e_3\\ &+ f_{01}\,e_0e_1 + f_{02}\,e_0e_2 + f_{03}\,e_0e_3 + f_{12}\,e_1e_2 + f_{13}\,e_1e_3 + f_{23}\,e_2e_3\\ &+ v\,e_0e_1e_2 + w\,e_0e_1e_3 + q\,e_0e_2e_3 + p\,e_1e_2e_3 + b\,e_0e_1e_2e_3. \end{aligned}$$
Definition 2 (Real-Majorana Algebra Isomorphism).
The map $\varphi$ defined by $\varphi(e_\mu) = \gamma_\mu$, with $\gamma_\mu$ the real Majorana matrices listed in Appendix B, extends linearly and multiplicatively to an isomorphism between $\mathrm{Cl}(3,1)$ and the algebra of real $4\times 4$ matrices.
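As a quick sanity check on Definition 2, the following sketch (in numpy rather than Sage) verifies that the Majorana matrices from Appendix B satisfy the defining relations $\gamma_\mu\gamma_\nu + \gamma_\nu\gamma_\mu = 2\eta_{\mu\nu}$ with $\eta = \operatorname{diag}(-1,1,1,1)$:

import numpy as np

# Majorana matrices from Appendix B: a real 4x4 representation of Cl(3,1),
# with e0 the time direction (squares to -1) and e1, e2, e3 spatial.
y0 = np.array([[0, 0, 0, 1], [0, 0, -1, 0], [0, 1, 0, 0], [-1, 0, 0, 0]], dtype=float)
y1 = np.array([[0, -1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, -1], [0, 0, -1, 0]], dtype=float)
y2 = np.array([[0, 0, 0, 1], [0, 0, -1, 0], [0, -1, 0, 0], [1, 0, 0, 0]], dtype=float)
y3 = np.array([[-1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]], dtype=float)

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
gammas = [y0, y1, y2, y3]
for i in range(4):
    for j in range(4):
        anti = gammas[i] @ gammas[j] + gammas[j] @ gammas[i]
        assert np.allclose(anti, 2 * eta[i, j] * np.eye(4))
print("Clifford relations verified")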
To manipulate and analyze multivectors in $\mathrm{Cl}(3,1)$, we introduce several important operations: the multivector conjugate, the pseudo-blade conjugate, and the multivector determinant.
Definition 4 (Multivector Conjugate—in $\mathrm{Cl}(3,1)$).
The multivector (Clifford) conjugate of $u$ is

$$u^{\ddagger} = \langle u\rangle_0 - \langle u\rangle_1 - \langle u\rangle_2 + \langle u\rangle_3 + \langle u\rangle_4.$$

Definition 5 (Pseudo-Blade Conjugate—in $\mathrm{Cl}(3,1)$).
The pseudo-blade conjugate of $u$ is

$$\lfloor u\rfloor_{3,4} = \langle u\rangle_0 + \langle u\rangle_1 + \langle u\rangle_2 - \langle u\rangle_3 - \langle u\rangle_4.$$
Lundholm [5] proposes a number of multivector norms, and shows that they are the unique forms which carry the properties of determinants, such as multiplicativity, to the domain of multivectors:
Definition 6.
The self-products associated with low-dimensional Clifford algebras are:
where the indicated conjugate reverses the sign of the pseudo-scalar blade (i.e. the highest-degree blade of the algebra).
We can now express the determinant of the matrix representation of a multivector via a self-product. This choice is unique:
Theorem 6 (The Multivector Determinant—in $\mathrm{Cl}(3,1)$).

$$\det \varphi(u) = \lfloor u^{\ddagger} u \rfloor_{3,4}\; u^{\ddagger} u.$$

Proof. As this form naively expands into a very large number of product terms, we utilize a computer-assisted proof of this equality in Appendix B. □
As can be seen from this theorem, the relationship between determinants and multivector products in $\mathrm{Cl}(3,1)$ is a quartic form that cannot be reduced to a simpler bilinear form.
2.2.2. The Optimization Problem
The relative Shannon entropy requires measures that are everywhere non-negative. Consequently, we will first identify the largest sub-algebra of $\mathrm{Cl}(3,1)$ whose determinant is non-negative.
Theorem 7 (Non-Negativity of Even Multivectors). Let $u$ be an even multivector of $\mathrm{Cl}(3,1)$. Then its multivector determinant is non-negative.
Proof. Let us calculate $\lfloor u^{\ddagger}u\rfloor_{3,4}\, u^{\ddagger}u$. For an even multivector, the product $u^{\ddagger}u$ contains only scalar and pseudo-scalar parts:

$$u^{\ddagger}u = s + p\,I, \qquad s, p \in \mathbb{R},$$

where $I = e_0e_1e_2e_3$ (with $I^2 = -1$). Therefore

$$\lfloor u^{\ddagger}u\rfloor_{3,4}\, u^{\ddagger}u = (s - p\,I)(s + p\,I) = s^2 + p^2,$$

which is non-negative—the sum of two squares of real numbers is in $\mathbb{R}_{\geq 0}$. □
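A numerical spot-check of Theorem 7 (a sketch; random coefficients stand in for arbitrary even multivectors, reusing the Majorana matrices of Appendix B):

import numpy as np

# Majorana matrices for Cl(3,1) (same as in Appendix B).
y0 = np.array([[0,0,0,1],[0,0,-1,0],[0,1,0,0],[-1,0,0,0]], dtype=float)
y1 = np.array([[0,-1,0,0],[-1,0,0,0],[0,0,0,-1],[0,0,-1,0]], dtype=float)
y2 = np.array([[0,0,0,1],[0,0,-1,0],[0,-1,0,0],[1,0,0,0]], dtype=float)
y3 = np.array([[-1,0,0,0],[0,1,0,0],[0,0,-1,0],[0,0,0,1]], dtype=float)

# An even multivector: scalar + 6 bivector components + pseudoscalar.
bivectors = [y0@y1, y0@y2, y0@y3, y1@y2, y1@y3, y2@y3]
pseudoscalar = y0 @ y1 @ y2 @ y3

rng = np.random.default_rng(42)
for _ in range(10000):
    c = rng.normal(size=8)
    m = c[0]*np.eye(4) + sum(ci*B for ci, B in zip(c[1:7], bivectors))
    m += c[7]*pseudoscalar
    assert np.linalg.det(m) >= -1e-9  # non-negative up to round-off
print("all 10000 determinants non-negative")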
To define the optimization problem in $\mathrm{Cl}(3,1)$, we note the following:
Flat Spacetime:
The optimization problem will be as follows:
The base field is identified from the multivector determinant, and is written as:
where
As such, Equation (90) is formally equivalent to Equation (89).
The following expression, obtained by removing the determinant³, satisfies the solution:

where $\psi$ is defined as:
The base field leads to a variant of the Schrödinger equation, obtained by taking its derivative with respect to $t$:
Definition 8 (Spinor-valued Schrödinger Equation).
The above expression is simply the massless Dirac equation in Hamiltonian form. Specifically, the Dirac equation is obtained as follows:
where $\nabla$ is the Dirac operator over all 4 spacetime coordinates. The end result is the massless Dirac equation in the language of STA.
From Noether's theorem, it is known that the Dirac equation contains a conserved charge current, which is the Dirac current.
Theorem 8 (Positive-Definite Inner Product). The inner product, defined as follows, is positive definite:
Proof. Expanding the inner product for arbitrary arguments yields:
which is positive-definite. □
Consequently, the quantity can be understood as a probability density when normalized, as in single-particle relativistic quantum theory.
Theorem 9 (Equivalence to David Hestenes' [6] formulation).
where $R$ is a rotor.
Proof. Let $u$ be an invertible even multivector. Then $u^{\ddagger}u = \rho e^{I\beta}$, which is a complex number (since $I^2 = -1$). In polar form, this implies, iff $u^{\ddagger}u \neq 0$, that $R = (\rho e^{I\beta})^{-1/2}\, u$ satisfies $R^{\ddagger}R = 1$, i.e. $R$ is a rotor. □
We also note that the definition of the Dirac equation recovers David Hestenes' formulation of the same in the massless case. Indeed, it suffices to pose $u = \rho^{1/2} e^{I\beta/2} R$.
Curved Spacetime:
In curved spacetime, we utilize the ADM formalism. We foliate spacetime into hypersurfaces $\Sigma_t$ of constant $t$:
The optimization problem Lagrangian remains similar, but the constraint now acquires lapse and shift functions:
where $n$ is the STA representation of the normal vector to the spatial hypersurface, $K$ is the trace of the extrinsic curvature, and $D_i$ is the 3D spatial covariant derivative on the slice $\Sigma_t$.
The problem is solved in a manner similar to the flat case and leads to the Schrödinger equation:
This is the Hamiltonian form of the massless Dirac equation, with the covariant derivative expressed through the lapse and shift functions and containing a spin and pseudoscalar connection.
Field Functional:
The field functional version of the optimization problem integrates over all possible geometries:
where
The configuration includes the spatial dreibein $e^a_i$, the lapse $N$, and the shift $N^i$, representing a full specification of the spatial geometry and its embedding coordinates for the slice $\Sigma_t$.
The optimization problem is solved in a manner similar to the previous cases and leads to the Schrödinger equation:
2.3. Yang-Mills + Gravity
Having considered the most general linear constraint, we now consider the most general quadratic constraint; that is, we make the following replacement:
The optimization problem will be as follows:
where $f$ and $\Lambda$ are a regularization function and a regularization constant, respectively.
The solution is:
and the value of the Hamiltonian functional at the maximum is
Hence the stationary condition of the entropy-maximization problem is identical to the spectral action introduced by Connes and Chamseddine. As they showed, applying the heat-kernel (Seeley–DeWitt) expansion to the elliptic operator yields, for large $\Lambda$, an asymptotic series with universal coefficients that are the moments of $f$ and with the Seeley–DeWitt coefficients $a_k$. The first few non-vanishing terms reproduce precisely the familiar Lagrangians of gravity and gauge theory:
- the $a_0$ term yields a cosmological-constant term;
- the $a_2$ term gives the Einstein–Hilbert action;
- the $a_4$ term contains the Yang–Mills kinetic term together with the scalar-field kinetic and potential contributions.
Thus the spectral action automatically encodes (i) General Relativity, (ii) the full Yang–Mills sector of the Standard Model (and possible extensions), and (iii) the Higgs-type scalar sector, with all coupling constants expressed in terms of the moments of the cut-off function $f$ and the single scale $\Lambda$. Consequently, the optimization problem we have solved not only reproduces the Connes–Chamseddine spectral action, it also provides a statistical-mechanical interpretation of why the Einstein–Hilbert and Yang–Mills terms emerge as the degree-two contribution to the optimization problem.
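For reference, the expansion referred to above has the schematic form (a standard statement of the Connes–Chamseddine result; the moment normalizations are convention-dependent and assumed here):

$$\operatorname{Tr} f\!\left(\frac{\mathcal{D}^2}{\Lambda^2}\right) \sim \Lambda^4 f_4\, a_0(\mathcal{D}^2) + \Lambda^2 f_2\, a_2(\mathcal{D}^2) + f_0\, a_4(\mathcal{D}^2) + O(\Lambda^{-2}),$$
$$f_4 = \int_0^\infty u\, f(u)\, du, \qquad f_2 = \int_0^\infty f(u)\, du, \qquad f_0 = f(0).$$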
3. Discussion
When asked to define what a physical theory is, an informal answer might be that it is a set of equations that applies to all experiments realizable within a domain, with nature as a whole being the most general domain. While physicists have expressed these theories through sets of axioms, we propose a more direct approach—mathematically realizing the fundamental definition itself. This definition is realized as a constrained optimization problem (Axiom 1 and Definition 1) that can be solved directly (Theorem 1). The solution to this optimization problem yields precisely those structures that realize the physical theory over said domain. Succinctly, physics is the solution to:
The relative Shannon entropy represents the basic structure of any experiment, quantifying the informational difference between its initial preparation and its final measurement.
The natural constraint is chosen to be the most general structure that admits a solution to this optimization problem. This generality follows from key mathematical requirements. The constraint must involve quantities that form an algebra, as the solution requires taking exponentials:

$$\exp X = \sum_{n=0}^{\infty} \frac{X^n}{n!},$$

which involves addition, powers, and scalar multiplication of $X$. The use of the trace operation further necessitates that $X$ be representable by square matrices, yielding Axiom 1:
The trace is utilized because the constraint must be a scalar for use in the Lagrange multiplier equation. Finally, the operator $\mathbf{H}$ is selected to include the set of all transformations that can be performed on the base field, and is thus the least restrictive constraint definable within the specific geometric configuration.
These mathematical requirements demonstrate that the natural constraint, as it admits the most permissive mathematical structure required to solve an arbitrary entropy maximization problem, can be understood as the most general extension to the standard entropy maximization problem of statistical mechanics.
Thus, having established both the mathematical structure and its generality, we can understand how this minimal ontology operates. Since our formulation keeps the structure of experiments completely general, our optimization considers all possible theories for that structure, and the constraint is the most permissive constraint possible for that structure, the resulting optimal physical theory applies, by construction, to all realizable experiments within its domain.
This ontology is both operational, being grounded in the basic structure of experiments rather than abstract entities, and constructive, showing how physical laws emerge from optimization over all possible predictive theories subject to the natural constraint. This represents a significant philosophical shift from traditional physical ontologies where laws are typically taken as primitive.
The next step in our derivation is to represent the determinant of the $4\times 4$ matrices through a self-product of multivectors involving various conjugate structures. By examining the various dimensional configurations of Clifford algebras, we find that $\mathrm{Cl}(3,1)$, representing the real $4\times 4$ matrices, admits a sub-algebra whose determinant is non-negative for its invertible members. All other dimensional configurations fail to admit such a non-negative structure.
The solution reveals that the 3+1D case harbors a new type of field amplitude structure analogous to complex amplitudes, one that exhibits the characteristic elements of a quantum mechanical theory. Instead of complex-valued amplitudes, we have amplitudes valued in the invertible subset of the even sub-algebra of $\mathrm{Cl}(3,1)$. When normalized, this amplitude is identical to David Hestenes' wavefunction, but comes with an extended Born rule represented by the determinant. The quartic structure of this rule automatically incorporates gravity via the connection and local gauge theories as Yang–Mills theories. Specifically, the powers of the Dirac operator, automatically generated by the Lagrangian, contain the invariants of gravity and of Yang–Mills theory, which are made explicit via a power series expansion, along with the matter fields quantifying the system's information density via surprisal and limiting its propagation speed.
3.1. Proposed Interpretation of QM
An experiment begins with a known initial preparation $\rho_i$, evolves under a constraint (Axiom 1), and ends with a final measurement $\rho_f$. By treating the experiment as the fundamental ontic entity, we resolve a redundancy inherent in traditional physical theories: physics is not a set of laws that are simultaneously axiomatic and validated by experiments (a redundancy—that which is validated by something else is not axiomatic) but an optimal interpolation device connecting $\rho_i$ to $\rho_f$ under the constraint of nature. The experiment is fundamental, but the physical laws derived from it are not.
3.1.1. Resolving QM’s Interpretive Issues Through Measure-to-Measure Evolution.
We propose that problems with the interpretation of quantum mechanics, such as the measurement problem, stem from the traditional view that a physical system evolves from $\psi$ to $\rho_f$—that is, from an initial wavefunction to a final measure. However, our framework derives quantum mechanics not from $\psi$ to $\rho_f$ but from $\rho_i$ to $\rho_f$, with the $\psi$-to-$\rho_f$ description emerging via inference. This slight enlargement in scope is able, we suggest, to resolve the measurement problem and other interpretational difficulties.
This operational perspective aligns with laboratory practice but challenges the standard formulation, which takes $\psi$ as its initial preparation instead of $\rho_i$. Building on this, let us examine how starting with $\rho_i$ as the initial state works in practice, ensuring experiments are well-defined and derivable from optimization.
Core Argument:
We propose that a well-defined experiment begins with a measurement outcome $\rho_i$, not an abstract quantum state $\psi$.
- Example: Preparing $|+\rangle = \tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$ requires:
- (a) Measure systems to collapse them to $|0\rangle$ or $|1\rangle$.
- (b) Discard all systems in state $|1\rangle$.
- (c) Apply a Hadamard gate H to $|0\rangle$.
- (d) The preparation is complete.
Neglecting the initial measurement (a) implies that systems of unknown state are sent into the Hadamard gate—the resulting experiment is ill-defined.
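A minimal simulation of steps (a)–(d) (a sketch; the 50/50 statistics of the unmeasured incoming systems are an illustrative assumption):

import numpy as np

rng = np.random.default_rng(0)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
ket0 = np.array([1.0, 0.0])

prepared = []
for _ in range(1000):
    outcome = rng.choice(2)        # (a) measure: collapse to |0> or |1>
    if outcome == 1:
        continue                   # (b) discard systems found in |1>
    prepared.append(H @ ket0)      # (c) apply H to the verified |0>
print(len(prepared), prepared[0])  # (d) ~500 copies of |+> = [0.707, 0.707]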
Challenges and Solutions:
- Objection 1: Preparation Without Collapse
- (a) Issue: Traditional QM superficially appears to allow preparing a state without collapsing it (e.g. cooling).
- (b) Response: In practice, all preparations are validated by measurement (or an equivalent).
- (c) Example: Cooling various qubits to $|0\rangle$ is non-invertible (one cannot return to the initial state because of dissipative effects). The end result is mathematically equivalent to a measurement of $|0\rangle$ or $|1\rangle$ followed by a discard of $|1\rangle$. Creating $|+\rangle$ requires assuming the initial $|0\rangle$, validated by prior conditions.
- Objection 2: Loss of Quantum Coherence
- (a) Issue: If preparation starts with a measurement, how do we account for coherence (e.g., interference)?
- (b) Response: Coherence emerges operationally (see the interference sketch following this discussion).
- (c) Example: Measure systems to collapse them to $|0\rangle$ or $|1\rangle$. Discard all systems in state $|1\rangle$. Apply H to many initial $|0\rangle$-verified states. Aggregate final measurements show interference patterns, even though individual experiments start with collapsed states.
- Objection 3: Entanglement and Nonlocality
- (a) Issue: Entangled states require joint preparation of superpositions.
- (b) Response: Entanglement is preparable from an initial measurement like any other state.
- (c) Example (shown in matrix form below): Measure systems to collapse them to $|00\rangle$, $|01\rangle$, $|10\rangle$, or $|11\rangle$. Discard all systems in states $|01\rangle$, $|10\rangle$, and $|11\rangle$. Apply a Hadamard gate to the first qubit: $|00\rangle \mapsto \tfrac{1}{\sqrt{2}}(|00\rangle + |10\rangle)$. Apply a CNOT gate (with the first qubit as control, the second as target): $\tfrac{1}{\sqrt{2}}(|00\rangle + |10\rangle) \mapsto \tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$. The final state is entangled—specifically, it is one of the Bell states (sometimes denoted $|\Phi^{+}\rangle$).
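The steps of the Objection 3 example in matrix form (a sketch; the basis ordering $|00\rangle, |01\rangle, |10\rangle, |11\rangle$ is our assumed convention):

import numpy as np

ket00 = np.array([1.0, 0.0, 0.0, 0.0])          # post-selected |00> start
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],                   # control: first qubit
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

psi = CNOT @ np.kron(H, np.eye(2)) @ ket00       # H on qubit 1, then CNOT
print(psi)  # [0.707 0 0 0.707] = (|00> + |11>)/sqrt(2): the Bell state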
In all cases, neglecting the initial measurement results in systems of unknown state entering the experiment and making it ill-defined, preventing us from obtaining QM as an inferred solution to an optimization problem.
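Returning to Objection 2, the following sketch (illustrative; the interferometer is modeled as two Hadamard gates in sequence) shows aggregate interference arising from runs that each begin with a measured, collapsed $|0\rangle$:

import numpy as np

rng = np.random.default_rng(1)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
ket0 = np.array([1.0, 0.0])

counts = {0: 0, 1: 0}
for _ in range(10000):
    psi = ket0                    # (a)+(b): measured, |0>-verified start
    psi = H @ (H @ psi)           # (c): two Hadamards form an interferometer
    p = np.abs(psi) ** 2          # Born rule on the final state
    counts[rng.choice(2, p=p)] += 1
print(counts)  # ~{0: 10000, 1: 0}: full constructive interference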
3.1.2. Atomic Experiments as the Fundamental Evolving Objects
For a physical evolution defined from $\psi$ to $\rho_f$, the wavefunction is the natural ontological focus, as in many interpretations. But what is the fundamental ontological object of a $\rho_i$-to-$\rho_f$ evolution? To address this, recall that Definition 1 represents the set of all experiments realizable within a domain. In practice, we perform them atomically—the full set emerges from many such instances, which are the true building blocks of reality.
An atomic experiment is a pair of elements from an ensemble, with the first as the initial outcome and the second as the final. Consider a run of $n$ atomic experiments:
These pairs are the objects that evolve—the brute facts of reality. Aggregation via the law of large numbers yields representative measures (or, normalized, $\rho_i$ and $\rho_f$) by counting occurrences and dividing by $n$. This provides the endpoints for optimization over all realizable experiments.
The map from such runs to measures is many-to-one and non-invertible (e.g., different runs can yield identical $\rho_i$ and $\rho_f$); a toy illustration follows below. Thus, interpretive issues like the measurement problem arise as artifacts of this aggregation: it discards atomic details, and attempts to "collapse" back to outcomes fail due to non-invertibility. In our framework, atomic experiments are the irreducible constituents of reality from which laws emerge, fully resolving paradoxes while grounding QM in operational terms.
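A toy illustration of the aggregation map and its many-to-one character (the outcome labels are arbitrary placeholders):

from collections import Counter

# A run of n atomic experiments: (initial outcome, final outcome) pairs.
run = [("0", "0"), ("0", "1"), ("1", "1"), ("0", "0"), ("1", "0"), ("0", "1")]
n = len(run)

rho_initial = {k: c / n for k, c in Counter(i for i, _ in run).items()}
rho_final   = {k: c / n for k, c in Counter(f for _, f in run).items()}
print(rho_initial)  # {'0': 0.667, '1': 0.333}
print(rho_final)    # {'0': 0.5, '1': 0.5}
# Permuting the run leaves both measures unchanged: the map is many-to-one.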
3.1.3. Stabilizing Physics on Empirical Foundations
This framework eliminates traditional ontological commitments—such as fields or particles as primitive entities—in favor of concepts grounded in empirical science. In conventional theories, foundational notions are subject to replacement as experimental data evolves: For instance, the electron, initially conceived as a discrete particle, is supplanted by field descriptions in quantum field theory. Such shifts, while empirically driven, introduce instability, as each paradigm may drift from direct verifiability toward metaphysical assumptions.
By contrast, our approach posits experiments as the mathematical foundation of physics, as formalized in Definition 1. Here, atomic experiments—pairs evolving from initial to final outcomes—serve as the irreducible building blocks of reality, interpolated via optimization under the natural constraint. Unlike fields or particles, experiments cannot be replaced by any future concept deemed more scientifically fundamental than experiments themselves; they are the brute facts from which all theoretical constructs emerge via inference, by virtue of the definition of science.
This stabilization aligns physics with the core of the scientific method: empirical testability and falsifiability, without unverifiable primitives. The wavefunction $\psi$, spacetime geometry, and gauge fields arise not as fundamental but as derived tools for connecting $\rho_i$ to $\rho_f$. The significance is that physics' foundations become immune to conceptual upheavals, rooted eternally in the verifiable structure of experiments. In this operational ontology, science's iterative nature refines interpolations over these primitives, ensuring consistency and empirical fidelity.
4. Conclusions
E.T. Jaynes fundamentally reoriented statistical mechanics by recasting it as a problem of inference rather than mechanics. His approach revealed that the equations of thermodynamics are not arbitrary physical laws but necessary consequences of maximizing entropy subject to constraints. This work extends Jaynes’ inferential paradigm to address a more fundamental question: what is a physical theory itself?
A physical theory, at its essence, is a set of equations that applies to all experiments realizable within a domain. While this definition is informal, our contribution lies in making this concept mathematical. By formulating it as an optimization problem—minimizing the relative entropy of measurement outcomes subject to the natural constraint—we transform the abstract definition of a physical theory into a precise, solvable mathematical problem.
This approach represents a profound methodological shift. Rather than constructing physical theories through trial-and-error enumerations of axioms, we derive them as necessary solutions to a well-defined optimization problem. Physics thus emerges not as a collection of independently discovered laws but as the unique optimal interpolation device between arbitrary experimental preparation and measurement under the constraint of nature.
The power of this formulation lies in its generality, allowing us to recover several established physical theories from entropy optimization. Jaynes showed that statistical inference with minimal assumptions yields thermodynamics; we suggest that this same principle, properly generalized, may yield the foundation of all of physics.
Funding
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
Data Availability Statement
No datasets were generated or analyzed during the current study. During the preparation of this manuscript, we utilized a Large Language Model (LLM), for assistance with spelling and grammar corrections, as well as for minor improvements to the text to enhance clarity and readability. This AI tool did not contribute to the conceptual development of the work, data analysis, interpretation of results, or the decision-making process in the research. Its use was limited to language editing and minor textual enhancements to ensure the manuscript met the required linguistic standards.
Conflicts of Interest
The author declares that he has no competing financial or non-financial interests that are directly or indirectly related to the work submitted for publication.
Appendix A. SM
Here, we solve the Lagrange multiplier equation of SM.
We solve the maximization problem as follows:
The partition function $Z$ is obtained as follows:
Finally, the probability measure is:
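For completeness, the standard steps read as follows (a reconstruction in conventional notation; $\lambda$, $\beta$, $E_i$, and $Z$ are the usual quantities):

$$\mathcal{L}[p] = -\sum_i p_i \ln p_i + \lambda\Big(1 - \sum_i p_i\Big) + \beta\Big(\bar{E} - \sum_i p_i E_i\Big),$$
$$\frac{\partial \mathcal{L}}{\partial p_i} = -\ln p_i - 1 - \lambda - \beta E_i = 0 \quad\Longrightarrow\quad p_i = e^{-1-\lambda} e^{-\beta E_i},$$
$$\sum_i p_i = 1 \quad\Longrightarrow\quad Z \equiv e^{1+\lambda} = \sum_i e^{-\beta E_i}, \qquad p_i = \frac{e^{-\beta E_i}}{Z}.$$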
Appendix B. SageMath Program Showing $\lfloor u^{\ddagger} u\rfloor_{3,4}\, u^{\ddagger} u = \det\varphi(u)$
from sage.algebras.clifford_algebra import CliffordAlgebra
from sage.quadratic_forms.quadratic_form import QuadraticForm
from sage.symbolic.ring import SR
from sage.matrix.constructor import Matrix
from sage.calculus.var import var  # provides the symbolic var() used below
# Define the quadratic form for GA(3,1) over the Symbolic Ring
Q = QuadraticForm(SR, 4, [-1, 0, 0, 0, 1, 0, 0, 1, 0, 1])
# Initialize the GA(3,1) algebra over the Symbolic Ring
algebra = CliffordAlgebra(Q)
# Define the basis vectors
e0, e1, e2, e3 = algebra.gens()
# Define the scalar variables for each basis element
a = var('a')
t, x, y, z = var('t x y z')
f01, f02, f03, f12, f23, f13 = var('f01 f02 f03 f12 f23 f13')
v, w, q, p = var('v w q p')
b = var('b')
# Create a general multivector
udegree0=a
udegree1=t*e0+x*e1+y*e2+z*e3
udegree2=f01*e0*e1+f02*e0*e2+f03*e0*e3+f12*e1*e2+f13*e1*e3+f23*e2*e3
udegree3=v*e0*e1*e2+w*e0*e1*e3+q*e0*e2*e3+p*e1*e2*e3
udegree4=b*e0*e1*e2*e3
u=udegree0+udegree1+udegree2+udegree3+udegree4
u2 = u.clifford_conjugate()*u
u2degree0 = sum(x for x in u2.terms() if x.degree() == 0)
u2degree1 = sum(x for x in u2.terms() if x.degree() == 1)
u2degree2 = sum(x for x in u2.terms() if x.degree() == 2)
u2degree3 = sum(x for x in u2.terms() if x.degree() == 3)
u2degree4 = sum(x for x in u2.terms() if x.degree() == 4)
u2conj34 = u2degree0+u2degree1+u2degree2-u2degree3-u2degree4
I = Matrix(SR, [[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1]])
#MAJORANA MATRICES
y0 = Matrix(SR, [[0, 0, 0, 1],
                 [0, 0, -1, 0],
                 [0, 1, 0, 0],
                 [-1, 0, 0, 0]])
y1 = Matrix(SR, [[0, -1, 0, 0],
                 [-1, 0, 0, 0],
                 [0, 0, 0, -1],
                 [0, 0, -1, 0]])
y2 = Matrix(SR, [[0, 0, 0, 1],
                 [0, 0, -1, 0],
                 [0, -1, 0, 0],
                 [1, 0, 0, 0]])
y3 = Matrix(SR, [[-1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, -1, 0],
                 [0, 0, 0, 1]])
mdegree0 = a
mdegree1 = t*y0+x*y1+y*y2+z*y3
mdegree2 = f01*y0*y1+f02*y0*y2+f03*y0*y3+f12*y1*y2+f13*y1*y3+f23*y2*y3
mdegree3 = v*y0*y1*y2+w*y0*y1*y3+q*y0*y2*y3+p*y1*y2*y3
mdegree4 = b*y0*y1*y2*y3
m=mdegree0+mdegree1+mdegree2+mdegree3+mdegree4
print(u2conj34*u2 == m.det())
The program outputs True, showing, by computer-assisted symbolic manipulation, that the determinant of the real Majorana representation of a multivector $u$ is equal to the double-product $\lfloor u^{\ddagger} u\rfloor_{3,4}\, u^{\ddagger} u$.
Appendix C. Obstructions in 6D and above
Conjecture 1 (No observables (6D)). The multivector representation of the norm in 6D restricts observables to the identity.
Argument. In six dimensions and above, the self-product patterns found in Definition 6 collapse. The research by Acus et al. [7] on the 6D Clifford algebra concludes that the determinant, so far defined through self-products of the multivector, fails to extend into 6D. The crux of the difficulty is evident in the reduced case of a 6D multivector containing only scalar and grade-4 elements:
This equation is not a multivector self-product but a linear sum of two multivector self-products [7].
The full expression is given in the form of a system of 4 equations, which is too long to list in its entirety. A small characteristic part is shown:
From Equation (A12), it is possible to see that no observable can satisfy this equation, because the linear combination does not allow one to factor it out of the equation.
Any equality of the above type is frustrated by the cross factors, forcing the identity as the only satisfying observable. Since the obstruction occurs within grade 4, which is part of the even sub-algebra, it is questionable that a satisfactory theory (with non-trivial observables) is constructible in 6D using our method. □
This conjecture proposes that the multivector representation of the determinant in 6D does not allow for the construction of non-trivial observables, which is a crucial requirement for a relevant quantum formalism. The linear combination of multivector self-products in the 6D expression prevents the factorization of observables, limiting their role to the identity operator.
Conjecture 2 (No observables (above 6D)). The norms beyond 6D are progressively more complex than the 6D case, which is already obstructed.
These theorems and conjectures provide additional insights into the unique role of the unobstructed 3+1D signature in our proposal.
We also note that it is interesting that our proposal is able to rule out $\mathrm{Cl}(1,3)$ even if, in relativity, the signature of the metric ($(+,-,-,-)$ versus $(-,+,+,+)$) does not influence the physics. However, in Clifford algebra, $\mathrm{Cl}(1,3)$ represents 1 space dimension and 3 time dimensions. Therefore, it is not the signature itself that is ruled out but rather the specific arrangement of 3 time and 1 space dimensions, as this configuration yields quaternion-valued "probabilities" (i.e. $\mathrm{Cl}(1,3) \cong \operatorname{Mat}(2,\mathbb{H})$ and $w \in \mathbb{H}$).
References
- Jaynes, E.T. Information theory and statistical mechanics. Physical Review 1957, 106, 620. [CrossRef]
- Jaynes, E.T. Information theory and statistical mechanics. II. Physical Review 1957, 108, 171. [CrossRef]
- Dirac, P.A.M. The Principles of Quantum Mechanics; Number 27; Oxford University Press, 1981.
- Von Neumann, J. Mathematical Foundations of Quantum Mechanics: New Edition; Vol. 53; Princeton University Press, 2018.
- Lundholm, D. Geometric (Clifford) algebra and its applications. arXiv preprint math/0605280, 2006.
- Hestenes, D. Spacetime physics with geometric algebra. American Journal of Physics 2003, 71, 691–714. [CrossRef]
- Acus, A.; Dargys, A. Inverse of multivector: Beyond p+q=5 threshold. arXiv preprint arXiv:1712.05204, 2017.
1. An information density is a non-negative, unnormalized measure ($w \geq 0$). This contrasts with a probability density, which is additionally normalized to unity. An information density is the most general structure whose entropy is real-valued. The explicit form of $w$ will be identified in Theorem 1 by solving the optimization problem. The unnormalized nature of $w$ aligns with foundational concepts in Quantum Field Theory (QFT), where unnormalized field amplitudes and their associated conserved (but not unit-normalized) charge densities are primary, rather than single-particle probability amplitudes. The framework will later show how quantities suitable for probabilistic interpretation (Theorem 8), such as the Dirac current, emerge from the dynamics.
2. As we solve the optimization problem, we will find that the Lagrange multiplier $t$ takes on the role of time, yielding dynamical equations. This is similar to the Lagrange multiplier $\beta$ in statistical mechanics taking on the role of (inverse) temperature via $\beta = 1/(k_B T)$ after solving the optimization problem for the Gibbs measure.
3. The removal of the determinant implies an additional term. Since this term amounts to a gauge choice, we simply fix the gauge here.