The arithmetic of uncertainty unifies quantum formalism and relativistic spacetime

The theories of quantum mechanics and relativity dramatically altered our understanding of the universe, ushering in the era of modern physics. Quantum theory deals with objects probabilistically at small scales, whereas relativity deals classically with motion in space and time. We show here that the mathematical structures of quantum theory and of relativity follow together from pure thought, defined and uniquely constrained by the same elementary "combining and sequencing" symmetries that underlie standard arithmetic and probability. The key is uncertainty, which inevitably accompanies observation of quantity and imposes the use of pairs of numbers. The symmetries then lead directly to the use of complex "$\surd\mathord{-1}$" arithmetic, the standard calculus of quantum mechanics, and the Lorentz transformations of relativistic spacetime. One dimension of time and three dimensions of space are thus derived as the profound and inevitable framework of physics.


Introduction
Science relies on numeric quantification, which can be traced back to Euclid, Galileo, and Newton. While many now take this success for granted, some scientists have pondered the "unreasonable effectiveness of mathematics" [1,2] as well as its necessity [3], and others continue to strive to understand the foundations of the quantum mechanical formalism [4,5,6,7,8,9]. Such questions are deep. For example, we quantify speed with a single real number, a scalar. But we must quantify velocity with three real numbers. Why? And how do we know?
This paper is about engineering the formalism of quantification [10]. To accomplish this, we use the basic symmetries beneath combination and sequencing, which underpin formal science. Only such supremely elementary assumptions have the range and authority needed to be an acceptable base for the fundamental language of wide-ranging science. Requiring some subtle detail (continuity perhaps) would be less compelling because it could more easily be denied. Robust foundation needs to be simple. Hoping for wide accessibility, our account eschews excessive detail and is intended for neophyte undergraduate students of reasonable diligence and moderate numeracy.
Classical objects might optimistically be measurable to arbitrary precision: "Given any precision requirement, there could exist technology to accomplish it". That would mean that quantity could be treated as a single number, ignoring any intrinsic but negligible uncertainty. But this is a double limit and the pessimistic view "Given any technological ability, there will always exist requirements that defeat it" could also be held. Classically, it's not clear whether "requirement → 0" or "technology → 0" should dominate, which leaves the status of single-number quantities uncomfortably in question.
Fortunately there is no ambiguity in the quantum world, where an elementary target cannot be measured with precision by some arbitrarily smaller probe because there are no such probes. It becomes impossible to bootstrap complete knowledge of a target because we start correspondingly ignorant about any probe that we might use. It follows that a faithful description of a quantum target requires a pair of numbers representing a fusion of quantity and uncertainty. The connection will be more intimate than just "quantity ± error bar", which would really be just a conventional couple of scalars.
The basic symmetries impose a specific calculus on number pairs, in keeping with but more subtle than standard scalar arithmetic. We derive complex arithmetic operating on pairs which we recognise as quantum amplitudes, observable through modulus-squared probabilities. Not only do we construct the Feynman picture of quantum mechanics, but we find that these same symmetries also lead to the Pauli matrices which generate spin, energy and momentum, and beyond that to 3+1-dimensional relativistic spacetime. The physics of quantity-with-uncertainty is to be described within this required mathematical formalism.
This adopts the strategy of [11] to develop mathematical language in accordance with the relevant fundamental symmetries (here, of physics). With this perspective, it is not surprising that mathematics works as the language of physics [12]. The mathematics we use works because it is engineered to work.
Our guiding principle is parsimony of laws. Any operation that our mathematics does allow should be allowed unless prevented by some new law, and that means that the mathematics must be as simple as we can make it. If we occasionally stray into technical language, it is not with intent to claim erudition or sophistication, but rather to reach out to those of greater erudition and sophistication than ourselves while aiming to explain to wider readership why the framework of physics must be as it is.

Addition and Multiplication
Sum and product rules are the foundation of arithmetic and thence of the rich structure of mathematics that science uses to model the physical world.
We assume that separate objects can exist. Although we illustrate this here with spots on the faces of dice, we do not attempt to define the nature of the objects in question. Applications are legion and we do not place limits on the objects or properties that users might have in mind. We just assume commutativity (order doesn't matter for the purpose in hand),

a + b = b + a,   (1)

and associativity (brackets don't matter either),

(a + b) + c = a + (b + c).   (2)

This associative commutativity implies that the mathematical representation of quantification is additive [13,14,15] (up to isomorphism, which allows changing the labels while preserving the content). Usage of addition is the sum rule, here obtained in a way that will upgrade into quantum theory.
We also assume that what can be added up can be replicated, subject to left- and right-distributivity (replication applies to any target),

4 of (a with b) = (4 of a) with (4 of b).

This associative distributivity implies that its representation is multiplicative (up to change of units, for consistency with addition). Partitioning is the inverse, where proportions multiply down instead of replicates multiplying up [16,17,18,19,15]. Usage of multiplication is the product rule.
Associative commutativity and associative distributivity are our foundational symmetries.
Nothing else is needed. Children use these informally as they learn about addition and multiplication through shuffling and grouping. The supreme simplicity of these ideas betokens the generality of application that we need for the deepest basis of science. This explains the success of mathematical modelling of a world in which separate objects can exist (commutativity) and which can behave independently of others (distributivity). The symmetries force arithmetical rules to which we can have no alternative. The content of our modelling is up to us, but the language (in this case standard arithmetic) is defined.
One can think of the mathematics used in science as having been engineered to be consistent with these symmetries, thus ensuring its success. It is then no mystery that mathematics works [1,2], because it could not have been any other way [12]. The uniqueness (up to isomorphism [19,15,20]) and extreme familiarity of the rules appear to give them independent status, whereas in fact they were deliberately engineered. If the foundational symmetries are accepted, the rules are forced, and the resulting mathematics becomes the quantitative language of physics.
Specifically, measure theory applies the sum rule to quantification, with additivity being the unique formalism for ubiquitous situations. Probability calculus applies the sum and product rules to partitioning of allowed possibilities [15,20], which allows us to learn about the world by eliminating some of what was previously deemed possible. Again, probability is the unique calculus in ubiquitous situations. The content of the calculus is up to us, but the language is forced.
There are many applications of the rules, in human affairs as well as in science. For example, money is an application of measure leading to betting as a subsidiary application of probability [21,22]. But applications are not foundations.
Where objects obeying these symmetries have only one relevant property, the standard representation is scalar. However, objects may have several properties. For example, six-sided dice have individuality (1 per die) and can display different numbers of spots (ranging from 1 to 6). Those properties, the number of dice thrown and the number of displayed spots, are separately additive.
Multidimensional addition is straightforwardly componentwise. Multidimensional multiplication, though, is not quite uniquely defined by the founding symmetries. Distributivity requires that a product is bilinear (linear in each factor) but associativity does not altogether remove the remaining ambiguity. Investigating this leads simply and directly to the basic language of physics in the form of quantum formalism and relativistic spacetime.
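To make the componentwise-sum and bilinear-product picture concrete, here is a minimal numerical sketch. The γ-tensor layout `gamma[i, j, k]` and the use of ordinary complex multiplication as the worked example are our own illustrative choices, anticipating product rule A of the Appendix:

```python
import numpy as np

# A bilinear product on n-tuples is fixed by n^3 coefficients gamma[i, j, k]:
#     (x . y)_k = sum_ij gamma[i, j, k] * x_i * y_j
# The layout gamma[i, j, k] is our own convention for this sketch.
def product(x, y, gamma):
    return np.einsum('i,j,ijk->k', x, y, gamma)

# Illustrative example: ordinary complex multiplication written as a
# gamma-tensor (anticipating product rule A of the Appendix).
gamma = np.zeros((2, 2, 2))
gamma[0, 0, 0], gamma[1, 1, 0] = 1, -1   # first component: x1*y1 - x2*y2
gamma[0, 1, 1], gamma[1, 0, 1] = 1, 1    # second component: x1*y2 + x2*y1

x, y, z = np.array([1., 2.]), np.array([3., 4.]), np.array([0.5, -1.])
# (1 + 2i)(3 + 4i) = -5 + 10i
assert np.allclose(product(x, y, gamma), [-5., 10.])
# distributivity (bilinearity) holds by construction
assert np.allclose(product(x, y + z, gamma),
                   product(x, y, gamma) + product(x, z, gamma))
```

Any other choice of the eight γ's gives a different bilinear product; the remaining ambiguity, constrained by associativity, is exactly what the following sections explore.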

Quantum Foundation
Our world is quantised. There is an irreducible quantum of targets of a given type, and there are likewise irreducible investigative probes. Our knowledge always derives from interactions, which means that we can never attain complete knowledge of any individual object, whether it be a target investigated with an incompletely known probe, or a probe interacting with an incompletely known target. Consequently, our knowledge of objects will always be accompanied by inherent uncertainty. The connection between quantity and uncertainty is potentially more intimate than just "quantity ± error bar", which means that our description needs to fuse quantity with uncertainty into two-parameter "pairs" that we write as $x = (x_1, x_2)$. Precisely how quantity and uncertainty are to be encoded by these pairs is to be determined.
Associative commutativity implies the sum rule, now in two-dimensional pair-wise form: $x + y = (x_1 + y_1,\, x_2 + y_2)$.
Upon interaction, associative distributivity then gives

$x \cdot (y + z) = x \cdot y + x \cdot z \quad\text{and}\quad (x + y) \cdot z = x \cdot z + y \cdot z,$

because summation has to remain linear regardless of probing or targeting context (distributivity) and probing or targeting is a sequential process (associativity). Left distributivity ensures that a pair product is linear in the second factor and right distributivity ensures linearity in the first, so that the multiplication is bilinear, taking the form

$(x \cdot y)_k = \sum_{i,j} \gamma_{ijk}\, x_i y_j \qquad (k = 1, 2),$

where the γ's are eight constants which take standard values in standardised coordinates. This is why physics is fundamentally linear. Probing with a pair x applies a linear 2×2 matrix to the target pair y.
Unlike for scalars, pair multiplication is not unique. We encounter similarly nonunique multiplication through dot and cross products in vector calculus (though those products are not full-rank 3-vectors). Here, we seek multiplication in which the $\mathbb{R}^2 \times \mathbb{R}^2$ pair products remain full-rank pairs in $\mathbb{R}^2$. We find three settings for the γ's in which products x · y retain full non-degenerate pair status. (Details are in the Appendix.) They represent classes not related to each other by any real coordinate transformation. Taking standard coordinates, these are

A: $x \cdot y = (x_1 y_1 - x_2 y_2,\; x_1 y_2 + x_2 y_1)$,
B: $x \cdot y = (x_1 y_1 + x_2 y_2,\; x_1 y_2 + x_2 y_1)$,
C: $x \cdot y = (x_1 y_1,\; x_1 y_2 + x_2 y_1)$.

Probing with x can thus be specified by any of the three 2×2 matrices

$(x\cdot)_{\mathsf{A}} = \begin{pmatrix} x_1 & -x_2 \\ x_2 & x_1 \end{pmatrix}, \qquad (x\cdot)_{\mathsf{B}} = \begin{pmatrix} x_1 & x_2 \\ x_2 & x_1 \end{pmatrix}, \qquad (x\cdot)_{\mathsf{C}} = \begin{pmatrix} x_1 & 0 \\ x_2 & x_1 \end{pmatrix}.$

These varied possibilities allow the richness of physics while limiting the possibilities to those allowed by A and B and C alone.
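As a numerical sanity check, the following sketch verifies that three standard candidate products on pairs — the complex, split-complex, and dual-number multiplications, taken here as an assumed concrete reading of operators A, B and C — are commutative and associative (bilinearity is built into the matrix form):

```python
import numpy as np

# The three candidate pair products, written as 2x2 matrices acting on a pair.
# These specific matrices are an assumption: the standard representatives of
# the complex, split-complex, and dual-number algebras.
def op(x, rule):
    x1, x2 = x
    if rule == 'A':   # complex:       i^2 = -1
        return np.array([[x1, -x2], [x2, x1]])
    if rule == 'B':   # split-complex: j^2 = +1
        return np.array([[x1, x2], [x2, x1]])
    if rule == 'C':   # dual:          eps^2 = 0
        return np.array([[x1, 0.0], [x2, x1]])

def mul(x, y, rule):
    return op(x, rule) @ np.asarray(y, float)

rng = np.random.default_rng(0)
for rule in 'ABC':
    x, y, z = rng.normal(size=(3, 2))
    # commutativity: x . y = y . x
    assert np.allclose(mul(x, y, rule), mul(y, x, rule))
    # associativity: (x . y) . z = x . (y . z)
    assert np.allclose(mul(mul(x, y, rule), z, rule),
                       mul(x, mul(y, z, rule), rule))
```

No fourth inequivalent choice passes these checks over the reals, which is the Appendix's claim in computational form.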
Lest repeated operation of (x ·) cause targets y to diverge towards infinity or collapse towards zero, thereby exploding or imploding them, we use det(x ·) = 1, which defines unit quantity of x. There is no loss of generality because (x ·) can always be rescaled in any particular case. These normalised operators have only one free parameter φ, related to $x_2/x_1$, and take the form

$(x\cdot)_{\mathsf{A}} = \begin{pmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{pmatrix}, \qquad (x\cdot)_{\mathsf{B}} = \begin{pmatrix} \cosh\phi & \sinh\phi \\ \sinh\phi & \cosh\phi \end{pmatrix}, \qquad (x\cdot)_{\mathsf{C}} = \begin{pmatrix} 1 & 0 \\ \phi & 1 \end{pmatrix}.$

We can also focus on the nature of the multiplication itself by building φ from infinitesimal increments of operators, so that

$(x\cdot) = \exp(\phi\,\mathsf{G}) = \lim_{N\to\infty} \left( \mathbb{1} + \frac{\phi}{N}\,\mathsf{G} \right)^{\!N},$

where the generator G is the matrix

$\mathsf{A} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \quad\text{or}\quad \mathsf{B} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \quad\text{or}\quad \mathsf{C} = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.$
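A short numerical check that generators of this kind exponentiate to rotation, hyperbolic rotation, and shear, each with unit determinant. The explicit matrices below are an assumption: the standard representatives of the three classes.

```python
import numpy as np

# Assumed standard generators of the three pair multiplications.
A = np.array([[0., -1.], [1., 0.]])   # rotation generator
B = np.array([[0., 1.], [1., 0.]])    # hyperbolic (boost) generator
C = np.array([[0., 0.], [1., 0.]])    # nilpotent (shear) generator

def expm2(M, terms=40):
    """Matrix exponential of a small matrix by plain Taylor series."""
    out, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

phi = 0.3
# A -> rotation, B -> hyperbolic rotation, C -> shear.
assert np.allclose(expm2(phi * A),
                   [[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
assert np.allclose(expm2(phi * B),
                   [[np.cosh(phi), np.sinh(phi)], [np.sinh(phi), np.cosh(phi)]])
assert np.allclose(expm2(phi * C), [[1., 0.], [phi, 1.]])
# All three preserve unit determinant (unit quantity).
for G in (A, B, C):
    assert np.isclose(np.linalg.det(expm2(phi * G)), 1.0)
```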
The first of these (operator A) is rotation by phase φ. Taking (7) for addition and (12A) for multiplication, we recognise the sum and product rules of complex arithmetic, so that pairs are complex numbers obeying the standard rules. Moreover, unit quantity is identified with unit determinant, which for (12A) is the modulus-squared $\det(x\cdot) = |x|^2 = 1$. Hence the inherent uncertainty in a unit object refers to what remains undefined in x, namely phase φ, so that each new object brings with it an unknown phase. At this point, probability must enter the development.

Probability
Our ignorance of phase is uniformly distributed, giving phase a uniform probability distribution around the circle. Otherwise, we could use identically-formed components to build composite objects $X = x_1 + x_2 + \cdots + x_n$ whose overall phase would be arbitrarily precise around the supposed mode, thus defeating the engineering requirement of inevitable uncertainty. Meanwhile, phase is necessarily continuous because any restriction would conflict with general application of the complex sum rule. Continuity is not an assumption, it's a requirement.
We have no way of extracting what might be a definitive "truth". The best we can do in the face of uncertainty is use replication to obtain predictions of average behaviour, with precision increasing statistically as the square root of the replication factor. These average predictions are called ensemble averages.
According to the standard rules of probability, ensemble averaging involves summing (integrating) over all the unknown parameters. Such averaging is known in statistics as "marginalization", by analogy with the treatment of tables of possibilities laid out on a page. In quantum theory here, we average over uniformly distributed phases to get

$\Big\langle \big| \textstyle\sum_k x_k e^{i\theta_k} \big|^2 \Big\rangle_{\theta} = \sum_k |x_k|^2 .$

Hence, by summing over the unknown phases, the quantity we have access to via experiment is the probability, or likelihood, of a given outcome. This is why quantum mechanics is a probabilistic theory. This likelihood, which is additive because of the scalar sum rule, is found to refer to the squared modulus, which is the Born rule [23].
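This phase marginalization can be checked by simulation. The sketch below, with illustrative amplitudes of our own choosing, averages the squared modulus of a two-component superposition over independent uniform phases; the cross terms average away, leaving the additive squared moduli:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 0.6, 0.8                       # illustrative component amplitudes
n = 200_000                           # ensemble size

# Each component carries its own unknown phase, uniform on the circle.
th1, th2 = rng.uniform(0, 2 * np.pi, size=(2, n))
amp = a * np.exp(1j * th1) + b * np.exp(1j * th2)

# Ensemble average of the squared modulus: cross terms average to zero.
avg = np.mean(np.abs(amp) ** 2)
assert abs(avg - (a * a + b * b)) < 0.01   # -> a^2 + b^2, the Born rule
```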
It remains to try the alternative operators B and C. Taking (12B) or (12C) for multiplication, we would again identify unit quantity with unit determinant. Uncertainty would again refer to φ, now a pseudo-phase, and ignorance of pseudo-phase would again be uniformly distributed.

Inference
Looking ahead and anticipating spacetime, which has not yet been developed: physical apparatus produces beams of particles emitted sporadically at some average rate, with intensity determined by averaging over unknown phases.
Particles themselves will be secondary to beams and the predictions we make are probabilistic, in the form of an ensemble of possibilities. Standard probability, often redundantly called "Bayesian", affords a solid foundation for the traditional quantum formalism.
The probabilistic nature of quantum theory has been acknowledged by [24,25,26,27,28] in "Quantum Bayesianism" (QBism), though it's actually intrinsic and does not need to be argued or demonstrated separately. Indeed, our formalism was engineered to be automatically consistent with probability theory through obeying the same foundational symmetries [15,20]. Contradiction was never possible, so extra assumption was never needed.
Quantum formalism is part of physics: it predicts the likely (probabilistic) behaviour of specified models of physical situations. Given those likelihoods, Bayesian analysis then computes posterior probabilities, which assess the models in the light of outcomes as actually observed. But Bayes' Theorem, which formalises our inferences about the world, is not a part of physics.
Physics makes predictions quantified probabilistically in terms of likelihoods.
The physical world proceeds independently of our thoughts about it (except insofar as we may decide to interrupt its natural evolution). We recommend keeping physics and inference conceptually apart [29].

Qubits
So far, we have investigated the calculus of objects whose only relevant property is existence (in some identified state). We now upgrade to objects that can exist in two states, ↑ and ↓ say, which we can produce by combining independent ↑ and ↓ ensembles. Instead of having a single-state "quantity-with-uncertainty" representation with two real numbers fused into a single complex number, we now have a representation involving two complex (four real) numbers.
Physicists identify such objects as spin-half leptons while mathematicians call them 2-spinors of rank 1. Taking our cue from computation, we call them "qubits" for short. Any number of states greater than two can be decomposed in binary fashion, so using just two loses no generality.
Quantification of ↑ and ↓ separately is performed with detectors, which are now represented by 2×2 projection matrices

$P_\uparrow = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad P_\downarrow = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \qquad q_\uparrow = \psi^\dagger P_\uparrow \psi, \qquad q_\downarrow = \psi^\dagger P_\downarrow \psi.$

Under change of basis and linear combination, this generalises to $q = \psi^\dagger M \psi$ for general quantification of a qubit, where M is an arbitrary matrix, Hermitian by construction because any anti-Hermitian part would cancel.
The observable ensemble average is $\langle q \rangle = \langle \psi^\dagger M \psi \rangle$, commonly expressed as $\langle q \rangle = \mathrm{trace}(\rho M)$ where $\rho = \langle \psi \psi^\dagger \rangle$ is the density matrix, though we do not use that construction here.
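A brief numerical illustration, with a random ψ and M of our own choosing, of why only the Hermitian part of M contributes a real quantification, and of the trace(ρM) equivalence:

```python
import numpy as np

rng = np.random.default_rng(2)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)           # a qubit state
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # arbitrary matrix

H = (M + M.conj().T) / 2    # Hermitian part
Ah = (M - M.conj().T) / 2   # anti-Hermitian part

# The Hermitian part yields a real number; the anti-Hermitian part is
# purely imaginary, so it cancels from any real observable quantification.
assert abs((psi.conj() @ H @ psi).imag) < 1e-12
assert abs((psi.conj() @ Ah @ psi).real) < 1e-12

# The density-matrix form trace(rho M) reproduces psi^dagger M psi.
rho = np.outer(psi, psi.conj())
assert np.isclose(np.trace(rho @ M), psi.conj() @ M @ psi)
```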
Under combination, qubits still obey associative distributivity, so that their representation is two-dimensional multiplicative as before, except that it's over the complex field instead of the real. All three of A and B and C are now allowed, used as complex-to-complex $\mathbb{C}^2 \times \mathbb{C}^2 \to \mathbb{C}^2$ operators. In this expanded context there is no reason to exclude B or C, but that is the limit of the allowed freedom. Technically, A and B become inter-convertible by complex transformation but C retains its individuality.
This implies that C will provide a quantification distinct from that provided by A and B.
Note that φ in (13) can now be complex. The generators, together with their products, supply the three Pauli matrices

$\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},$

which have been written in Hermitian form for convenience and which (together with the identity matrix $\sigma_0 = \mathbb{1}$) form a closed group under multiplication.
The three Pauli matrices yield quantification

$q_x = \psi^\dagger \sigma_x \psi, \qquad q_y = \psi^\dagger \sigma_y \psi, \qquad q_z = \psi^\dagger \sigma_z \psi.$

The overall quantity $q_0 = \psi^\dagger \sigma_0 \psi$ is not independent, the relationship being

$q_0^2 = q_x^2 + q_y^2 + q_z^2,$

which is invariant under ordinary three-dimensional rotation of x, y, z. This is the first clue about the emergence of spacetime from quantum formalism.
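The rank-1 identity $q_0^2 = q_x^2 + q_y^2 + q_z^2$ is easy to verify numerically for a random spinor; a sketch using the standard Pauli matrices:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], complex)

rng = np.random.default_rng(3)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)   # a random 2-spinor

q0 = (psi.conj() @ psi).real                         # overall quantity
q = np.array([(psi.conj() @ s @ psi).real for s in (sx, sy, sz)])

# Rank-1 spinors saturate the invariant: q0^2 = qx^2 + qy^2 + qz^2.
assert np.isclose(q0 ** 2, q @ q)
```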
On combining independent samples into an ensemble, the vector coordinates $q_x, q_y, q_z$ add and so do their radii $q_0$, which allows the equality (25) to degrade to the inequality

$q_0^2 \ge q_x^2 + q_y^2 + q_z^2 .$

With complex coefficients φ, the three generators (14) define the 6-parameter Lorentz group of transformations under which

$\psi \;\to\; \exp(\phi_x \sigma_x + \phi_y \sigma_y + \phi_z \sigma_z)\, \psi .$
Correspondingly, with imaginary coefficient $\phi_z = -i\eta/2$, the observable spins q in (23) and (24) rotate about z as

$q_x \to q_x \cos\eta - q_y \sin\eta, \qquad q_y \to q_x \sin\eta + q_y \cos\eta, \qquad q_z \to q_z, \qquad q_0 \to q_0 .$

All spins rotate equally, so the observable ensemble rotates undistorted about z.
In the context of three-dimensional rotation of x, y, z, the ensemble average ⟨q⟩ is given symbol 2J, where J transforms as a vector under rotation, with the general invariant (28). Although rotation is again recognisable in the equations, this is without explicit reference to space, which is yet to be constructed. Observe, though, the commonly-remarked angle-doubling homomorphism between η/2, which operates on ψ (group SU(2) rotation of a complex plane), and η, which operates on the observable spin q (group SO(3), usually visualised as rotation in three dimensions). Rotational invariance under SO(3) has emerged automatically from the formalism, and is not imposed as a property of pre-supposed isotropic space [31,32].
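The angle-doubling homomorphism can likewise be demonstrated numerically. The sketch below (the spinor and the sign convention for the half-angle are our own assumptions) rotates a spinor by η/2 and observes its spin vector rotate by the full angle η:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], complex)

def qvec(psi):
    """Observable spin vector (qx, qy, qz) of a spinor psi."""
    return np.array([(psi.conj() @ s @ psi).real for s in (sx, sy, sz)])

psi = np.array([1.0 + 0.5j, 0.3 - 0.2j])   # an illustrative spinor
eta = 0.7

# Rotate the spinor by the HALF angle eta/2: U = exp(-i * eta * sz / 2).
U = np.cos(eta / 2) * np.eye(2) - 1j * np.sin(eta / 2) * sz

q, q2 = qvec(psi), qvec(U @ psi)
assert np.isclose(q2[2], q[2])             # z-component unchanged
# ...while the transverse spin has rotated by the FULL angle eta.
c = (q[0] * q2[0] + q[1] * q2[1]) / (q[0] ** 2 + q[1] ** 2)
assert np.isclose(c, np.cos(eta))
```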

Momentum
We can also apply a real coefficient $\phi_z = -\xi/2$ to $\sigma_z$. This rebalances $\psi_\uparrow$ and $\psi_\downarrow$ as

$\psi_\uparrow \to e^{-\xi/2}\,\psi_\uparrow, \qquad \psi_\downarrow \to e^{+\xi/2}\,\psi_\downarrow .$

Correspondingly, the q's in (23) and (24) transform as

$q_0 \to q_0 \cosh\xi - q_z \sinh\xi, \qquad q_z \to q_z \cosh\xi - q_0 \sinh\xi,$

with $q_x$ and $q_y$ unchanged. On replacing the ensemble average ⟨q⟩ by symbol p for this new set of real properties, we have

$p_0 \to p_0 \cosh\xi - p_z \sinh\xi, \qquad p_z \to p_z \cosh\xi - p_0 \sinh\xi .$

Instead of rotation, this is a Lorentz boost along z, with $p_x$ and $p_y$ unaffected and $p_0^2 - p_z^2$ invariant. Recognising this from relativistic kinematics, we identify $p_0$ as energy E and $(p_x, p_y, p_z)$ as momentum. Energy and momentum transform as scalar and vector under rotation, and we write the general invariant (28) under boost as

$E^2 - p_x^2 - p_y^2 - p_z^2 = m^2,$

where we identify m as (rest-)mass. We see that mass behaves under transformation (35) as a 4-vector with Minkowski metric diag(1, −1, −1, −1).
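The boost can be checked the same way. This sketch (with an illustrative spinor of our own choosing) rescales the spinor components by $e^{\mp\xi/2}$ and verifies the Lorentz transformation of $(p_0, p_x, p_y, p_z)$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], complex)

def four(psi):
    """(p0, px, py, pz) read off a spinor via identity and Pauli matrices."""
    p0 = (psi.conj() @ psi).real
    return np.array([p0] + [(psi.conj() @ s @ psi).real for s in (sx, sy, sz)])

psi = np.array([0.8 + 0.1j, 0.4 - 0.6j])   # an illustrative spinor
xi = 0.9

# Real coefficient phi_z = -xi/2 rescales the components by exp(-/+ xi/2).
B = np.diag([np.exp(-xi / 2), np.exp(+xi / 2)])

p, p2 = four(psi), four(B @ psi)
assert np.allclose(p2[1:3], p[1:3])                                # px, py unaffected
assert np.isclose(p2[0], np.cosh(xi) * p[0] - np.sinh(xi) * p[3])  # boost of p0
assert np.isclose(p2[3], np.cosh(xi) * p[3] - np.sinh(xi) * p[0])  # boost of pz
assert np.isclose(p2[0] ** 2 - p2[3] ** 2, p[0] ** 2 - p[3] ** 2)  # invariant
```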
Qubits, and consequently all their various combinations, have quantity allied to a directional spin, and energy allied to a directional momentum.
Momentum is imaginary spin and spin is imaginary momentum. But we do not yet have the time and space within which energy and momentum are conventionally given meaning.

Spacetime
In the complex environment of quantum theory, generators A and B have yielded a real 3-vector called spin and a dual real 3-vector called momentum.
Each had an associated scalar making 8 properties in all. Complex 2×2 matrices can support 8 components, but the Hermitian matrices which define observables are only a 4-parameter subset (the Pauli matrices with identity).
Something's missing, and it's the third and final allowed multiplication operator C.
Acting on a pair (u, U), the normalised operator $\exp(\phi\,\mathsf{C}) = \begin{pmatrix} 1 & 0 \\ \phi & 1 \end{pmatrix}$ sends $(u, U) \to (u, U + \phi u)$: it generates a new parameter U as the integral of an existing parameter u without changing the latter (Galileo's principle). Conversely, u can be recovered as its differential.
We now inquire into the phase of ψ, as in $\psi \propto e^{-i\theta}$ (16), with the minus sign being physicists' convention. For an object of mass m, we can define a scaled phase τ through

$d\theta = m\, d\tau \qquad (38)$

in which we identify θ with U in (37). Phase cannot be rescaled because it's 2π-periodic, so neither can phase differences δθ. So, with dθ being invariant to re-orientation and m transforming as a 4-vector under (36), it follows that dτ also transforms as a 4-vector under the same Minkowski metric, with components

$d\tau = (dt, dx, dy, dz)$

having inner product

$d\theta = E\,dt - p_x\,dx - p_y\,dy - p_z\,dz,$

from which we see that phase θ is the physicists' action [33]. Not only do the elementary symmetries of associative commutativity and associative distributivity force the mathematics used to quantify objects and events, but they also force the descriptions underlying our perception of objects and events [34]. These inherent perceptual descriptions lead one to feel that space and time are an objective reality, whereas our analysis shows that three-dimensional space and one-dimensional time are the only consistent description one can have of reality. Perhaps that's why it has taken centuries for this structure to be uncovered.
It will not escape notice that spatial location is accumulated as the time-integral of local velocity. Consequently, the curvature of space, if any, is not defined in the language, but is part of the content of physics, along with basic parameters of particle physics and cosmology. In this paper, we have addressed the language only, leaving content for other investigation.

Conclusions
We have argued that objects must be represented, not by classical scalars, but by pairs of numbers fusing quantity and uncertainty, and have shown how they can be so quantified. Associative commutativity of combination and partition results in the component-wise sum rule, and associative distributivity of chaining or sequencing results in three possible product rules. These define the arithmetical rules to which our formalism must adhere. Scalars have only one product rule but pairs have three.
Only one of the three product rules (A) yields proper (probabilistic) predictions. Applied to pairs, it imposes complex quantum amplitudes controlled by the Feynman sum and product rules with predictions given by the Born rule. By construction, quantum formalism is thus engineered as a probabilistic theory incorporating uncertainty. Uncertainty appears as phase and the theory is engineered to predict objects' behaviour in the form of likelihood functions, which is physics. Given those likelihoods, Bayesian analysis then computes posterior probabilities, which assess the models in the light of outcomes as actually observed, which is inference. In this way physics and inference are distinct, but mutually consistent, with physics making predictions about the world and inference using those predictions along with experimental outcomes to learn about the world. Of course, there can be no conflict between standard classical probability and quantum applications because the sum and product rules for scalars and pairs follow together from common foundational symmetries to give consistent arithmetical rules.
Every independent component of a compound object has its own unknown phase in a wide-ranging ensemble of possibilities. For large objects, such phases average out leaving classically scalar macroscopic quantities. Where components are not independent because of related construction, their connected phases are said to be "entangled". Entanglement yields extra information about an object, inaccessible to classical scalar inquiry as exemplified by the Bell inequalities [35,36].
Beyond phase and entanglement, which are underpinned by product rule A, lie rules B and C which take the scope further. Those rules must also play a part; otherwise some additional assumption would be required to prevent it. Rule B generates the Pauli matrices, which yield spin, energy, and momentum. Rule C integrates those to construct 3+1-dimensional relativistic spacetime. This explains why space has three dimensions, and why Einstein's postulate of a speed limit (that of light) was correct [37].
Upon acknowledging that quantification carries uncertainty with it, we find that quantum formalism and relativistic spacetime both follow without further assumption. The rules follow from logical derivation, and not from experimentation which must and does conform. As first demonstrated by [11], robust formalism is not invented or discovered, but is openly engineered to conform to fundamental symmetries. There is then no mystery about why the resulting mathematics works. It has to.
We do not, of course, make the philosophically contentious claim to have proved our findings in some absolute sense. All we claim is that any hypothetical alternative theory not conforming to standard quantum formalism and standard spacetime must be incompatible with the foundational symmetries, so that combination and/or sequencing must go awry in that world.
In principle, a world could be so deeply interconnected that separating individual objects was impossible. In practice, ours isn't, so we stand by our findings as a uniquely consistent mathematical description of the world we live in.


Appendix
Associative commutativity of + requires combination c = a + b of n-tuples to be represented (up to isomorphism) by component-wise addition (the sum rule, which is unique for any n).
This linear form is invariant to non-singular linear transformation of axes.
Distributivity of · over + then requires n-tuple connection c = a · b to be bilinear multiplication defined by $n^3$ coefficients γ.
Associative distributivity imposes $n^4$ quadratic constraints (not all independent) on those $n^3$ γ's.
We seek standardised values for the multiplication coefficients γ that we can adopt as agreed conventions for representing families connected by non-singular transformation of axes. Each such standard will be called a product rule.
For a start, we can scale axes by $x_j \to \lambda_j x_j$ for each index j separately. This lets us set the coefficients $\gamma_{jjj}$ (all 3 suffixes the same) to 0 or +1 for each j.
One dimension, n = 1
With only one coefficient $\gamma_{111}$, setting it to 0 would be trivial and useless, so we assign $\gamma_{111} = 1$. Alternative conventions are possible, as when proportions are presented as percentages ($\gamma_{111} = 1/100$), but the unit assignment is standard. In one dimension, distributivity allows only one product rule (which is ordinary scalar multiplication) and associativity follows trivially. But in higher dimension more than one product rule can accompany the unique sum rule.
Two dimensions, n = 2
There are now $2^3 = 8$ γ's, the coefficients $\gamma_{ijk}$ with indices i, j, k ∈ {1, 2}. As well as scaling, we can also shear one of the axes by the other, as in $x_1 \to x_1 + \mu x_2$. The γ's then transform accordingly, with the new $\gamma_{211}$ in particular obeying a cubic in the shear parameter. Full-rank cubic equations have at least one real root, enabling us to set $\gamma_{211} = 0$ unless $\gamma_{122} = 0$. Finally, we can interchange axis labels $1 \leftrightarrow 2$, so consequently we can always select $\gamma_{211} = 0$.
We now choose $\gamma_{111}$ to be either 1 or 0.
On appending $\gamma_{111} = 1$ to the original $\gamma_{211} = 0$, the associativity equations reduce to $\gamma_{121}\gamma_{221} = 0$. From (4′), $\gamma_{221} = 1$ or 0; and from (7′) follow the remaining candidate forms. To be nontrivial, $x \cdot y \cdot z \not\equiv 0$, these forms need $\gamma_{222} = \theta \ne 0$. Hence θ must be scalable to +1. Equation (14′) then requires φ = 0 in the last two forms, which reproduce earlier solutions D and E in interchanged $1 \leftrightarrow 2$ form, so offer nothing new. Meanwhile φ can be sheared to 0 in the first two forms by $x_1' = x_1 \pm \phi x_2$ (with + sign for the first, − for the second). Again, these reproduce earlier solutions C and F in interchanged form, so offer nothing new.

Summary
In two dimensions, associative distributivity allows six standard product rules, all different, which we label alphabetically.
Only the first three offer the nondegenerate "Pair · Pair → Pair" product laws that we seek to engineer. Rules D and E introduce multiplication by scalars, and F confirms ordinary multiplication of those scalars. Thus the full structure of an associative algebra follows from pair-wise associative commutativity and nontrivial associative distributivity alone, with scalars emerging automatically as pairs of form (x, 0).