Submitted: 10 March 2025
Posted: 12 March 2025
Abstract
Keywords:
1. Introduction: Reduced Paper Concerning Quaternionic Least Squares Only
2. Fundamentals of Quaternionic Matrices and Chain Multiplication
2.1. Mother Nature’s Matrix Form; and the Zero and Forward Vectors
2.2. Traditional Human Matrix Forms
2.3. Least Squares is a Misnomer: Least Conjugates Regression Is the Correct Terminology
2.4. Multiplication Rules for Three Variables
- 1.
- $i^2 = j^2 = k^2 = ijk = -1$
- 2.
- $ij = k$ and $ji = -k$
- 3.
- $jk = i$ and $kj = -i$
- 4.
- $ki = j$ and $ik = -j$
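As a quick numerical sanity check of these rules, here is a minimal Python sketch; the `quat_mul` helper and the `[w, x, y, z]` component ordering are illustrative conventions, not notation from this paper:

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,   # real part
        pw*qx + px*qw + py*qz - pz*qy,   # i component
        pw*qy - px*qz + py*qw + pz*qx,   # j component
        pw*qz + px*qy - py*qx + pz*qw,   # k component
    ])

i = np.array([0, 1, 0, 0])
j = np.array([0, 0, 1, 0])
k = np.array([0, 0, 0, 1])

assert np.allclose(quat_mul(i, j),  k)             # ij = k
assert np.allclose(quat_mul(j, i), -k)             # ji = -k
assert np.allclose(quat_mul(i, i), [-1, 0, 0, 0])  # i^2 = -1
```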
2.5. The Partial Commutativity of Product Chains Surrounding an Anchor
- 1.
- Let $A$ be the anchor of a multiplicative system, such that all other quaternions being multiplied against it are perceived as external forces, and such that if there are no quaternions being multiplied against $A$, then $A$ is "at rest."
- 2.
- Let the set $X = \{x_1, \dots, x_m\}$ of quaternions be the Left Chain: the left-handed forces that have acted on $A$, with the indices from 1 to $m$ reflecting the reverse temporal ordering of the left-handed actions taken (that is, $x_1$ is the most recent left-handed action).
- 3.
- Let the set $Y = \{y_1, \dots, y_n\}$ of quaternions be the Right Chain: the right-handed forces that have acted on $A$, with the indices from 1 to $n$ reflecting the reverse temporal ordering of the right-handed actions taken (that is, $y_1$ is the most recent right-handed action).
- 1.
- Let the fully acted-upon anchor be the product chain $x_m \cdots x_2 x_1 \, A \, y_1 y_2 \cdots y_n$.
- 2.
- Recall that quaternion multiplication is associative, even though it is not commutative.
- 3.
- Given that associativity holds, $(x_1 A)\,y_1 = x_1\,(A\,y_1)$, so the most recent left-handed and right-handed actions may be applied in either order.
- 4.
- Given that the exchange holds for the innermost pair, it holds inductively for every pair: any left-handed action $x_s$ may be applied before or after any right-handed action $y_t$ without changing the product chain, provided each chain's own internal ordering is preserved.
- 1.
- Any $x_s$ iteration, for some $s \leq m$, can be placed anywhere between any $y_t$ and $y_{t+1}$ iterations, for some $t < n$, provided that no iteration is placed out of order with respect to all other X iterations.
- 2.
- Any $y_t$ iteration, for some $t \leq n$, can be placed anywhere between any $x_s$ and $x_{s+1}$ iterations, for some $s < m$, provided that no iteration is placed out of order with respect to all other Y iterations.
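The interleaving claim is easy to verify numerically. A minimal sketch with random quaternions (`quat_mul` is the same illustrative Hamilton-product helper as above):

```python
import numpy as np

def quat_mul(p, q):
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

rng = np.random.default_rng(0)
A = rng.normal(size=4)                      # the anchor
X = [rng.normal(size=4) for _ in range(3)]  # left chain x1, x2, x3
Y = [rng.normal(size=4) for _ in range(2)]  # right chain y1, y2

# Apply all left-handed actions first, then all right-handed actions...
r1 = A
for x in X:
    r1 = quat_mul(x, r1)
for y in Y:
    r1 = quat_mul(r1, y)

# ...or interleave them arbitrarily (x1, y1, x2, y2, x3):
r2 = quat_mul(X[0], A)
r2 = quat_mul(r2, Y[0])
r2 = quat_mul(X[1], r2)
r2 = quat_mul(r2, Y[1])
r2 = quat_mul(X[2], r2)

assert np.allclose(r1, r2)  # same product chain either way
```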
2.6. The Conjugate and Reciprocal of a Hypercomplex Algebra are the Adjugate and Inverse Matrices
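For intuition, here is a minimal numerical check of the correspondence this title asserts, under assumed conventions (a 4x4 left-multiplication matrix form `L`, components ordered `[w, x, y, z]`); note the adjugate matches the conjugate's matrix form up to the norm factor $q\bar{q}$:

```python
import numpy as np

def L(q):
    """4x4 real left-multiplication matrix of q = [w, x, y, z]."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]])

q     = np.array([2.0, -1.0, 3.0, 0.5])
conj  = q * np.array([1, -1, -1, -1])   # quaternion conjugate
recip = conj / (q @ q)                  # quaternion reciprocal

# Reciprocal <-> inverse matrix (exact correspondence):
assert np.allclose(np.linalg.inv(L(q)), L(recip))

# Conjugate <-> adjugate matrix (equal up to the norm factor |q|^2):
adjugate = np.linalg.det(L(q)) * np.linalg.inv(L(q))
assert np.allclose(adjugate, (q @ q) * L(conj))
```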
3. Quaternionic Linear Systems of Equations
3.1. The Power of Quaternionic Least Squares for Market Analysis
3.2. Are There Cases Where Regression Might Still Be Preferred?
- Use regression to broadly identify top-performing consumed goods.
- Apply quaternionic regression to validate the most stable subsets.
- Invest in goods that demonstrate high predictive power in quaternionic space, ensuring a more reliable and stable portfolio.
- Since goods exhibit a latency of two to four weeks in responding to fluctuations in raw material prices, the changes in raw material prices effectively serve as a predictive model for future trades in the goods market.
3.3. Solving a Purely Left-Handed System of Quaternionic Linear Equations
3.4. Solving a Purely Right-Handed System of Quaternionic Linear Equations
3.5. Solving a Purely Middle-Handed System of Quaternionic Linear Equations
3.6. Solving System of Quaternionic Linear Equations of Mixed Chirality
3.7. Invertibility of the X Matrix and Construction of H
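A minimal sketch of the mechanics for the purely left-handed case, under assumed conventions: with `R(x)` the 4x4 right-multiplication matrix form of $x$, the left-handed equation $z = h\,x$ becomes the real linear system $R(x)\,\vec{h} = \vec{z}$, which is solved by inverting the X matrix:

```python
import numpy as np

def R(q):
    """4x4 right-multiplication matrix: vec(p*q) = R(q) @ vec(p)."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w,  z, -y],
                     [y, -z,  w,  x],
                     [z,  y, -x,  w]])

rng = np.random.default_rng(1)
h_true = rng.normal(size=4)          # the unknown left-handed constant
x      = rng.normal(size=4)
z      = R(x) @ h_true               # z = h*x, written in matrix form

h = np.linalg.solve(R(x), z)         # invert the X matrix to construct H
assert np.allclose(h, h_true)
```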
4. What is the Physical Interpretation of Least Squares Regression over the Reals and the Quaternions?
4.1. Multiple Lists of Real Numbers All Exist in the Same Dimension
- One-half the sum of the first ruler plus one-half the sum of the second.
- One-fourth the sum of the first ruler plus five-eighths the sum of the second.
- Or any one of the infinite linear combinations of 4 and 8 that yield 6, all of which exist in the same forward direction.
- ⋮
-
Multiplication by the First Reflector (The Forward Vector):
- (a)
- $1 \cdot 1 = 1$
- (b)
- $1 \cdot i = i$
- (c)
- $1 \cdot j = j$
- (d)
- $1 \cdot k = k$
-
Multiplication by the First Rotator:
- (a)
- $i \cdot 1 = i$
- (b)
- $i \cdot i = -1$
- (c)
- $i \cdot j = k$
- (d)
- $i \cdot k = -j$
-
Multiplication by the Second Reflector:
- (a)
- $j \cdot 1 = j$
- (b)
- $j \cdot i = k$
- (c)
- $j \cdot j = 1$
- (d)
- $j \cdot k = i$
-
Multiplication by the Second Rotator:
- (a)
- $k \cdot 1 = k$
- (b)
- $k \cdot i = -j$
- (c)
- $k \cdot j = i$
- (d)
- $k \cdot k = -1$
-
Exponent Rules:
- (a)
- $e^{i\theta} = \cos\theta + i\sin\theta$
- (b)
- $e^{j\phi} = \cosh\phi + j\sinh\phi$
- (c)
- $e^{k\psi} = \cos\psi + k\sin\psi$
- (d)
- $e^{a + bi + cj + dk} = e^{a}\,e^{bi}\,e^{cj}\,e^{dk}$, since tessarine multiplication commutes
- Vector Form: $z + jw$, with $z = a + bi$ and $w = c + di$.
-
Trigonometric Form: $\rho\,(\cosh\phi + j\sinh\phi)$, with $\rho = \sqrt{z^2 - w^2}$ and $\tanh\phi = w/z$.
- Exponential Form: $\rho\,e^{j\phi}$, where $\phi = \operatorname{artanh}(w/z)$.
- The "Magnitude" of the tessarine is $\rho = \sqrt{z^2 - w^2}$, a complex number possessing direction. The complex determinant of the 2x2 complex matrix form is precisely $\rho^2 = z^2 - w^2$. The determinant of the 4x4 real number matrix form is the square of the traditional magnitude of the complex number $\rho^2$.
4.2. Was the Previous Discussion about Tessarine Logic a Waste of Time for This Article?
4.2.1. The Physical Interpretation of the Determinant of a Real Number Matrix
- Express the sum in hyperbolic form:
- Compute as:
- Rewrite using hyperbolic secant:
- Recognizing that the determinant of the unit reflector matrix is 1, we conclude that the determinant of must be
- The determinant of the original matrix is 75. The square root of 75 is approximately 8.66025.
- Compute :
- Compute :
- Compute : .
- Squaring this value:
“A relative action, which when repeated twice, brings the Specimen to $-1$ in the Observer’s Reference Frame."
“A relative action, repeated twice."
“Two consecutive left turns, or two consecutive right turns."
4.2.2. Practical use of Quaternionic Least Squares
4.2.3. Rethinking Linear Independence
4.2.4. It seems like you’re wasting more time explaining why this isn’t a waste of time—let’s just move on to Quaternionic Least Squares!
-
Given a quaternion $q = a + bi + cj + dk$, its reduced imaginary basis is $q = a + \mu\beta$, with:
- (a)
- $\beta = \sqrt{b^2 + c^2 + d^2}$
- (b)
- $\mu = (bi + cj + dk)/\beta$, a unit imaginary vector satisfying $\mu^2 = -1$
-
The conjugate of the quaternion, $\bar{q}$, is of the form $\bar{q} = a - \mu\beta$, which always exists, even if either $a$ or $\beta$ (or both) are equal to zero, where $\mu$ is some finite unit vector. However, the zero vector still retains its complex direction, $\mu$, which is important for calculus (for instance, finding the derivative of the logarithmic spiral $re^{\mu\theta}$ for some constant real $r$).
- The reciprocal of the quaternion, $q^{-1} = \bar{q}/(a^2 + \beta^2)$, is not guaranteed to exist, but still retains its direction, even if it extends infinitely. This is still important for calculus. The appearance of infinity with direction is actually quite common once you dive into the logarithmic spirals of complex numbers, tessarines, and quaternions by inverting them through the inverse hyperbolic tangent or cotangent functions, especially if one needs to invoke L’Hôpital’s rule to remove competing zeros and/or infinities.
- In the event that the components are all real prior to a basis reduction, then it is the ordinary quaternion with which you are familiar, and the conjugate and reciprocal simplify to the known constructs $\bar{q} = a - bi - cj - dk$ and $q^{-1} = \bar{q}/(a^2 + b^2 + c^2 + d^2)$, respectively.
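A minimal sketch of the reduced imaginary basis and the resulting conjugate and reciprocal, under the same assumed `[w, x, y, z]` ordering as the earlier sketches:

```python
import numpy as np

def quat_mul(p, q):
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

q = np.array([2.0, 1.0, -2.0, 2.0])       # a = 2, imaginary part (1, -2, 2)
a, v = q[0], q[1:]
beta = np.linalg.norm(v)                  # beta = 3
mu   = np.concatenate(([0.0], v / beta))  # unit imaginary direction

assert np.allclose(quat_mul(mu, mu), [-1, 0, 0, 0])          # mu^2 = -1
assert np.allclose(q, np.concatenate(([a], beta * mu[1:])))  # q = a + mu*beta

conj  = np.array([a, *(-v)])              # a - mu*beta
recip = conj / (a**2 + beta**2)           # conjugate over squared magnitude
assert np.allclose(quat_mul(q, recip), [1, 0, 0, 0])
```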
4.3. There’s no way that can yield a valid conjugate!
4.3.1. The Digital Definition of Ill-Conditioned Matrices
4.4. What Is Least Squares Regression? How Does It Physically Work?
4.4.1. Low $R^2$ Values Don’t Always Indicate a Bad Fit!
4.5. What If I Need Astronomical Precision... Like the 13-Billion-Light-Year Level?
4.5.1. What is the Physical Mechanism Behind Least Squares Regression?
- We are given the equation $z_t = ax_t + by_t + c$, where $a$, $b$, and $c$ are the unknown coefficients and t indexes each data point.
- If there are exactly three data points, we obtain a perfect fit—a unique flat plane that passes through all three data points.
-
If there are fewer than three data points:
- With just one data point, an infinite number of planes can pass through it.
- With two data points, an infinite number of planes can pass through the line connecting them.
- With zero data points, the solution is trivial (the zero vector).
In these cases, solving for a unique regression plane is impossible.
- If there are more than three data points, we construct an error matrix, invert it, and multiply it by the response vector. Despite the presence of error, Mother Nature treats the multiplication of the matrix with the response vector as if no error exists.
- This is because, no matter how many data points we provide, Mother Nature only “sees" three ghost points—points that do not correspond to any of the original data points.
- She then fits a perfect plane through these three ghost points, since whatever plane passes through them must be the one that minimizes the residual error across all of the actual data points. And that’s it!
- By the way, for this to be a legitimate plane regression in 3D space, we’re actually calculating over the quaternions.
- We are given the equation $z_t = ax_t + by_t + c$, where $a$, $b$, and $c$ are the unknown coefficients and t indexes each data point.
- If there are exactly three data points, we obtain a perfect fit— three rulers that can measure all three data points.
-
If there are fewer than three data points:
- With fewer than three data points, an infinite number of ruler-trios can measure each of the z’s. In these cases, solving for a unique set of rulers is impossible.
- If there are more than three data points, we construct an error matrix, invert it, and multiply it by the response ruler. Despite the presence of error, Mother Nature treats the multiplication of the matrix with the response ruler as if no error exists.
- This is because, no matter how many data points we provide, Mother Nature only “sees" three ghost rulers — rulers that do not correspond to any of the original data points.
- She then fits a perfect set of three ghost rulers through the response ruler, since whatever rulers measure the z’s must be the ones that minimize the residual error across all of the actual data points.
- Nothing changes when our rulers or z measurements are hypercomplex numbers, other than the rulers and measurements having directions beyond the forward vector.
- My two cents on the apparent probability-driven nature of Quantum Mechanics? I believe that Nature collapses systems onto ghost points — abstract constructs that don’t actually exist in our universe — in order to reduce computational complexity. If true (even though untestable, as it doesn’t conflict with Hidden-But-Knowable Variables, because the ghost points are inherently Unknowable: they are Nature’s fiction during the number crunch, made real post-rendering!), this would support Wolfram’s hypothesis that the speed of light is the speed of computation. To maintain this speed, Mother Nature collapses onto these ghost points when the computation time expires. While this remains untestable directly, if rival theories (Bohmian Hidden Variables, Many Worlds, Objective Collapse, etc.) are eliminated, and if anything lends credence to Wolfram’s speed-of-computation hypothesis, I believe it could be confirmed indirectly — invoking the wisdom of Sherlock Holmes: “When you have eliminated the impossible, whatever remains, however improbable, must be the truth."
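To picture the "ghost points" claim in ordinary real-valued terms: fit a plane by least squares, then read the fitted plane's height at any three non-collinear locations; the exact-fit plane through those three ghost points is the regression plane itself. A minimal sketch with synthetic data (this illustrates the metaphor over the reals, not the quaternionic construction):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
X = np.column_stack([rng.normal(size=n), rng.normal(size=n), np.ones(n)])
z = X @ np.array([1.5, -2.0, 0.7]) + 0.3 * rng.normal(size=n)  # noisy plane

# Ordinary least squares via the normal equations:
beta = np.linalg.solve(X.T @ X, X.T @ z)

# Pick any three non-collinear "ghost" locations and read the fitted
# plane's height there; the exact-fit plane through those three ghost
# points is the regression plane itself.
G = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
ghost_z = G @ beta                        # heights on the fitted plane
beta_ghost = np.linalg.solve(G, ghost_z)  # perfect fit through 3 ghost points
assert np.allclose(beta, beta_ghost)
```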
5. Univariate Least Squares, with and without a Constant
5.1. Univariate Least Squares, No Constant
5.1.1. Human Readable-Forms of the Univariate Case
- Let t be the total number of data points.
- The Left-Handed Regression of Z: $z_t = h\,x_t$, with the constant h multiplying from the left.
- The Right-Handed Regression of Z: $z_t = x_t\,h$, with the constant h multiplying from the right.
- The Middle-Handed Regression of Z: $z_t = x_t\,h\,y_t$, with the constant h sandwiched in the middle.
- 1.
- Let Z be a real number matrix containing the four components of each quaternion, such that each element of this matrix is of the form , with and .
- 2.
- Let X be a real number matrix containing the four components of each quaternion, such that each element of this matrix is of the form , with and .
- 3.
-
Let be a real number matrix containing the four components of the conjugate of each quaternion, such that each element of this matrix is of the form , with and , where:
- (a)
- (b)
- (c)
- (d)
- (e)
- The above saves time for the regular quaternions, since the alternative would be to calculate the first column of the adjugate matrix form. This method does not extend to the General Case of Biquaternionic Regression.
- 4.
- We seek the quaternionic regression of . The left-handed constant of is unknown, which means we need the right-handed matrix of .
- 5.
-
Let be a three dimensional tensor that stores the Right-Handed Matrix Form of each from the two-dimensional array of X. And let an element of this tensor be , with , and , such that:
- (a)
- From to and from to :
- (b)
- And let a Matrix Element of this Tensor be referenced as .
- 6.
-
Let be a three dimensional tensor that stores the Right-Handed Matrix Form of each from the two-dimensional array of . And let an element of this tensor be , with , and , such that:
- (a)
- From to and from to :
- (b)
- And let a Matrix Element of this Tensor be referenced as .
- 7.
-
Let D be a three dimensional tensor that stores the product of . And let an element of this tensor be , with , and , such that:
- (a)
- (b)
- (c)
- (d)
- (e)
- And let a Matrix Element of this Tensor be referenced as .
- 8.
- And let be the Direct Matrix Sum of all , such that an element of is equal to:
- 9.
-
Let R be a two dimensional tensor that stores the product of . And let an element of this tensor be , with and , such that:
- (a)
- (b)
- And let a Column Matrix Element of this Tensor be referenced as .
- 10.
- And let be the Direct Matrix Sum of all , such that an element of is equal to:
- 11.
- Finally let G be the inverse matrix of , such that each element of is equal to each entry of .
- 12.
- Then each component of the initially unknown is given by each entry (respectively) of the product of G and the column vector of , such that:
- 13.
- The Residual Sum of Squares is a real number equal to , where .
- 14.
- The Total Sum of Squares is a real number equal to , where , where .
- 15.
- $R^2 = 1 - \mathrm{RSS}/\mathrm{TSS}$ (see the sketch following this list).
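A compact end-to-end sketch of steps 1 through 15 for this left-handed case, under assumed conventions (`[w, x, y, z]` ordering, `Rm` as the Right-Handed Matrix Form, synthetic data standing in for real measurements, and a conventional mean-centered TSS standing in for the paper's definition):

```python
import numpy as np

def Rm(q):
    """Right-Handed Matrix Form: vec(p*q) = Rm(q) @ vec(p)."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w,  z, -y],
                     [y, -z,  w,  x],
                     [z,  y, -x,  w]])

rng = np.random.default_rng(3)
t = 200
h_true = np.array([0.5, -1.0, 2.0, 0.25])
X = rng.normal(size=(t, 4))                      # the x_t quaternions
Z = np.array([Rm(x) @ h_true for x in X])
Z = Z + 0.05 * rng.normal(size=(t, 4))           # z_t = h * x_t plus noise

conj = np.array([1, -1, -1, -1])                 # componentwise conjugation
E = sum(Rm(x * conj) @ Rm(x) for x in X)         # direct matrix sum of D
S = sum(Rm(x * conj) @ z for x, z in zip(X, Z))  # direct matrix sum of R
h = np.linalg.solve(E, S)                        # h = G @ S, with G = E^-1

fitted = np.array([Rm(x) @ h for x in X])
rss = np.sum((Z - fitted) ** 2)                  # Residual Sum of Squares
tss = np.sum((Z - Z.mean(axis=0)) ** 2)          # Total Sum of Squares
print("h =", h, " R^2 =", 1 - rss / tss)
```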
- 1.
- Let Z be a real number matrix containing the four components of each quaternion, such that each element of this matrix is of the form , with and .
- 2.
- Let X be a real number matrix containing the four components of each quaternion, such that each element of this matrix is of the form , with and .
- 3.
-
Let be a real number matrix containing the four components of the conjugate of each quaternion, such that each element of this matrix is of the form , with and , where:
- (a)
- (b)
- (c)
- (d)
- (e)
- The above saves time for the regular quaternions, since the alternative would be to calculate the first column of the adjugate matrix form. This method does not extend to the General Case of Biquaternionic Regression.
- 4.
- We seek the quaternionic regression of . The right-handed constant of is unknown, which means we need the left-handed matrix of .
- 5.
-
Let be a three dimensional tensor that stores the Left-Handed Matrix Form of each from the two-dimensional array of X. And let an element of this tensor be , with , and , such that:
- (a)
- From to and from to :
- (b)
- And let a Matrix Element of this Tensor be referenced as .
- 6.
-
Let be a three dimensional tensor that stores the Left-Handed Matrix Form of each from the two-dimensional array of . And let an element of this tensor be , with , and , such that:
- (a)
- From to and from to :
- (b)
- And let a Matrix Element of this Tensor be referenced as .
- 7.
-
Let D be a three dimensional tensor that stores the product of . And let an element of this tensor be , with , and , such that:
- (a)
- (b)
- (c)
- (d)
- (e)
- And let a Matrix Element of this Tensor be referenced as .
- 8.
- And let be the Direct Matrix Sum of all , such that an element of is equal to:
- 9.
-
Let R be a two dimensional tensor that stores the product of . And let an element of this tensor be , with and , such that:
- (a)
- (b)
- And let a Column Matrix Element of this Tensor be referenced as .
- 10.
- And let be the Direct Matrix Sum of all , such that an element of is equal to:
- 11.
- Finally let G be the inverse matrix of , such that each element of is equal to each entry of .
- 12.
- Then each component of the initially unknown is given by each entry (respectively) of the product of G and the column vector of , such that:
- 13.
- The Residual Sum of Squares is a real number equal to , where .
- 14.
- The Total Sum of Squares is a real number equal to , where , where .
- 15.
- $R^2 = 1 - \mathrm{RSS}/\mathrm{TSS}$ (see the note following this list).
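The right-handed steps above are the mirror image of the left-handed sketch; under the same assumed conventions, only the matrix form changes:

```python
import numpy as np

def Lm(q):
    """Left-Handed Matrix Form: vec(q*p) = Lm(q) @ vec(p)."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]])

# Replace Rm with Lm in the previous sketch to fit z = x*h instead of z = h*x.
```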
- 1.
- Let Z be a real number matrix containing the four components of each quaternion, such that each element of this matrix is of the form , with and .
- 2.
- Let X be a real number matrix containing the four components of each quaternion, such that each element of this matrix is of the form , with and .
- 3.
- Let Y be a real number matrix containing the four components of each quaternion, such that each element of this matrix is of the form , with and .
- 4.
- Let be a real number matrix containing the four components of the conjugate of each quaternion, such that each element of this matrix is of the form , with and , where:
- 5.
-
Let be a real number matrix containing the four components of the conjugate of each quaternion, such that each element of this matrix is of the form , with and , where:
- (a)
- and
- (b)
- and
- (c)
- and
- (d)
- and
- (e)
- The above saves time for the regular quaternions, since the alternative would be to calculate the first column of the adjugate matrix form. This method does not extend to the General Case of Biquaternionic Regression.
- 6.
- We seek the quaternionic regression of . The middle-handed constant of is unknown, which means we need the left-handed matrix of and the right-handed matrix of .
- 7.
-
Let be a three dimensional tensor that stores the Left-Handed Matrix Form of each from the two-dimensional array of X. And let an element of this tensor be , with , and , such that:
- (a)
- From to and from to :
- (b)
- And let a Matrix Element of this Tensor be referenced as .
- 8.
-
Let be a three dimensional tensor that stores the Left-Handed Matrix Form of each from the two-dimensional array of . And let an element of this tensor be , with , and , such that:
- (a)
- From to and from to :
- (b)
- And let a Matrix Element of this Tensor be referenced as .
- 9.
-
Let be a three dimensional tensor that stores the Right-Handed Matrix Form of each from the two-dimensional array of Y. And let an element of this tensor be , with , and , such that:
- (a)
- From to and from to :
- (b)
- And let a Matrix Element of this Tensor be referenced as .
- 10.
-
Let be a three dimensional tensor that stores the Right-Handed Matrix Form of each from the two-dimensional array of . And let an element of this tensor be , with , and , such that:
- (a)
- From to and from to :
- (b)
- And let a Matrix Element of this Tensor be referenced as .
- 11.
- Let D be a four-dimensional tensor.
- 12.
-
Let be a three dimensional tensor, which is the first 3D layer of D, that stores the product of . And let an element of this tensor be , with , and , such that:
- (a)
- (b)
- (c)
- (d)
- (e)
- And let a Matrix Element of this Tensor be referenced as .
- 13.
-
Let be a three dimensional tensor, which is the second 3D layer of D, that stores the product of . And let an element of this tensor be , with , and , such that:
- (a)
- (b)
- (c)
- (d)
- (e)
- And let a Matrix Element of this Tensor be referenced as .
- 14.
-
Let be a three dimensional tensor, which is the zeroth 3D layer of D, that stores the product of . And let an element of this tensor be , with , and , such that:
- (a)
- (b)
- (c)
- (d)
- (e)
- And let a Matrix Element of this Tensor be referenced as .
- 15.
- And let be the Direct Matrix Sum of all , such that an element of is equal to:
- 16.
- And that the entries of and remain the empty set, ∅, never to be used (you can technically set them to anything you want since they are not used, so set the default value to zero upon Tensor Creation).
- 17.
-
Let R be a two dimensional tensor that stores the product of . And let an element of this tensor be , with and , such that:
- (a)
- (b)
- And let a Column Matrix Element of this Tensor be referenced as .
- 18.
- And let be the Direct Matrix Sum of all , such that an element of is equal to:
- 19.
- Finally let G be the inverse matrix of , such that each element of is equal to each entry of .
- 20.
- Then each component of the initially unknown is given by each entry (respectively) of the product of G and the column vector of , such that:
- 21.
- The Residual Sum of Squares is a real number equal to , where .
- 22.
- The Total Sum of Squares is a real number equal to , where , where .
- 23.
- $R^2 = 1 - \mathrm{RSS}/\mathrm{TSS}$ (see the sketch following this list).
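A compact sketch of the middle-handed case under the same assumed conventions: the sandwich $z = x\,h\,y$ becomes the real system $L(x)\,R(y)\,\vec{h} = \vec{z}$, with `Lm` and `Rm` the left- and right-handed matrix forms:

```python
import numpy as np

def Lm(q):
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]])

def Rm(q):
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w,  z, -y],
                     [y, -z,  w,  x],
                     [z,  y, -x,  w]])

rng = np.random.default_rng(4)
t = 200
h_true = np.array([1.0, 0.5, -0.5, 2.0])
X = rng.normal(size=(t, 4))
Y = rng.normal(size=(t, 4))

M = [Lm(x) @ Rm(y) for x, y in zip(X, Y)]   # z = x*h*y per data point
Z = np.array([m @ h_true for m in M]) + 0.05 * rng.normal(size=(t, 4))

E = sum(m.T @ m for m in M)                 # direct matrix sum (normal equations)
S = sum(m.T @ z for m, z in zip(M, Z))
h = np.linalg.solve(E, S)                   # recovers h_true up to noise
```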
5.2. A Constant is Variable and Rulers have Density and the Physical Interpretation of a Derivative
- 1.
- Ruler Inputs: . The vector difference is the same as the distance between the inputs on the Ruler’s Continuum.
- 2.
- Ruler Output: . The vector difference is the distance between outputs on the ruler, regardless of the difference between the outputs of and .
- 3.
- This is the same definition as a Slide Ruler, but in multiple dimensions. Imagine a thin double-sided plate for that reveals the opposite side when you press on the visible side of . In short, these are Continuous Vector Maps.
- 4.
- Real-valued Rulers exist solely in one direction. For , there are two parallel lists: and . The distance between consecutive integer squares on the ruler is still equal to 1, no matter the size of n.
- 5.
- Complex-valued Rulers exist in two directions. Looking in only one unique complex direction on the ruler, one could indeed graph alongside it with a color gradient for the input and output vectors, yielding some quadratic swirl (in this particular case). However, the true form is a two-sided map, with the input vector on the front side and the output vector on the bottom side.
- 6.
- Hypercomplex-valued Rulers extend this idea to more than two dimensions. The input exists on the “front side of space” and the output on the “back side of space.” For three dimensions, you can envision stacks of two-sided parallel planes.
- 7.
- For four dimensions and beyond, you can envision an entire 3D palette of 2D double-sided planes as the first entry of the fourth dimension, a second palette as the second entry in the fourth dimension, and a third palette (all palettes in the same straight horizontal line) as the third entry in the fourth dimension. Then, for the fifth dimension, you translate these palettes laterally, and for the sixth dimension, you stack these palettes vertically. Then call this a super-6D palette, and begin stacking them again in the same three dimensions to express the seventh, eighth, and ninth dimensions, continuing this process until you have no dimensions left to express.
- 1.
- Primary Input: . The vector difference represents the distance between the primary inputs on the Ruler’s Continuum.
- 2.
-
Secondary Input: . The vector difference is the normalized distance relative to the primary input distance on the Ruler’s Continuum, such that:since and map to the same continuum indices as and , respectively.
- 3.
- If , then the Ruler of X is normalized to instead. If they are both , then they are equal, which means they don’t need to be normalized, such that can be computed as is.
- 4.
- Ruler Output: . The vector difference determines the distance between outputs on the ruler, irrespective of the actual output values of and .
- 5.
- This definition extends the concept of a Slide Ruler to multiple dimensions, with auxiliary input rulers dynamically scaled to the primary input ruler. Imagine a thin double-sided plate for , where pressing on the visible side of reveals the opposite side . In essence, these are Continuous Vector Maps.
- 6.
-
Real-valued Bilinear Rulers exist solely in one direction. For , there are three parallel lists: where scales as to ensure alignment with on the X ruler. The computed outputs are: The distance between consecutive integer increments of x remains 1, independent of or the value of input y, such that a Ruler List of always has the corresponding list of below it and the slide-ruler’s output on the reverse side (a multi-valued function has multiple output lists, such that the relationship between neighboring entities is continuous in the same output list and the other output list!).
- 7.
- Note that the term “list" is not cheating the idea of a Ruler by replacing it with a function. The ruler of T is the data lists themselves (from which we get our data for least squares!). T is an Index of discrete ticks with finite length, t, with lists of and on its opposite side (and each tick of the index is )! That is, using traditional terminology, and are themselves functions of T from , which then invoke the continuum rulers. Hence why the mathematical domains of quantile analysis and PCA analysis are so different, even though both branches are ultimately analyzing the same data! The Quantile Ruler is not the same Ruler as the inputs and z outputs!
- 8.
- The Index Ruler: is a list of consecutive integers from 0 to , existing only in the forward direction. This is the temporal order of the data points as they were measured. Any other ordering of this list must be expressed as an Ordinal Ruler , and there must always exist a bijection between and , such that if all elements of are exhausted (which exhausts all remaining ordinal rulers by default), then , which is distinctly different from , which is the position of the observer.
- 9.
- The Quantile Ruler: Q is a list of consecutive integer multiples of from to , existing only in the forward direction. Hypercomplex Index Rulers and Quantile Rulers do indeed exist, but they are beyond the scope of this publication.
- 10.
-
Example of a real y ruler scaling: Given and , the y ruler must be scaled by so that aligns with . This allows one to read:on the ruler, which transforms into:
- 11.
- Complex-valued Bilinear Rulers extend in two directions. The y ruler both scales and rotates by , ensuring a dynamically oriented bijection between inputs and the complex continuum of .
- 12.
-
Hypercomplex-valued Bilinear Rulers generalize this concept beyond two dimensions, incorporating non-commutative and non-associative structures while preserving the chirality of the ruler. For example, in:the ruler scales as:However, for:the ruler scales differently:
- 13.
- Observe that is the reverse of . However, this order only affects the bilinear terms in the former and in the latter. It does not impact the substitution for in either ruler.
5.2.1. The Physical Mechanism which resolves
- Microsoft Excel is the digital incarnation of Ruler Space.
- Let T be the set of data points from to written in an Excel column.
- Let the vectors of and be appended column-wise, using as many columns per vector as each vector has components.
- Let the set of columns expressing be the Ruler of X, and let the Ruler Y “be seen" from the perspective of X, such that . Remember earlier in the first chapter of the paper when I defined division as a change of reference frame! We now have the F ruler.
- Then . This is the G Ruler, which is ultimately attuned to the T Index Ruler, is it not!?
- Then . This is the H ruler. This is when I realized that if vectors can be inputs and outputs on a ruler, then so can matrices!
- Let us redefine . This is the actual H ruler.
- Now let’s simplify into bilinear form:
- The problem is that is unknown! So we need one last ruler (and its conjugate): the Ruler of Divine Chirality and its Conjugate Ruler (the Adjugate Matrix)!
- Then clearly
- How does one measure the $R^2$ of this regression? For this we need the Quantile Ruler from to to yield the Total Sum of Squares: , which states that is the Riemann Sum of over the indices of T normalized from zero to one (a series of 5D rectangular prisms added together, all having the same width of ). From which it follows that , where .
- From which it also follows that , where , since is a column vector output, as is (naming variable types, such as “integer", “real", “column vector", or “matrix", in C++, MATLAB, and/or in Excel’s column headers really helps!).
- And there, within Excel, the physical incarnation of Ruler Space, was the closed-form solution to the regression of , along with its real-numbered value. All I required was a bunch of rulers: some with only one direction (T and Q), some with four directions, and some as 4x4 tables to mimic a single spot on a ruler (D and ). This is because Excel is a pre-built compiler for ruler space!
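A minimal sketch of the $R^2$ bookkeeping just described (hypothetical column names; the $1/t$ quantile width appears in both sums and cancels out of the ratio):

```python
import numpy as np

rng = np.random.default_rng(9)
Z     = rng.normal(size=(200, 4))            # measured z per tick of the T index
Z_hat = Z + 0.1 * rng.normal(size=Z.shape)   # fitted values from some regression

t    = len(Z)
zbar = Z.mean(axis=0)
tss  = np.sum((Z - zbar) ** 2) / t           # Riemann sum over T, width 1/t
rss  = np.sum((Z - Z_hat) ** 2) / t
print("R^2 =", 1 - rss / tss)
```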
5.3. Univariate Least Squares, With a Constant
5.3.1. Human Readable-Forms of the Univariate Case with a Constant
- Let t be the total number of data points.
- Let C be the real number column vector of height 8 that reads the constants of and by their eight combined components, in the order just listed, such that
-
The Left-Handed Regression of implies the linear system of equations:
- (a)
- (b)
- (c)
- Which compels:
-
The Right-Handed Regression of implies the linear system of equations:
- (a)
- (b)
- (c)
- Which compels:
-
The Middle-Handed Regression of implies the linear system of equations:
- (a)
- (b)
- (c)
- Which compels:
-
Where:
- (a)
- (b)
- and
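A minimal sketch of the left-handed univariate case with a constant, under the same assumed conventions as the Section 5.1 sketches: the model $z = h\,x + c$ stacks into an 8-unknown real system via the augmented design $[\,R(x) \mid I_4\,]$:

```python
import numpy as np

def Rm(q):
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w,  z, -y],
                     [y, -z,  w,  x],
                     [z,  y, -x,  w]])

rng = np.random.default_rng(8)
t = 100
h, c = rng.normal(size=4), rng.normal(size=4)   # unknown slope and constant
X = rng.normal(size=(t, 4))

A = np.vstack([np.hstack([Rm(x), np.eye(4)]) for x in X])  # [R(x) | I4] rows
z = A @ np.concatenate([h, c]) + 0.05 * rng.normal(size=4 * t)

coef, *_ = np.linalg.lstsq(A, z, rcond=None)
h_hat, c_hat = coef[:4], coef[4:]               # both recovered jointly
```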
5.4. From Pascal’s Simplex to Pascal’s Cube
- We just went from one term, to two terms, to four terms, to eight terms. Then the full bivariate cubic regression of in terms of and contains a total of 15 terms:
- More generally, the fully bivariate regression of degree n contains $2^{n+1} - 1$ terms.
- Even worse, all linear terms have two possible chiralities ($hx$ versus $xh$), the quadratic terms have three chiralities ($hxy$ versus $xhy$ versus $xyh$), and the cubic terms, while having only three chiralities on paper, have four different placements of $h$.
- Thus the number of chiral permutations that needs to be executed to find the highest $R^2$ is precisely $2^{L}\,3^{Q}\,4^{C}$, where L, Q, and C count the linear, quadratic, and cubic terms, respectively.
- For the bivariate quadratic introduced at the start of this paper, we have a total of 324 permutations, since $2^2 \cdot 3^4 = 324$.
- Now suppose this was trivariate. We’d go from Pascal’s Pyramid over the reals and complex numbers, to Pascal’s Cube, containing non-commutative polynomial terms. The number of permutations is now: .
- versus is two checks.
- versus is two checks.
- vs vs is three checks.
- vs vs is three checks.
- vs vs is three checks.
- vs vs is three checks.
- A total of 12+4=16 checks.
- versus is two checks. For all logics, a left matrix times a vector is equal to the right matrix of the vector times the column vector of the original matrix.
- versus is two checks. For all logics, a left matrix times a vector is equal to the right matrix of the vector times the column vector of the original matrix.
-
has the six possible forms:
- (a)
- vs
- (b)
- vs
- (c)
- vs
-
has the six possible forms:
- (a)
- vs
- (b)
- vs
- (c)
- vs
-
has the six possible forms:
- (a)
- vs
- (b)
- vs
- (c)
- vs
-
has the six possible forms:
- (a)
- vs
- (b)
- vs
- (c)
- vs
- This results in a total of $24 + 4 = 28$ checks against the theoretical permutations (which is not significantly worse than the 16 checks for the associative logics). However, if we extend this to a degree-three cubic or a three-term trinomial, the complexity escalates rapidly. Fortunately, most natural phenomena follow inverse-square laws governing interactions between two distinct entities, meaning the bivariate quadratic suffices for the vast majority of laboratory experiments.
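The permutation count is a straight product, over the terms, of their chirality placements; a minimal sketch for the bivariate quadratic (the term lists are illustrative):

```python
from math import prod

linear    = ["x", "y"]                  # h placements: hx, xh -> 2 each
quadratic = ["xx", "xy", "yx", "yy"]    # h placements: hxy, xhy, xyh -> 3 each

total = prod([2] * len(linear) + [3] * len(quadratic))
print(total)  # 324 chirality permutations for the bivariate quadratic
```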
5.4.1. The Least Squares Chirality
- 1.
-
If , then let:
- (a)
- be the $R^2$ of
- (b)
- be the $R^2$ of
- (c)
- be the $R^2$ of
- (d)
- be the $R^2$ of
- 2.
- That both or is greater than or equal to , and let the greater of them be (this statement is automatically true).
- 3.
- That both or is greater than or equal to , and let the greater of them be (this statement is automatically true).
- 4.
- Such that (this is the conjecture!), which implies that left-handed always outperforms the right-handed , regardless of the contribution of .
- 1.
-
If , then let:
- (a)
- be the $R^2$ of
- (b)
- be the $R^2$ of
- (c)
- be the $R^2$ of
- (d)
- be the $R^2$ of
- 2.
- That both or is greater than or equal to , and let the greater of them be (this statement is automatically true).
- 3.
- That both or is greater than or equal to , and let the greater of them be (this statement is automatically true).
- 4.
- Such that (this is the conjecture!), which implies that right-handed always outperforms the left-handed , regardless of the contribution of .
- 5.
- That if this Conjecture is proven true, it means that the placement of for any and all additional terms can be checked independently of the other terms, reducing the number of required checks from factorial time to exponential time.
- 6.
- That a partial solution for associative logics is acceptable.
- 7.
- That a full solution for non-associative logics is desired.
- 8.
- And it matters not whether this conjecture is proven true or untrue, for the Closed Form Solution to Hypercomplex Least Squares only acts upon a particular permutation, and yields the best-fit C to the data for the given permutation.
6. General Closed Form Solution to Multivariate Quaternionic Least Squares of Mixed Chirality
6.1. Human Readable Version
6.2. Direct Human Readable Example
- and
- and .
- and .
- and .
- and .
- and .
- and .
- and .
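Finally, a compact sketch of a mixed-chirality system under the same assumed conventions: one left-handed term and one right-handed term, $z = h_1 x_1 + x_2 h_2$, stacked into a single real design matrix (synthetic data; `Lm`/`Rm` as before, repeated here for self-containment):

```python
import numpy as np

def Lm(q):
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]])

def Rm(q):
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w,  z, -y],
                     [y, -z,  w,  x],
                     [z,  y, -x,  w]])

rng = np.random.default_rng(7)
t = 300
h1, h2 = rng.normal(size=4), rng.normal(size=4)   # the unknown constants
X1, X2 = rng.normal(size=(t, 4)), rng.normal(size=(t, 4))

# z = h1*x1 + x2*h2: the left-handed term uses Rm(x1), the right-handed Lm(x2).
A = np.vstack([np.hstack([Rm(x1), Lm(x2)]) for x1, x2 in zip(X1, X2)])
z = A @ np.concatenate([h1, h2]) + 0.05 * rng.normal(size=4 * t)

coef, *_ = np.linalg.lstsq(A, z, rcond=None)
h1_hat, h2_hat = coef[:4], coef[4:]               # recovered constants
```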
References
- Minghui Wang, Musheng Wei, Yan Feng. An Iterative Algorithm for Least Squares in Quaternionic Quantum Theory. Computer Physics Communications, Volume 179, Issue 4, Pages 203–207.
- Closed Form Solution to Quaternionic Least Squares. JMM2023 Conference, January 7th, 2023. https://youtu.be/FOhWGq9KExE?si=SpI9kjdcg-WIO_yI
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).