Preprint · Article · This version is not peer-reviewed

Bad Philosophy and Bad Statistics

Submitted: 01 October 2025 · Posted: 02 October 2025


Abstract
The case presented refers to Aspect's entangled photon pair experiment. This paper demonstrates that the conclusion of that experiment rests on an irreparable statistical flaw.

Introduction

Recently, Carlo Rovelli wrote an essay about bad philosophy hampering the progress of physics [1]. That, of course, can be true. Nevertheless, I have definite proof that bad statistics hampers that progress as well. In this case, the stakes are high. Still, an erroneous statistical underpinning of an experiment is nothing but erroneous statistics, and it disallows any serious conclusion from the experiment.

1. Experiment Statistics

The case presented refers to the Aspect experiment that earned him the Nobel Prize in Physics 2022. The presented work demonstrates a statistical flaw in that famous experiment [2]. Let us start with the notion that the raw product moment correlation [2] of Aspect's experiment is:
$$R(x) = P(x,=) - P(x,\neq), \qquad x \in [0, 2\pi) \tag{1}$$
Here, x is the angle between Alice's unit-length parameter vector a and Bob's unit-length parameter vector b. It is measured in the plane spanned by a and b, with the direction taken, for instance, from a towards b. The range of x is 0 ≤ x < 2π.

1.1. Hypothesis

In the experiment, the hypothesis H₀: R(x) = cos(x) is tested against the gathered data, making full use of classical probability theory. The hypothesis H₀ is, in fact: "the classical probability data can give the quantum correlation." In (1), P(x,=) = N(x,=)/N and P(x,≠) = N(x,≠)/N, with N = N(x,=) + N(x,≠). Furthermore, N(x,=) = N(x,+,+) + N(x,−,−) and N(x,≠) = N(x,+,−) + N(x,−,+). In this notation, the (+,+) in N(x,+,+) means: Alice measures + and Bob measures +. The other instances are similar. N(x,+,+) is the number of times that Alice has measured + and Bob has measured + as well. For details, see [2].
Furthermore, it is obvious that, e.g., P(x,=) is in fact the sum of P(x,+,+) and P(x,−,−); the case is similar for P(x,≠) with P(x,+,−) and P(x,−,+). The latter descriptions of the probability (via the law of large numbers) express how a classical probability model is supposed to generate the outcomes of the measurements. The assumption apparently is that the empirical Einstein data is completely embedded in classical probability and hence is ruled by the Kolmogorov axioms [3].
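To make the bookkeeping of (1) concrete, the following minimal Python sketch combines coincidence counts into P(x,=), P(x,≠) and R(x). The counts and the chosen angle are invented for illustration only; they are not taken from [2].

```python
import numpy as np

# Hypothetical coincidence counts at one fixed setting angle x (illustrative, not Aspect's data).
x = np.pi / 3                                     # angle between a and b
counts = {('+', '+'): 376, ('-', '-'): 374, ('+', '-'): 125, ('-', '+'): 125}

N_eq  = counts[('+', '+')] + counts[('-', '-')]   # N(x,=)
N_neq = counts[('+', '-')] + counts[('-', '+')]   # N(x,≠)
N     = N_eq + N_neq                              # N = N(x,=) + N(x,≠)

P_eq, P_neq = N_eq / N, N_neq / N                 # P(x,=) and P(x,≠) via relative frequency
R = P_eq - P_neq                                  # raw product moment correlation, equation (1)

print(f"R(x) = {R:.3f},  H0 value cos(x) = {np.cos(x):.3f},  sin^2(x/2) = {np.sin(x / 2)**2:.3f}")
```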
We may subsequently observe that H₀ can also be rewritten as
$$H_0 : P(x,\neq) = \sin^2(x/2), \qquad x \in [0, 2\pi) \tag{2}$$
H₀ will be tested against empirical reality with the CHSH inequality, which is derived from Bell's correlation formula [5]. The CHSH inequality [6] is obtained with full reference to the Kolmogorov axioms [3].
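As an illustration of what such a test involves, the sketch below evaluates the CHSH combination under H₀ for one common choice of the four analyser settings; the particular angles are the usual textbook choice and are an assumption here, not values quoted from [2] or [6].

```python
import numpy as np

def R(x):
    """Correlation predicted by H0: R(x) = 1 - 2*sin^2(x/2) = cos(x), cf. equations (1)-(2)."""
    return np.cos(x)

# A common (illustrative) choice of analyser settings for the CHSH combination.
a, a_p, b, b_p = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4

S = R(a - b) - R(a - b_p) + R(a_p - b) + R(a_p - b_p)
print(f"S = {S:.3f}")    # 2*sqrt(2) ≈ 2.828, whereas the CHSH bound for a Kolmogorovian model is |S| <= 2
```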

1.2. Kolmogorovian Probability

Probability is a function on sets, mapping into the real interval [0, 1]. Any proper textbook on statistics can tell you that. Nevertheless, most people in the physics community apparently resist the notion of a random variable that connects the events "x, ≠" with a set structure. Let us, therefore, approach the soundness of the formulated hypothesis via the integral.
Note that we have the following
$$\sin^2(x/2) = \int_0^x \tfrac{1}{2}\sin(y)\,dy \tag{3}$$
with x ∈ [0, 2π). Because of our hypothesis in (2), we may then write
$$P(x,\neq) = \int_0^x \tfrac{1}{2}\sin(y)\,dy \tag{4}$$
Subsequently observe that, according to Kolmogorov, the additivity axiom holds [3]. Among other things, it means that an interval such as S with s₁ ≤ x < s₂ also has a probability assigned to it. We may, therefore, write:
$$P(S) = \int_{s_1}^{s_2} \tfrac{1}{2}\sin(y)\,dy \tag{5}$$
It is clear from (3) and (4) that we may then write
$$P(\pi,\neq) = \int_0^{\pi} \tfrac{1}{2}\sin(y)\,dy = 1 \tag{6}$$
Then, looking at (3), we may also write
$$P(3\pi/2,\neq) = \int_0^{3\pi/2} \tfrac{1}{2}\sin(y)\,dy = \tfrac{1}{2} \tag{7}$$
Note also that the endpoints of a Riemann integral need not be included in the interval of integration.
If we then have S = [0, 3π/2) and S₁ = [0, π), it is possible to compute P(S₂) for S₂ = [π, 3π/2). This is so because S = S₁ ∪ S₂ and S₁ ∩ S₂ = ∅. From the integral in (5) it can already be observed that for y ∈ [π, 3π/2) the integrand obeys (1/2) sin(y) ≤ 0. Hence, it is already clear that for S₂ = [π, 3π/2), P(S₂) < 0 follows. Here the additivity axiom is employed.
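A short numerical check of the values in (5)-(7) and of the resulting P(S₂), assuming nothing beyond the integrand (1/2) sin(y) used above:

```python
import numpy as np

def P_interval(s1, s2):
    """Probability assigned to [s1, s2) by (5): integral of (1/2) sin(y) dy = (cos(s1) - cos(s2)) / 2."""
    return 0.5 * (np.cos(s1) - np.cos(s2))

P_S  = P_interval(0, 3 * np.pi / 2)      # S  = [0, 3π/2), equation (7): 1/2
P_S1 = P_interval(0, np.pi)              # S1 = [0, π),    equation (6): 1
P_S2 = P_interval(np.pi, 3 * np.pi / 2)  # S2 = [π, 3π/2)

print(P_S, P_S1, P_S2)                   # 0.5  1.0  -0.5
print(bool(np.isclose(P_S, P_S1 + P_S2)))  # True: additivity holds numerically, but P(S2) < 0
```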

1.3. Additivity

The additivity axiom of Kolmogorov [3] says: P(S) = P(S₁) + P(S₂), because S = S₁ ∪ S₂ and S₁ ∩ S₂ = ∅. That P(S₂) < 0 follows both from the negative integrand on [π, 3π/2) and from the fact that F(x) = sin²(x/2) is not monotone non-decreasing. Both characteristics imply negative probabilities for the set structure.
People who deny this effectively claim that P(S₂) is, one way or the other, not a probability. At the same time they claim to be allowed to employ the CHSH inequality, which is obtained from a completely classical, Kolmogorovian probability space [5,6]. Furthermore, if with 0 ≤ x₁ < x₂ < 2π both $\int_0^{x_2} \tfrac{1}{2}\sin(y)\,dy$ and $\int_0^{x_1} \tfrac{1}{2}\sin(y)\,dy$ are considered probabilities, then how can one deny that $\int_{x_1}^{x_2} \tfrac{1}{2}\sin(y)\,dy$ is a probability? The experimental probability space of Aspect's experiment is, as a consequence and for unknown reasons, not fully Kolmogorovian prior to the gathering of data. And please do also observe that
$$P(\pi/2,\neq) = \int_0^{\pi/2} \tfrac{1}{2}\sin(y)\,dy = \tfrac{1}{2} \tag{8}$$
Together with (6), we have P(π,≠) = P(π/2,≠) + I. The I, referring to (5), then is
$$I = \int_{\pi/2}^{\pi} \tfrac{1}{2}\sin(y)\,dy = \tfrac{1}{2} \tag{9}$$
And we may observe that (8) and (9) indeed represent a case where Kolmogorov's additivity axiom is obeyed and (1/2) sin(y) ≥ 0 for y ∈ [π/2, π).
This implies that sometimes, in the probability environment of Aspect's experiment, we have a Kolmogorovian probability: it is in accordance with Kolmogorov additivity. Sometimes we do not. That is an unwarranted, "halfhearted" set structure of the probability space associated with the experiment. The "halfheartedness" is, in fact, really beyond imagination. Note that Kolmogorov additivity [3] is applied for, for instance, P(x,=). This is P(x,=) = P(x,+,+) + P(x,−,−). The events x,+,+ and x,−,− are disjoint, and so they give P(x,=) in a way that employs additivity. Apparently, where additivity gives no problem, it is applied without question in the probability environment of Aspect's experiment. Where it does, people act as though Kolmogorov additivity is none of their concern and state that F(x) = sin²(x/2), for x ∈ [0, 2π), can be a probability. That is indeed a "halfhearted" approach, biased against the gathering of possible Einstein data in the experiment.
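The contrast can be made explicit with a small sketch that uses only F(x) = sin²(x/2) from (2): the split of [0, π) at π/2, as in (8)-(9), respects additivity with a non-negative part, whereas the split of [0, 3π/2) at π produces the negative P(S₂), and F fails the monotonicity required of a distribution function on [0, 2π).

```python
import numpy as np

F = lambda x: np.sin(x / 2) ** 2          # F(x) = sin^2(x/2), the candidate from hypothesis (2)

# Split [0, π) at π/2: equations (8)-(9). Additivity is obeyed with a non-negative part I.
I = F(np.pi) - F(np.pi / 2)
print(F(np.pi / 2), I, bool(np.isclose(F(np.pi / 2) + I, F(np.pi))))   # 0.5  0.5  True

# Split [0, 3π/2) at π: the second part is negative, the P(S2) case above.
print(F(3 * np.pi / 2) - F(np.pi))                                     # -0.5

# F is not monotone non-decreasing on [0, 2π), so it cannot serve as a distribution function there.
xs = np.linspace(0, 2 * np.pi, 400, endpoint=False)
print(bool(np.all(np.diff(F(xs)) >= 0)))                               # False
```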

1.4. Set Structure and Hypothesis in Equation (2)

Obviously, allowing the complete set structure of the probability space, i.e., acknowledging that S₂ is part of the set structure associated with the experiment, requires negative probabilities from Einsteinian data in order for H₀ to be true. It is, therefore, an impossible requirement to observe potential positive Einstein data.

1.5. Conditional Probability

Another invalid approach to the probability N(x,≠)/N is to claim that P(x,≠) is in fact a conditional probability. That would, however, entail that the events in Aspect's experiment are not x,≠ and/or x,=. The claim is then that we are looking at separate events: Y being either = or ≠, and the event that the angle is X = x. We assume here, for the moment and as in Aspect's and similar experiments, a finite discrete subset of [0, 2π) of x-values. The probability N(x,≠)/N would then be something like P(Y = "≠" | X = x). This implies that we have [4] (page 21)
$$P(Y = {\neq} \mid X = x) = \frac{P[(Y = {\neq}) \,\&\, (X = x)]}{P(X = x)} \tag{10}$$
But note that Aspect did not experimentally determine, e.g., P(X = x) from counts N(X = x), nor, e.g., P(Y = "≠") from counts N(Y = "≠"), by statistical frequency counting. The events X = x and Y ∈ {=, ≠} are not the meaningful events in the experiment for determining the truth of the hypothesis in (2). Aspect determined the statistical frequencies of the events x,= and x,≠.
Moreover, the conditional probability reasoning would require that
$$P[(Y = {\neq}) \,\&\, (X = x)] = P(X = x)\,\sin^2(x/2) \tag{11}$$
for all x ∈ [0, 2π). Now note that the x are equally (but randomly) distributed over the measurements in the experiment; this can easily be checked in the description of Aspect's experiment [2]. This means that, if we accept that the frequency-based P(X = x) is meaningful, it does not vary over the values of x in the sequence of measurements. P(X = x) is a constant in the design of the experiment, in order to have all setting combinations occur in equal amounts. Therefore, the probability density associated with the compound event (Y = "≠") & (X = x), with variable x, would still not be positive definite for x ∈ [0, 2π). The function sin²(x/2) is not a probability function for x ∈ [0, 2π); its density is not positive definite, as can be seen from (3).
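A minimal numerical illustration of this point, under the stated assumption of a constant P(X = x): the grid of M settings below is hypothetical, and the sketch merely shows that the putative joint probabilities of (11) decrease with x beyond π, i.e., the sign behaviour referred to above.

```python
import numpy as np

M   = 16                                            # hypothetical number of equally used settings
xs  = np.linspace(0, 2 * np.pi, M, endpoint=False)  # settings spread over [0, 2π)
P_X = 1.0 / M                                       # constant P(X=x) by experimental design

joint = P_X * np.sin(xs / 2) ** 2                   # putative P[(Y="≠") & (X=x)], equation (11)
increments = np.diff(joint)                         # change of the joint probability along x

print(np.round(increments, 4))
print(bool(np.any(increments < 0)))                 # True: the increments turn negative past x = π
```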
Therefore, it is absolutely clear from the counting frequencies and the definition of the raw product moment correlation (1) in [2] that Aspect employed the meaningful events x,≠ and x,=.
Let us look at, for instance, N(x,=) = N(x,+,+) + N(x,−,−). In that case, the conditional P[(Y = "=") | (X = x)] is obtained from N(x,+,+) and N(x,−,−). Note that, for instance, N(x,+,+) represents nothing but the count of the statistical frequency labelled with x,+,+. Note also that, if we take the conditional probability approach seriously for the moment, the conditional P[(X = x) | (Y = "=")] is obtained from the same N(x,+,+) and N(x,−,−). However, we may also note from (10) that
$$P[(Y = {=}) \mid (X = x)] = \frac{P(Y = {=})}{P(X = x)}\, P[(X = x) \mid (Y = {=})] \tag{12}$$
where, again, we have accepted P(X = x) = N(X = x)/N based on frequency counting. This is valid for experiments in which a discrete, integer number of x-values is being used. In general, however, x is a continuous variable.
Furthermore, we can assume a certain P(Y = "=") and a certain P(Y = "≠"). If, for instance, P(Y = "=")/P(X = x) ≠ 1, how can one then employ the counts from the gathered data, N(x,+,+) and N(x,−,−), and still distinguish between P[(Y = "=") | (X = x)] and P[(X = x) | (Y = "=")], as is in that case required by (12)?
Hence, if one wants to know whether the hypothesis in (2) is true, then the observations are directed at the complementary, mutually exclusive events x,≠ and x,=, which concur with how the data is gathered and with P(x,=) + P(x,≠) = 1. That is exactly what Aspect et al. did [2].

1.5.1. Mathematical Detail

If the conditional probability reasoning is to be taken seriously, we must acknowledge
$$P({=} \mid x) = \frac{N(x,=)}{N}, \qquad P({\neq} \mid x) = \frac{N(x,\neq)}{N} \tag{13}$$
Hence, P(= | x) + P(≠ | x) = 1. We have employed a self-explanatory shorthand here to ease the presentation. In addition, the frequency-based experimental probability distribution P(x) has Σ_x P(x) = 1, and we must also have P(=) + P(≠) = 1. From the definition P(A | B) P(B) = P(A & B) it then follows that
$$P(x) = P({=} \,\&\, x) + P({\neq} \,\&\, x) \tag{14}$$
And so we find from Σ_x P(x) = 1 that
$$1 = \sum_x P({=} \,\&\, x) + \sum_x P({\neq} \,\&\, x) \tag{15}$$
With P(A | B) P(B) = P(A & B) and P(=) + P(≠) = 1, it follows from (15) that
$$1 = P(=) \sum_x P(x \mid {=}) + \bigl(1 - P(=)\bigr) \sum_x P(x \mid {\neq}) \tag{16}$$
This subsequently gives us
$$0 \le P(=) = \frac{1 - \sum_x P(x \mid {\neq})}{\sum_x P(x \mid {=}) - \sum_x P(x \mid {\neq})} \tag{17}$$
Because we are dealing with probabilities, 1 ≥ Σ_x P(x | ≠) ≥ 0; hence we also see that Σ_x P(x | =) − Σ_x P(x | ≠) ≥ 0. The reason why 1 ≥ Σ_x P(x | ≠) ≥ 0, and likewise for the = case, is that the condition, here for instance =, makes it so that P(x | =) ≤ P(x): some of the x cases will be associated with ≠. Similarly, P(x | ≠) ≤ P(x). Hence,
$$1 \ge \sum_x P(x \mid {\neq}) \ge 0, \qquad 1 \ge \sum_x P(x \mid {=}) \ge 0 \tag{18}$$
Now note that from P(=) + P(≠) = 1 and (16) it also follows that
$$1 = \bigl(1 - P(\neq)\bigr) \sum_x P(x \mid {=}) + P(\neq) \sum_x P(x \mid {\neq}) \tag{19}$$
And this gives in turn
$$P(\neq) = \frac{1 - \sum_x P(x \mid {=})}{\sum_x P(x \mid {\neq}) - \sum_x P(x \mid {=})} \tag{20}$$
Because we are dealing with probabilities, we have (18). But because, in this case, it is true that Σ_x P(x | =) − Σ_x P(x | ≠) ≥ 0, it follows from (20) that P(≠) ≤ 0. This demonstrates the internal mathematical inconsistency of acting as though N(x,=)/N and N(x,≠)/N are conditional probabilities.

1.6. Large Numbers to Probabilities

Furthermore, the law of large numbers is applied in order to circumvent specific probability models and measurement functions. Application of this law in the experiment is definitely classical probability applied to every event of the experiment. This in fact again looks like another biased approach to observing Einstein data.
To continue, given
  • the way the classical probability model of Einstein data is expected to describe the probabilities, i.e., P(x,+,+), P(x,−,−), P(x,−,+) and P(x,+,−),
  • the way the Bell correlation formula [5] is fully Kolmogorovian: the hidden-variable density satisfies ρ(λ) ≥ 0 and is normalized, ∫ρ(λ)dλ = 1,
the "halfhearted" probability space in the experiment cannot be a proper translation of Bell's assumptions about the relation between extra hidden parameters and the settings [5]. The Bell correlation is an expectation value, over the probability measure μ(dλ) = ρ(λ)dλ, of measurements at Alice's side with parameter vector a and at Bob's side with parameter vector b.
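For concreteness, the sketch below estimates such an expectation value by Monte Carlo for a toy local hidden-variable model; the particular outcome functions and the uniform ρ(λ) are my own illustrative assumptions, not Bell's [5] or Aspect's [2].

```python
import numpy as np

rng = np.random.default_rng(0)

def lhv_correlation(alpha, beta, n=200_000):
    """Monte Carlo estimate of E(a,b) = ∫ A(a,λ) B(b,λ) ρ(λ) dλ for a toy local model.

    λ is uniform on [0, 2π), so ρ(λ) = 1/(2π) ≥ 0 and ∫ρ(λ)dλ = 1; the ±1 outcomes depend
    only on the local setting and λ. This model is purely illustrative.
    """
    lam = rng.uniform(0, 2 * np.pi, n)
    A = np.sign(np.cos(lam - alpha))        # Alice's ±1 outcome at setting angle alpha
    B = np.sign(np.cos(lam - beta))         # Bob's ±1 outcome at setting angle beta
    return np.mean(A * B)

x = np.pi / 3                               # angle between the two settings
print(lhv_correlation(0.0, x))              # ≈ 1 - 2x/π ≈ 0.333, not cos(x) = 0.5
```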
We want to find out whether P(x,+,+), P(x,−,−), P(x,−,+) and P(x,+,−) can reproduce the quantum correlation. But, given the way the experiment is statistically configured, this possibility is suppressed from observation. Therefore, the implicit "P(S₂) < 0 must be ignored" is not a fair assessment of possible Einstein data in empirical reality. Furthermore, the notion that N(x,≠)/N and N(x,=)/N are conditional probabilities is not the way in which the Aspect experiment is statistically configured. Note also that this approach to Aspect's measurements is erroneous as well, because the function sin²(x/2) is not a probability function for x ∈ [0, 2π).

1.7. Symmetry

Finally, it is necessary to remind the reader that the angle between the two parameter vectors must be completely free in the interval [0, 2π); otherwise, Einstein locality is breached. Acting as though, for instance, x = 7π/4 and x = π/4 are symmetrically equal in the analysis of the data is false. The transformation from x = 7π/4 to x = π/4 is not at all a neutral transformation of the angle in the analysis of the data. It implies that a non-local overseer must have been active during the experiment to conveniently change the angle definition. Alice is unaware of Bob's b and Bob is unaware of Alice's a. It is then obvious that the absence of Einstein locality is concluded from an analysis of data from which Einstein data was excluded in the first place.
The so-called neutral symmetry transformation tries to hide a flaw in the statistical methodology. It shows itself, in turn, to be an error in the physics set-up of the experiment. Introducing nonlocality during the experiment and then concluding the absence of Einstein data from the analysis of the data is simply bad science. In this way, we go from pseudo-statistics, by allowing negative probabilities, to bad science, by concluding the absence of Einstein data from a set-up that allows nonlocality.

2. Conclusion

All this is not a demonstration of bad philosophy. It is a demonstration of methodologically erroneous experimentation. It is a demonstration of methodological unclarity when a conditional probability reinterpretation is attempted instead of a probability of mutually exclusive events. This reinterpretation of N(x,=)/N and of N(x,≠)/N as conditional probabilities is yet another attempt to hide the statistical flaw of the experiment. Bell's correlation formula is incomplete [7]. That is the single, exclusive reason for the erroneous statistics in the experiments derived from it.
The conclusion can therefore not be anything other than that H₀ in (2) is false by methodological design. Rovelli's bad philosophy only serves certain editors as a cover-up for the eternal rejection of this fact.

References

  1. C. Rovelli, Is bad philosophy holding back physics? Nature 2025, 641, 585–587.
  2. A. Aspect, P. Grangier and G. Roger, Experimental Realization of Einstein-Podolsky-Rosen-Bohm Gedankenexperiment: A New Violation of Bell's Inequalities. Phys. Rev. Lett. 1982, 49, 91–94.
  3. A.N. Kolmogorov, Foundations of the Theory of Probability, pp. 1–3, New York: Chelsea Publishing Company, 1950.
  4. R.V. Hogg, J.W. McKean and A.T. Craig, Introduction to Mathematical Statistics, 7th edition, Pearson, Boston, 2013.
  5. J.S. Bell, On the Einstein Podolsky Rosen paradox. Physics 1964, 1, 195–200.
  6. J.F. Clauser, M.A. Horne, A. Shimony and R.A. Holt, Proposed Experiment to Test Local Hidden-Variable Theories. Phys. Rev. Lett. 1969, 23, 880–883.
  7. H. Geurdes, https://arxiv.org/abs/2502.15808, 2025.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.