The case presented refers to the Aspect experiment that earned him the 2022 Nobel Prize in Physics. The presented work demonstrates a statistical flaw in that famous experiment [2]. Let us start with the notion that the raw product moment correlation [2] of Aspect's experiment is:

E(a, b) = [N_{++}(a, b) + N_{−−}(a, b) − N_{+−}(a, b) − N_{−+}(a, b)] / [N_{++}(a, b) + N_{−−}(a, b) + N_{+−}(a, b) + N_{−+}(a, b)]   (1)
1.1. Hypothesis
In the experiment, the hypothesis is tested against the gathered data, making full use of classical probability theory. The hypothesis is, in fact: "the classical probability data can give the quantum correlation." In (1) the a and b are the unit-length setting (parameter) vectors of Alice and Bob, respectively, and the N_{xy}(a, b), with x, y ∈ {+, −}, are the coincidence counts. Furthermore, the + and − denote the two possible measurement outcomes on each side. In this notation, the N_{++}(a, b) in (1) is: Alice measures + and Bob measures +. Other instances are similar. The N_{++}(a, b) is the number of "times" Alice has measured + and Bob has measured + as well. For details, see [2].
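As a concrete illustration of (1), the raw product moment correlation can be computed directly from the four coincidence counts. The counts below are hypothetical numbers chosen for the sketch, not Aspect's measured data.

```python
# Hypothetical coincidence counts N_xy at one fixed setting pair (a, b);
# the numbers are illustrative, not Aspect's measured data.
counts = {("+", "+"): 420, ("+", "-"): 80, ("-", "+"): 75, ("-", "-"): 425}

def raw_correlation(counts):
    """Raw product moment correlation E(a, b) built from coincidence counts."""
    total = sum(counts.values())
    # Outcomes map to +1/-1; each coincidence contributes the product x * y.
    signed = sum((1 if x == "+" else -1) * (1 if y == "+" else -1) * n
                 for (x, y), n in counts.items())
    return signed / total

print(raw_correlation(counts))  # (420 + 425 - 80 - 75) / 1000 = 0.69
```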
Furthermore, it is obvious that, e.g., N_{+}(a), the number of times Alice measured +, is, in fact, the sum of N_{++}(a, b) and N_{+−}(a, b). The case is similar for N_{−}(a) and for Bob's N_{+}(b) and N_{−}(b). The probabilities are then obtained from the relative frequencies, e.g. P_{++}(a, b) = N_{++}(a, b)/N, with N the total number of pairs. The latter descriptions of the probability (via the law of large numbers) are the expressions of how a classical probability model is supposed to generate the outcomes of the measurements. The assumption apparently is that the empirical Einstein data is completely embedded in classical probability and hence is ruled by the Kolmogorovian axioms [3].
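The role of the law of large numbers here can be sketched numerically: relative frequencies from repeated trials approach the probabilities of the underlying classical model. The shared-hidden-variable model below is a toy assumption for illustration only, not a model of Aspect's source.

```python
import random

# Law of large numbers sketch: relative frequencies N_xy / N approach the
# probabilities P_xy of the generating classical model. The shared hidden
# variable model below is a toy assumption, not a model of Aspect's source.
random.seed(1)

def trial():
    lam = random.uniform(0.0, 1.0)       # shared (local) hidden variable
    alice = "+" if lam < 0.5 else "-"    # Alice's outcome from lam only
    bob = "+" if lam < 0.5 else "-"      # Bob's outcome from the same lam
    return alice, bob

N = 100_000
freq = {}
for _ in range(N):
    outcome = trial()
    freq[outcome] = freq.get(outcome, 0) + 1

P = {xy: n / N for xy, n in freq.items()}
print(P)  # relative frequencies near the model's P_++ = P_-- = 0.5
```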
Now we subsequently may observe that the E(a, b) in (1) can also be rewritten like

E(a, b) = P_{++}(a, b) + P_{−−}(a, b) − P_{+−}(a, b) − P_{−+}(a, b) = cos[2 θ_{ab}]   (2)

with θ_{ab} the angle between the setting vectors a and b, and cos[2 θ_{ab}] the quantum correlation for photon pairs. The hypothesis in (2) will be tested in the empirical reality with the CHSH inequality derived from Bell's correlation formula [5]. The CHSH inequality [6] is obtained with full reference to the Kolmogorovian axioms [3].
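The CHSH combination mentioned here can be sketched as follows. The setting angles are the standard CHSH-optimal choices and E = cos 2(a − b) is the textbook photon-pair prediction; both are assumptions of the sketch, not values taken from this text.

```python
import math
from itertools import product

# CHSH sketch: S = E(a, b) - E(a, b') + E(a', b) + E(a', b').
# E_quantum is the textbook photon-pair prediction cos(2 * (a - b)); the
# angles are the standard CHSH-optimal settings, not values from this text.
def S(E, a, ap, b, bp):
    return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

E_quantum = lambda a, b: math.cos(2.0 * (a - b))
deg = math.pi / 180.0
print(S(E_quantum, 0.0, 45 * deg, 22.5 * deg, 67.5 * deg))  # ~ 2.828 = 2*sqrt(2)

# For deterministic local outcomes A(a), B(b) in {+1, -1}, E(a, b) = A(a)B(b)
# and S = A(a)[B(b) - B(b')] + A(a')[B(b) + B(b')], hence |S| <= 2 always.
bound = max(abs(Aa * (Bb - Bbp) + Aap * (Bb + Bbp))
            for Aa, Aap, Bb, Bbp in product((-1, 1), repeat=4))
print(bound)  # 2
```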
1.2. Kolmogorovian Probability
Probability is a function of a set, projecting into the real interval [0, 1]. Any proper textbook on statistics can tell you that. Nevertheless, most people in the physics community apparently seem to want to resist the notion of a random variable that connects events, such as "Alice measures + and Bob measures +", with a set structure. Let us, therefore, approach the soundness of the formulated hypothesis via the integral.
Note that we have the following

(1/2) cos²(θ) = 1/2 − ∫₀^θ (1/2) sin(2x) dx

with θ ∈ [0, π]. Because of our hypothesis in (2), together with the unbiased marginals P(A = ±) = P(B = ±) = 1/2, we then may write

P_{++}(θ) = 1/2 − ∫₀^θ (1/2) sin(2x) dx   (3)

Do subsequently observe that according to Kolmogorov, the additivity axiom is valid [3]. It a.o. means that an interval such as S = (θ₁, θ₂], with 0 ≤ θ₁ < θ₂ ≤ π, also has a probability assigned to it. We may, therefore, write:

P(S) = ∫_{θ₁}^{θ₂} −(1/2) sin(2x) dx   (4)

It is clear that from (3) and (4) we then may write

P(S) = P_{++}(θ₂) − P_{++}(θ₁) = ∫_{θ₁}^{θ₂} −(1/2) sin(2x) dx   (5)

Then, looking at (3), we may also write

P_{++}(θ₂) = P_{++}(θ₁) + ∫_{θ₁}^{θ₂} −(1/2) sin(2x) dx

Note also that the extremes in a Riemann integral need not be included in the interval of integration.
If we then have a and b, it is possible to compute θ from cos(θ) = a·b. This is so because |a| = 1 and |b| = 1. From the integral in (5) it can already be observed that for x ∈ (0, π/2), the integrand, −(1/2) sin(2x), is negative. Hence, it is already clear that for S ⊆ (0, π/2), the P(S) < 0 follows. Here the additivity axiom is employed.
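The claimed sign pattern can be checked numerically. The sketch assumes the photon-pair coincidence probability P₊₊(θ) = (1/2) cos²θ (the standard quantum prediction, taken here as an assumption) and evaluates the interval assignment P(S) = P₊₊(θ₂) − P₊₊(θ₁):

```python
import math

# Sign check, assuming the photon-pair coincidence probability
# P(theta) = 0.5 * cos(theta)**2 (the standard quantum prediction, used
# here as an assumption). Its derivative -0.5 * sin(2 * theta) is negative
# on (0, pi/2), so interval assignments come out negative there, while on
# (pi/2, pi) they are non-negative.
P = lambda theta: 0.5 * math.cos(theta) ** 2

def interval_prob(t1, t2):
    """Additivity-style assignment P(S) = P(t2) - P(t1) for S = (t1, t2]."""
    return P(t2) - P(t1)

print(interval_prob(0.2, 1.2))                          # negative: inside (0, pi/2)
print(interval_prob(math.pi / 2 + 0.2, math.pi - 0.2))  # non-negative: inside (pi/2, pi)
```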
1.3. Additivity
The additivity axiom of Kolmogorov [3] says:

P(S₁ ∪ S₂) = P(S₁) + P(S₂)   (6)

because S₁ ∩ S₂ = ∅ and S₁, S₂ ⊆ [0, π]. The P(S) < 0 follows both from the negative integrand in (4) as well as from the not monotone non-descending character of P_{++}(θ) on (0, π/2). Both characteristics imply negative probabilities for the set structure.
People who deny this, then, effectively claim that P(S) is, one way or the other, not a probability. They effectively claim to be allowed to employ the CHSH, which is obtained from a Kolmogorovian, completely classical, probability space [5, 6]. Furthermore, if with S = (θ₁, θ₂] both P_{++}(θ₁) and P_{++}(θ₂) are considered probability, then, how can one deny that P(S) = P_{++}(θ₂) − P_{++}(θ₁) is a probability? The experimental probability space of Aspect's experiment is, as a consequence, for unknown reasons, not fully Kolmogorovian prior to the gathering of data. And please do also observe that

P_{++}(θ₂) = P_{++}(θ₁) + P(S)   (7)

Together with (6), we have additivity obeyed in (7). The interval I, referring to (5), then is

I = (π/2, π]   (8)

P(I) = ∫_{π/2}^{π} −(1/2) sin(2x) dx = 1/2   (9)

And we may observe that (8) and (9), indeed, is a case where Kolmogorov's additivity axiom is obeyed and P(S) ≥ 0 for S ⊆ (π/2, π].
This implies that, sometimes, in the probability environment of Aspect's experiment, we do have a Kolmogorovian probability in the experiment; it is in accordance with the Kolmogorovian additivity. Sometimes, we don't. That is an unwarranted "halfhearted" set structure of the probability space associated with the experiment. The "halfheartedness" is, in fact, really beyond imagination. Note that Kolmogorov additivity [3] is applied for, for instance, the marginal P(A = +). This is P(A = +) = P_{++} + P_{+−}. Now the events {A = +, B = +} and {A = +, B = −} are disjoint and so they give P(A = +) in a way that employs additivity. Apparently, if additivity is OK, it is allowed without questioning in the probability environment of Aspect's experiment. If not, then people act as though Kolmogorov additivity is none of their concern. They state that no P(S) < 0, for S ⊆ (0, π/2), can be a probability. That is indeed a "halfhearted" approach, biased against the gathering of possible Einstein data in the experiment.
1.5. Conditional Probability
Another invalid approach to the probability P(S) is to claim that P_{++}(θ) is, in fact, a conditional probability. That would, however, entail the necessity that the events in Aspect's experiment are not {A = ±} and/or {B = ±}. The claim is then that we are looking at separate events: the event Y, with Y either = or ≠, and the event: the angle is θ. We assume, here for this moment as in Aspect's and similar experiments, a finite discrete subset of possible angles θ. The probability then would be something like P(Y = "=" | θ). This implies that we have [4][page 21],

P(Y = "=" | θ) = P(Y = "=", θ) / P(θ)   (10)

But note, Aspect didn't experimentally determine e.g. P(Y = "=", θ) and e.g. P(θ) with statistical frequency countings. The events {Y = "=", θ} and {θ} are not meaningful in the experiment to determine the truth of the hypothesis in (2). Aspect determined the statistical frequencies of events such as {A = +, B = +} and {A = +, B = −}.
Moreover, the conditional probability reasoning would require that P(θ_x) > 0 for all x. Now, note that the θ_x are equally (but randomly) distributed over the measurements in the experiment. This can easily be checked in the description of Aspect's experiment [2]. This means that if we accept that P(θ_x), based on frequency, is meaningful, it doesn't vary over the values of x in the sequence of measurements. The P(θ_x) = 1/M, with M the number of settings, is a constant in the design of the experiment, in order to have all setting combinations in equal amounts. Therefore, the probability density regarding the compound event (Y, θ_x), with variable x, would then still not be positive definite for θ_x ∈ (0, π/2). The function P_{++}(θ) isn't a probability function for θ ∈ (0, π/2). It has a non-positive definite probability density, as can be seen from (3).
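That P(θ_x) is a design constant can be sketched as follows; the angle list and run length are hypothetical, chosen only to mimic an equal (but randomly ordered) allocation of settings.

```python
import random
from collections import Counter

# The settings are allocated in equal amounts but measured in random order,
# so the frequency-based P(theta_x) is the design constant 1/M. The angle
# list and run length are hypothetical, not Aspect's actual schedule.
random.seed(0)
angles = [22.5, 67.5, 112.5, 157.5]      # M = 4 hypothetical settings (deg)
M = len(angles)
schedule = angles * 2500                 # equal amounts of every setting
random.shuffle(schedule)                 # random order over the measurements

freq = Counter(schedule)
P_theta = {a: freq[a] / len(schedule) for a in angles}
print(P_theta)  # every setting gets 2500/10000 = 1/M = 0.25
```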
Therefore, it is absolutely clear from the counting frequencies and the definition of the raw product moment correlation of (1) in [2], that Aspect employed the meaningful events {A = x} and {B = y}, with x, y ∈ {+, −}.
Let us look at, for instance, the θ₁. In that case, the conditional P(Y = "=" | θ₁) is obtained from N(Y = "=", θ₁) and N(θ₁). Note that, for instance, N(θ₁) represents nothing but the counting of the statistical frequency labelled with θ₁. Note now that, if we take the conditional probability approach serious for the moment, the conditional P(Y = "≠" | θ₁) is also obtained from the same N(θ₁) and from N(Y = "≠", θ₁). However, we may also note from (10) that

P(Y = "=" | θ₁) = N(Y = "=", θ₁) / N(θ₁)   (11)

P(Y = "=" | θ₂) = N(Y = "=", θ₂) / N(θ₂)   (12)

Where, again, we have accepted P(θ_x), based on frequency counting. This is valid for experiments where a discrete integer number of θ_x-es are being used. The θ_x is, however, a continuous variable in general.

Furthermore, we can assume a certain θ₁ and a certain θ₂. If, for instance, θ₁ = θ₂, how to employ the countings from the gathered data, N(θ₁) and N(θ₂), and make a difference between P(Y = "=" | θ₁) and P(Y = "=" | θ₂), such as in that case required in (12)?
Hence, if one wants to know if the hypothesis in (2) is true, then observations are directed to the mutually exclusive, jointly exhaustive, events {A = x, B = y}, with x, y ∈ {+, −}, that concur with how the data is gathered and with (1). That is exactly what Aspect et al. did [2].
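How the Y = "=" and Y = "≠" frequencies arise from the outcome-pair events that were actually counted can be sketched with hypothetical numbers:

```python
# The Y = "=" and Y = "!=" frequencies at a fixed setting pair are built
# from the four outcome-pair counts that were actually recorded; the
# numbers below are hypothetical, not Aspect's data.
counts = {("+", "+"): 420, ("+", "-"): 80, ("-", "+"): 75, ("-", "-"): 425}
N = sum(counts.values())

P_equal = (counts[("+", "+")] + counts[("-", "-")]) / N    # Y = "="
P_unequal = (counts[("+", "-")] + counts[("-", "+")]) / N  # Y = "!="
print(P_equal, P_unequal)  # 0.845 0.155 (complementary, mutually exclusive)
```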
1.5.1. Mathematical Detail
If the conditional probability reasoning is serious we must acknowledge

p(= | θ) + p(≠ | θ) = 1   (13)

Hence, p(≠ | θ) = 1 − p(= | θ). We have employed a self-explanatory shorthand here to ease the presentation. In addition, the frequency based experiment probability distribution P(θ) has P(θ) = 1/M and we also must have Σ_θ P(θ) = 1. From the definition

p(= | θ) = P(Y = "=", θ) / P(θ)   (14)

it then follows

P(Y = "=", θ) = p(= | θ) / M   (15)

And so we find from P(Y = "=") = Σ_θ P(Y = "=", θ) that

P(Y = "=") = (1/M) Σ_θ p(= | θ)   (16)

With 0 ≤ p(= | θ) ≤ 1 and 0 ≤ p(≠ | θ) ≤ 1, it follows from (15) that

0 ≤ P(Y = "=", θ) ≤ 1/M   (17)

This subsequently gives us

0 ≤ P(Y = "=") ≤ 1   (18)

Because we are dealing with probabilities, 0 ≤ P(Y = "="), hence we also see, 0 ≤ Σ_θ p(= | θ) ≤ M. The reason why p(= | θ) < 1 is because the condition, here =, makes it so that N(Y = "=", θ) < N(θ). Some of the θ_x cases will be associated to ≠. Similarly for p(≠ | θ) < 1. Hence,

0 < p(= | θ) < 1 and 0 < p(≠ | θ) < 1   (19)

Now, note that from (13) and (16) it also follows

P(Y = "≠") = (1/M) Σ_θ p(≠ | θ) = 1 − P(Y = "=")   (20)

And this gives in turn

P(Y = "=") + P(Y = "≠") = 1   (21)

Because we are dealing with probabilities we have (18). But because, in this case, the experiment estimates the probability of equal outcomes at every setting with the same ratio N(Y = "=", θ)/N(θ) = p(= | θ), it follows from (20) that P(Y = "=") would have to coincide with every p(= | θ) separately, which cannot hold when the p(= | θ) vary with θ as required by (2). This demonstrates the internal mathematical inconsistency of acting as though the measured frequencies for Y = "=" and Y = "≠" are conditional probabilities.
1.6. Large Numbers to Probabilities
Furthermore, the law of large numbers is applied in order to circumvent specific probability models and measurement functions. Application of this law in the experiment is definitely classical probability, applied to every event of the experiment. This, in fact, again looks like another biased approach to observing Einstein data.
To continue, given, first, the way the classical probability model of Einstein data is expected to describe the probabilities, i.e., P_{++}, P_{+−}, P_{−+}, and P_{−−}, and, second, the way the Bell correlation formula [5] is fully Kolmogorovian (the density of the hidden variables λ is ρ(λ) ≥ 0 and is normalized, ∫ ρ(λ) dλ = 1), the "halfhearted" probability space in the experiment can not be a proper translation of Bell's assumptions about the relation between extra hidden parameters and the settings [5]. The Bell correlation is an expectation value, over the probability measure ρ(λ) dλ, of measurements A(a, λ) = ±1 at Alice's, with parameter vector a, and B(b, λ) = ±1 at Bob's, with parameter vector b:

E(a, b) = ∫ ρ(λ) A(a, λ) B(b, λ) dλ
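Bell's expectation-value formula can be evaluated numerically for a simple local model. The sign-based A and B below form a common textbook toy model, assumed here purely for illustration; it yields the linear correlation −1 + 2θ/π rather than the quantum −cos θ.

```python
import math

# Numerical sketch of Bell's correlation: the expectation, over a normalized
# hidden-variable density rho(lam) = 1/(2*pi) on [0, 2*pi), of the product
# A(a, lam) * B(b, lam). The sign-based outcomes form a common textbook toy
# model (an assumption here); it yields -1 + 2*theta/pi, not the quantum -cos(theta).
def A(a, lam):
    return 1.0 if math.cos(lam - a) >= 0.0 else -1.0

def B(b, lam):
    return -A(b, lam)                         # perfect anti-correlation at a = b

def E(a, b, n=200_000):
    total = 0.0
    for k in range(n):
        lam = 2.0 * math.pi * (k + 0.5) / n   # midpoint rule on [0, 2*pi)
        total += A(a, lam) * B(b, lam)
    return total / n                          # approximates the integral

theta = math.pi / 3
print(E(0.0, theta))     # toy model: -1 + 2*theta/pi, about -0.3333
print(-math.cos(theta))  # quantum singlet prediction, about -0.5
```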
We want to find out if P_{++}, P_{+−}, P_{−+}, and P_{−−} can reproduce the quantum correlation. But, from the way the experiment is statistically configured, this possibility is suppressed from observation. Therefore, the implicit "P(S) < 0 must be ignored" isn't a fair assessment of possible Einstein data in the empirical reality. Furthermore, the notion that the measured frequencies for Y = "=" and Y = "≠" are conditional probabilities is not the way in which the Aspect experiment is statistically configured. Note also that this approach to Aspect's measurements is erroneous as well, because the function P_{++}(θ) isn't a probability function for θ ∈ (0, π/2).
1.7. Symmetry
Finally, it is necessary to remind the reader also that the angle between the two parameter vectors must be completely free in the interval [0, 2π). Otherwise, Einstein's locality is breached. Acting as though, for instance, θ and −θ are symmetrically equal in the analysis of the data is false. The θ to −θ "neutral" transformation isn't at all a neutral transformation of the angle in the analysis of the data. It implies that a non-local overseer must have been active during the experiment to conveniently change the angle definition. Alice is unaware of Bob's parameter vector b and Bob is unaware of Alice's parameter vector a. Then, it is obvious that the absence of Einstein locality is concluded from an analysis of data from which Einstein data was excluded in the first place.
The so-called neutral symmetry transformation tries to hide a flaw in the statistical methodology. It, in turn, shows itself, however, to be an error in the physics set-up of the experiment. Introducing nonlocality during the experiment and then concluding the absence of Einstein data from the analysis of the data is simply bad science. In this way, we go from pseudo-statistics, by allowing negative probabilities, to bad science, by concluding the absence of Einstein data from a set-up that allows nonlocality.