Appendix A Algorithms
The numerical simulation of the correlation function does not require significant computer resources. It is similar to a simulation of a one-dimensional Ising model with long-range forces. However, a much more efficient method is to simulate the Markov process equivalent to the Fermionic trace [12].
Regarding that quantum trace, we simulate the random histories of the changes of the Fermionic occupation numbers, taking each step of the history with Markov matrix probabilities.
We devised an algorithm whose memory footprint does not grow with the number $N$ of points on the fractal curve. This algorithm works for very large $N\sim {10}^{9}$, and it can be executed on the GPUs of a supercomputer cluster, expanding the statistics of the simulation by massive parallelization.
We use large values of $N=2\times {10}^{8}$. For the random fractions $\frac{p}{q}$ with $0<p<q<N$, we used the following Python algorithm.
We used even $N,q$ and ${N}_{\pm}=N/2$, as it was shown in [4] that these are the dominant values in the local limit $\mu \to 0$.
import numpy as np
from math import gcd

def EulerPair(self):  # method of the simulation class; self.m = N
    M = self.m
    while True:
        q = 2 * np.random.randint(2, M // 2)
        p = np.random.randint(1, q)
        if gcd(p, q) == 1:
            if np.random.randint(0, M) < q:
                break
    return [p, q]

In other words, we generated a random even $q\in (2,N)$, after which we tried random $0<p<q$ until we got a coprime pair $(p,q)$. Statistically, at large $N$, this happens with probability $6/{\pi}^{2}\approx 0.61$ (see [13]); on average, it takes ${\pi}^{2}/6\approx 1.645$ attempts to get a coprime pair.
Once we have a candidate pair, we accept it with probability $w=\frac{q}{N}$, which, again, for large $q\sim N$ gives a finite acceptance rate. The probabilities multiply to the desired factor, which is proportional to the Euler totient function $\varphi(q)$.
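The quoted density of coprime pairs is easy to check empirically. The sketch below is ours, not part of the production code (the name `coprime_fraction` is hypothetical), and it samples $q$ of either parity to illustrate the classical $6/\pi^2$ density:

```python
# Hypothetical standalone check (not from the paper's codebase): the fraction
# of uniformly random pairs (p, q) with gcd(p, q) == 1 approaches 6/pi^2.
import random
from math import gcd, pi

def coprime_fraction(N, trials, seed=42):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        q = rng.randrange(3, N)   # random denominator q
        p = rng.randrange(1, q)   # random numerator 0 < p < q
        hits += (gcd(p, q) == 1)
    return hits / trials

print(coprime_fraction(10**6, 200_000))  # close to 6/pi^2 ~ 0.6079
```

With $2\times 10^5$ trials the statistical error is about $10^{-3}$, well within the quoted $0.61$.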
The main task is to generate a sequence of random spins ${\sigma}_{1}=\pm 1,\dots,{\sigma}_{N}=\pm 1$ with prescribed numbers ${N}_{+},{N}_{-}$ of $+1$ and $-1$ values. The statistical observables are additive and depend upon the partial sums ${\alpha}_{n}={\sum}_{k<n}{\sigma}_{k}$.
We avoid storing arrays of ${\sigma}_{k},{\alpha}_{k}$ by computing these observables iteratively. The RAM usage does not grow with $N$, and the CPU time grows only linearly.
The C++ class RandomWalker generates this stream of ${\sigma}_{k},{\alpha}_{k}$.
#pragma once

#include <cstdint>
#include <random>

class RandomWalker
{
public:
    RandomWalker(std::int64_t N_pos, std::int64_t N_neg) :
        N_pos(N_pos), N_neg(N_neg), alpha(0), gen(std::random_device{}()), unif(0, 1)
    {
    }

    int Advance()
    {
        int sigma = RandomSign();
        alpha += sigma;
        return sigma;
    }

    std::int64_t get_alpha() const { return alpha; }

private:
    int RandomSign()
    {
        // draw -1 with probability N_neg / (N_pos + N_neg), else +1,
        // and decrement the corresponding counter
        int sigma = (unif(gen) * double(N_pos + N_neg) <= double(N_neg)) ? -1 : 1;
        (sigma < 0 ? N_neg : N_pos)--;
        return sigma;
    }

    std::int64_t N_pos;
    std::int64_t N_neg;
    std::int64_t alpha; // alpha[i]
    std::minstd_rand gen;
    std::uniform_real_distribution<double> unif;
};
At each consecutive step, the random sign $\sigma =\pm 1$ is generated with probabilities $\frac{{n}_{+}}{{n}_{+}+{n}_{-}},\frac{{n}_{-}}{{n}_{+}+{n}_{-}}$, after which the corresponding number ${n}_{+}$ or ${n}_{-}$ is decremented, starting with ${n}_{\pm}={N}_{\pm}$ at step zero. By construction, at the end there will be precisely ${N}_{+}$ positive and ${N}_{-}$ negative random values ${\sigma}_{k}$.
This algorithm describes a Markov chain corresponding to sequentially drawing random $\sigma$ from a set with initially ${N}_{\pm}$ positive and negative spins, equivalent to a one-dimensional random walk on the number line. The conditional probabilities are such that the unconditional probability of ${\sigma}_{k}=\pm 1$ equals the correct value ${N}_{\pm}/({N}_{+}+{N}_{-})$, regardless of the position $k$ of the variable ${\sigma}_{k}$ in the chain.
The formal proof goes as follows.
Proof. At $k=1$, with no spins to the left on the chain, the conditional probability equals the total probability, and the statement holds. With some number of spins on the left, averaging over all these spins at a fixed value of ${\sigma}_{k}$ is equivalent to averaging over all values except ${\sigma}_{k}$, because the only condition on the random spins is their net sum, which is a symmetric function of their values. Thus, the average of conditional probabilities equals the total probability (A117) for any spin on the chain, not just the first one. □
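The proven statement can also be verified numerically. The following Python sketch (ours, with hypothetical names; the production code is the C++ class above) implements the same decrementing sampler and checks that every position $k$ sees $\sigma_k = +1$ with frequency $N_+/(N_+ + N_-)$:

```python
# Hypothetical Python analogue of RandomWalker: draw spins without replacement
# from a pool of N_pos pluses and N_neg minuses, then verify that the marginal
# probability of +1 is the same at every position in the chain.
import random

def spin_stream(N_pos, N_neg, rng):
    n_pos, n_neg = N_pos, N_neg
    while n_pos + n_neg > 0:
        if rng.random() * (n_pos + n_neg) < n_pos:
            n_pos -= 1
            yield +1
        else:
            n_neg -= 1
            yield -1

rng = random.Random(7)
N_pos, N_neg, trials = 6, 4, 20_000
M = N_pos + N_neg
plus_counts = [0] * M
for _ in range(trials):
    spins = list(spin_stream(N_pos, N_neg, rng))
    assert spins.count(+1) == N_pos        # exact counts in every history
    for k, s in enumerate(spins):
        plus_counts[k] += (s == +1)

freqs = [c / trials for c in plus_counts]  # each near N_pos / M = 0.6
```

The per-history assertion checks the hard constraint on the net sum, while the position-wise frequencies illustrate the unconditional probability of the proof.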
Here is the code, which computes a sample of $d{\overrightarrow{S}}_{nm},\overrightarrow{\omega}\cdot\overrightarrow{\omega}$ given $n,m,{N}_{+},{N}_{-},\beta$. The 2D vectors $\overrightarrow{S}$ are treated as complex numbers, using 128-bit complex arithmetic.
#include <cmath>
#include <complex>
#include <cassert>
#include <iostream>

#include "Euler.h"
#include "RandomWalker.h"
#include "MatrixMaker.h"

using complex = std::complex<double>;
using namespace std::complex_literals;

inline std::complex<double> expi(double a)
{
    double sin_a, cos_a;
    sincos(a, &sin_a, &cos_a); // GNU extension
    return {cos_a, sin_a};
}

double DS(std::int64_t n, std::int64_t m,
          std::int64_t N_pos, std::int64_t N_neg, double beta, /*OUT*/ double *o_o)
{
    assert(n < m);
    std::int64_t M = N_pos + N_neg;
    int sigma_n, sigma_m;
    std::int64_t alpha_n, alpha_m;
    complex S_nm, S_mn;

    RandomWalker walker(N_pos, N_neg);
    std::int64_t i = 0;
    for (; i != n; i++)
    { // i in [0, n)
        S_mn += expi(walker.get_alpha() * beta);
        walker.Advance();
    }

    alpha_n = walker.get_alpha();
    S_nm += expi(alpha_n * beta);
    sigma_n = walker.Advance(); // i = n
    for (i++; i != m; i++)
    { // i in (n, m)
        S_nm += expi(walker.get_alpha() * beta);
        walker.Advance();
    }

    alpha_m = walker.get_alpha();
    S_mn += expi(alpha_m * beta);
    sigma_m = walker.Advance(); // i = m
    for (i++; i != M; i++)
    { // i in (m, M)
        S_mn += expi(walker.get_alpha() * beta);
        walker.Advance();
    }

    *o_o = M * (M - 1) / 2 * sigma_n * sigma_m / (2 * pow(tan(beta / 2), 2))
         * pow(sin(beta / 4 * (2 * (alpha_m - alpha_n) + sigma_m - sigma_n)), 2);

    S_nm /= double(m - n);     // m - n terms in [n, m)
    S_mn /= double(M - m + n); // remaining terms in [0, n) and [m, M)
    return abs((S_nm - S_mn) / (2 * sin(beta / 2)));
}

This code fits the architecture of a supercomputer cluster with many GPU units at each node. These GPU units are aggregated in blocks of 32, with every unit inside a block working in sync. We use this synchronization to collect statistics: every block is assigned different random values of $n,m,{N}_{+},{N}_{-},\beta$, and every unit inside the block performs the same computation of the DS function with a different integer seed for the random number generator, automatically provided by the system.
When quantum computers become a reality, our sum over the Markov histories of discrete spin (or Fermion) variables will run massively parallel on qubits. Our Euler/Markov ensemble is ideally suited for quantum computers.