Preprint
Concept Paper

This version is not peer-reviewed.

Defining a Satisfying Expected Value From Chosen Sequences of Bounded Functions Converging to Pathological Functions

Submitted:

18 February 2025

Posted:

28 February 2025


Abstract
Let n ∈ N and suppose f : A ⊆ R^n → R is a function, where A and f are Borel. We want a satisfying average for every pathological f (e.g., an everywhere surjective f whose graph has zero Hausdorff measure in its dimension) taking finite values only. If this is impossible, we wish to average a nowhere continuous f defined on the rationals. The problem is that the expected value of these examples of f, w.r.t. the Hausdorff measure in its dimension, is undefined. We fix this by taking the expected value of chosen sequences of bounded functions converging to f which share the same satisfying and finite expected value. Note, "satisfying" is explained in the leading question, which uses rigorous versions of phrases in the former paragraph and a "measure" of a bounded function's graph. That "measure" involves minimal pairwise disjoint covers of the graph with equal measure ε, sample points from each cover, paths of line segments between sample points, the lengths of the line segments in the path, removal of lengths which are outliers, conversion of the remaining lengths into a probability distribution, and the entropy of that distribution. We also explain "satisfying" by defining the actual rate of expansion of a bounded function's graph and "the rate of divergence" of a bounded function's graph compared to that of other bounded functions' graphs.

1. Introduction

Let n ∈ N and suppose f : A ⊆ R^n → R is a function, where A and f are Borel. We want a satisfying average for all pathological f taking finite values only. The problem is that the expected value of certain examples of f (§2.1–§2.2), w.r.t. the Hausdorff measure in its dimension, is undefined (§2.3). To fix this, we take the expected value of a sequence of bounded functions converging to f (§2.3.2); however, depending on the sequence of bounded functions chosen, the expected value could be one of several values (thm. 1). Hence, we pose a leading question (§3.1) which chooses sequences of bounded functions with the same satisfying and finite expected value, where the term "satisfying" is explained rigorously.
Note, the leading question (§3.1) was inspired by two problems (i.e., informal versions of thm. 2 and 5):
(1)
If F ⊆ R^A is the set of all f ∈ R^A where the expected value of f w.r.t. the Hausdorff measure in its dimension is finite, then F is shy [13].
  • If F ⊆ R^A is shy, we say "almost no" element of R^A lies in F (§2.4).
(2)
If F ⊆ R^A is the set of all f ∈ R^A where two sequences of bounded functions that converge to f have different expected values, then F is prevalent [13].
  • If F ⊆ R^A is prevalent, we say "almost all" elements of R^A lie in F (§2.4).
In §5, we clarify the leading question (§3.1) by applying its rigorous definitions to specific examples (§5.2.1). We also define a "measure" (§5.3.1–§5.3.2) of a sequence of bounded functions' graphs. This is crucial for defining a satisfying expected value, where the "measure" is defined by:
(1)
Covering each graph with minimal, pairwise disjoint sets of equal ε Hausdorff measure (§Appendix 8.1)
(2)
Taking a sample point from each cover (§Appendix 8.2)
(3)
Taking a "pathway of line segments", starting with a line segment from sample point x_0 to the sample point with the smallest Euclidean distance from x_0 (when more than one point has the smallest Euclidean distance to x_0, take either of those points), and repeating this process until the pathway intersects every sample point once (§Appendix 8.3.1)
(4)
Taking the length of each line segment in the pathway and removing the outliers, i.e., lengths which are more than C > 0 times the interquartile range of all the lengths as ε → 0 (§Appendix 8.3.2)
(5)
Multiplying the remaining lengths by a constant to get a probability distribution (§Appendix 8.3.3)
(6)
Taking the entropy of the distribution (§Appendix 8.3.4)
(7)
Taking the maximum entropy w.r.t all pathways (§Appendix 8.3.5)
We give examples of how to apply the "measure" (§5.3.3–§5.3.5), then define the actual rate of expansion of a bounded function's graph (§5.4).
Finally, we answer the leading question in §6. Since the answer is complicated, is likely incorrect, and the leading question might not admit a unique expected value, it is best to keep refining the leading question (§3.1) rather than worrying about an immediate solution.

2. Formalizing the Intro

Let n ∈ N and suppose f : A ⊆ R^n → R is a function, where A and f are Borel. Let dim_H(·) be the Hausdorff dimension, where H^{dim_H(·)}(·) is the Hausdorff measure in its dimension on the Borel σ-algebra.
We want a unique, satisfying average for each of the following functions (§2.1–§2.2) taking finite values only. We explain the method of averaging in later sections, starting from §2.3.1.

2.1. First Special Case of f

If the graph of f is G, we want an explicit f where:
(1)
The function f is everywhere surjective [1] (i.e., f is defined on a topological space where its restriction to any non-empty open subset is surjective).
(2)
H^{dim_H(G)}(G) = 0

2.1.1. Potential Answer

If A = R , using this post [3]:
Consider a Cantor set C ⊆ [0, 1] with Hausdorff dimension 0 [4]. Now consider a countable disjoint union ⋃_{m∈N} C_m such that each C_m is the image of C by some affine map and every open set O ⊆ [0, 1] contains C_m for some m. Such a countable collection can be obtained by, e.g., letting C_m be contained in the biggest connected component of [0, 1] ∖ (C_1 ∪ ⋯ ∪ C_{m−1}) (with the center of C_m being the middle point of the component).
Note that ⋃_m C_m has Hausdorff dimension 0, so (⋃_m C_m) × [0, 1] ⊆ R^2 has Hausdorff dimension one [2].
Now, let g : [0, 1] → R be such that g|_{C_m} is a bijection C_m → R for all m (all of them can be constructed from a single bijection C → R, which can be obtained without choice, although it may be ugly to define), and outside ⋃_m C_m let g be defined by g(x) = h(x), where h : [0, 1] → R has a graph with Hausdorff dimension 2 [14] (this doesn't require choice either).
Then the function g has a graph with Hausdorff dimension 2 and is everywhere surjective, but its graph has Lebesgue measure 0 because it is a graph (so it admits uncountably many disjoint vertical translates).
Note, we can make the construction of the union of the C_m rather explicit as follows. Split the binary expansion of x into strings whose sizes are powers of two, say x = 0.1101000010… becomes (s_0, s_1, s_2, …) = (1, 10, 1000, …). If this sequence eventually contains only strings of the form 0⋯0 or 1⋯1, say after s_k, then send it to y = Σ_{i>0} ε_i 2^{−i}, where s_{k+i} = ε_i ⋯ ε_i. Otherwise, send it to the explicit continuous function h given by the linked article [14]. This will give you something from [0, 1) to [0, 1).
Finally, compose with an explicit (reasonable) bijection from [0, 1) to R. In your case, the construction can be easily adapted so that the [0, 1] or [0, 1) target space is actually (0, 1); then compose with x ↦ (1 − 2x)/(x^2 − x).
In case we cannot obtain a unique, satisfying average (§3.1) from §2.1.1, consider the following:

2.2. Second Special Case of f

Suppose we define A = Q, where f : A → R such that:
f(x) = 1 if x ∈ {(2s+1)/(2t) : s ∈ Z, t ∈ N, t ≠ 0}, and f(x) = 0 if x ∉ {(2s+1)/(2t) : s ∈ Z, t ∈ N, t ≠ 0}.
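To make the case split concrete, here is a minimal Python sketch (ours, not part of the paper; the helper name f mirrors the definition above). It uses the fact that x lies in {(2s+1)/(2t) : s ∈ Z, t ∈ N, t ≠ 0} exactly when the denominator of x in lowest terms is even:

```python
from fractions import Fraction

def f(x) -> int:
    """f(x) = 1 when x = (2s+1)/(2t) in lowest terms (even reduced denominator), else 0."""
    x = Fraction(x)                      # Fraction() always reduces to lowest terms
    return 1 if x.denominator % 2 == 0 else 0

print(f(Fraction(3, 4)))                 # 3/4 has an even reduced denominator -> 1
print(f(Fraction(2, 6)))                 # 2/6 = 1/3, odd reduced denominator  -> 0
print(f(Fraction(5, 1)))                 # integer, reduced denominator 1      -> 0
```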
In the next section, we state why we want §2.1 and §2.2.

2.3. Attempting to Analyze/Average f

Suppose, the expected value of f w.r.t the Hausdorff measure in its dimension is:
E[f] = (1/H^{dim_H(A)}(A)) ∫_A f dH^{dim_H(A)}
Then, using §2.1, the integral of f w.r.t the Hausdorff measure in its dimension is undefined: i.e., the graph of f has Hausdorff dimension 2 with a zero 2-d Hausdorff measure (§2.1.1). Hence, E [ f ] is undefined.
Moreover, observe that in §2.2, f is nowhere continuous and defined on a countably infinite set. This means that, depending on the enumeration of A, i.e., the sequence {a_r}_{r=1}^∞, the expected value of f (when it exists),
E[f] = lim_{t→∞} (f(a_1) + f(a_2) + ⋯ + f(a_t))/t,
can be any number from inf f to sup f. Hence, we need a specific enumeration that gives a unique, satisfying, and finite expected value, generalizing this process to nowhere continuous functions defined on uncountable domains.
Thus, we want the “expected value of chosen sequences of bounded functions converging to f with the same satisfying and finite expected value" which we describe rigorously in later sections; however, consider the following definitions beginning with §2.3.1:

2.3.1. Definition of Sequences of Functions Converging to f

Let n ∈ N and suppose f : A ⊆ R^n → R is a function, where A and f are Borel.
The sequence of functions (f_r)_{r∈N}, where f_r : A_r → R and (A_r)_{r∈N} is a sequence of sets, converges to f when:
For any (x_1, ⋯, x_n) ∈ A, there exists a sequence of points x^(r) ∈ A_r such that x^(r) → (x_1, ⋯, x_n) and f_r(x^(r)) → f(x_1, ⋯, x_n).
This is equivalent to:
(f_r, A_r) → (f, A)

2.3.2. Expected Value of Sequences of Functions Converging to f

Hence, suppose:
  • (f_r, A_r) → (f, A) (§2.3.1)
  • ||·|| is the absolute value
  • dim_H(·) is the Hausdorff dimension
  • H^{dim_H(·)}(·) is the Hausdorff measure in its dimension on the Borel σ-algebra
  • the integral is defined w.r.t. the Hausdorff measure in its dimension
The expected value of ( f r ) r N is a real number E [ f r ] , when the following is true:
(∀ε > 0)(∃N ∈ N)(∀r ∈ N)(r ≥ N ⇒ ||(1/H^{dim_H(A_r)}(A_r)) ∫_{A_r} f_r dH^{dim_H(A_r)} − E[f_r]|| < ε)
when no such E [ f r ] exists, E [ f r ] is infinite or undefined.

2.3.3. The Set of All Bounded Functions/Sets

Let n ∈ N and suppose f : A ⊆ R^n → R is a function, where A and f are Borel. Then, we define the following:
B ( · ) is the set of all bounded Borel functions in a function space
B ( · ) is the set of all bounded Borel subsets of a set
For example, B(R^A) is the set of all bounded Borel f ∈ R^A and B(R^n) is the set of all bounded Borel subsets of R^n. Note, however:
Theorem 1. 
For all r, v ∈ N, suppose f_r, g_v ∈ B(R^A) and A_r, B_v ∈ B(R^n). There exists an f ∈ R^A, where (f_r, A_r), (g_v, B_v) → (f, A) and E[f_r] ≠ E[g_v] (§2.3.2).
For example, the expected values of the sequences of bounded functions converging to f (§2.3.1, §2.3.2) in §2.1 and §2.2 satisfy thm. 1. For simplicity, we illustrate this with §2.2.

2.3.4. Example Illustrating Theorem 1

For the second case of Borel f : A ⊆ R^n → R (§2.2), where A = Q, and:
f(x) = 1 if x ∈ {(2s+1)/(2t) : s ∈ Z, t ∈ N, t ≠ 0}, and f(x) = 0 otherwise,
suppose:
(A_r)_{r∈N} = ({c/r! : c ∈ Z, −r·r! ≤ c ≤ r·r!})_{r∈N}
and
(B_v)_{v∈N} = ({c/d : c ∈ Z, d ∈ N, d ≤ v, −d·v ≤ c ≤ d·v})_{v∈N}
where for f_r : A_r → R,
f_r(x) = f(x) for all x ∈ A_r
and for g_v : B_v → R,
g_v(x) = f(x) for all x ∈ B_v
Note, for all r , v N :
  • sup(A_r) = r
  • inf(A_r) = −r
  • sup(B_v) = v
  • inf(B_v) = −v
  • Since f is bounded, f r and g v are bounded
Hence, f_r, g_v ∈ B(R^A) and A_r, B_v ∈ B(R^n). Also, the set-theoretic limit of (A_r)_{r∈N} and of (B_v)_{v∈N} is A = Q: i.e., with
lim sup_{r→∞} A_r = ⋂_{r≥1} ⋃_{q≥r} A_q and lim inf_{r→∞} A_r = ⋃_{r≥1} ⋂_{q≥r} A_q,
we have:
lim sup_{r→∞} A_r = lim inf_{r→∞} A_r = A = Q
lim sup_{v→∞} B_v = lim inf_{v→∞} B_v = A = Q
(We’re unsure how to prove the set-theoretic limits; however, a mathematician specializing in limits should be able to check.)
Therefore, (f_r, A_r), (g_v, B_v) → (f, A) (thm. 1).
Now, suppose we want to average (f_r)_{r∈N} and (g_v)_{v∈N}, which we denote E[f_r] and E[g_v]. Note, this is the same as computing the following (where the cardinality is |·| and the absolute value is ||·||):
(∀ε > 0)(∃N ∈ N)(∀r ∈ N)(r ≥ N ⇒ ||(1/|A_r|) ∫_{A_r} f dH^0 − E[f_r]|| < ε) ⇔ (∀ε > 0)(∃N ∈ N)(∀r ∈ N)(r ≥ N ⇒ ||(1/|A_r|) Σ_{x∈A_r} f(x) − E[f_r]|| < ε)
(∀ε > 0)(∃N ∈ N)(∀v ∈ N)(v ≥ N ⇒ ||(1/|B_v|) ∫_{B_v} f dH^0 − E[g_v]|| < ε) ⇔ (∀ε > 0)(∃N ∈ N)(∀v ∈ N)(v ≥ N ⇒ ||(1/|B_v|) Σ_{x∈B_v} f(x) − E[g_v]|| < ε)
Thus, if we assume E [ f r ] = 1 in eq. 8, using [9]:
The sum Σ_{x∈A_r} f(x) counts the number of fractions with an even denominator and an odd numerator in the set A_r, after canceling all possible factors of 2 in the fraction. Let us consider the first case. We can write:
1 − (1/|A_r|) Σ_{x∈A_r} f(x) = (|A_r| − Σ_{x∈A_r} f(x))/|A_r| = H(r)/|A_r|
where H(r) counts the fractions x = c/r! in A_r that are not counted in Σ_{x∈A_r} f(x), i.e., for which f(x) = 0. This is the case when the denominator is odd after the cancellation of the factors of 2, i.e., when the numerator c has a number of factors of 2 greater than or equal to that of r!, which we denote by V(r) := v_2(r!), a.k.a. the 2-adic valuation of r!, OEIS A011371: V(r) = r − O(ln r) [11]. That means c must be a multiple of 2^{V(r)}. The number of such c with −r·r! ≤ c ≤ r·r! is simply the length of that interval, equal to |A_r| = 2r(r!) + 1, divided by 2^{V(r)}. Thus,
1 − (1/|A_r|) Σ_{x∈A_r} f(x) = [|A_r|/2^{V(r)}]/|A_r| ≈ 1/2^{V(r)} = 1/2^{r − O(log r)}
This obviously tends to zero, proving E[f_r] = 1.
Last, we need to show E[g_v] = 1/3 in eq. 9, so that E[f_r] ≠ E[g_v], proving theorem 1.
Concerning the second case [9], it is again simpler to consider the complementary set of x ∈ B_v such that the denominator is odd when all possible factors of 2 are canceled. The "new" elements in B_v of this kind are those that have denominator d = 2p − 1 when written in lowest terms, and these obviously include all those we had for smaller v. Their number is equal to the number of κ < d with gcd(κ, d) = 1, which is given by Euler's φ function. Since we also consider negative fractions, we have to multiply this by 2. Including x = 0, we have G(v) = |{x ∈ B_v : f(x) = 0}| = 1 + 2 Σ_{0≤κ≤v/2} φ(2κ + 1). There is no simple explicit expression for this (cf. OEIS A099957 [12]), but we know that G(v) = 1 + 2·A099957(⌊v/2⌋) ≈ 2·8(v/2)^2/π^2 = 4v^2/π^2 [12]. On the other hand, the total number of all elements of B_v is |B_v| = 1 + 2 Σ_{1≤κ≤v} φ(κ), since each time we increase v by 1 we have the additional fractions with the new denominator d = v whose numerators are coprime with d, again with the sign + or −. From OEIS A002088 [10] we know that Σ_{1≤κ≤v} φ(κ) = 3v^2/π^2 + O(v log v), so |B_v| ≈ 6v^2/π^2, which finally gives (1/|B_v|) Σ_{x∈B_v} f(x) = (|B_v| − G(v))/|B_v| → (6 − 4)/6 = 1/3 as desired.
Hence, E[g_v] = 1/3 and E[f_r] ≠ E[g_v], proving thm. 1.
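As a brute-force sanity check on the two limits (ours, not part of the paper's argument; the ranges of r and v are kept small because the enumeration grows quickly, and the second average converges slowly), the following Python sketch averages f directly over A_r and over B_v:

```python
from fractions import Fraction
from math import factorial

def f(x: Fraction) -> int:
    # 1 iff x has an even denominator in lowest terms, else 0
    return 1 if x.denominator % 2 == 0 else 0

def avg_A(r: int) -> float:
    # A_r = { c/r! : c in Z, -r*r! <= c <= r*r! }
    fact = factorial(r)
    pts = [Fraction(c, fact) for c in range(-r * fact, r * fact + 1)]
    return sum(map(f, pts)) / len(pts)

def avg_B(v: int) -> float:
    # B_v = { c/d : c in Z, d in N, d <= v, -d*v <= c <= d*v }, taken as a set
    pts = {Fraction(c, d) for d in range(1, v + 1) for c in range(-d * v, d * v + 1)}
    return sum(map(f, pts)) / len(pts)

for r in (4, 5, 6):
    print("average over A_r, r =", r, ":", avg_A(r))   # tends to 1 (it equals roughly 1 - 2^{-V(r)})
for v in (10, 20, 30):
    print("average over B_v, v =", v, ":", avg_B(v))   # tends to 1/3 slowly
```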
Thus, consider:

2.4. Definition of Prevalent and Shy Sets

A Borel set E ⊆ X is said to be prevalent if there exists a Borel measure μ on X such that:
(1)
0 < μ(C) < ∞ for some compact subset C of X, and
(2)
the set E + x has full μ-measure (that is, the complement of E + x has measure zero) for all x ∈ X.
More generally, a subset F of X is prevalent if F contains a prevalent Borel set.
Moreover:
  • The complement of a prevalent set is a shy set.
Hence:
  • If F ⊆ X is prevalent, we say "almost every" element of X lies in F.
  • If F ⊆ X is shy, we say "almost no" element of X lies in F.

2.5. Motivation for Averaging §2.1 and §2.2

If E[f] is the expected value of f w.r.t. the Hausdorff measure in its dimension,
E[f] = (1/H^{dim_H(A)}(A)) ∫_A f dH^{dim_H(A)},
Consider the following problems:
Theorem 2. 
If F ⊆ R^A is the set of all f ∈ R^A where E[f] is finite, then F is shy (§2.4).
Note 3 
(Proof that theorem 2 is true). We follow the argument presented in Example 3.6 of [13]: take X := L^0(A) (measurable functions over A), let P denote the one-dimensional subspace of X consisting of constant functions (assuming the Lebesgue measure on A), and let F := L^0(A) ∖ L^1(A) (measurable functions over A without finite integral). Let λ_P denote the Lebesgue measure over P. For any fixed f ∈ F:
λ_P({α ∈ R : ∫_A (f + α) dμ < ∞}) = 0.
Since P is one-dimensional, F is a 1-prevalent set; hence its complement, the set of f with finite expected value, is shy.
Note 4 
(Way of Approaching Theorem 2). For all r ∈ N, suppose that f_r ∈ B(R^A) and A_r ∈ B(R^n). If F ⊆ R^A is the set of all f ∈ R^A where there exist f_r ∈ B(R^A) and A_r ∈ B(R^n) such that (f_r, A_r) → (f, A) and E[f_r] is finite (§2.3.2), then F should be prevalent (§2.4) or neither prevalent nor shy (§2.4).
Theorem 5. 
For all r, v ∈ N, suppose f_r, g_v ∈ B(R^A) and A_r, B_v ∈ B(R^n). When F ⊆ R^A is the set of all f ∈ R^A where (f_r, A_r), (g_v, B_v) → (f, A) and E[f_r] ≠ E[g_v], then F is prevalent (§2.4).
Note 6 
(Possible method of proving theorem 5 true). For all r, v ∈ N, suppose f_r, g_v ∈ B(R^A) and A_r, B_v ∈ B(R^n). Suppose Q ⊆ R^A is the set of all f ∈ R^A whose lines of symmetry intersect at one point, where if (f_r, A_r), (g_v, B_v) → (f, A), then E[f_r] = E[g_v]. In addition, Q′ ⊆ R^A is the set of symmetric f ∈ R^A, which clearly forms a shy subset of R^A. Since Q ⊆ Q′, we have proven that Q is also shy (i.e., a subset of a shy set is also shy). Since the complement of the shy set Q is prevalent, F = R^A ∖ Q is prevalent, such that for all f ∈ F, (f_r, A_r), (g_v, B_v) → (f, A) and E[f_r] ≠ E[g_v]. If this is correct, we have partially proven thm. 5.
Note 7 
(Way of Approaching Theorem 5). Suppose B ⊆ B(R^A) and B ⊆ B(R^n) are arbitrary sets, such that for all r ∈ N, f_r ∈ B and A_r ∈ B. If F ⊆ R^A is the set of all f ∈ R^A where (f_r, A_r) → (f, A) and E[f_r] is unique, then F should be prevalent (§2.4).
Since thm. 2 and 5 are true, we need to address both theorems at once with the following:

2.5.1. Approach

Suppose B ⊆ B(R^A) and B ⊆ B(R^n) are arbitrary sets, such that for all r ∈ N, f_r ∈ B and A_r ∈ B. If F ⊆ R^A is the collection of all f ∈ R^A where (f_r, A_r) → (f, A) and E[f_r] is unique, satisfying (§3) and finite, then F should be:
(1)
a prevalent (§2.4) subset of R^A
(2)
if not prevalent (§2.4), then neither a prevalent (§2.4) nor a shy (§2.4) subset of R^A.

3. Attempt to Define “Satisfying" in The Approach of §2.5.1

3.1. Leading Question

To define "satisfying" in the blockquote of §2.5.1, we ask the leading question:
Suppose there exist arbitrary sets B ⊆ B(R^A) and B ⊆ B(R^n), where for all r, v ∈ N:
(A)
f_r ∈ B and A_r ∈ B
(B)
f_v ∈ B(R^A) ∖ B and A_v ∈ B(R^n) ∖ B
(C)
( G r ) r N = ( graph ( f r ) ) r N is the sequence of the graph of each f r
(D)
□ is the logical symbol for “it’s necessary"
(E)
C is the chosen center point of R n + 1 (e.g., the origin)
(F)
E is the fixed, expected rate of expansion of ( G r ) r N w.r.t center point C: e.g., E = 1 3.1.C, §3.1.E)
(G)
E ( C , G r ) is the actual rate of expansion of ( G r ) r N w.r.t center point C (§3.1.C, §3.1.E, §5.4)
Does there exist a unique choice function, which chooses a unique set B ⊆ B(R^A) and B ⊆ B(R^n), where for all r ∈ N, f_r ∈ B and A_r ∈ B, such that:
(1)
(f_r, A_r) → (f, A) (§2.3.1)
(2)
For all v ∈ N, where f_v ∈ B(R^A) ∖ B and A_v ∈ B(R^n) ∖ B, assuming (f_v, A_v) → (f, A), the "measure" (§5.3.1, §5.3.2) of (G_r)_{r∈N} = (graph(f_r))_{r∈N} (§3.1.C) must increase at a rate linear or superlinear to that of (G_v)_{v∈N} = (graph(f_v))_{v∈N} (§3.1.C)
(3)
E [ f r ] is unique and finite (§2.3.2)
(4)
For some f_r ∈ B and A_r ∈ B satisfying (1), (2) and (3), when f is unbounded (i.e., skip (4) when f is bounded): for all sets B ⊆ B(R^A) and B ⊆ B(R^n) whose f_s and A_s (r ≠ s) satisfy (1), (2) and (3) but for which ¬□(E[f_r] = E[f_s]) (§2.3.1, §2.3.2, §3.1.D), note that for any s ∈ N, where f_s ∈ B and A_s ∈ B satisfy (1), (2) and (3):
  • If the absolute value is ||·|| and the (n+1)-th coordinate of C (§3.1.E) is x_{n+1}, then ||E[f_r] − x_{n+1}|| ≤ ||E[f_s] − x_{n+1}|| (§2.3.1, §2.3.2)
  • If r ∈ N, then for all linear s_1 : N → N, where s = s_1(r) and the Big-O notation is O, there exists a function K : R → R, where (§3.1.F–G):
    ||E(C, G_r) − E|| = O(K(||E(C, G_s) − E||)) = O(K(||E(C, G_{s_1(r)}) − E||))
    such that:
    0 ≤ lim_{x→+∞} K(x)/x < +∞
    In simpler terms, "the rate of divergence" of ||E(C, G_r) − E|| (§3.1.F–G) is less than or equal to "the rate of divergence" of ||E(C, G_s) − E|| (§3.1.F–G).
(5)
When the set F ⊆ R^A is the set of all f ∈ R^A where a choice function chooses a collection B ⊆ B(R^A) and B ⊆ B(R^n), such that f_r ∈ B and A_r ∈ B satisfy (1), (2), (3) and (4), then F should be:
(a)
a prevalent (§2.4) subset of R^A
(b)
if not (5a), then neither a prevalent (§2.4) nor a shy (§2.4) subset of R^A
(6)
Out of all choice functions which satisfy (1), (2), (3), (4) and (5), we choose the one with the simplest form, meaning for each choice function fully expanded, we take the one with the fewest variables/numbers?
(In case this is unclear, see §5.)
We suspect E[f_r] in §3.1 crit. 3 isn't unique or satisfying enough to answer the approach of §2.5.1. Still, adjustments are possible by changing the criteria or by adding new criteria to the question.

4. Question Regarding My Work

Most don’t have time to address everything in my research, hence I ask the following:
Is there a research paper which already solves the ideas I'm working on? (Non-published papers, such as mine [7], don't count.)

5. Clarifying §3

Revisit §3.1 after reading §5, and consider the following:
Is there a simpler version of the definitions below?

5.1. Example of Sequences of Bounded Functions Converging to f (§2.3.1)

The sequence of bounded functions (f_r)_{r∈N}, where f_r : A_r → R and (A_r)_{r∈N} is a sequence of bounded sets, converges to Borel f : A ⊆ R^n → R when:
For any (x_1, ⋯, x_n) ∈ A, there exists a sequence of points x^(r) ∈ A_r such that x^(r) → (x_1, ⋯, x_n) and f_r(x^(r)) → f(x_1, ⋯, x_n) (see [5] for info).
Example 
(Example of §2.3.1). If A = R ∖ {0} and f : A → R, where f(x) = 1/x, then an example of (f_r)_{r∈N}, such that f_r : A_r → R, is:
(1)
(A_r)_{r∈N} = ([−r, −1/r] ∪ [1/r, r])_{r∈N}
(2)
f_r(x) = 1/x for x ∈ A_r
Example 
(More Complex Example). If A = R and f : A → R, where f(x) = x, then an example of (f_r)_{r∈N}, such that f_r : A_r → R, is:
(1)
(A_r)_{r∈N} = ([−r, r])_{r∈N}
(2)
f_r(x) = x + (1/r)sin(x) for x ∈ A_r
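A quick numerical illustration (ours, not the paper's): for a fixed x, once r is large enough that x ∈ A_r = [−r, r], the values f_r(x) = x + (1/r)sin(x) approach f(x) = x:

```python
import math

def f_r(r: int, x: float) -> float:
    # f_r is only defined on A_r = [-r, r]
    assert -r <= x <= r
    return x + math.sin(x) / r

x = 2.0
for r in (2, 10, 100, 1000):
    print(r, f_r(r, x), abs(f_r(r, x) - x))   # the error shrinks like |sin(x)|/r
```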

5.2. Expected Value of Bounded Sequence of Functions

The expected value of ( f r ) r N is a real number E [ f r ] , when the following is true:
(∀ε > 0)(∃N ∈ N)(∀r ∈ N)(r ≥ N ⇒ ||(1/H^{dim_H(A_r)}(A_r)) ∫_{A_r} f_r dH^{dim_H(A_r)} − E[f_r]|| < ε)
otherwise when no such E [ f r ] exists, E [ f r ] is infinite or undefined.

5.2.1. Example

Using the first example of §5.1, when (f_r)_{r∈N} = ({(x, 1/x) : x ∈ [−r, −1/r] ∪ [1/r, r]})_{r∈N}, where:
(1)
(A_r)_{r∈N} = ([−r, −1/r] ∪ [1/r, r])_{r∈N}
(2)
f_r(x) = 1/x for x ∈ A_r
If we assume E [ f r ] = 0 :
(∀ε > 0)(∃N ∈ N)(∀r ∈ N)(r ≥ N ⇒ ||(1/H^{dim_H(A_r)}(A_r)) ∫_{A_r} f_r dH^{dim_H(A_r)} − E[f_r]|| < ε) ⇔
(∀ε > 0)(∃N ∈ N)(∀r ∈ N)(r ≥ N ⇒
||(1/H^{dim_H([−r,−1/r]∪[1/r,r])}([−r, −1/r] ∪ [1/r, r])) ∫_{[−r,−1/r]∪[1/r,r]} (1/x) dH^{dim_H([−r,−1/r]∪[1/r,r])} − 0|| < ε) ⇔
(∀ε > 0)(∃N ∈ N)(∀r ∈ N)(r ≥ N ⇒ ||(1/H^1([−r, −1/r] ∪ [1/r, r])) ∫_{[−r,−1/r]∪[1/r,r]} (1/x) dH^1|| < ε) ⇔
(∀ε > 0)(∃N ∈ N)(∀r ∈ N)(r ≥ N ⇒ ||(1/((−1/r − (−r)) + (r − 1/r))) (∫_{−r}^{−1/r} (1/x) dx + ∫_{1/r}^{r} (1/x) dx)|| < ε) ⇔
(∀ε > 0)(∃N ∈ N)(∀r ∈ N)(r ≥ N ⇒ (1/((r − 1/r) + (−1/r + r))) (||ln(||x||) + C |_{−r}^{−1/r}|| + ||ln(||x||) + C |_{1/r}^{r}||) < ε) ⇔
(∀ε > 0)(∃N ∈ N)(∀r ∈ N)(r ≥ N ⇒ (1/((r − 1/r) + (−1/r + r))) (||ln(||−1/r||) − ln(||−r||)|| + ||ln(||r||) − ln(||1/r||)||) < ε) ⇔
(∀ε > 0)(∃N ∈ N)(∀r ∈ N)(r ≥ N ⇒ (1/(2r − 2/r)) · 4 ln(r) < ε)
To prove eq. 19 is true, recall that for r ≥ 1 (since r/2 − 1/(2r) − ln(r) equals 0 at r = 1 and is non-decreasing for r ≥ 1):
r ≤ e^{r/2}/e^{1/(2r)}
r ≤ e^{r/2 − 1/(2r)}
ln(r) ≤ r/2 − 1/(2r)
4 ln(r) ≤ 2r − 2/r
Moreover, since ln(r)/(r − 1/r) → 0 as r → ∞, for every ε > 0 there exists N such that for all r ≥ N:
4 ln(r) < ε(2r − 2/r)
4 ln(r)/(2r − 2/r) < ε
Since eq. 19 is true, E[f_r] = 0. Note, if we simply took the average of f over (−∞, +∞) using the improper integral, the expected value
lim_{(x_1,x_2,x_3,x_4)→(−∞,0⁻,0⁺,+∞)} (1/((x_4 − x_3) + (x_2 − x_1))) (∫_{x_1}^{x_2} (1/x) dx + ∫_{x_3}^{x_4} (1/x) dx) =
lim_{(x_1,x_2,x_3,x_4)→(−∞,0⁻,0⁺,+∞)} (1/((x_4 − x_3) + (x_2 − x_1))) (ln(||x||) + C |_{x_1}^{x_2} + ln(||x||) + C |_{x_3}^{x_4}) =
lim_{(x_1,x_2,x_3,x_4)→(−∞,0⁻,0⁺,+∞)} (1/((x_4 − x_3) + (x_2 − x_1))) (ln(||x_2||) − ln(||x_1||) + ln(||x_4||) − ln(||x_3||))
is +∞ (when x_2 = 1/x_1, x_3 = 1/x_4, and x_1 = exp(x_4^2)) or −∞ (when x_2 = 1/x_1, x_3 = 1/x_4, and x_4 = exp(x_1^2)), making E[f] undefined. (However, using eq. 12–19, we get E[f_r] = 0 instead of an undefined value.)
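The following short Python check (ours) evaluates the truncated average numerically with a midpoint rule and prints the upper bound 4 ln(r)/(2r − 2/r) next to it; both tend to 0 as r grows:

```python
import math

def truncated_average(r: float, n: int = 100_000) -> float:
    """Midpoint-rule average of 1/x over [-r, -1/r] U [1/r, r]."""
    h = (r - 1 / r) / n
    pos = sum(1 / (1 / r + (i + 0.5) * h) for i in range(n)) * h   # integral over [1/r, r]
    neg = sum(1 / (-r + (i + 0.5) * h) for i in range(n)) * h      # integral over [-r, -1/r]
    return (pos + neg) / (2 * (r - 1 / r))

for r in (2, 10, 100):
    # the average itself, then the bound 4 ln(r) / (2r - 2/r); both tend to 0
    print(r, truncated_average(r), 4 * math.log(r) / (2 * r - 2 / r))
```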

5.3. Defining the “Measure"

5.3.1. Preliminaries

We define the “measure" of ( G r ) r N , in §5.3.2, which is the sequence of the graph of each f r 2.3.1). To understand this “measure", continue reading.
(1)
For every r ∈ N, "over-cover" G_r with minimal, pairwise disjoint sets of equal H^{dim_H(G_r)} measure. (We denote the equal measure ε and the resulting cover C(ε, G_r, ω): i.e., ω ∈ Ω_{ε,r} enumerates all collections of these sets covering G_r. In case this step is unclear, see §Appendix 8.1.)
(2)
For every ε , r and ω , take a sample point from each set in C ( ε , G r , ω ) . The set of these points is “the sample" which we define S ( C ( ε , G r , ω ) , ψ ) : i.e., ψ Ψ ε , r , ω enumerates all possible samples of C ( ε , G r , ω ) . (If this is unclear, see §Appendix 8.2.)
(3)
For every ε , r, ω and ψ ,
(a)
Take a “pathway" of line segments: we start with a line segment from arbitrary point x 0 of S ( C ( ε , G r , ω ) , ψ ) to the sample point with the smallest ( n + 1 ) -dimensional Euclidean distance to x 0 (i.e., when more than one sample point has the smallest ( n + 1 ) -dimensional Euclidean distance to x 0 , take either of those points). Next, repeat this process until the “pathway" intersects with every sample point once. (In case this is unclear, see §Appendix 8.3.1.)
(b)
Take the set of the length of all segments in (3a), except for lengths that are outliers (i.e., for any constant C > 0 , the outliers are more than C times the interquartile range of the length of all line segments as ε 0 ). Define this L ( x 0 , S ( C ( ε , G r , ω ) , ψ ) ) . (If this is unclear, see §Appendix 8.3.2.)
(c)
Multiply remaining lengths in the pathway by a constant so they add up to one (i.e., a probability distribution). This will be denoted P ( L ( x 0 , S ( C ( ε , G r , ω ) , ψ ) ) ) . (In case this is unclear, see §Appendix 8.3.3)
(d)
Take the Shannon entropy [8] of step (3c). We define this:
E(P(L(x_0, S(C(ε, G_r, ω), ψ)))) = −Σ_{x∈P(L(x_0, S(C(ε, G_r, ω), ψ)))} x log_2(x)
which will be shortened to E ( L ( x 0 , S ( C ( ε , G r , ω ) , ψ ) ) ) . (If this is unclear, see §Appendix 8.3.4.)
(e)
Maximize the entropy w.r.t all "pathways". This we will denote:
E(L(S(C(ε, G_r, ω), ψ))) = sup_{x_0 ∈ S(C(ε, G_r, ω), ψ)} E(L(x_0, S(C(ε, G_r, ω), ψ)))
(In case this is unclear, see §8.3.5.)
(4)
Therefore, the maximum entropy, using (1) and (2) is:
E_max(ε, r) = sup_{ω∈Ω_{ε,r}} sup_{ψ∈Ψ_{ε,r,ω}} E(L(S(C(ε, G_r, ω), ψ)))
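Steps (1)–(2) are geometric (covering and sampling); once a finite sample of points has been extracted, steps (3a)–(3e) are purely combinatorial. The sketch below is our Python rendering of those combinatorial steps under stated assumptions (the Tukey-fence constant C = 1.5, the "inclusive" quartile convention, and the toy sample are our choices); it is an illustration, not the paper's Mathematica implementation:

```python
import math
import statistics

def pathway_lengths(sample, start):
    """Step 3a: greedy nearest-neighbour pathway; returns the segment lengths."""
    pts = list(sample)
    path = [start]
    pts.remove(start)
    lengths = []
    while pts:
        nxt = min(pts, key=lambda p: math.dist(path[-1], p))  # ties: min() just picks one
        lengths.append(math.dist(path[-1], nxt))
        path.append(nxt)
        pts.remove(nxt)
    return lengths

def drop_outliers(lengths, C=1.5):
    """Step 3b: drop lengths more than C * IQR above the third quartile (Tukey fence)."""
    q1, _, q3 = statistics.quantiles(lengths, n=4, method='inclusive')
    return [x for x in lengths if x <= q3 + C * (q3 - q1)]

def entropy_of_pathway(lengths):
    """Steps 3c-3d: normalise the lengths to a probability distribution, take Shannon entropy."""
    total = sum(lengths)
    return -sum((x / total) * math.log2(x / total) for x in lengths)

def measure_entropy(sample, C=1.5):
    """Step 3e: maximise the entropy over all starting points x0 of the pathway."""
    return max(entropy_of_pathway(drop_outliers(pathway_lengths(sample, x0), C))
               for x0 in sample)

# Tiny usage example on 17 grid points of the graph of y = x^2 over [-2, 2]:
sample = [(k / 4, (k / 4) ** 2) for k in range(-8, 9)]
print(round(measure_entropy(sample), 3))
```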

5.3.2. What Am I Measuring?

We define (G_r)_{r∈N} and (G_v)_{v∈N}, which respectively are the sequences of graphs of the bounded functions f_r and f_v (§2.3.1). Hence, for constant ε and cardinality |·|:
(a)
Also, using (2) and (3e) of 5.3.1, suppose:
S̄(C(ε, G_r, ω), ψ) = inf{|S(C(ε, G_v, ω), ψ)| : v ∈ N, ω ∈ Ω_{ε,v}, ψ ∈ Ψ_{ε,v,ω}, E(L(S(C(ε, G_v, ω), ψ))) ≥ E(L(S(C(ε, G_r, ω), ψ)))}
then (using S̄(C(ε, G_r, ω), ψ)) we also get:
ᾱ_{ε,r,ω,ψ} = S̄(C(ε, G_r, ω), ψ) / |S(C(ε, G_r, ω), ψ)|
(b)
Using (2) and (3e) of §5.3.1, suppose:
S̲(C(ε, G_r, ω), ψ) = sup{|S(C(ε, G_v, ω), ψ)| : v ∈ N, ω ∈ Ω_{ε,v}, ψ ∈ Ψ_{ε,v,ω}, E(L(S(C(ε, G_v, ω), ψ))) ≤ E(L(S(C(ε, G_r, ω), ψ)))}
then (using S̲(C(ε, G_r, ω), ψ)) we get:
α̲_{ε,r,ω,ψ} = S̲(C(ε, G_r, ω), ψ) / |S(C(ε, G_r, ω), ψ)|
(1)
If, using ᾱ_{ε,r,ω,ψ} and α̲_{ε,r,ω,ψ}, we have:
1 < lim sup_{ε→0} lim sup_{r→∞} sup_{ω∈Ω_{ε,r}} sup_{ψ∈Ψ_{ε,r,ω}} ᾱ_{ε,r,ω,ψ} , lim inf_{ε→0} lim inf_{r→∞} inf_{ω∈Ω_{ε,r}} inf_{ψ∈Ψ_{ε,r,ω}} α̲_{ε,r,ω,ψ} < +∞
then what I’m measuring from  ( G r ) r N increases at a rate superlinear to that of ( G v ) v N .
(2)
If, using the equations for ᾱ_{ε,v,ω,ψ} and α̲_{ε,v,ω,ψ} (where, in ᾱ_{ε,r,ω,ψ} and α̲_{ε,r,ω,ψ}, we swap r with v and G_r with G_v), we get:
1 < lim sup_{ε→0} lim sup_{v→∞} sup_{ω∈Ω_{ε,v}} sup_{ψ∈Ψ_{ε,v,ω}} ᾱ_{ε,v,ω,ψ} , lim inf_{ε→0} lim inf_{v→∞} inf_{ω∈Ω_{ε,v}} inf_{ψ∈Ψ_{ε,v,ω}} α̲_{ε,v,ω,ψ} < +∞
then what I’m measuring from  ( G r ) r N increases at a rate sublinear to that of ( G v ) v N .  
(3)
If using equations α ¯ ε , r , ω , ψ , α ̲ ε , r , ω , ψ , α ¯ ε , v , ω , ψ , and α ̲ ε , v , ω , ψ , we both have:  
(a)
lim sup_{ε→0} lim sup_{r→∞} sup_{ω∈Ω_{ε,r}} sup_{ψ∈Ψ_{ε,r,ω}} ᾱ_{ε,r,ω,ψ} or lim inf_{ε→0} lim inf_{r→∞} inf_{ω∈Ω_{ε,r}} inf_{ψ∈Ψ_{ε,r,ω}} α̲_{ε,r,ω,ψ} are equal to zero, one or +∞
(b)
lim sup_{ε→0} lim sup_{v→∞} sup_{ω∈Ω_{ε,v}} sup_{ψ∈Ψ_{ε,v,ω}} ᾱ_{ε,v,ω,ψ} or lim inf_{ε→0} lim inf_{v→∞} inf_{ω∈Ω_{ε,v}} inf_{ψ∈Ψ_{ε,v,ω}} α̲_{ε,v,ω,ψ} are equal to zero, one or +∞
then what I’m measuring from  ( G r ) r N increases at a rate linear to that of ( G v ) v N .

5.3.3. Example of The “Measure" of ( G r ) Increasing at Rate Super-Linear to That of ( G v )

Suppose we have a function f : A → R, where A = Q ∩ [0, 1], and:
f(x) = 1 if x ∈ {(2s+1)/(2t) : s ∈ Z, t ∈ N, t ≠ 0} ∩ [0, 1], and f(x) = 0 if x ∉ {(2s+1)/(2t) : s ∈ Z, t ∈ N, t ≠ 0} ∩ [0, 1],
such that:
(A_r)_{r∈N} = ({c/r! : c ∈ Z, 0 ≤ c ≤ r!})_{r∈N}
and
(A_v)_{v∈N} = ({c/d : c ∈ Z, d ∈ N, d ≤ v, 0 ≤ c ≤ d})_{v∈N}
where for f_r : A_r → R,
f_r(x) = f(x) for all x ∈ A_r
and f_v : A_v → R,
f_v(x) = f(x) for all x ∈ A_v
Hence, when ( G r ) r N is:
(G_r)_{r∈N} = ({(x, f(x)) : x ∈ {c/r! : c ∈ Z, 0 ≤ c ≤ r!}})_{r∈N}
and ( G v ) v N is:
(G_v)_{v∈N} = ({(x, f(x)) : x ∈ {c/d : c ∈ Z, d ∈ N, d ≤ v, 0 ≤ c ≤ d}})_{v∈N}
Note, the following:
 
Since ε > 0 and A = Q ∩ [0, 1] is countably infinite, there exists a minimum ε, which is 1. Therefore, we don't need ε → 0. We also maximize E(L(S(C(ε, G_r, ω), ψ))) (§5.3.1 step 3e) by the following procedure:
(1)
For every r ∈ N, group the x-coordinates of G_r into elements with an even denominator when simplified: i.e.,
x ∈ {(2s+1)/(2t) : s ∈ Z, t ∈ N, t ≠ 0}
which we call S_{1,r}, and group the x-coordinates of G_r into elements with an odd denominator when simplified: i.e.,
x ∈ Q ∖ {(2s+1)/(2t) : s ∈ Z, t ∈ N, t ≠ 0}
which we call S_{2,r}
(2)
Arrange the points in S_{1,r} from least to greatest and take the 2-d Euclidean distance between each pair of consecutive points in S_{1,r}. In this case, since all these points lie on y = 1, take the absolute difference between consecutive x-coordinates of S_{1,r}, then call this D_{1,r}. (Note, this is similar to §5.3.1 step 3a.)
(3)
Repeat step (2) for S_{2,r}, then call this D_{2,r}. (Note, all points of S_{2,r} lie on y = 0.)
(4)
Remove any outliers from D_r = D_{1,r} ∪ D_{2,r} ∪ {d(((r!−1)/r!, 1), (1, 0))} (i.e., d is the 2-d Euclidean distance between the points ((r!−1)/r!, 1) and (1, 0)). Note, in this case, D_{2,r} and {d(((r!−1)/r!, 1), (1, 0))} should be outliers (i.e., for any C > 0, the lengths in D_{2,r} are more than C times the interquartile range of the lengths of D_r), leaving us with D_{1,r}.
(5)
Multiply the remaining lengths in the pathway by a constant so they add up to one. (See P[r] of code 1 for an example)
(6)
Take the entropy of the probability distribution. (See entropy[r] of code 1 for an example.)
We can illustrate this process with the following code:
[Codes 1 and 2: Mathematica listings, rendered as images in the original preprint.]
Taking Table[{r,entropy[r]},{r,3,8}], we get:
[Table of {r, entropy[r]} for r = 3, …, 8, rendered as an image in the original preprint.]
and notice when:
(1)
c(r) = (r!)/2 − 1
(2)
b(4) = 9, b(5) = 45, b(6) = 315, b(7) = 2205, b(8) = 19845
(3)
a ( r ) + b ( r ) = c ( r )
the output of code 2 can be defined:
a(r)log_2(c(r))/c(r) + b(r)log_2(2c(r))/c(r) = [a(r)log_2(c(r)) + b(r)log_2(2c(r))]/c(r)
Hence, since a(r) = c(r) − b(r) = (r!)/2 − 1 − b(r):
[a(r)log_2(c(r)) + b(r)log_2(2c(r))]/c(r) =
[(r!/2 − 1 − b(r))log_2(c(r)) + b(r)log_2(2c(r))]/c(r) =
[(r!/2)log_2(c(r)) − log_2(c(r)) − b(r)log_2(c(r)) + b(r)log_2(c(r)) + b(r)log_2(2)]/c(r) =
[(r!/2)log_2(c(r)) − log_2(c(r)) + b(r)]/c(r) =
[(r!/2 − 1)log_2(c(r)) + b(r)]/c(r) =
[(r!/2 − 1)log_2(r!/2 − 1) + b(r)]/(r!/2 − 1) =
log_2(r!/2 − 1) + b(r)/(r!/2 − 1)
and, since lim_{r→∞} b(r)/c(r) = 1 (I need help proving this):
log_2(r!/2 − 1) + b(r)/(r!/2 − 1) ≈ log_2(r!/2 − 1) + 1
= log_2(r!/2 − 1) + log_2(2)
= log_2(2(r!/2 − 1))
= log_2(r! − 2) ≈ log_2(r!)
Hence, entropy[r] is the same as:
E(L(S(C(1, G_r, ω), ψ))) ≈ log_2(r!)
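Since the original Mathematica listings are reproduced only as images in the preprint, here is a rough Python re-implementation of steps (1)–(6) above (ours; the Tukey-fence constant C = 1.5 and the quartile convention are our choices, so the exact values may differ slightly from the preprint's table). The printed entropies should grow like log_2(r!), consistent with the ≈ log_2(r!) estimate above:

```python
import math
import statistics
from fractions import Fraction
from math import factorial

def f(x: Fraction) -> int:
    # 1 iff x has an even denominator in lowest terms, else 0
    return 1 if x.denominator % 2 == 0 else 0

def entropy_r(r: int, C: float = 1.5) -> float:
    fact = factorial(r)
    xs = [Fraction(c, fact) for c in range(fact + 1)]        # A_r = {c/r! : 0 <= c <= r!}
    s1 = sorted(x for x in xs if f(x) == 1)                  # x-coordinates of points on y = 1
    s2 = sorted(x for x in xs if f(x) == 0)                  # x-coordinates of points on y = 0
    d1 = [float(b - a) for a, b in zip(s1, s1[1:])]          # consecutive gaps within S_{1,r}
    d2 = [float(b - a) for a, b in zip(s2, s2[1:])]          # consecutive gaps within S_{2,r}
    cross = math.dist((float(s1[-1]), 1.0), (1.0, 0.0))      # d(((r!-1)/r!, 1), (1, 0))
    d = d1 + d2 + [cross]
    q1, _, q3 = statistics.quantiles(d, n=4, method='inclusive')
    kept = [x for x in d if x <= q3 + C * (q3 - q1)]         # step 4: remove outliers
    total = sum(kept)
    probs = [x / total for x in kept]                        # step 5: normalise to sum to one
    return -sum(p * math.log2(p) for p in probs)             # step 6: Shannon entropy

for r in range(3, 8):
    print(r, round(entropy_r(r), 3), round(math.log2(factorial(r)), 3))
```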
Now, repeat code 1 with:
(A_v)_{v∈N} = ({c/d : c ∈ Z, d ∈ N, d ≤ v, 0 ≤ c ≤ d})_{v∈N}
[Modified Mathematica listing, rendered as an image in the original preprint.]
Using this post [15], we assume an approximation of Table[entropy[v],{v,3,Infinity}], or E(L(S(C(1, G_v, ω), ψ))), is:
E(L(S(C(1, G_v, ω), ψ))) ≈ 2log_2(v) + 1 − log_2(3π)
Hence, using §5.3.2 (a) and §5.3.2 (2), take |S(C(ε, G_v, ω), ψ)| = Σ_{M=1}^{v} φ(M) ≈ (3/π^2)v^2 (where φ is Euler's totient function) to compute the following:
S̄(C(ε, G_r, ω), ψ) = inf{|S(C(ε, G_v, ω), ψ)| : v ∈ N, ω ∈ Ω_{ε,v}, ψ ∈ Ψ_{ε,v,ω}, E(L(S(C(ε, G_v, ω), ψ))) ≥ E(L(S(C(ε, G_r, ω), ψ)))} = inf{(3/π^2)v^2 : v ∈ N, ω ∈ Ω_{ε,v}, ψ ∈ Ψ_{ε,v,ω}, 2log_2(v) + 1 − log_2(3π) ≥ log_2(r!)} =
where:
(1)
For every r ∈ N, we find a v ∈ N where 2log_2(v) + 1 − log_2(3π) ≥ log_2(r!), but the absolute value of 2log_2(v) + 1 − log_2(3π) − log_2(r!) is minimized. In other words,
for every r ∈ N, we want v ∈ N where:
2log_2(v) + 1 − log_2(3π) ≈ log_2(r!)
2^{2log_2(v)} ≈ 2^{log_2(r!) − 1 + log_2(3π)}
(2^{log_2(v)})^2 ≈ r! · (1/2) · 3π
v^2 ≈ 3π·r!/2
v ≈ √(3π·r!/2)
v = ⌈√(3π·r!/2)⌉
(3/π^2)v^2 = (3/π^2)⌈√(3π·r!/2)⌉^2 ≈ S̄(C(1, G_r, ω), ψ)
Finally, since |S(C(1, G_r, ω), ψ)| = r!, we wish to prove
1 < lim sup_{ε→0} lim sup_{r→∞} sup_{ω∈Ω_{ε,r}} sup_{ψ∈Ψ_{ε,r,ω}} ᾱ_{ε,r,ω,ψ} < +∞
within §5.3.2 crit. 1:
lim sup_{ε→0} lim sup_{r→∞} sup_{ω∈Ω_{ε,r}} sup_{ψ∈Ψ_{ε,r,ω}} ᾱ_{ε,r,ω,ψ} = lim sup_{r→∞} sup_{ω∈Ω_{1,r}} sup_{ψ∈Ψ_{1,r,ω}} S̄(C(1, G_r, ω), ψ) / |S(C(1, G_r, ω), ψ)|
= lim_{r→∞} (3/π^2)⌈√(3π·r!/2)⌉^2 / r!
where, using Mathematica, we get that the limit is greater than one:
[Mathematica evaluation, rendered as an image in the original preprint.]
Also, using §5.3.2 (b) and §5.3.2 (1), take |S(C(ε, G_v, ω), ψ)| = Σ_{M=1}^{v} φ(M) ≈ (3/π^2)v^2 (where φ is Euler's totient function) to compute the following:
S̲(C(ε, G_r, ω), ψ) = sup{|S(C(ε, G_v, ω), ψ)| : v ∈ N, ω ∈ Ω_{ε,v}, ψ ∈ Ψ_{ε,v,ω}, E(L(S(C(ε, G_v, ω), ψ))) ≤ E(L(S(C(ε, G_r, ω), ψ)))} = sup{(3/π^2)v^2 : v ∈ N, ω ∈ Ω_{ε,v}, ψ ∈ Ψ_{ε,v,ω}, 2log_2(v) + 1 − log_2(3π) ≤ log_2(r!)} =
where:
(1)
For every r ∈ N, we find a v ∈ N where 2log_2(v) + 1 − log_2(3π) ≤ log_2(r!), but the absolute value of log_2(r!) − (2log_2(v) + 1 − log_2(3π)) is minimized. In other words, for every r ∈ N, we want v ∈ N where:
2log_2(v) + 1 − log_2(3π) ≈ log_2(r!)
2^{2log_2(v)} ≈ 2^{log_2(r!) − 1 + log_2(3π)}
v^2 ≈ 3π·r!/2
v ≈ √(3π·r!/2)
v = ⌊√(3π·r!/2)⌋
(3/π^2)v^2 = (3/π^2)⌊√(3π·r!/2)⌋^2 ≈ S̲(C(1, G_r, ω), ψ)
Finally, since |S(C(1, G_r, ω), ψ)| = r!, we wish to prove
1 < lim inf_{ε→0} lim inf_{r→∞} inf_{ω∈Ω_{ε,r}} inf_{ψ∈Ψ_{ε,r,ω}} α̲_{ε,r,ω,ψ} < +∞
within §5.3.2 crit. 1:
lim inf_{ε→0} lim inf_{r→∞} inf_{ω∈Ω_{ε,r}} inf_{ψ∈Ψ_{ε,r,ω}} α̲_{ε,r,ω,ψ} = lim inf_{r→∞} inf_{ω∈Ω_{1,r}} inf_{ψ∈Ψ_{1,r,ω}} S̲(C(1, G_r, ω), ψ) / |S(C(1, G_r, ω), ψ)|
= lim_{r→∞} (3/π^2)⌊√(3π·r!/2)⌋^2 / r!
where, using Mathematica, we get that the limit is greater than one:
[Mathematica evaluation, rendered as an image in the original preprint.]
Hence, since the limits in eq. 70 and eq. 60 are greater than one and less than +∞, i.e.,
1 < lim inf_{ε→0} lim inf_{r→∞} inf_{ω∈Ω_{ε,r}} inf_{ψ∈Ψ_{ε,r,ω}} α̲_{ε,r,ω,ψ} = lim sup_{ε→0} lim sup_{r→∞} sup_{ω∈Ω_{ε,r}} sup_{ψ∈Ψ_{ε,r,ω}} ᾱ_{ε,r,ω,ψ} < +∞,
what we're measuring from (G_r)_{r∈N} increases at a rate superlinear to that of (G_v)_{v∈N} (i.e., §5.3.2 crit. 1).

5.3.4. Example of The “Measure" from ( G r ) r N Increasing at a Rate Sub-Linear to That of ( G v ) v N

Using our previous example, we can use the following theorem:
Theorem 8. 
If what we're measuring from (G_r)_{r∈N} increases at a rate superlinear to that of (G_v)_{v∈N}, then what we're measuring from (G_v)_{v∈N} increases at a rate sublinear to that of (G_r)_{r∈N}.
Hence, in our definition of superlinear (§5.3.2 crit. 1), swap G_r for G_v and v ∈ N for r ∈ N regarding ᾱ_{ε,r,ω,ψ} and α̲_{ε,r,ω,ψ} (i.e., use ᾱ_{ε,v,ω,ψ} and α̲_{ε,v,ω,ψ}) and notice thm. 8 is true when:
1 < lim sup_{ε→0} lim sup_{v→∞} sup_{ω∈Ω_{ε,v}} sup_{ψ∈Ψ_{ε,v,ω}} ᾱ_{ε,v,ω,ψ} , lim inf_{ε→0} lim inf_{v→∞} inf_{ω∈Ω_{ε,v}} inf_{ψ∈Ψ_{ε,v,ω}} α̲_{ε,v,ω,ψ} < +∞

5.3.5. Example of The “Measure" from ( G r ) r N Increasing at a Rate Linear to That of ( G v ) v N

Suppose we have a function f : A → R, where A = Q ∩ [0, 1], and:
f(x) = 1 if x ∈ {(2s+1)/(2t) : s ∈ Z, t ∈ N, t ≠ 0} ∩ [0, 1], and f(x) = 0 if x ∉ {(2s+1)/(2t) : s ∈ Z, t ∈ N, t ≠ 0} ∩ [0, 1],
such that:
(A_r)_{r∈N} = ({c/r! : c ∈ Z, 0 ≤ c ≤ r!})_{r∈N}
and
(A_v)_{v∈N} = ({c/(v!)^2 : c ∈ N, 1 ≤ c ≤ (v!)^2})_{v∈N}
where for f_r : A_r → R,
f_r(x) = f(x) for all x ∈ A_r
and f_v : A_v → R,
f_v(x) = f(x) for all x ∈ A_v
Hence, when ( G r ) r N is:
(G_r)_{r∈N} = ({(x, f(x)) : x ∈ {c/r! : c ∈ Z, 0 ≤ c ≤ r!}})_{r∈N}
and ( G v ) v N is:
(G_v)_{v∈N} = ({(x, f(x)) : x ∈ {c/(v!)^2 : c ∈ Z, 0 ≤ c ≤ (v!)^2}})_{v∈N}
We already know, using eq. 50:
E(L(S(C(1, G_r, ω), ψ))) ≈ log_2(r! − 2) ≈ log_2(r!)
Also, using §5.3.3 steps 1–6 on (A_v)_{v∈N}:
[Mathematica listing, rendered as an image in the original preprint.]
where the output is
[Output table, rendered as an image in the original preprint.]
Notice when:
(1)
c(v) = (v!)^2/2 − 1
(2)
b(4) = 9, b(5) = 279, b(6) = 6975, b(7) = 257175, b(8) = 19845
(3)
a ( v ) + b ( v ) = c ( v )
the output of code 7 can be defined:
a(v)log_2(c(v))/c(v) + b(v)log_2(2c(v))/c(v) = [a(v)log_2(c(v)) + b(v)log_2(2c(v))]/c(v)
Hence, since a(v) = c(v) − b(v) = (v!)^2/2 − 1 − b(v):
[a(v)log_2(c(v)) + b(v)log_2(2c(v))]/c(v) =
[((v!)^2/2 − 1 − b(v))log_2(c(v)) + b(v)log_2(2c(v))]/c(v) =
[((v!)^2/2)log_2(c(v)) − log_2(c(v)) − b(v)log_2(c(v)) + b(v)log_2(c(v)) + b(v)log_2(2)]/c(v) =
[((v!)^2/2)log_2(c(v)) − log_2(c(v)) + b(v)]/c(v) =
[((v!)^2/2 − 1)log_2(c(v)) + b(v)]/c(v) =
[((v!)^2/2 − 1)log_2((v!)^2/2 − 1) + b(v)]/((v!)^2/2 − 1) =
log_2((v!)^2/2 − 1) + b(v)/((v!)^2/2 − 1)
and, since lim_{v→∞} b(v)/c(v) = 1 (this is proven in [16]):
log_2((v!)^2/2 − 1) + b(v)/((v!)^2/2 − 1) ≈ log_2((v!)^2/2 − 1) + 1
= log_2((v!)^2/2 − 1) + log_2(2)
= log_2((v!)^2 − 2)
≈ log_2((v!)^2)
= 2log_2(v!)
Hence, entropy[v] is the same as:
E(L(S(C(1, G_v, ω), ψ))) ≈ 2log_2(v!)
Therefore, using §5.3.2 (b) and §5.3.2 (a), take |S(C(ε, G_v, ω), ψ)| = (v!)^2 to compute the following:
S̲(C(ε, G_r, ω), ψ) = sup{|S(C(ε, G_v, ω), ψ)| : v ∈ N, ω ∈ Ω_{ε,v}, ψ ∈ Ψ_{ε,v,ω}, E(L(S(C(ε, G_v, ω), ψ))) ≤ E(L(S(C(ε, G_r, ω), ψ)))} = sup{(v!)^2 : v ∈ N, ω ∈ Ω_{ε,v}, ψ ∈ Ψ_{ε,v,ω}, 2log_2(v!) ≤ log_2(r!)} =
where:
(1)
For every r ∈ N, we find a v ∈ N where 2log_2(v!) ≤ log_2(r!), but the absolute value of log_2(r!) − 2log_2(v!) is minimized. In other words, for every r ∈ N, we want v ∈ N where:
2log_2(v!) ≈ log_2(r!)
2^{2log_2(v!)} ≈ 2^{log_2(r!)}
(2^{log_2(v!)})^2 ≈ r!
(v!)^2 ≈ r!
(v!)^2 = r!
To solve for v, we try the following code:
[Mathematica listing, rendered as an image in the original preprint.]
Note, the output is:
[Output, rendered as an image in the original preprint.]
Figure 1. Plot of loweralphr.
Finally, since the lower bound of loweralphr is zero, we have shown:
lim inf_{ε→0} lim inf_{r→∞} inf_{ω∈Ω_{ε,r}} inf_{ψ∈Ψ_{ε,r,ω}} α̲_{ε,r,ω,ψ} = 0
Next, using §5.3.2 (b) and §5.3.2 (3b), take |S(C(ε, G_r, ω), ψ)| = r! and swap r ∈ N and (G_r)_{r∈N} with v ∈ N and (G_v)_{v∈N}, to compute the following:
S̲(C(ε, G_v, ω), ψ) = inf{|S(C(ε, G_r, ω), ψ)| : r ∈ N, ω ∈ Ω_{ε,r}, ψ ∈ Ψ_{ε,r,ω}, E(L(S(C(ε, G_r, ω), ψ))) ≥ E(L(S(C(ε, G_v, ω), ψ)))} = inf{r! : r ∈ N, ω ∈ Ω_{ε,r}, ψ ∈ Ψ_{ε,r,ω}, log_2(r!) ≥ 2log_2(v!)} =
where:
(1)
For every v ∈ N, we find an r ∈ N where log_2(r!) ≥ 2log_2(v!), but the absolute value of 2log_2(v!) − log_2(r!) is minimized. In other words, for every v ∈ N, we want r ∈ N where:
log_2(r!) ≈ 2log_2(v!)
2^{log_2(r!)} ≈ 2^{2log_2(v!)}
r! ≈ (2^{log_2(v!)})^2
r! ≈ (v!)^2
r! = (v!)^2
To solve for r, we try the following code:
[Mathematica listing, rendered as an image in the original preprint.]
Note, the output is:
[Output, rendered as an image in the original preprint.]
Figure 2. Plot of loweralphj.
Since the lower bound of loweralphj is zero, we have shown:
lim inf_{ε→0} lim inf_{v→∞} inf_{ω∈Ω_{ε,v}} inf_{ψ∈Ψ_{ε,v,ω}} α̲_{ε,v,ω,ψ} = 0
Hence, using eq. 100 and 107, since both:  
(1)
lim sup_{ε→0} lim sup_{r→∞} sup_{ω∈Ω_{ε,r}} sup_{ψ∈Ψ_{ε,r,ω}} ᾱ_{ε,r,ω,ψ} or lim inf_{ε→0} lim inf_{r→∞} inf_{ω∈Ω_{ε,r}} inf_{ψ∈Ψ_{ε,r,ω}} α̲_{ε,r,ω,ψ} are equal to zero, one or +∞
(2)
lim sup_{ε→0} lim sup_{v→∞} sup_{ω∈Ω_{ε,v}} sup_{ψ∈Ψ_{ε,v,ω}} ᾱ_{ε,v,ω,ψ} or lim inf_{ε→0} lim inf_{v→∞} inf_{ω∈Ω_{ε,v}} inf_{ψ∈Ψ_{ε,v,ω}} α̲_{ε,v,ω,ψ} are equal to zero, one or +∞
then what I’m measuring from  ( G r ) r N increases at a rate linear to that of ( G v ) v N .

5.4. Defining The Actual Rate of Expansion of Sequence of Bounded Sets

5.4.1. Definition of Actual Rate of Expansion of Sequence of Bounded Sets

Suppose (G_r)_{r∈N} is the sequence of graphs of the functions f_r (§2.3.1). When d(Q, R) is the Euclidean distance between points Q, R ∈ R^n and a "chosen" center point is C ∈ R^{n+1}, where
G(C, G_r) = sup{d(C, y) : y ∈ G_r},
the actual rate of expansion is:
E(C, G_r) = G(C, G_{r+1}) − G(C, G_r)
Note, there are cases of (G_r)_{r∈N} where E(C, G_r) isn't fixed and E(C, G_r) ≠ E (i.e., the expected, fixed rate of expansion).

5.4.2. Example

Suppose we have f : A → R, where A = R and f(x) = x, such that (A_r)_{r∈N} = ([−r, r])_{r∈N} and for f_r : A_r → R:
f_r(x) = f(x) for all x ∈ A_r
Hence, when (G_r)_{r∈N} is:
(G_r)_{r∈N} = ({(x, x) : x ∈ [−r, r]})_{r∈N}
such that C = (0, 0), note the farthest point of G_r from C is either (−r, −r) or (r, r). Hence, to compute G(C, G_r), we can take d((0, 0), (r, r)) or d((0, 0), (−r, −r)):
G(C, G_r) = sup{d(C, y) : y ∈ G_r} =
d((0, 0), (r, r)) =
√((0 − r)^2 + (0 − r)^2) =
√(r^2 + r^2) =
√(2r^2) =
√2 · |r| =
√2 · r since r > 0
and the actual rate of expansion is:
E(C, G_r) = G(C, G_{r+1}) − G(C, G_r) =
√2(r + 1) − √2·r =
√2·r + √2 − √2·r =
√2
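A small numeric check (ours) of the two quantities just computed: approximating each G_r by a fine grid of points on the segment, the farthest-point distance is √2·r and the consecutive differences are √2:

```python
import math

def G(r: int, n: int = 10_001):
    """Grid approximation of G_r = {(x, x) : x in [-r, r]}."""
    return [(-r + 2 * r * i / (n - 1),) * 2 for i in range(n)]

def farthest(C, graph):
    """G(C, G_r) = sup of the distances from C to points of the graph."""
    return max(math.dist(C, p) for p in graph)

C = (0.0, 0.0)
for r in (1, 2, 3):
    g_r, g_next = farthest(C, G(r)), farthest(C, G(r + 1))
    print(r, g_r, g_next - g_r)   # g_r = sqrt(2)*r, and the difference is sqrt(2) ~ 1.414
```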

5.5. Reminder

Now see whether §3.1 is easier to understand.

6. My Attempt At Answering The Approach of §2.5.1

6.1. Choice Function

Suppose we define the following:
(1)
If B ⊆ B(R^A) (§2.3.3) and B ⊆ B(R^n) (§2.3.3) are arbitrary sets, then f_r ∈ B and A_r ∈ B satisfy (1), (2), (3), (4) and (5) of the leading question in §3.1
(2)
For all v ∈ N, f_v ∈ B(R^A) ∖ B and A_v ∈ B(R^n) ∖ B
Further note, from §5.3.2 (a), if we take:
S̄(C(ε, G_r, ω), ψ) = inf{|S(C(ε, G_v, ω), ψ)| : v ∈ N, ω ∈ Ω_{ε,v}, ψ ∈ Ψ_{ε,v,ω}, E(L(S(C(ε, G_v, ω), ψ))) ≥ E(L(S(C(ε, G_r, ω), ψ)))}
and from §5.3.2 (b), we take:
S̲(C(ε, G_r, ω), ψ) = sup{|S(C(ε, G_v, ω), ψ)| : v ∈ N, ω ∈ Ω_{ε,v}, ψ ∈ Ψ_{ε,v,ω}, E(L(S(C(ε, G_v, ω), ψ))) ≤ E(L(S(C(ε, G_r, ω), ψ)))}
Then, using §5.3.1 (2), eq. 121, and eq. 122, we abbreviate:
sup_{ω∈Ω_{ε,r}} sup_{ψ∈Ψ_{ε,r,ω}} |S(C(ε, G_r, ω), ψ)| = S(ε, G_r) = S
sup_{ω∈Ω_{ε,r}} sup_{ψ∈Ψ_{ε,r,ω}} S̄(C(ε, G_r, ω), ψ) = S̄(ε, G_r) = S̄
sup_{ω∈Ω_{ε,r}} sup_{ψ∈Ψ_{ε,r,ω}} S̲(C(ε, G_r, ω), ψ) = S̲(ε, G_r) = S̲

6.2. Approach

We manipulate the definitions of §5.3.2 (a) and §5.3.2 (b) to solve (1), (2), (3), (4) and (5) of the leading question in §3.1

6.3. Potential Answer

6.3.1. Preliminaries (Definition of T)

Suppose (G_r)_{r∈N} is the sequence of graphs of the functions f_r (§2.3.1). Then, when:
  • The average of G_r for every r ∈ N is:
    Avg(G_r) = (1/H^{dim_H(G_r)}(G_r)) ∫_{G_r} (x_1, ⋯, x_n) dH^{dim_H(G_r)}
  • d(P, Q) is the n-d Euclidean distance between points P, Q ∈ R^n
  • The difference of points X = (x_1, ⋯, x_n) and Y = (y_1, ⋯, y_n) is:
    X − Y = (x_1 − y_1, x_2 − y_2, ⋯, x_n − y_n)
We define an explicit injective F : R^n → R, where r, v ∈ N, such that:
(1)
If d(Avg(G_r), C) < d(Avg(G_v), C), then F(Avg(G_r) − C) < F(Avg(G_v) − C)
(2)
If d(Avg(G_r), C) > d(Avg(G_v), C), then F(Avg(G_r) − C) > F(Avg(G_v) − C)
(3)
If d(Avg(G_r), C) = d(Avg(G_v), C), then F(Avg(G_r) − C) ≠ F(Avg(G_v) − C)
where we define:
T(C, G_r) = F(Avg(G_r) − C)

6.3.2. Question

Does T exist? If so, how do we define it?
Hence, using S, S̄, S̲, E, E(C, G_r) (§5.4), and T(C, G_r), such that with the absolute value ||·||, the ceiling function ⌈·⌉, and the nearest integer function ⌊·⌉, we define:
K ( ε , G r ) = 1 + E E ( C , G r ) 5 ( 5 | 5 | S 1 + S S ̲ + 2 S S ̲ + S S ̲ + S + S ¯ 1 + S ̲ / S 1 + S / S ¯ 1 + S ̲ / S ¯ S 5 | 5 | + S 5 ) T ( C , G r ) E ( C , G r )
where E(C, G_r), E, and T are "removed" when E(C, G_r) = E = 0, the choice function which answers the leading question in §3.1 could be the following (we explain the reasoning behind this choice function in §6.4):
Theorem 9. 
If we define:
M ( ε , G r ) = | S ( ε , G r ) | K ( ε , G r ) | S ( ε , G r ) |
M ( ε , G v ) = | S ( ε , G v ) | K ( ε , G v ) | S ( ε , G v ) |
where for M ( ε , G r ) , we define M ( ε , G r ) to be the same as M ( ε , G v ) when swapping “ v N " with “ r N " (for eq. 121 & 122) and sets G r with G v (for eq. 121128), then for constant v > 0 and variable v * > 0 , if:
S ¯ ( ε , r , v * , G v ) = inf | S ( ε , G v ) | : v N , M ( ε , G v ) M ( ε , G r ) v * { v * } + v
and:
S ̲ ( ε , r , v * , G v ) = sup | S ( ε , G v ) | : v N , v * M ( ε , G v ) M ( ε , G r ) { v * } + v
where for all r, v ∈ N, there exist f_r ∈ B and A_r ∈ B (§6.1 crit. 1), such that for all f_v ∈ B(R^A) ∖ B and A_v ∈ B(R^n) ∖ B (§6.1 crit. 2), whenever:
inf | | 1 c | | : ( ϵ > 0 ) ( c > 0 ) ( r N ) ( v N ) S ( ε , G r ) S ( ε , G v ) c < ε
where · is the ceiling function, E is the fixed rate of expansion, Γ is the gamma function, n is the dimension of R n , dim H ( G r ) is the Hausdorff dimension of set G r R n + 1 , and A r is area of the smallest ( n + 1 ) -dimensional box that contains A r , then:
V ( ε , G r , n ) = ( A r 1 sign ( E ) ( E sign ( E ) + 1 ) exp n ln ( π ) / 2 Γ ( n / 2 + 1 ) r ! ( n dim H ( G r ) ) r sign ( E ) dim H ( G r ) sign dim H ( G r ) + 1 + ( 1 sign ( dim H ( G r ) ) ) ) / ε / | S ( ε , G r ) |
& the choice function is:
lim sup ε 0 lim v * lim sup r sign ( M ( ε , G r ) ) S ¯ ( ε , r , v * , G v ) | S ( ε , G r ) | + v c V ( ε , G r , n ) sign ( M ( ε , G r ) ) S ̲ ( ε , r , v * , G v ) | S ( ε , G r ) | + v c V ( ε , G r , n ) =
lim inf ε 0 lim v * lim inf r sign ( M ( ε , G r ) ) S ¯ ( ε , k , v * , G v ) | S ( ε , G r ) | + v c V ( ε , G r , n ) sign ( M ( ε , G r ) ) S ̲ ( ε , r , v * , G v ) | S ( ε , G r ) | + v c V ( ε , G r , n ) = 0
such that (G_r)_{r∈N} satisfies eq. 133 & eq. 134. (Note, we want sup ∅ = −∞ and inf ∅ = +∞.) The expected value answering the approach of §2.5.1 using the leading question (§3.1) is then E[f_r].

6.4. Explaining the Choice Function and Evidence The Choice Function Is Credible

Notice, before reading the programming in code 12, without the “c"-terms in eq. 133 and eq. 134:
(1)
The choice function in eq. 133 and eq. 134 is zero, when what I’m measuring from ( G r ) r N 5.3.2 criteria 1) increases at a rate superlinear to that of ( G v ) v N , where sign ( M ( ε , G r ) ) = 0 .
(2)
The choice function in eq. 133 and eq. 134 is zero, when for a given ( G r ) r N and ( G v ) v N there doesn’t exist c where eq. 131 is satisfied or c = 0 .
(3)
When c does exist, suppose:
J ( r ) : r N , | S ( ε , G r ) | | S ( ε , G J ( r ) ) | c
(a)
When | S ( ε , G r ) | < | S ( ε , G J ( r ) ) | , then:
lim sup ε 0 lim v * lim sup r sign ( M ( ε , G r ) ) S ¯ ( ε , r , v * , G v ) | S ( ε , G r ) | + v = c
lim inf ε 0 lim v * lim inf r sign ( M ( ε , G r ) ) S ̲ ( ε , r , v * , G v ) | S ( ε , G r ) | + v = 0
(b)
When | S ( ε , G r ) | > | S ( ε , G J ( r ) ) | , then:
lim sup ε 0 lim v * lim sup r sign ( M ( ε , G r ) ) S ¯ ( ε , k , v * , G v ) | S ( ε , G r ) | + v = +
lim inf ε 0 lim v * lim inf r sign ( M ( ε , G r ) ) S ̲ ( ε , k , v * , G v ) | S ( ε , G r ) | + v = 1 / c
Hence, for each sub-criterion under crit. (3), if we subtract one of their limits by its limit value, then eq. 133 and eq. 134 are zero. (We do this using the "c"-terms in eq. 133 and 134.) However, when the exponents of the "c"-terms aren't equal to 1, the limits of eq. 133 and 134 aren't equal to zero. We want this, in fact, whenever we swap S(ε, G_r) with S(ε, G_v). Moreover, we define the function V(ε, G_r, n) (i.e., eq. 132), where:
i.
When S ( ε , G r ) Numerator V ( ε , G r , n ) , then eq. 133 and 134 without the “c"-terms are zero. (The “c"-terms approach zero and still allow eq. 133 and 134 to equal zero.)
ii.
When S ( ε , G r ) Numerator V ( ε , G r , n ) , then sign ( M ( ε , G r ) ) is zero which makes eq. 133 and 134 equal zero.
iii.
Here are some examples of the numerator of V ( ε , G r , n ) (eq. 132):
A.
When E = 0 , n = 1 , and dim H ( A ) = 0 , the numerator of V ( ε , G r , n ) is A r ! + 1 / ε
B.
When E = z , n = 1 , and dim H ( A ) = 0 , the numerator of V ( ε , G r , n ) is 2 z r · r ! + 1 / ε
C.
When E = 0 , n = z 2 , and dim H ( A ) = z 2 , the numerator of V ( ε , G r , n ) is ceiling of constant A times the volume of an n-dimensional ball with finite radius: i.e.,
A z 1 exp z 2 ln ( π ) / 2 Γ ( z 2 / 2 + 1 ) / ε
D.
When E = z 1 , n = z 2 , and dim H ( A ) = z 2 , the numerator of V ( ε , G r , n ) is ceiling of the volume of the n-dimensional ball: i.e.,
z 1 exp z 2 ln ( π ) / 2 Γ ( z 2 / 2 + 1 ) r z 2 / ε
Now, consider the code for eq. 133 and eq. 134. (Note, the set-theoretic limit of G_r is the graph of the function f : A → R.) In this example, A = Q ∩ [0, 1], and:
f(x) = 1 if x ∈ {(2s+1)/(2t) : s ∈ Z, t ∈ N, t ≠ 0} ∩ [0, 1], and f(x) = 0 otherwise,
such that:
(A_r)_{r∈N} = ({c/r! : c ∈ Z, 0 ≤ c ≤ r!})_{r∈N}
the ceiling function is ⌈·⌉, and:
(A_v)_{v∈N} = ({c/⌈v!/3⌉ : c ∈ Z, 0 ≤ c ≤ ⌈v!/3⌉})_{v∈N}
such that for f_r : A_r → R,
f_r(x) = f(x) for all x ∈ A_r
and f_v : A_v → R,
f_v(x) = f(x) for all x ∈ A_v
Hence, when ( G r ) r N is:
(G_r)_{r∈N} = ({(x, f(x)) : x ∈ {c/r! : c ∈ Z, 0 ≤ c ≤ r!}})_{r∈N}
and ( G v ) v N is:
(G_v)_{v∈N} = ({(x, f(x)) : x ∈ {c/⌈v!/3⌉ : c ∈ Z, 0 ≤ c ≤ ⌈v!/3⌉}})_{v∈N}
Note the following (we leave it to mathematicians to figure out LengthS1, LengthS2, Entropy1 and Entropy2 for other A and f in code 12).

6.4.1. Evidence with Programming

[Code 12: Mathematica listing and output, rendered as images in the original preprint.]

7. Questions

(1)
Does §6 answer the leading question in §3.1?
(2)
Using thm. 9, when f is defined in §2.1, does E [ f r ] have a finite value?
(3)
Using thm. 9, when f is defined in §2.2, does E [ f r ] have a finite value?
(4)
If there’s no time to check questions 1, 2 and 3, see §4.

8. Appendix of §5.3.1

8.1. Example of §5.3.1, step 1

Suppose
(1)
A = R
(2)
We define f : A → R by:
f(x) = 1 if x < 0, f(x) = −1 if 0 ≤ x < 0.5, and f(x) = 0.5 if 0.5 ≤ x
(3)
(G_r)_{r∈N} = ({(x, f(x)) : −r ≤ x ≤ r})_{r∈N}
Then one example of C(√2/6, G_1, 1), using §5.3.1 step 1 (where G_1 = {(x, f(x)) : −1 ≤ x ≤ 1}), is:
{{(x, f(x)) : −1 ≤ x ≤ −1 + √2/6}, {(x, f(x)) : −1 + √2/6 ≤ x ≤ −1 + 2√2/6}, {(x, f(x)) : −1 + 2√2/6 ≤ x ≤ −1 + 3√2/6}, {(x, f(x)) : −1 + 3√2/6 ≤ x ≤ −1 + 4√2/6}, {(x, f(x)) : −1 + 4√2/6 ≤ x ≤ −1 + 5√2/6}, {(x, f(x)) : −1 + 5√2/6 ≤ x ≤ −1 + 6√2/6}, {(x, f(x)) : −1 + 6√2/6 ≤ x ≤ −1 + 7√2/6}, {(x, f(x)) : −1 + 7√2/6 ≤ x ≤ −1 + 8√2/6}, {(x, f(x)) : −1 + 8√2/6 ≤ x ≤ −1 + 9√2/6}}
Note, the length of each partition is √2/6, where the borders could be approximated as:
{{(x, f(x)) : −1 ≤ x ≤ −.764}, {(x, f(x)) : −.764 ≤ x ≤ −.528}, {(x, f(x)) : −.528 ≤ x ≤ −.293}, {(x, f(x)) : −.293 ≤ x ≤ −.057}, {(x, f(x)) : −.057 ≤ x ≤ .178}, {(x, f(x)) : .178 ≤ x ≤ .414}, {(x, f(x)) : .414 ≤ x ≤ .65}, {(x, f(x)) : .65 ≤ x ≤ .886}, {(x, f(x)) : .886 ≤ x ≤ 1.121}}
which is illustrated using alternating orange/black lines of equal length covering G_1 (i.e., the black vertical lines are the smallest and largest x-coordinates of G_1).
(Note, the alternating covers in Figure 3 satisfy step (1) of §5.3.1, because the Hausdorff measure in its dimension of each cover is √2/6 and there are 9 covers over-covering G_1: i.e.,
Figure 3. The alternating orange & black lines are the "covers" and the vertical lines are the boundaries of G_1.
Definition 1
(Minimum Covers of Measure ε = √2/6 covering G_1). We can compute the minimum number of covers in C(√2/6, G_1, 1) using the formula:
⌈H^{dim_H(G_1)}(G_1)/(√2/6)⌉
where H^{dim_H(G_1)}(G_1)/(√2/6) = Length([−1, 1])/(√2/6) = 2/(√2/6) = 6√2 ≈ 6(1.4) = 8.4, so the minimum number of covers is ⌈6√2⌉ = 9).
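In code, the count in Definition 1 is a one-line ceiling (our check, not part of the original):

```python
import math

eps = math.sqrt(2) / 6              # measure of each cover
length_G1 = 2.0                     # H^1(G_1) = length of [-1, 1]
print(math.ceil(length_G1 / eps))   # -> 9 minimal covers
```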
Note there are other examples of C(√2/6, G_1, ω) for different ω. Here is another case,
which can be defined (see eq. 146 for comparison):
{{(x, f(x)) : 1 − 9√2/6 ≤ x ≤ 1 − 8√2/6}, {(x, f(x)) : 1 − 8√2/6 ≤ x ≤ 1 − 7√2/6}, {(x, f(x)) : 1 − 7√2/6 ≤ x ≤ 1 − 6√2/6}, {(x, f(x)) : 1 − 6√2/6 ≤ x ≤ 1 − 5√2/6}, {(x, f(x)) : 1 − 5√2/6 ≤ x ≤ 1 − 4√2/6}, {(x, f(x)) : 1 − 4√2/6 ≤ x ≤ 1 − 3√2/6}, {(x, f(x)) : 1 − 3√2/6 ≤ x ≤ 1 − 2√2/6}, {(x, f(x)) : 1 − 2√2/6 ≤ x ≤ 1 − √2/6}, {(x, f(x)) : 1 − √2/6 ≤ x ≤ 1}}
In the case of G 1 , there are uncountable different covers C ( 2 / 6 , G 1 , ω ) which can be used. For instance, when 0 α ( 12 9 2 ) / 6 (i.e., ω = α ( 12 9 2 ) / 6 + 1 ) consider:
{ ( x , f ( x ) ) : α 1 + α x α + 2 6 6 , ( x , f ( x ) ) : α + 2 6 6 x α + 2 2 6 6 , ( x , f ( x ) ) : α + 2 2 6 6 x α + 3 2 6 6 ( x , f ( x ) ) : α + 3 2 6 6 x α + 4 2 6 6 , ( x , f ( x ) ) : α + 4 2 6 6 x α + 5 2 6 6 , ( x , f ( x ) ) : α + 5 2 6 6 x α + 6 2 6 6 , ( x , f ( x ) ) : α + 6 2 6 6 x α + 7 2 6 6 ( x , f ( x ) ) : α + 7 2 6 6 x α + 8 2 6 6 , ( x , f ( x ) ) : α + 8 2 6 6 x α + 9 2 6 6 }
When α = 0 and ω = ( 9 2 6 ) / 6 , we get Figure 4 and when α = ( 12 9 2 ) / 6 and ω = 1 , we get Figure 3
Figure 4. This is similar to Figure 3, except the start-points of the covers are shifted all the way to the left.

8.2. Example of §5.3.1, step 2

Suppose:
(1)
A = R
(2)
We define f : A → R by:
f(x) = 1 if x < 0, f(x) = −1 if 0 ≤ x < 0.5, and f(x) = 0.5 if 0.5 ≤ x
(3)
(G_r)_{r∈N} = ({(x, f(x)) : −r ≤ x ≤ r})_{r∈N}
(4)
G_1 = {(x, f(x)) : −1 ≤ x ≤ 1}
(5)
C(√2/6, G_1, 1), using eq. 147 and Figure 3, which is approximately:
{{(x, f(x)) : −1 ≤ x ≤ −.764}, {(x, f(x)) : −.764 ≤ x ≤ −.528}, {(x, f(x)) : −.528 ≤ x ≤ −.293}, {(x, f(x)) : −.293 ≤ x ≤ −.057}, {(x, f(x)) : −.057 ≤ x ≤ .178}, {(x, f(x)) : .178 ≤ x ≤ .414}, {(x, f(x)) : .414 ≤ x ≤ .65}, {(x, f(x)) : .65 ≤ x ≤ .886}, {(x, f(x)) : .886 ≤ x ≤ 1.121}}
Then, an example of S(C(√2/6, G_1, 1), 1) is:
{(−.9, 1), (−.65, 1), (−.4, 1), (−.2, 1), (.1, −1), (.3, −1), (.55, .5), (.75, .5), (1, .5)}
Below, we illustrate the sample: i.e., the set of all blue points in each orange and black line of C(√2/6, G_1, 1) covering G_1:
Figure 5. The blue points are the "sample points", the alternating black and orange lines are the "covers", and the red lines are the smallest & largest x-coordinates of G_1.
Note, there are multiple samples that can be taken, as long as one sample point is taken from each cover in C(√2/6, G_1, 1).

8.3. Example of §5.3.1, step 3

Suppose
(1)
A = R
(2)
Define f : A → R by:
f(x) =  1    if x < 0
       -1    if 0 ≤ x < 0.5
        0.5  if 0.5 ≤ x
(3)
(G_r)_{r ∈ N} = ({(x, f(x)) : -r ≤ x ≤ r})_{r ∈ N}
(4)
G_1 = {(x, f(x)) : -1 ≤ x ≤ 1}
(5)
C(√2/6, G_1, 1), using eq. 147 and Figure 3, which is approximately
{ {(x, f(x)) : -1 ≤ x ≤ -.764}, {(x, f(x)) : -.764 ≤ x ≤ -.528}, {(x, f(x)) : -.528 ≤ x ≤ -.293},
  {(x, f(x)) : -.293 ≤ x ≤ -.057}, {(x, f(x)) : -.057 ≤ x ≤ .178}, {(x, f(x)) : .178 ≤ x ≤ .414},
  {(x, f(x)) : .414 ≤ x ≤ .65}, {(x, f(x)) : .65 ≤ x ≤ .886}, {(x, f(x)) : .886 ≤ x ≤ 1.121} }
(6)
S(C(√2/6, G_1, 1), 1), using eq. 152, is:
{(-.9, 1), (-.65, 1), (-.4, 1), (-.2, 1), (.1, -1), (.3, -1), (.55, .5), (.75, .5), (1, .5)}
Therefore, consider the following process:

8.3.1. Step 3a

If S(C(√2/6, G_1, 1), 1) is:
{(-.9, 1), (-.65, 1), (-.4, 1), (-.2, 1), (.1, -1), (.3, -1), (.55, .5), (.75, .5), (1, .5)}
suppose x_0 = (-.9, 1). Note the following:
(1)
x_1 = (-.65, 1) is the next point in the "pathway" since it's a point in S(C(√2/6, G_1, 1), 1) with the smallest 2-d Euclidean distance to x_0, other than x_0 itself.
(2)
x_2 = (-.4, 1) is the third point since it's a point in S(C(√2/6, G_1, 1), 1) with the smallest 2-d Euclidean distance to x_1, other than x_0 and x_1.
(3)
x_3 = (-.2, 1) is the fourth point since it's a point in S(C(√2/6, G_1, 1), 1) with the smallest 2-d Euclidean distance to x_2, other than x_0, x_1, and x_2.
(4)
we continue this process, where the "pathway" of S(C(√2/6, G_1, 1), 1) is:
(-.9, 1) → (-.65, 1) → (-.4, 1) → (-.2, 1) → (.55, .5) → (.75, .5) → (1, .5) → (.3, -1) → (.1, -1)
Note 10. 
If more than one point has the minimum 2-d Euclidean distance from x_0, x_1, x_2, etc., take all potential pathways: e.g., using the sample in eq. 156, if x_0 = (-.65, 1), then since (-.9, 1) and (-.4, 1) both have the smallest Euclidean distance to (-.65, 1), take two pathways:
(-.65, 1) → (-.9, 1) → (-.4, 1) → (-.2, 1) → (.55, .5) → (.75, .5) → (1, .5) → (.3, -1) → (.1, -1)
and also:
(-.65, 1) → (-.4, 1) → (-.2, 1) → (-.9, 1) → (.55, .5) → (.75, .5) → (1, .5) → (.3, -1) → (.1, -1)
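Step (3a) is a nearest-neighbour walk through the sample. Below is a minimal Python sketch of this walk (ours); for brevity it breaks ties by taking the first nearest point encountered, whereas Note 10 asks for every tied branch to be followed:

    import math

    def dist(p, q):
        # 2-d Euclidean distance.
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def pathway(sample, x0):
        # Greedy walk: repeatedly move to the nearest not-yet-visited sample point.
        path, remaining = [x0], [p for p in sample if p != x0]
        while remaining:
            nxt = min(remaining, key=lambda p: dist(path[-1], p))
            path.append(nxt)
            remaining.remove(nxt)
        return path

    sample = [(-.9, 1), (-.65, 1), (-.4, 1), (-.2, 1), (.1, -1),
              (.3, -1), (.55, .5), (.75, .5), (1, .5)]
    print(pathway(sample, (-.9, 1)))
    # [(-0.9, 1), (-0.65, 1), (-0.4, 1), (-0.2, 1), (0.55, 0.5),
    #  (0.75, 0.5), (1, 0.5), (0.3, -1), (0.1, -1)]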

8.3.2. Step 3b

Next, take the lengths of all line segments in each pathway. In other words, suppose d(P, Q) is the n-dimensional Euclidean distance between points P, Q ∈ R^n. Using the pathway in eq. 157, we want:
{d((-.9, 1), (-.65, 1)), d((-.65, 1), (-.4, 1)), d((-.4, 1), (-.2, 1)), d((-.2, 1), (.55, .5)),
 d((.55, .5), (.75, .5)), d((.75, .5), (1, .5)), d((1, .5), (.3, -1)), d((.3, -1), (.1, -1))}
These distances can be approximated as:
{.25, .25, .2, .901389, .2, .25, 1.655295, .2}
Also, we see the outliers [6] are .901389 and 1.655295 (i.e., notice that the outliers are more prominent for ε ≤ √2/6). Therefore, remove .901389 and 1.655295 from our set of lengths:
{.25, .25, .2, .2, .25, .2}
This is illustrated in Figure 6:
Figure 6. The black arrows are the line segments of the "pathway" whose lengths aren't outliers; the lengths of the red arrows in the pathway are outliers.
Hence, when x_0 = (-.9, 1), using §5.3.1 step 3b & eq. 156, we note:
L((-.9, 1), S(C(√2/6, G_1, 1), 1)) = {.25, .25, .2, .2, .25, .2}
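For step (3b), the sketch below (again ours) computes the consecutive segment lengths of the pathway in eq. 157 and removes outliers. The paper does not fix a particular outlier test beyond citing [6]; the 1.5·IQR fences used here are our assumption, and for this pathway they happen to remove exactly .901389… and 1.655295…:

    import math

    def segment_lengths(path):
        # Lengths of the line segments between consecutive points of the pathway.
        return [math.hypot(p[0] - q[0], p[1] - q[1]) for p, q in zip(path, path[1:])]

    def remove_outliers(lengths, k=1.5):
        # Assumed rule (not fixed by the paper): drop lengths outside the usual
        # 1.5*IQR fences [Q1 - k*IQR, Q3 + k*IQR].
        s = sorted(lengths)
        def quantile(t):
            pos = t * (len(s) - 1)
            lo, hi = int(pos), math.ceil(pos)
            return s[lo] + (pos - lo) * (s[hi] - s[lo])
        q1, q3 = quantile(0.25), quantile(0.75)
        return [x for x in lengths if q1 - k * (q3 - q1) <= x <= q3 + k * (q3 - q1)]

    path = [(-.9, 1), (-.65, 1), (-.4, 1), (-.2, 1), (.55, .5),
            (.75, .5), (1, .5), (.3, -1), (.1, -1)]
    lengths = segment_lengths(path)
    print([round(x, 3) for x in lengths])                   # [0.25, 0.25, 0.2, 0.901, 0.2, 0.25, 1.655, 0.2]
    print([round(x, 3) for x in remove_outliers(lengths)])  # [0.25, 0.25, 0.2, 0.2, 0.25, 0.2]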

8.3.3. Step 3c

To convert the set of distances in eq. 159 into a probability distribution, we take:
∑_{x ∈ {.25, .25, .2, .2, .25, .2}} x = .25 + .25 + .2 + .2 + .25 + .2 = 1.35
Then divide each element in {.25, .25, .2, .2, .25, .2} by 1.35:
{.25/1.35, .25/1.35, .2/1.35, .2/1.35, .25/1.35, .2/1.35}
which gives us the probability distribution:
{5/27, 5/27, 4/27, 4/27, 5/27, 4/27}
Hence,
P(L((-.9, 1), S(C(√2/6, G_1, 1), 1))) = {5/27, 5/27, 4/27, 4/27, 5/27, 4/27}
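Step (3c) is a plain normalisation; using exact fractions (a sketch of ours, where .25 = 1/4 and .2 = 1/5) makes the values 5/27 and 4/27 visible:

    from fractions import Fraction

    # The six remaining lengths .25, .25, .2, .2, .25, .2 as exact fractions.
    lengths = [Fraction(1, 4), Fraction(1, 4), Fraction(1, 5),
               Fraction(1, 5), Fraction(1, 4), Fraction(1, 5)]
    total = sum(lengths)                   # 27/20, i.e. 1.35
    probs = [x / total for x in lengths]   # 5/27, 5/27, 4/27, 4/27, 5/27, 4/27
    print(total, [str(p) for p in probs])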

8.3.4. Step 3d

Take the Shannon entropy of eq. 161:
E(P(L((-.9, 1), S(C(√2/6, G_1, 1), 1)))) = -∑_{x ∈ P(L((-.9, 1), S(C(√2/6, G_1, 1), 1)))} x log₂ x
 = -∑_{x ∈ {5/27, 5/27, 4/27, 4/27, 5/27, 4/27}} x log₂ x
 = -(5/27)log₂(5/27) - (5/27)log₂(5/27) - (4/27)log₂(4/27) - (4/27)log₂(4/27) - (5/27)log₂(5/27) - (4/27)log₂(4/27)
 = -(15/27)log₂(5/27) - (12/27)log₂(4/27)
 ≈ 2.57604
We shorten E(P(L((-.9, 1), S(C(√2/6, G_1, 1), 1)))) to E(L((-.9, 1), S(C(√2/6, G_1, 1), 1))), giving us:
E(L((-.9, 1), S(C(√2/6, G_1, 1), 1))) ≈ 2.57604
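Step (3d) can be checked numerically; the following sketch (ours) evaluates the Shannon entropy of the distribution in eq. 161 and recovers ≈ 2.57604:

    import math

    probs = [5/27, 5/27, 4/27, 4/27, 5/27, 4/27]
    entropy = -sum(p * math.log2(p) for p in probs)
    print(round(entropy, 5))  # 2.57604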

8.3.5. Step 3e

Take the entropy, w.r.t all pathways, of the sample:
{(-.9, 1), (-.65, 1), (-.4, 1), (-.2, 1), (.1, -1), (.3, -1), (.55, .5), (.75, .5), (1, .5)}
In other words, we’ll compute:
E(L(S(C(√2/6, G_1, 1), 1))) = sup_{x_0 ∈ S(C(√2/6, G_1, 1), 1)} E(L(x_0, S(C(√2/6, G_1, 1), 1)))
We do this by repeating §8.3.1–§8.3.4 for each x_0 ∈ S(C(√2/6, G_1, 1), 1) (for the equations with multiple values, see Note 10):
E(L((-.9, 1), S(C(√2/6, G_1, 1), 1))) ≈ 2.57604
E(L((-.65, 1), S(C(√2/6, G_1, 1), 1))) ≈ 2.3131, 2.377604
E(L((-.4, 1), S(C(√2/6, G_1, 1), 1))) ≈ 2.3131
E(L((-.2, 1), S(C(√2/6, G_1, 1), 1))) ≈ 2.57604
E(L((.1, -1), S(C(√2/6, G_1, 1), 1))) ≈ 1.86094
E(L((.3, -1), S(C(√2/6, G_1, 1), 1))) ≈ 1.85289
E(L((.55, .5), S(C(√2/6, G_1, 1), 1))) ≈ 2.08327
E(L((.75, .5), S(C(√2/6, G_1, 1), 1))) ≈ 2.31185
E(L((1, .5), S(C(√2/6, G_1, 1), 1))) ≈ 2.2622
Hence, since the largest value out of eq. 164–172 is 2.57604:
E(L(S(C(√2/6, G_1, 1), 1))) = sup_{x_0 ∈ S(C(√2/6, G_1, 1), 1)} E(L(x_0, S(C(√2/6, G_1, 1), 1))) ≈ 2.57604
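Putting steps (3a)–(3d) together, the sketch below (ours) repeats the whole computation for every starting point x_0 and reports the largest entropy, i.e., the supremum above. Since the outlier criterion and the tie rule of Note 10 are not pinned down here, the intermediate values need not coincide with eq. 164–172; under the assumed 1.5·IQR rule the supremum for this example still comes out at ≈ 2.57604:

    import math

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def pathway(sample, x0):
        # Greedy nearest-neighbour walk (first-found tie-breaking).
        path, rest = [x0], [p for p in sample if p != x0]
        while rest:
            nxt = min(rest, key=lambda p: dist(path[-1], p))
            path.append(nxt)
            rest.remove(nxt)
        return path

    def remove_outliers(lengths, k=1.5):
        # Assumed outlier rule: drop lengths outside the 1.5*IQR fences.
        s = sorted(lengths)
        def quantile(t):
            pos = t * (len(s) - 1)
            lo, hi = int(pos), math.ceil(pos)
            return s[lo] + (pos - lo) * (s[hi] - s[lo])
        q1, q3 = quantile(0.25), quantile(0.75)
        return [x for x in lengths if q1 - k * (q3 - q1) <= x <= q3 + k * (q3 - q1)]

    def entropy(lengths):
        # Normalise the lengths and take the Shannon entropy (steps 3c-3d).
        total = sum(lengths)
        return -sum((x / total) * math.log2(x / total) for x in lengths)

    sample = [(-.9, 1), (-.65, 1), (-.4, 1), (-.2, 1), (.1, -1),
              (.3, -1), (.55, .5), (.75, .5), (1, .5)]
    entropies = []
    for x0 in sample:
        path = pathway(sample, x0)
        lengths = remove_outliers([dist(p, q) for p, q in zip(path, path[1:])])
        entropies.append(entropy(lengths))
    print(round(max(entropies), 5))  # 2.57604 under the assumed outlier rule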

References

  1. Claudio Bernardi and Claudio Rainaldi. Everywhere surjections and related topics: Examples and counterexamples. Le Matematiche, 73(1):71–88, 2018. https://www.researchgate.net/publication/325625887_Everywhere_surjections_and_related_topics_Examples_and_counterexamples.
  2. Pablo Shmerkin (https://mathoverflow.net/users/11009/pablo shmerkin). Hausdorff dimension of r x x. MathOverflow. https://mathoverflow.net/q/189274.
  3. Arbuja (https://mathoverflow.net/users/87856/arbuja). Is there an explicit, everywhere surjective f : R → R whose graph has zero Hausdorff measure in its dimension? MathOverflow. https://mathoverflow.net/q/476471.
  4. JDH (https://math.stackexchange.com/users/413/jdh). Uncountable sets of Hausdorff dimension zero. Mathematics Stack Exchange. https://math.stackexchange.com/q/73551.
  5. SBF (https://math.stackexchange.com/users/5887/sbf). Convergence of functions with different domain. Mathematics Stack Exchange. https://math.stackexchange.com/q/1063261.
  6. Renze John. Outlier. https://en.m.wikipedia.org/wiki/Outlier.
  7. Bharath Krishnan. Bharath krishnan’s researchgate profile. https://www.researchgate.net/profile/Bharath-Krishnan-4.
  8. Robert M. Gray. Entropy and Information Theory. Springer, New York, 2nd edition, 2011. https://ee.stanford.edu/~gray/it.pdf.
  9. MFH. Prove the following limits of a sequence of sets? Matchmaticians, 2023. https://matchmaticians.com/questions/hinaeh.
  10. OEIS Foundation Inc. A002088. The On-Line Encyclopedia of Integer Sequences, 1991. https://oeis.org/A002088.
  11. OEIS Foundation Inc. A011371. The On-Line Encyclopedia of Integer Sequences, 1999. https://oeis.org/A011371.
  12. OEIS Foundation Inc. A099957. The On-Line Encyclopedia of Integer Sequences, 2005. https://oeis.org/A099957.
  13. William Ott and James A. Yorke. Prevalence. Bulletin of the American Mathematical Society, 42(3):263–290, 2005. https://www.ams.org/journals/bull/2005-42-03/S0273-0979-05-01060-8/S0273-0979-05-01060-8.pdf.
  14. T.F. Xie and S.P. Zhou. On a class of fractal functions with graph Hausdorff dimension 2. Chaos, Solitons & Fractals, 32(5):1625–1630, 2007. https://www.sciencedirect.com/science/article/pii/S0960077906000129.
  15. ydd. Finding the asymptotic rate of growth of a table of value? Mathematica Stack Exchange. https://mathematica.stackexchange.com/a/307050/34171.
  16. ydd. How to find a closed form for this pattern (if it exists)? Mathematica Stack Exchange. https://mathematica.stackexchange.com/a/306951/34171.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.