Preprint
Article

This version is not peer-reviewed.

Structural Failure Mode Analysis of the Binary Goldbach Conjecture

Submitted: 31 January 2026
Posted: 03 February 2026


Abstract
This paper analyzes the Binary Goldbach Conjecture (bGC) through a deterministic structural lens, employing a Failure Mode Analysis (FMA) framework to map prime and composite inventories onto the Left-Right Partition Table (LRPT). We establish structural identities governing the conservation of partition elements, demonstrating that the count of Prime-Prime (PP) pairs functions as a necessary deterministic residual. The analysis identifies tiered inadmissible failure states in which, at each tier, the exhaustion of composite inventories mathematically forces prime-prime partitions into existence to preserve information conservation. Numerical analysis for N up to 10^6 validates these findings, showing that the boundary of failure admissibility, parameterized by the ratio λ̂_L(N), converges toward a global structural ceiling. Furthermore, by leveraging the midpoint symmetry of Goldbach primes, the FMA approach yields a “Mirror Search” mechanism for distal primes that demonstrates superior discovery efficiency compared to sequential scanning methods guided by the Prime Number Theorem. The analysis also reveals that the failure state (PP(N) = 0) precipitates an information-theoretic paradox: it implies that the global prime counting function π(2N) can be fully reconstructed from the local modular geometry of a subset of composites, violating the established algorithmic irreducibility of the prime sequence.

1. Introduction

The Binary Goldbach Conjecture asserts that every even integer 2N > 4 can be expressed as the sum of two primes. While the conjecture has been verified computationally for N up to 4·10^18 [1], a structural proof remains elusive. Historically, analytic approaches such as the Circle Method and Sieve Theory have faced significant theoretical barriers, most notably the "Parity Problem," which limits the ability to distinguish between primes and products of two primes [2,3]. Furthermore, while probabilistic models like the Hardy-Littlewood heuristics [4] provide strong empirical evidence, they lack the deterministic necessity required to prove the non-existence of a failure state.
This work applies a Failure Mode Analysis (FMA) framework to the problem, investigating the structural conditions required for a “Failure State”, defined as a state where the count of prime partitions (or pairs) is zero, i.e., PP(N) = 0. By re-framing the conjecture as a resource allocation problem, we demonstrate that the existence of prime-prime pairs is not merely probable, but a deterministic consequence of inventory conservation.
The analysis focuses on the transition from the midpoint N to the partition space of 2N. The fundamental data structure used is the Left-Right Partition Table (LRPT) [5], which maps the arithmetic inventory of primes and composites onto the row space of odd partitions. The framework demonstrates that the conservation of arithmetic inventory imposes strict lower bounds on PP(N), categorizing the problem into tiered inadmissible failure states. This research builds upon prior findings regarding additive pairings in the context of the bGC and, in particular, the structural implications of primes, coprimes and composites within such partitions [6,7,8].
The remainder of this paper is organized as follows: Section 2 establishes the foundational arithmetic definitions and partition classifications. Section 3 derives the Structural Identities that govern the conservation of elements within the LRPT. Section 4 presents the tiered Failure Mode Analysis, identifying the specific conditions under which a failure state becomes inadmissible. Section 5 analyzes the asymptotic behavior of the structural metrics, followed by numerical validation in Section 6. Finally, the Appendices discuss the efficiency of “Mirror Search” algorithms (Appendix A) and the information-theoretic implications of the failure state (Appendix B).

2. Definitions

This section establishes the foundational arithmetic parameters, partition classifications, and structural functions required to construct the Failure Mode Analysis (FMA) framework and the Left-Right Partition Table (LRPT).

2.1. Fundamental Parameters

Definition 1. 
Let N ≥ 5 be an integer serving as the midpoint. An odd partition of 2N is defined as a pair of odd integers (x, y) such that:
x + y = 2N,  3 ≤ x ≤ y ≤ 2N − 3
The analysis is restricted strictly to odd integers.
Definition 2.
The total number of odd partition rows for a given N is denoted by Rows(N):
Rows(N) = ⌊(N − 1)/2⌋
This integer corresponds to the number of rows in the Left-Right Partition Table (LRPT) for the midpoint N.
Definition 3.
The Top Right Entry (TRE) is the largest integer in the partition space:
TRE(N) = 2N − 3
Definition 4.
The Radius R(x, y) of a partition (x, y) represents its distance from the midpoint N:
R(x, y) = (y − x)/2
Since x and y are odd integers with x ≤ y, the radius is always a non-negative integer.
Definition 5.
The Minimum Distinct Radius, MDR(N) > 0, is defined as the minimum radius among all distinct Prime-Prime (PP) partitions (x, y) of 2N:
MDR(N) = min{ R(x, y) : (x, y) ∈ PP(N) }
where 3 ≤ x < y ≤ 2N − 3. Note that if N is prime, the partition (N, N) is not considered, since (N, N) is not a distinct prime-prime partition.
Definition 6.
The Maximum Radius, MXR(N) ≥ 0, is defined as the maximum radius among all Prime-Prime (PP) partitions (x, y) of 2N:
MXR(N) = max{ R(x, y) : (x, y) ∈ PP(N) }
where 3 ≤ x ≤ y ≤ 2N − 3.

2.2. Partition Classification

Definition 7.
Partitions (x, y) are classified based on the primality of their components:
  • PP: Prime-Prime (Goldbach pairs).
  • PC: Prime-Composite.
  • CP: Composite-Prime.
  • CC: Composite-Composite.
The symbols PP(N), PC(N), CP(N), and CC(N) denote the counts of these partitions in the LRPT for midpoint N.
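To make these classifications concrete, the LRPT and its row classes can be enumerated directly from Definitions 1-7. The following is a minimal Python sketch (the names `lrpt` and `classify` are ours, not from the paper), using plain trial division, which is adequate for small midpoints:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test (sufficient for small n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def lrpt(N: int):
    """LRPT rows for midpoint N: odd pairs (x, y) with x + y = 2N, 3 <= x <= y."""
    return [(x, 2 * N - x) for x in range(3, N + 1, 2)]

def classify(N: int):
    """Count the PP, PC, CP and CC rows of the LRPT for midpoint N."""
    counts = {"PP": 0, "PC": 0, "CP": 0, "CC": 0}
    for x, y in lrpt(N):
        key = ("P" if is_prime(x) else "C") + ("P" if is_prime(y) else "C")
        counts[key] += 1
    return counts
```

For N = 9 (2N = 18) the rows are (3,15), (5,13), (7,11), (9,9), classified PC, PP, PP, CC, so PP(9) = 2 and Rows(9) = 4.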

2.3. Indicator Functions

Definition 8.
The following indicator functions define the state of the midpoint N:
1_p(N) = 1 if N is prime, 0 otherwise;  1_oc(N) = 1 if N is an odd composite, 0 otherwise.

2.4. Inventory and Arithmetic Functions

Definition 9.
The Total Prime Inventory, TPI(N), is the count of odd primes available in the partition interval:
TPI(N) = π(2N − 3) − 1
Definition 10.
The prime factor counting functions are defined as:
  • ω(n): the number of distinct odd prime factors of n.
  • Ω(n): the total number of odd prime factors of n (with multiplicity).

2.5. Composite Subsets

Definition 11.
The set of Composite-Composite partitions CC(N) is decomposed into two disjoint sets based on the greatest common divisor (gcd) of the left component x and the midpoint N:
  • Derived Composite Partitions (Ψ_CC):
    Ψ_CC(N) = |{ (x, y) ∈ CC(N) : gcd(x, N) > 1 }|
  • Primitive Composite Partitions (Ψ̄_CC):
    Ψ̄_CC(N) = |{ (x, y) ∈ CC(N) : gcd(x, N) = 1 }|
Definition 12.
We define the complexity bounds for the Primitive Composites Ψ̄_CC(N):
  • The Theoretical Upper Bound, k(N), is derived from the magnitude constraint 3^k ≤ 2N − 3:
    k(N) = ⌊ log(2N − 3) / log 3 ⌋
  • The Actual Maximum Complexity, k_max(N), is the maximum number of distinct prime factors observed within the set:
    k_max(N) = max{ max(ω(x), ω(y)) : (x, y) ∈ Ψ̄_CC(N) }
It follows structurally that k_max(N) ≤ k(N).
Definition 13.
The Primitive Composites set Ψ̄_CC(N) is partitioned into two disjoint subsets based on the complexity of the components relative to a parameter k ≥ 1:
  • Primitive Composite Partitions with Minimum Complexity ≤ k (L_k):
    L_k(N) = |{ (x, y) ∈ Ψ̄_CC(N) : min(ω(x), ω(y)) ≤ k }|
  • Primitive Composite Partitions with Minimum Complexity > k (H_k):
    H_k(N) = |{ (x, y) ∈ Ψ̄_CC(N) : min(ω(x), ω(y)) > k }|
Structural Partition Identity:
Ψ̄_CC(N) = L_k(N) + H_k(N)
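The decomposition of CC(N) into derived and primitive pairs, and the L_k/H_k split by minimum complexity, follow mechanically from Definitions 10-13. A Python sketch (helper names are ours; ω counts distinct odd prime factors as in Definition 10):

```python
from math import gcd

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def omega(n: int) -> int:
    """Number of distinct odd prime factors of n (Definition 10)."""
    while n % 2 == 0:
        n //= 2
    count, d = 0, 3
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 2
    return count + (1 if n > 1 else 0)

def split_cc(N: int):
    """Split the CC rows into (derived, primitive): gcd(x, N) > 1 vs = 1."""
    cc = [(x, 2 * N - x) for x in range(3, N + 1, 2)
          if not is_prime(x) and not is_prime(2 * N - x)]
    derived = [p for p in cc if gcd(p[0], N) > 1]
    primitive = [p for p in cc if gcd(p[0], N) == 1]
    return derived, primitive

def L_k(N: int, k: int) -> int:
    """Active primitive composites with min(omega(x), omega(y)) <= k."""
    return sum(1 for x, y in split_cc(N)[1] if min(omega(x), omega(y)) <= k)
```

For N = 17 the single CC row (9, 25) has gcd(9, 17) = 1 and min(ω(9), ω(25)) = 1, so Ψ̄_CC(17) = 1, L_1(17) = 1 and H_1(17) = 0.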

2.6. Auxiliary Counts

Definition 14.
To support the Structural Identities, the following counts are defined:
  • PP′(N): count of PP partitions excluding the midpoint (N, N).
  • CC′(N): count of CC partitions excluding the midpoint (N, N).
  • C_comp(N): count of distinct odd composite integers in the LRPT.
  • C_prime(N): count of distinct odd prime integers in the LRPT.

2.7. Tiered FMA Criteria

Definition 15.
The tiered FMA criteria are defined as follows:
RHS1(N) = Rows(N) − TPI(N) − 1_p(N)
RHS2(N) = RHS1(N) − Ψ_CC(N)
RHS3(N, k) = RHS2(N) − L_k(N)

3. Structural Identities

The following identities govern the conservation of elements within the LRPT.
Partition Summation:
PP(N) + PC(N) + CP(N) + CC(N) = Rows(N)  (SI1)
Distinct Element Counting:
C_comp(N) = CP(N) + PC(N) + 2·CC′(N) + 1_oc(N)  (SI2)
C_prime(N) = CP(N) + PC(N) + 2·PP′(N) + 1_p(N)  (SI3)
Midpoint Decomposition:
CC(N) = CC′(N) + 1_oc(N)  (SI4)
PP(N) = PP′(N) + 1_p(N)  (SI5)
Inventory Identities:
C_prime(N) = TPI(N)  (SI6)
C_comp(N) + C_prime(N) = N − 2  (SI7)
TPI(N) = 2·PP(N) + PC(N) + CP(N) − 1_p(N)  (SI8)
Structural Conservation:
CC(N) = Rows(N) + PP(N) − TPI(N) − 1_p(N)  (SI9)
Tiered Decompositions:
CC(N) = Ψ_CC(N) + Ψ̄_CC(N)  (SI10)
Ψ̄_CC(N) − PP(N) = RHS2(N)  (SI11)
Ψ̄_CC(N) = L_k(N) + H_k(N)  (SI12)
H_k(N) − PP(N) = RHS3(N, k)  (SI13)
Remark 1.
The identities SI1 through SI9 establish the LRPT as a closed arithmetic system. The Structural Conservation Law (SI9) demonstrates that the count of prime pairs PP(N) is not a probabilistic occurrence but a deterministic residual resulting from the subtraction of the composite inventory and the prime inventory from the total row space.
Remark 2.
Identity SI10 provides the critical separation between “Fixed” and “Free” arithmetic constraints. The Derived Composites Ψ_CC(N) are structurally forced by the existing prime factors of N (Tier II), whereas the Primitive Composites Ψ̄_CC(N) represent the intrinsic free space available to absorb the remaining arithmetic criterion (Tier III).
Remark 3.
The progression from SI11 to SI13 formalizes the sufficiency conditions for the Goldbach Conjecture. Proving PP(N) > 0 is structurally equivalent to proving that the Active Primitive Composites L_k(N) are sufficient to fully saturate the arithmetic criterion RHS2(N). This forces the High-Complexity Remainder H_k(N) into a negative state (RHS3 < 0), necessitating the existence of prime pairs to balance the structural identity.
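The closed-system character of SI1-SI9 can be confirmed by brute force. The Python sketch below (our helper names) recomputes both sides of SI1 and SI9 directly from the partition table:

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def counts(N: int):
    """PP, PC, CP, CC counts over the LRPT rows for midpoint N."""
    c = {"PP": 0, "PC": 0, "CP": 0, "CC": 0}
    for x in range(3, N + 1, 2):
        y = 2 * N - x
        c[("P" if is_prime(x) else "C") + ("P" if is_prime(y) else "C")] += 1
    return c

def TPI(N: int) -> int:
    """Total Prime Inventory: odd primes in [3, 2N - 3], i.e. pi(2N - 3) - 1."""
    return sum(1 for m in range(3, 2 * N - 2, 2) if is_prime(m))

def conservation_holds(N: int) -> bool:
    """Check SI1 (row balance) and SI9 (CC = Rows + PP - TPI - 1_p)."""
    c = counts(N)
    rows = (N - 1) // 2
    one_p = 1 if is_prime(N) else 0
    return (sum(c.values()) == rows
            and c["CC"] == rows + c["PP"] - TPI(N) - one_p)
```

Rearranged, SI9 reads PP(N) = CC(N) − Rows(N) + TPI(N) + 1_p(N), which is exactly the deterministic-residual statement of Remark 1.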

4. Structured Failure Mode Analysis

The Failure Mode Analysis (FMA) determines the conditions under which the failure state PP(N) = 0 is mathematically impossible (inadmissible). From the Structural Identities derived in Section 3, a hierarchy of inadmissible failure states is defined, based on the balance between the arithmetic inventory (Supply) and the row space capacity (Demand).

4.1. Tier I—FMA: Prime Saturation

Tier I analyzes the arithmetic balance defined by the Structural Conservation Law (SI9). It represents a state of Arithmetic Saturation, in which the prime inventory exceeds the composite inventory and, as a consequence, some primes are structurally forced to form PP pairs.
Theorem 1.
For any midpoint N, if the first arithmetic deficit RHS1(N) is negative, then the count of prime pairs PP(N) must be strictly positive:
RHS1(N) < 0 ⟹ PP(N) > 0
Proof.
The derivation begins with the Structural Conservation Law (SI9), which relates the partition counts to the arithmetic inventory:
CC(N) = Rows(N) + PP(N) − TPI(N) − 1_p(N)
By Definition 15, the Tier I deficit is defined as RHS1(N) = Rows(N) − TPI(N) − 1_p(N). Substituting this definition into SI9 yields:
CC(N) = PP(N) + RHS1(N)
Rearranging the terms to compare the partition types:
CC(N) − PP(N) = RHS1(N)
If the arithmetic deficit is negative (RHS1(N) < 0), it strictly implies:
CC(N) − PP(N) < 0 ⟹ CC(N) < PP(N)
Since the count of composite pairs is non-negative (CC(N) ≥ 0), the inequality 0 ≤ CC(N) < PP(N) necessitates that PP(N) ≥ 1. □
This result shows how the analysis and findings discussed in Section 5.2 of [5] fit into the general, systemic FMA framework presented here.
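Theorem 1 is easy to exercise numerically. The sketch below (illustrative Python names) computes RHS1 and the PP count and checks the implication on small midpoints:

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def RHS1(N: int) -> int:
    """Tier I deficit: Rows(N) - TPI(N) - 1_p(N)."""
    tpi = sum(1 for m in range(3, 2 * N - 2, 2) if is_prime(m))
    return (N - 1) // 2 - tpi - (1 if is_prime(N) else 0)

def PP(N: int) -> int:
    """Number of Prime-Prime rows in the LRPT for midpoint N."""
    return sum(1 for x in range(3, N + 1, 2)
               if is_prime(x) and is_prime(2 * N - x))
```

For N = 5, RHS1(5) = 2 − 3 − 1 = −2 < 0 and indeed PP(5) = 2. The deficit turns positive for larger midpoints (e.g., RHS1(128) > 0), which is what motivates Tier II.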

4.2. Tier II—FMA: Structural Resistance

Tier I is sufficient for small N but fails as the density of primes decreases (TPI(N) < Rows(N)). Tier II refines the analysis by accounting for Derived Composites (Ψ_CC), which are composite pairs structurally forced by the prime factors of N. These pairs occupy row space but cannot contribute to PP(N).
Theorem 2.
For any midpoint N, if the second FMA criterion RHS2(N) is negative, then PP(N) > 0:
RHS2(N) < 0 ⟹ PP(N) > 0
Proof.
The proof employs the same saturation logic established in Tier I. We begin with the rearranged Structural Conservation Law:
CC(N) − PP(N) = RHS1(N)
Substituting the decomposition CC(N) = Ψ_CC(N) + Ψ̄_CC(N) into the equation:
Ψ_CC(N) + Ψ̄_CC(N) − PP(N) = RHS1(N)
We isolate the relationship between the Primitive Composites and the prime pairs by subtracting Ψ_CC(N) from both sides:
Ψ̄_CC(N) − PP(N) = RHS1(N) − Ψ_CC(N)
By Definition 15, the right-hand side is exactly the Tier II criterion RHS2(N). If this value is negative (RHS2(N) < 0), the equation dictates:
Ψ̄_CC(N) − PP(N) < 0 ⟹ Ψ̄_CC(N) < PP(N)
Since the count of Primitive Composites is non-negative (Ψ̄_CC(N) ≥ 0), the strict inequality Ψ̄_CC(N) < PP(N) forces the existence of at least one prime pair (PP(N) ≥ 1). □
Lemma 1
(Estimation Formula Ψ_Est). The Structural Resistance Ψ_CC(N) is approximated by the estimator function Ψ_Est(N), which is derived using the Euler product formula over the odd domain:
Ψ_Est(N) = (N/2) · ( 1 − ∏_{p | N, p > 2} (1 − 1/p) ) − ω(N)
Proof.
The derivation is based on the modular properties of the Left-Right Partition Table (LRPT). The total number of odd integer slots in the interval [3, N] is represented by the scalar N/2. The density of integers within this interval that share a common factor with N is calculated using the complement of the Euler product function. Let P(N) be the set of distinct odd prime factors of the midpoint. The proportion of integers coprime to N is given by ∏_{p ∈ P(N)} (1 − 1/p). Consequently, the proportion of integers sharing a factor is 1 − ∏_{p ∈ P(N)} (1 − 1/p).
This density is applied to the interval size to estimate the total count of multiples. However, this set includes the prime factors themselves. Since x = p results in a Prime-Composite (PC) pair (not a CC pair), these specific rows must be excluded. Thus, the count of distinct prime factors, ω(N), is subtracted from the total. □
Remark 4
(Numerical Validation of Ψ_Est). To assess the precision of the estimator Ψ_Est(N) relative to the exact Structural Resistance Ψ_CC(N), a computational analysis was conducted over the interval 5 ≤ N ≤ 10,000. The absolute error |Ψ_Est − Ψ_CC| was evaluated for each transition point. The results indicate that the approximation provides a highly accurate fit for the structurally enforced count. Across the entire domain of tested integers, the absolute error was found to be bounded by 0.5. This empirical evidence confirms that Ψ_Est(N) serves as a reliable analytical predictor for the count of composite pairs enforced by the prime factors of the partition sum.
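This validation is straightforward to reproduce. The Python sketch below assumes the estimator reads Ψ_Est(N) = (N/2)·(1 − ∏_{p|N, p>2}(1 − 1/p)) − ω(N), per Lemma 1, and computes Ψ_CC(N) exactly from the LRPT (helper names are ours):

```python
from math import gcd

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def odd_prime_factors(n: int):
    """Set of distinct odd prime factors of n."""
    while n % 2 == 0:
        n //= 2
    facs, d = set(), 3
    while d * d <= n:
        if n % d == 0:
            facs.add(d)
            while n % d == 0:
                n //= d
        d += 2
    if n > 1:
        facs.add(n)
    return facs

def psi_cc(N: int) -> int:
    """Exact Psi_CC(N): CC rows whose left entry shares a factor with N."""
    return sum(1 for x in range(3, N + 1, 2)
               if gcd(x, N) > 1
               and not is_prime(x) and not is_prime(2 * N - x))

def psi_est(N: int) -> float:
    """Estimator (N/2) * (1 - prod over odd p | N of (1 - 1/p)) - omega(N)."""
    facs = odd_prime_factors(N)
    prod = 1.0
    for p in facs:
        prod *= 1 - 1 / p
    return (N / 2) * (1 - prod) - len(facs)
```

In this sketch's test range the error vanishes for even midpoints and stays within 0.5 in magnitude otherwise.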

4.3. Tier III—FMA: Critical Threshold for Inadmissibility of Failure

The identification of the specific conditions required to render the failure state PP(N) = 0 inadmissible is achieved by analyzing the saturation of the partition-table row space. This is evaluated through the discrete count of active primitive composites, L_k, which is subsequently modeled as a normalized ratio, λ_L.
Definition 16.
The Critical Active Count, L_{k,crit}(N), is defined as the maximum quantity of active primitive composites admissible in the partition table before a prime-prime solution is structurally forced:
L_{k,crit}(N) = RHS2(N)
When the active inventory L_k(N) reaches this threshold, the available capacity for composite-composite partitions is exhausted, forcing the formation of prime-prime pairs in order to satisfy the structural conservation constraints of the LRPT elements.
Definition 17
(Critical Complexity k_crit(N)). The Critical Complexity k_crit(N) is defined as the minimum value required to satisfy the structural saturation condition established in Theorem 3. It identifies the specific complexity depth at which the cumulative active inventory L_k(N) exhausts the critical capacity L_{k,crit}(N):
k_crit(N) = min{ k ∈ ℕ : L_k(N) ≥ L_{k,crit}(N) }
The existence of a finite k_crit(N) ≤ k_max(N) is sufficient to render the failure state PP(N) = 0 structurally inadmissible.
Theorem 3.
For any midpoint N, the failure state PP(N) = 0 is structurally inadmissible if the active primitive inventory at complexity k satisfies:
L_k(N) ≥ L_{k,crit}(N)
under the condition that if L_k(N) = L_{k,crit}(N), the high-complexity reservoir H_k(N) must be strictly positive.
Proof.
Identity SI13 establishes the following relation:
H_k(N) − PP(N) = RHS2(N) − L_k(N)  (8)
Substitution of the definition of L_{k,crit}(N) into (8) results in the expression:
H_k(N) − PP(N) = L_{k,crit}(N) − L_k(N)
In the case where L_k(N) > L_{k,crit}(N), the right-hand side is strictly negative:
H_k(N) − PP(N) < 0 ⟹ PP(N) > H_k(N) ≥ 0
In the case where L_k(N) = L_{k,crit}(N), the identity simplifies to:
PP(N) = H_k(N)
The condition H_k(N) > 0 then forces PP(N) > 0. In both scenarios, the failure hypothesis PP(N) = 0 is contradicted by the arithmetic conservation of the partition table. □
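Both the identity SI13 and the critical complexity k_crit of Definition 17 can be computed directly. A Python sketch (our names; `structure` gathers PP, Ψ_CC, the primitive complexities and RHS2 in a single pass over the LRPT):

```python
from math import gcd

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def omega(n: int) -> int:
    """Distinct odd prime factors of n."""
    while n % 2 == 0:
        n //= 2
    count, d = 0, 3
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 2
    return count + (1 if n > 1 else 0)

def structure(N: int):
    """Return (PP, Psi_CC, primitive min-complexities, RHS2) for midpoint N."""
    pp = psi = 0
    prim = []
    for x in range(3, N + 1, 2):
        y = 2 * N - x
        px, py = is_prime(x), is_prime(y)
        if px and py:
            pp += 1
        elif not px and not py:
            if gcd(x, N) > 1:
                psi += 1
            else:
                prim.append(min(omega(x), omega(y)))
    tpi = sum(1 for m in range(3, 2 * N - 2, 2) if is_prime(m))
    rhs2 = (N - 1) // 2 - tpi - (1 if is_prime(N) else 0) - psi
    return pp, psi, prim, rhs2

def k_crit(N: int) -> int:
    """Smallest k with L_k(N) >= RHS2(N); returns 0 when RHS2 <= 0."""
    _, _, prim, rhs2 = structure(N)
    k = 0
    while sum(1 for m in prim if m <= k) < rhs2:
        k += 1
    return k
```

Since Ψ̄_CC(N) = PP(N) + RHS2(N) by SI11 and PP(N) ≥ 0, the active inventory always reaches RHS2(N) by k_max, so the search terminates.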
To facilitate the analysis across varying magnitudes of N, a normalized metric is adopted. This transformation converts the discrete analysis above into a density threshold requirement relative to the total composite inventory.
We identify the critical threshold for the inadmissibility of a failure state (PP(N) = 0) by analyzing the arithmetic gap between the composite inventory and the partition table capacity.
Definition 18
(Low Complexity Composite Ratio λ_L). Let L_k be the cardinality of the active odd composite inventory (partition rows) at iteration k. The active ratio λ_L is defined as the scalar relative to the total composite element inventory C_comp:
L_k = λ_L · C_comp
where λ_L ∈ [0, 1].
Remark 5.
In the context of the Left-Right Partition Table (LRPT), the structural requirement to maintain a failure state (zero prime-prime pairs) dictates that each row must be occupied by at least one composite element. Since each row contains two distinct integer slots, the threshold for failure admissibility is defined by the saturation of the row space:
λ_{L,threshold} = Rows(N) / (2 · Rows(N)) = 0.5
As k → k_max, the numerical ratio λ_L approaches 1 as L_k exhausts C_comp. However, the Goldbach solution is established as a deterministic necessity whenever λ_L > 0.5, representing a state of over-saturation where the composite inventory exceeds the available row count, rendering a failure mode inadmissible.
Definition 19
(The Saturation Limit ρ). The structural limit ρ is defined as the ratio of the primitive capacity Ψ̄_CC(N) to the total composite inventory C_comp(N) as k → k_max:
ρ = lim_{k→k_max} λ_L = Ψ̄_CC(N) / C_comp(N)
Definition 20
(High Complexity Composite Ratio λ_H). Let H_k be the cardinality of the inactive odd composite inventory (partition rows) at iteration k. The inactive ratio λ_H is defined as:
H_k = λ_H · C_comp
where λ_H ∈ [0, 0.5]. As the complexity parameter k approaches its maximum value k_max, the inactive reservoir is exhausted, satisfying the limit:
lim_{k→k_max} λ_H = 0
Remark 6
(Derivation of the Saturation Limit ρ). The structural limit ρ is derived by evaluating the active ratio λ_L at the maximum complexity state k = k_max(N), where the high-complexity reservoir is exhausted (H_{k_max} = 0).
The definition of the ratio is:
λ_L = L_{k_max}(N) / C_comp(N)
From the Distinct Element Counting identity (SI2) and the Midpoint Decomposition (SI4), the total composite inventory is:
C_comp(N) = 2·CC(N) + PC(N) + CP(N) − 1_oc(N)
Substituting the decomposition of composite pairs CC(N) = Ψ_CC(N) + L_{k_max}(N) (since H_{k_max} = 0) into the denominator yields the explicit saturation formula:
ρ = L_{k_max} / ( 2·L_{k_max} + 2·Ψ_CC + PC + CP − 1_oc(N) )
For midpoints with no odd prime factors (e.g., N = 2^m), the derived composite count vanishes (Ψ_CC = 0). Since we cannot reasonably assume that PC(N) + CP(N) = 0, a scenario where ρ > 0.5 is unrealistic. As N → ∞, the primitive composite inventory L_{k_max} dominates the prime-based counts, causing ρ to approach 0.5 arbitrarily closely from below (ρ → 0.5⁻).
To identify the critical threshold for the inadmissibility of a failure state (PP(N) = 0), we utilize the identity derived from SI13:
λ_H·C_comp − PP(N) = RHS1(N) − Ψ_CC(N) − λ_L·C_comp
Substituting the structural definition RHS1(N) = (N − 1)/2 − TPI − 1_p(N) and the inventory identity C_comp = N − TPI − 2 (from SI6 and SI7) yields:
λ_H·C_comp − PP(N) = (N − 1)/2 − TPI − 1_p(N) − Ψ_CC(N) − λ_L·(N − TPI − 2)
Grouping terms by N and TPI:
λ_H·C_comp − PP(N) = N·(1/2 − λ_L) − TPI·(1 − λ_L) − (1/2 + 1_p(N) − 2λ_L) − Ψ_CC(N)  (14)
Definition 21
(Critical Active Threshold λ_L*). The critical active threshold λ_L*(N) is defined as the specific value of the active ratio λ_L that satisfies the zero-balance condition of the structural deficit equation. Setting the right-hand side of Equation (14) to zero and solving for λ_L, we obtain:
λ_L*(N) = ( N/2 − TPI(N) − Ψ_CC(N) − (1/2 + 1_p(N)) ) / ( N − TPI(N) − 2 )  (15)
Structurally, this threshold represents the exact inventory density required to saturate the partition table’s remaining capacity after accounting for prime inventory and derived composite constraints. If the active ratio λ_L > λ_L*(N), the failure state PP(N) = 0 becomes algebraically impossible.
These relationships are formally established in the following theorem.
Theorem 4
(Structural Inadmissibility of the Failure State). For any even integer 2N, the failure state PP(N) = 0 is structurally inadmissible whenever the active ratio λ_L strictly exceeds the critical threshold:
λ_L > λ_L*(N)
Under this condition, the structural deficit becomes strictly negative, which algebraically forces the existence of Prime-Prime pairs (PP(N) > 0) to satisfy the partition conservation laws, even in the limiting case where the inactive reservoir is exhausted (H_k = 0).
Proof.
We analyze the structural identity derived in Equation (14):
λ_H·C_comp − PP(N) = N·(1/2 − λ_L) − Ψ_CC(N) − TPI·(1 − λ_L) − δ(N)
where δ(N) = 1/2 + 1_p(N) − 2λ_L collects the bounded lower-order terms. By Definition 21, the critical threshold λ_L*(N) is the root at which the right-hand side (RHS) equals zero. The RHS is strictly decreasing with respect to λ_L, since its derivative with respect to λ_L is −(N − TPI − 2) < 0. Consequently, applying the condition λ_L > λ_L*(N) renders the RHS strictly negative:
λ_H·C_comp − PP(N) < 0
Rearranging the inequality yields:
PP(N) > λ_H·C_comp
Since the inactive inventory is non-negative (λ_H·C_comp = H_k ≥ 0), strictly positive solutions for prime pairs are structurally forced:
PP(N) > 0
This completes the proof. □
Remark 7
(Scaling Dominance of Prime Inventory). It is important to note that the derived composite constraint Ψ_CC(N) does not threaten the strict negativity of the structural deficit. The term Ψ_CC(N) scales with the distinct prime factors of N (specifically, ω(N)), which grows extremely slowly (e.g., Ψ_CC ≪ N). In contrast, the prime inventory term TPI(N) scales asymptotically as N / ln N. Consequently, for all large N, the magnitude of the prime inventory deficit dominates the modular constraints:
TPI(N) ≫ Ψ_CC(N)
Remark 8
(Global Ceiling vs. Operational Threshold). A critical distinction exists between the global structural ceiling and the local operational threshold. The condition:
λ_L ≥ 1/2
represents the absolute saturation point where the row space is exhausted regardless of prime inventory magnitude. As shown, this limit is practically unreachable. However, for finite midpoints, the state of forced inadmissibility is governed by the critical active threshold. Omitting the negligible lower-order terms (Ψ_CC, δ), we obtain the operational approximation:
λ_L*(N) ≈ ( N/2 − TPI(N) ) / ( N − TPI(N) ) < 1/2
Due to the presence of TPI(N), this value is strictly lower than the asymptotic limit of 0.5. Consequently, the condition for inadmissibility is satisfied at lower composite densities, rendering the failure state PP(N) = 0 inadmissible before the global capacity limit is reached.
This is used to define a conservative boundary estimate for λ_L in Section 4.3.1 and is tested numerically in Section 6.
Remark 9
(Necessity of Positive Capacity). The failure state PP(N) = 0 requires the right-hand side (RHS) of Equation (14) to be non-negative. Because the prime inventory term TPI·(1 − λ_L) imposes a strictly negative load for all λ_L < 1, the structural capacity term N·(1/2 − λ_L) must strictly exceed this deficit to sustain the failure state. This implies that structural admissibility becomes infeasible well before the ratio reaches 0.5. As k → k_max, λ_L increases and crosses the operational threshold λ_L*(N) < 0.5, rendering the failure state structurally inadmissible.

4.3.1. The Conservative Failure Boundary λ̂_L(N)

A conservative boundary for failure inadmissibility is established. We utilize the critical active threshold λ_L*(N) defined in Definition 21. To obtain a universal sufficiency condition that depends solely on the prime inventory, we define the estimator λ̂_L(N) by imposing the “blank slate” condition, setting the derived composite term Ψ_CC(N) to zero in Equation (15).
Definition 22
(Conservative Boundary Estimator). The Conservative Boundary Estimator λ̂_L(N) is defined as the upper bound of the critical active threshold λ_L*(N):
λ̂_L(N) = ( N/2 − TPI(N) ) / ( N − TPI(N) )
Structurally, since Ψ_CC(N) ≥ 0 and 1/2 + 1_p(N) > 0, it follows that λ_L*(N) ≤ λ̂_L(N). Therefore, satisfying the condition λ_L ≥ λ̂_L(N) guarantees structural inadmissibility for all midpoints N, regardless of their specific prime factors.
Definition 23
(Structural Safety Margin). The margin μ(N) is defined as the normalized difference between the global structural ceiling of 0.5 and the conservative boundary λ̂_L(N):
μ(N) = ( (0.5 − λ̂_L(N)) / 0.5 ) × 100%
The progression of the critical threshold is governed by the asymptotic density of the prime inventory. As N increases, the scaling of TPI(N) follows the Prime Number Theorem (TPI(N) ∼ 2N / ln 2N). In the limit N → ∞, the ratio TPI(N)/N approaches zero. Substituting this limit into the estimator demonstrates monotonic convergence toward the global structural ceiling:
lim_{N→∞} λ̂_L(N) = lim_{N→∞} ( 0.5 − TPI(N)/N ) / ( 1 − TPI(N)/N ) = 0.5
This convergence establishes that the structural requirement for the inadmissibility of a failure state remains bounded by a global constant. While the “occupancy pressure” in the LRPT from the prime inventory is most dominant at small N, the system remains structurally constrained at large N because the required active composite ratio never exceeds the theoretical ceiling of 0.5. This trend confirms that even under the conservative assumption of zero derived structural assistance (Ψ_CC = 0), the required ratio for saturation remains strictly bounded. The numerical evolution of λ̂_L(N) is presented in Section 6.
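The estimator and margin are inexpensive to compute exactly. A Python sketch (the helper names are ours):

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def TPI(N: int) -> int:
    """pi(2N - 3) - 1: odd primes available in the partition interval."""
    return sum(1 for m in range(3, 2 * N - 2, 2) if is_prime(m))

def lambda_hat(N: int) -> float:
    """Conservative boundary estimator (N/2 - TPI) / (N - TPI)."""
    t = TPI(N)
    return (N / 2 - t) / (N - t)

def margin(N: int) -> float:
    """Safety margin mu(N) in percent."""
    return (0.5 - lambda_hat(N)) / 0.5 * 100.0
```

For small midpoints the boundary is even negative (λ̂_L(10) = −0.25), i.e. the prime inventory alone already forbids failure; across increasing decades of N the values climb toward, but remain below, the 0.5 ceiling.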

4.4. Continuous Radical Complexity Model

The structural models previously discussed rely on partitioning the primitive composite inventory Ψ̄ into a discrete sequence of nested subsets, defined by the prime factor count of the partition summands. To resolve the granularity limitations of this discrete approach, which segments the inventory into discontinuous tiers, a continuous model based on the Radical Complexity Gradient is introduced to analyze the active inventory without discrete partitioning. For any partition (x, y) ∈ Ψ̄, the radical complexity γ_{x,y} is defined as the logarithmic mapping of the square-free product normalized by the midpoint:
γ_{x,y} = ln(rad(x·y)) / ln N
where rad(n) = ∏_{p|n} p. This maps the primitive composite inventory to the spectrum γ ∈ (0, 2], since the product of partition components satisfies rad(x·y) ≤ x·y ≤ N².
Definition 24
(Continuous Active Inventory). The continuous active inventory L(γ) is defined as the subset of primitive composites satisfying the complexity bound γ:
L(γ) = { (x, y) ∈ Ψ̄ : rad(x·y) < N^γ }
The cardinality |L(γ)| is a monotonically non-decreasing step function of γ.
Example 1.
Consider a partition (x, y) ∈ Ψ̄ for N = 10^4 where x = 3²·5² and y = 7²·11. The absolute product is x·y = 121,275. The radical is rad(x·y) = 3·5·7·11 = 1155. The complexity is:
γ_{x,y} = ln(1155) / ln(10^4) ≈ 0.76
This value γ_{x,y} < 1 places the partition in the active inventory L(γ) for all γ ≥ 0.76, despite the magnitude of x·y exceeding N.
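The example can be reproduced with a short radical computation (Python; `rad` and `gamma_xy` are our illustrative names):

```python
from math import log

def rad(n: int) -> int:
    """Square-free kernel of n: the product of its distinct prime factors."""
    r, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            r *= d
            while n % d == 0:
                n //= d
        d += 1
    return r * n if n > 1 else r

def gamma_xy(x: int, y: int, N: int) -> float:
    """Radical complexity ln(rad(x*y)) / ln(N)."""
    return log(rad(x * y)) / log(N)
```

Here gamma_xy(225, 539, 10**4) ≈ 0.766, matching the example above.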
To identify the saturation threshold, we define the Critical Radical Density γ_crit(N) as the infimum of the complexity parameter required to satisfy the structural deficit RHS2(N).
Definition 25
(Critical Radical Density).
γ_crit(N) = inf{ γ ∈ (0, 2] : |L(γ)| ≥ RHS2(N) }
At this critical value, the structural identities (SI4) and (SI13) imply:
|H(γ_crit)| ≤ PP(N)
Since |H(γ)| is strictly determined by the complement Ψ̄ ∖ L(γ), the existence of γ_crit < 2 implies |H(γ_crit)| ≥ 0. If γ_crit is strictly less than the maximum complexity of the set, the inequality |L(γ_crit)| ≥ RHS2(N) structurally forces PP(N) > 0.
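Computationally, γ_crit can be located by sorting the primitive pairs by their radical complexity. The sketch below (our names) uses the closure of L(γ), i.e. the condition γ_{x,y} ≤ γ, for simplicity, and works directly from the definitions above:

```python
from math import gcd, log

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def rad(n: int) -> int:
    """Square-free kernel of n."""
    r, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            r *= d
            while n % d == 0:
                n //= d
        d += 1
    return r * n if n > 1 else r

def rhs2_and_gammas(N: int):
    """Return RHS2(N) and the sorted complexities of the primitive CC pairs."""
    psi, gammas = 0, []
    for x in range(3, N + 1, 2):
        y = 2 * N - x
        if not is_prime(x) and not is_prime(y):
            if gcd(x, N) > 1:
                psi += 1
            else:
                gammas.append(log(rad(x * y)) / log(N))
    tpi = sum(1 for m in range(3, 2 * N - 2, 2) if is_prime(m))
    rhs2 = (N - 1) // 2 - tpi - (1 if is_prime(N) else 0) - psi
    return rhs2, sorted(gammas)

def gamma_crit(N: int) -> float:
    """Smallest gamma at which |L(gamma)| reaches RHS2(N) (0.0 if RHS2 <= 0)."""
    rhs2, gammas = rhs2_and_gammas(N)
    if rhs2 <= 0:
        return 0.0
    return gammas[rhs2 - 1]
```

The index gammas[RHS2 − 1] is always valid because SI11 gives RHS2(N) = Ψ̄_CC(N) − PP(N) ≤ Ψ̄_CC(N).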

4.5. Evolution of λ̂_L(N) and γ_crit for N = 10^m

The following table shows the values of the conservative boundary estimator λ̂_L(N) and the critical radical density γ_crit across a logarithmic progression of values of N.
The empirical values in Table 1 substantiate the theoretical derivation of the conservative boundary. The observed monotonic convergence of λ̂_L(N) toward 0.5 confirms that the structural saturation requirement remains strictly bounded below the global ceiling. Furthermore, the non-vanishing safety margin μ(N) provides numerical evidence that the partition space never reaches the theoretical limit required to permit a failure state. Additionally, the data indicate a monotonic decrease in γ_crit as N increases, consistent with the asymptotic behavior of the active composite ratio λ̂_L(N).

5. Asymptotic Trends of Key FMA Metrics

We analyze the asymptotic behavior of three structural FMA metrics as N → ∞: the growth rate of the critical complexity index k_crit(N), the decay rate of the normalized ratio MDR/N, and the convergence of the normalized ratio MXR/N. These derivations serve as a theoretical baseline for the numerical analysis in Section 6, where the empirical trajectories of k_crit(N), MDR/N, and MXR/N are benchmarked against their predicted asymptotic trends to validate the FMA structural convergence findings.

5.1. Estimation of the Asymptotic Growth Rate of \( k_{crit} \) as \( N \to \infty \)

The structural analysis in Section 4.3.1 established that the critical active threshold \( \hat{\lambda}_L(N) \) converges asymptotically to 0.5. This convergence implies that, to guarantee the inadmissibility of a failure state, the active inventory \( L_k \) must capture approximately half of the total composite population. Consequently, the critical complexity \( k_{crit}(N) \) represents the median number of distinct prime factors for integers in the interval \( [3, 2N] \).
We invoke the Hardy-Ramanujan Theorem [9], which establishes that the normal order of the number of distinct prime factors \( \omega(n) \) of an integer n is \( \ln \ln n \). Furthermore, the Erdős-Kac Theorem [10] proves that the distribution of \( \omega(n) \) is asymptotically Gaussian with mean \( \ln \ln N \) and variance \( \ln \ln N \).
For a Gaussian distribution, the median and the mean are asymptotically equivalent. Therefore, the complexity index k required to accumulate the first 50% of the composite inventory scales directly with the normal order:
\[ k_{crit}(N) \sim \ln \ln N \]
This scaling law provides the theoretical justification for the stability observed in the numerical data. In contrast, the theoretical upper bound for complexity derived from magnitude constraints is \( k(N) \approx \ln N / \ln 3 \). Since \( \ln \ln N \ll \ln N \) for all large N, this confirms the existence of a substantial combinatorial safety margin: the system reaches a state of forced inadmissibility using only a negligible fraction of the theoretically available complexity depth.
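The \( \ln \ln N \) scaling can be checked directly with a short sieve. The sketch below is an illustration added here, not part of the paper's pipeline: it tabulates \( \omega(n) \) for all integers up to a bound and compares the empirical median with \( \ln \ln N \).

```python
import math

def omega_counts(limit):
    """Sieve the number of distinct prime factors omega(n) for all n <= limit."""
    omega = [0] * (limit + 1)
    for p in range(2, limit + 1):
        if omega[p] == 0:  # no smaller prime divided p, so p is prime
            for multiple in range(p, limit + 1, p):
                omega[multiple] += 1
    return omega

def median_omega(limit):
    """Median of omega(n) over 2 <= n <= limit."""
    counts = sorted(omega_counts(limit)[2:])
    return counts[len(counts) // 2]

N = 10**5
# Median omega over [2, 2N] versus the normal order ln ln N
print(median_omega(2 * N), math.log(math.log(N)))
```

Because \( \omega(n) \) is integer-valued, the empirical median sits at the integer nearest the slowly growing \( \ln \ln N \), consistent with the scaling law above.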

5.2. Asymptotic Decay of \( MDR(N)/N \) as \( N \to \infty \)

The normalized Minimum Distinct Radius, defined as \( r(N) = MDR(N)/N \), indicates the proximity of the first prime-prime partition to the midpoint N. The scaling behavior is derived by treating the primality of the components x and y as independent events, consistent with the Hardy-Littlewood k-tuple conjecture [4].
According to the Prime Number Theorem, the local density of primes near N is approximately \( 1/\ln N \). The probability \( P_{PP} \) that a given partition row constitutes a prime-prime pair is modeled as:
\[ P_{PP} \approx \left( \frac{1}{\ln N} \right)^2 \]
Assuming a geometric distribution for the distance to the first success in the row space, the expected number of trials, and thus the expected radius, scales as the inverse of the probability:
\[ E[MDR(N)] \approx \frac{1}{P_{PP}} = (\ln N)^2 \]
Normalizing by the midpoint yields the asymptotic decay rate:
\[ r(N) = \frac{MDR(N)}{N} \approx \frac{(\ln N)^2}{N} \]
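As an illustrative check (not the paper's optimized pipeline), the sketch below computes \( MDR(N) \) by scanning radii outward from the midpoint and prints it next to the heuristic scale \( (\ln N)^2 \); `is_prime` is a simple trial-division helper introduced only for this example.

```python
import math

def is_prime(n):
    """Trial-division primality test; adequate for small illustrative ranges."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    for d in range(3, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True

def mdr(N):
    """Minimum Distinct Radius: smallest r >= 1 with N - r and N + r both prime."""
    for r in range(1, N - 2):
        if is_prime(N - r) and is_prime(N + r):
            return r
    return None  # would signal a Goldbach failure state

for N in (100, 1000, 10000):
    print(N, mdr(N), round(math.log(N) ** 2, 1))
```

The observed radii fluctuate row to row, but they remain of the same polylogarithmic order as \( (\ln N)^2 \), which is why the normalized ratio \( MDR(N)/N \) decays toward 0.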

5.3. Asymptotic Convergence of \( MXR(N)/N \) as \( N \to \infty \)

The normalized Maximum Radius, \( R(N) = MXR(N)/N \), is determined by the magnitude of the smallest prime \( p_{min} \) in the partition set \( PP(N) \). Unlike the MDR, which requires simultaneous primality of the variable components \( N \pm r \), the determination of \( MXR(N) \) fixes the left component to the sequence of small primes \( p_k \) and tests the primality of the complement \( 2N - p_k \).
Assuming the events are independent, the probability \( P_{hit} \) that the complement \( 2N - p_k \) is prime is given by the density near 2N:
\[ P_{hit} \approx \frac{1}{\ln(2N)} \]
The expected number of trials k required to observe the first valid partition follows a geometric distribution with expectation \( E[k] \approx 1/P_{hit} = \ln(2N) \). To find the magnitude of the smallest component \( p_{min} \), we approximate the k-th prime using the asymptotic law \( p_k \approx k \ln k \):
\[ E[p_{min}] \approx \ln(2N) \cdot \ln(\ln(2N)) \]
Substituting this expectation into the radius definition \( MXR(N) = N - p_{min} \) yields the asymptotic expression for the normalized maximum radius:
\[ \frac{MXR(N)}{N} \approx 1 - \frac{\ln(2N)\,\ln(\ln(2N))}{N} \]
This derivation confirms that the ratio converges to unity from below, with the "gap" from the edge scaling almost linearly with 1/N, modulated by a polylogarithmic factor.
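The anchor-and-mirror logic can be sketched directly. The helper below is illustrative only (`is_prime` is a basic trial-division test written for the example): it finds the smallest prime anchor p with \( 2N - p \) prime and returns \( MXR(N) = N - p \).

```python
import math

def is_prime(n):
    """Trial-division primality test; adequate for small illustrative ranges."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    for d in range(3, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True

def mxr(N):
    """Maximum Distinct Radius: N - p for the smallest prime p with 2N - p prime."""
    for p in range(3, N, 2):
        if is_prime(p) and is_prime(2 * N - p):
            return N - p
    return None  # would signal a Goldbach failure state

for N in (400, 1000, 10000):
    print(N, mxr(N) / N)
```

For these sample midpoints the smallest anchor is already \( p = 3 \), so the ratio sits essentially at the edge, matching the convergence to unity derived above.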

6. Numerical Analysis

The objective of this analysis is the empirical validation of the findings discussed in the previous sections. The FMA approach, in conjunction with the LRPT, its associated metrics, and its conservation laws, constitutes a comprehensive systemic testbed for the analysis and testing of hypotheses related to the Binary Goldbach Conjecture, or any other arithmetic system with a similar structure, columnar-type symmetry, and conservation laws.
The analysis of the structural characteristics of the odd partitions of 2N for \( N \in [5, 10^6] \) was implemented in the Google Colab multi-core Python environment. To manage long processing times, large data output volumes, and the risk of runtime timeouts, the coding architecture incorporated three primary safeguards: batched execution, persistent storage, and seamless resumption after mid-run interruptions. With these measures in place, and with the code optimized for a multi-core environment, the total execution time was approximately 3–4 days.
Data processing and graphing utilized three statistical techniques: (1) moving averages to smooth high-frequency fluctuations over large intervals of N; (2) data binning to compress the datasets for visualization while preserving the underlying trends; and (3) computational functions for indexing and categorization. These techniques are discussed in greater detail in the subsequent sections as they apply to each FMA measure.
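A minimal sketch of the first two techniques, assuming NumPy arrays of per-N metric values (the paper's actual pipeline code is not reproduced here):

```python
import numpy as np

def moving_average(values, window):
    """Trailing moving average used to damp high-frequency fluctuations."""
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode="valid")

def bin_average(values, bin_size):
    """Compress a series by averaging consecutive bins of `bin_size` rows."""
    usable = len(values) - len(values) % bin_size  # drop the ragged tail
    return values[:usable].reshape(-1, bin_size).mean(axis=1)

metric = np.arange(10, dtype=float)  # stand-in for a per-N metric series
print(moving_average(metric, 2))
print(bin_average(metric, 5))        # 10 points compressed to 2 bin means
```

Applying the moving average before binning, as done for Figure 5, smooths within-bin variance before compression; binning alone preserves the trend while reducing the point count by the bin size.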

6.1. \( k_{crit} \) and \( k(N) \) as \( N \to 10^6 \)

The numerical evolution of \( k_{crit}(N) \) and \( k(N) \) for \( N \in [5, 10^6] \) follows the theoretical predictions established in Section 4.3 and Section 5.1, as illustrated in Figure 1. The dashed line (left axis) denotes the asymptotic approximation of \( k_{crit} \) as \( N \to \infty \) from Equation (26), scaled by a factor of 1.2 with an offset of 0.3 to optimize the fit for \( N \geq 5000 \).
The graph also displays the average \( k_{crit} \) (orange line, left axis), calculated using a 50-row binning window that reduces the dataset from \( 10^6 \) to \( 2 \times 10^4 \) points. The persistent high-frequency fluctuations observed in the larger values of \( k_{crit} \) are characteristic of the variability in the structural tension required to balance the primitive composite inventory across different N. Further analysis is required to link \( k_{crit} \) to the characteristics of the Factorial Spectrum of N, specifically the number, multiplicity, relative size, and variance of its prime factors. This future research direction necessitates a deeper statistical analysis of partition data alongside the factorization characteristics of N. Such analysis could provide insights into the underlying causes of the varying tension required to satisfy \( k_{crit} \), beyond the magnitude of N itself.
The solid line represents \( k(N) \) (blue line, right axis) as defined in Equation (1). It demonstrates a margin of approximately 3–4 times between \( k_{crit} \) and \( k(N) \); specifically, \( k_{crit} \approx (25\text{–}30\%)\, k(N) \) for larger N. This implies that the required balancing of the primitive composite inventory is consistently achieved at a numerical complexity level significantly lower than the theoretical worst-case bound.

6.2. \( MDR(N)/N \) and \( MXR(N)/N \) as \( N \to 10^6 \)

The ratios \( MDR(N)/N \) and \( MXR(N)/N \) are plotted after performing 50-row binning and calculating the average value for each bin. Each ratio converges numerically and asymptotically to the value predicted by the analysis in Section 5.2 and Section 5.3.
Figure 2 illustrates the convergence of \( MDR(N)/N \) to 0 and Figure 3 illustrates the convergence of \( MXR(N)/N \) to 1 as N increases. This trend indicates a persistent structural pattern in which at least one symmetric Prime-Prime partition exists in the vicinity of the midpoint N (small radius) and of the endpoint \( 2N - 3 \) (large radius). Consequently, an optimized Mirror Search strategy for primes in \( (N, 2N) \) is most efficient when concentrated around these two loci. This optimization is analyzed in Appendix A.
Furthermore, the data demonstrate that \( MDR(N)/N < 0.5 \) for larger N, substantiating the existence of distinct PP partitions in the lower half of the LRPT (closer to the midpoint). In this context, the case of \( N = 19 \), where the sole distinct PP partition (7, 31) resides in the upper half of the table, appears to be a unique anomaly. Conversely, the MXR trajectory suggests a persistent pattern regarding the existence of at least one PP partition in the upper half of the LRPT.
The dashed line in Figure 2 represents the asymptotic approximation of \( MDR(N)/N \) as \( N \to \infty \), derived in Equation (27). A scalar factor of 0.45 was applied to optimize the fit to the observed data for \( N \geq 10^2 \).
Similarly, the ratio \( MXR(N)/N \) is plotted in Figure 3. The dashed line represents the asymptotic approximation of \( MXR(N)/N \) as \( N \to \infty \), derived in Equation (28), with a scalar factor of 0.6 applied to optimize the fit for \( N \geq 10^3 \).

6.3. \( \lambda_L(N) \) and \( \hat{\lambda}_L(N) \) as \( N \to 10^6 \)

Figure 4 displays the evolution of \( \lambda_L(N) \) and \( \hat{\lambda}_L(N) \) for \( N \in [5, 10^6] \), computed using a 50-row binning average. The solid and dashed lines represent the observed active ratio \( \lambda_L(N) \) and the conservative boundary estimator \( \hat{\lambda}_L(N) \) defined in Equation (19), respectively.
The observed value of \( \lambda_L(N) \) corresponds to the minimum complexity k required to satisfy the condition \( RHS_3(N, k) < 0 \), i.e., \( k = k_{crit}(N) \). The graph demonstrates that for large N, this value is approximately 10% lower than the limit imposed by the conservative boundary estimator for that N.
The high-frequency fluctuations observed in the graph of \( \lambda_L(N) \) for larger values of N indicate that the observed ratio of primitive composite partitions to the total composite inventory exhibits slight volatility while maintaining a consistent upward trend as N increases.

6.4. Tier III \( RHS_3(N, k) \) Criteria as \( N \to 10^6 \)

In the preceding sections, we established the operational ranges for the three-tiered FMA framework. Prime saturation (Tier I) guarantees \( PP(N) > 0 \) for \( N \leq 60 \), consistent with prior research [5], while Tier II extends this guarantee to \( N \leq 1000 \). A specific tier validates the condition \( PP(N) > 0 \) if its corresponding criterion is strictly negative: \( RHS_1(N) < 0 \) (Tier I), \( RHS_2(N) < 0 \) (Tier II), or \( RHS_3(N, k) < 0 \) (Tier III).
To accommodate the varying operational ranges of the tiers, the binning window was reduced to 25 rows. Prior to binning, a 100-row moving average was applied to mitigate high-frequency fluctuations that would otherwise obscure the visual trends. As observed in Figure 5, these fluctuations are visibly more persistent for Tier III (\( k = 3 \)).
Figure 5 illustrates how each subsequent tier extends the domain of admissibility as N increases, quantifying the structural dynamics required to maintain \( PP(N) > 0 \) as \( N \to 10^6 \).
The graph demonstrates that the effective domain of each earlier tier diminishes as N increases, justifying the necessity of the Tier III analysis parameterized by k, \( \lambda_L \), or \( \gamma \). For the range \( N \in [5, 10^6] \), the complexity level \( k = 3 \) is sufficient to maintain \( RHS_3(N, 3) < 0 \) at all points, thereby guaranteeing \( PP(N) > 0 \).
Notably, the observed necessary complexity \( k \leq 3 \) is significantly lower than the theoretical maximum \( k_{max}(10^6) = 13 \) derived in Definition 12.

7. Conclusions

This research has presented a structural investigation of the Binary Goldbach Conjecture through the lens of Failure Mode Analysis (FMA). By mapping the arithmetic inventory of primes and composites onto the fixed geometry of the Left-Right Partition Table (LRPT), we have established a framework that treats the existence of Prime-Prime (\( PP \)) partitions not as a probabilistic occurrence, but as a deterministic consequence of information conservation.
The central finding of this work is the identification and tiered manifestation of the Structural Conservation Law, which dictates that the count of prime-prime pairs operates as a deterministic arithmetic residual. The analysis demonstrated that for a "Failure State" (\( PP(N) = 0 \)) to exist, the inventory of composite numbers would need to saturate the partition row space to a critical density (\( \lambda_L \geq \hat{\lambda}_L \)). This saturation effectively deprives the prime inventory of the composite partners required for pairing, thereby forcing prime-prime partitions into existence.
Additional insights derived from this framework include:
  • Tiered Inadmissibility: The FMA approach categorizes the conditions for failure into three hierarchical tiers. Tier I (Prime Saturation) precludes failure for small N, where prime density exceeds composite capacity. Tier II (Structural Resistance) extends this domain by identifying the Derived Composite Inventory (\( \Psi_{CC} \)), which is structurally forced by the prime factors of the midpoint. Finally, Tier III (Critical Complexity) establishes the general sufficiency condition for large N, identifying the critical complexity depth \( k_{crit} \) at which the Active Primitive Composite Inventory (\( L_k \)) saturates the remaining row space, rendering the failure state structurally inadmissible.
  • Asymptotic Stability: The derivation of the critical complexity \( k_{crit}(N) \sim \ln \ln N \) relative to the theoretical upper bound \( k(N) \sim \ln N \) confirms that the partition system becomes increasingly stable. The analysis indicates that the partition space is saturated by low-complexity composites (\( L_k \)) long before the theoretical complexity limit is reached, creating an expanding combinatorial safety margin as N increases.
  • Deterministic vs. Probabilistic: Unlike heuristic models that rely on the pseudorandom distribution of primes, the FMA framework relies on the rigid modular constraints of composite pairings. The resulting Structural Identities imply that the global prime counting function is inextricably linked to the local geometry of partitions, suggesting that a failure of the conjecture would necessitate a violation of the algorithmic irreducibility of the prime sequence.
In summary, this work moves beyond existence-based searching to define the boundaries of Structural Inadmissibility. While not replacing traditional analytic methods, the FMA framework offers a novel, deterministic perspective: the Binary Goldbach Conjecture is likely true not because the primes are random enough to hit a target, but because the composites are structurally constrained from blocking all possible solutions.

Appendix A. Mirror-Prime Search Analysis

This appendix evaluates the search efficiency implications of the FMA framework under the assumption of standard probabilistic prime distribution models. Specifically, we define the search problem as identifying a prime \( q \in (N, 2N] \) given the known inventory of primes \( p \leq N \). Unlike the deterministic structural proofs presented in the main body, these results rely on the Prime Number Theorem (PNT) and the Hardy-Littlewood k-tuple conjecture.

Appendix A.1. Search Definitions

Definition A1.
Let T represent the number of primality tests performed and \( \Delta = q - N \) the distance from the search origin N to the discovered prime q. The Search Leverage L is defined as:
\[ L = \frac{\Delta}{T} \]
Definition A2.
The Search Efficiency \( \eta \) is defined as the inverse of the computational cost required to identify a prime \( q > N \):
\[ \eta = \frac{1}{T} \]
A higher \( \eta \) indicates a lower computational cost per discovery.
Theorem A1.
Under the assumption of the Prime Number Theorem and the Hardy-Littlewood model, the leverage of a sequential search scales as \( L_{PNT} \sim O(1) \), while the leverage of the FMA mirror search scales as \( L_{FMA} \sim O(\ln N) \).
Proof.
For a sequential search starting at N, the Prime Number Theorem implies the expected distance to the next prime is \( \Delta \approx \ln N \). Since every candidate must be tested, the number of trials T is proportional to the gap, yielding \( T \approx \Delta \). Substitution into Equation (A1) yields \( L_{PNT} \approx 1 \).
For the FMA method searching for a symmetric Prime-Prime partition \( (N - R, N + R) \), the search variable is the radius R. Under the Hardy-Littlewood model, the expected radius for the first occurrence (MDR) scales as \( MDR \approx (\ln N)^2 \). However, due to the filtering of composite candidates via modular constraints, the effective number of trials T required to traverse this radius is proportional to the prime density, \( T \approx MDR/\ln N \approx \ln N \). Substitution yields:
\[ L_{FMA} \approx \frac{(\ln N)^2}{\ln N} = \ln N \]
□

Appendix A.2. Local Prime Search Analysis

The objective of the local search is to identify the immediate next prime \( q > N \), thereby minimizing the distance \( \Delta \). To maximize local probability, the FMA framework employs a descending prime anchor protocol, utilizing the largest known primes \( p \leq N \) to first check those mirrors \( q = 2N - p \) located closest to N.
Lemma A1.
In a local window \( [N, N + \ln N] \), the expected search efficiency of the PNT strategy is greater than or equal to that of the FMA strategy.
Proof.
Let \( k = \ln N \) represent the local window length. The sequential PNT strategy evaluates all odd integers (candidate density 0.5). The FMA strategy evaluates a restricted set of symmetric pairs determined by modular constraints. In a local window where primes are dense, the PNT strategy's exhaustive scan minimizes the risk of skipping a prime, thereby minimizing the expected number of trials \( E[T] \) required for the first success. Since \( E[T_{PNT}] \leq E[T_{FMA}] \), it follows that \( E[\eta_{PNT}] \geq E[\eta_{FMA}] \). □
Example A1.
To illustrate the efficiency gap in local search, consider the midpoint \( N = 4011 \). We seek the first prime \( q > 4011 \).
  • PNT Strategy (Sequential Scan): The search checks odd integers \( N + 2, N + 4, \ldots \)
    1. Test 4013: Prime.
    Result: Success in \( T = 1 \). Efficiency \( \eta_{PNT} = 1.0 \).
  • FMA Strategy (Descending Anchors): The search checks mirrors of known primes \( p < 4011 \) in descending order (\( p_1 = 4007 \), \( p_2 = 4003 \)).
    1. Anchor \( p = 4007 \): Test \( q = 8022 - 4007 = 4015 \) (divisible by 5). Fail.
    2. Anchor \( p = 4003 \): Test \( q = 8022 - 4003 = 4019 \). Prime. Success.
    Result: Success in \( T = 2 \) partition checks. Efficiency \( \eta_{FMA} = 0.50 \).
In this local context, the PNT strategy is \( 2\times \) more efficient for identifying the immediate next prime.
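Example A1 can be reproduced mechanically. The sketch below is an illustrative implementation, not the paper's code; `is_prime` is a simple trial-division helper, and only primality tests of mirror candidates are charged as trials, matching the example's accounting.

```python
import math

def is_prime(n):
    """Trial-division primality test; adequate for these small examples."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    for d in range(3, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True

def pnt_next_prime(N):
    """Sequential PNT scan: test odd candidates above N, counting trials."""
    trials, q = 0, N + 2 if N % 2 == 1 else N + 1
    while True:
        trials += 1
        if is_prime(q):
            return q, trials
        q += 2

def fma_next_prime(N):
    """Descending-anchor mirror search: test q = 2N - p for primes p < N, largest first."""
    trials = 0
    for p in range(N - 2, 2, -2):   # odd anchors below the odd midpoint N
        if not is_prime(p):
            continue                # anchors come from the known prime inventory (no trial charged)
        trials += 1
        q = 2 * N - p
        if is_prime(q):
            return q, trials
    return None, trials

print(pnt_next_prime(4011))
print(fma_next_prime(4011))
```

Running both reproduces the worked example: the sequential scan succeeds on its first candidate (4013), while the mirror search needs two anchor checks (4007, then 4003, whose mirror 4019 is prime).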

Appendix A.3. Distal Prime Search Analysis

The objective of the distal search is to identify a deep prime near the upper boundary 2N, thereby maximizing the discovery distance \( \Delta \).
Recalling Definition 6, the Maximum Distinct Radius \( MXR(N) \) determines the maximum discovery distance to a prime \( q > N \). This distance is maximized at the boundary \( R = N - 3 \), corresponding to the smallest prime anchor \( p = 3 \). To maximize search leverage, the FMA protocol starts by testing mirrors of small known prime anchors \( p \in \{3, 5, \ldots\} \) that resolve to primes \( q = 2N - p \) located closest to the upper boundary.
Lemma A2.
Testing candidates of the form \( q = 2N - p \) for known primes \( p \in [3, N] \) identifies a prime \( q \in (N, 2N - 3] \) with a maximum trial budget \( T = \pi(N) - 1 \). This achieves a distal discovery leverage \( L_{MXR} \) that exceeds a sequential Bertrand search by a factor of \( O(\ln N) \).
Proof.
Consider the search for a prime q at the upper boundary of the Bertrand interval (\( q \to 2N \)). (1) Under sequential scanning, the number of trials required to reach the boundary is proportional to the distance, \( T \approx N/2 \). (2) Under FMA, the mirror of \( p = 3 \) is the Top Right Entry (TRE), \( 2N - 3 \). Testing distal mirrors \( \{2N - 3, 2N - 5, \ldots\} \) utilizes the known density of primes \( \leq N \) as anchors. (3) The trial budget for a complete FMA search corresponds to the prime inventory \( T = \pi(N) - 1 \approx N/\ln N \). (4) The search leverage is defined as \( L = \Delta/T \). For the distal mirror (\( p = 3 \)):
\[ L_{MXR} = \frac{N - 3}{\pi(N) - 1} \approx \frac{N}{N/\ln N} = \ln N \]
In contrast, the leverage for sequential discovery near the boundary is \( L_{PNT} \approx 1 \). □
Example A2.
To illustrate the leverage divergence in distal discovery, consider the midpoint \( N = 400 \) (\( 2N = 800 \)). We compare the effort required to identify the boundary prime \( q = 797 \) (the \( MXR \) pair with \( p = 3 \)).
Sequential Strategy (Reverse Linear Scan): The search scans odd integers downward from the upper boundary \( 2N = 800 \).
  • Test 799 (\( 17 \times 47 \)): Fail (Composite).
  • Test 797: Prime.
Result: Prime \( q = 797 \) discovered in \( T = 2 \) trials.
Mirror Strategy (FMA Edge-In): The search tests mirrors of known small prime anchors \( p \in \{3, 5, \ldots\} \) in ascending order, resolving \( q = 2N - p \).
  • Anchor \( p = 3 \): Test mirror \( q = 800 - 3 = 797 \). Prime.
Result: Prime \( q = 797 \) discovered in \( T = 1 \) trial.
The FMA strategy identifies the distal prime immediately by leveraging the known primality of the anchor \( p = 3 \). In contrast, the linear scan must evaluate the intermediate composite candidate 799. This demonstrates how the FMA framework utilizes the modular geometry of the partition table to structurally filter non-viable candidates at the interval boundaries.
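Example A2 admits the same treatment. The sketch below is illustrative (trial-division `is_prime`, helper names introduced here), comparing the two distal strategies for N = 400.

```python
import math

def is_prime(n):
    """Trial-division primality test; adequate for these small examples."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    for d in range(3, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True

def reverse_scan_prime(N):
    """Reverse linear scan: test odd candidates downward from 2N - 1."""
    trials, q = 0, 2 * N - 1
    while q > N:
        trials += 1
        if is_prime(q):
            return q, trials
        q -= 2
    return None, trials

def mirror_distal_prime(N):
    """FMA edge-in search: test mirrors q = 2N - p of small prime anchors p = 3, 5, ..."""
    trials = 0
    for p in range(3, N, 2):
        if not is_prime(p):
            continue               # anchors are drawn from the known prime inventory
        trials += 1
        q = 2 * N - p
        if is_prime(q):
            return q, trials
    return None, trials

print(reverse_scan_prime(400))
print(mirror_distal_prime(400))
```

Both locate 797, but the mirror search succeeds on its first trial because the anchor 3 is already known to be prime, while the reverse scan must first reject the composite 799.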
Conclusion: The Goldbach Accelerator
The comparative analysis of search mechanics indicates that the Goldbach symmetry functions as a framework for computational efficiency. Specifically, the midpoint symmetry allows for a structured exploration of the prime field that exceeds the distal reach of linear search methods under fixed-budget constraints. This identifies the Goldbach symmetry as an algorithmic accelerator, mapping the density of the known prime field onto the distal interval with superior leverage compared to sequential scanning.
Furthermore, the efficiency of these dual search protocols may be enhanced by analyzing the sensitivity of \( MDR(N) \) and \( MXR(N) \) to the arithmetic characteristics of N, such as the number, variance, and multiplicity of its prime factors, enabling adaptive calibration and optimization of the search anchors. This is an area of future research.

Appendix B. Implications of Failure State (PP=0) on Prime Counting

In this section, we demonstrate that the assumption \( PP(N) = 0 \) permits the calculation of the global prime inventory \( \pi(2N) \) using a conditional testing algorithm that queries strictly fewer integers than the total population of the interval. Crucially, this relative reduction persists even when standard composite pre-filtering (e.g., excluding multiples of 3 or 5) is applied symmetrically to the domain. Drawing on the fundamental dichotomy between arithmetic structure and pseudorandomness established by Tao [11], and the algorithmic irreducibility of prime sequences described by Chaitin [12], we demonstrate that this precipitates an information-theoretic contradiction. The failure state implies that the global prime counting function can be fully reconstructed from the local modular geometry of a subset of composites, violating the established algorithmic irreducibility of the prime sequence.

Appendix B.1. The Conditional Testing Algorithm

Consider an algorithm designed to identify the set of odd Composite-Composite pairs, \( C_{cc} \), for a given non-prime integer N. The algorithm iterates through the partition rows \( k = 1 \ldots Rows(N) \), corresponding to the odd partition pairs \( (x, y) \) such that \( x + y = 2N \) and \( x \leq y \).
The testing protocol is conditional:
1. LHS Test: Perform a primality test on the left-hand summand x.
2. Condition A (Prime): If x is prime, terminate the process for this row. (Under the hypothesis \( PP = 0 \), a row with a prime x cannot be a CC pair, so the status of y is irrelevant for determining \( C_{cc} \).)
3. Condition B (Composite): If x is composite, proceed to test the right-hand summand y.
4. Classification: If both x and y are confirmed composite, increment the count \( CC(N) \).
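The protocol transcribes directly into code. The sketch below is an illustrative implementation (`is_prime` is a basic trial-division helper added here); note that the number of tests it performs depends only on the primality of the left column, so the test-count bookkeeping derived in Appendix B.2 holds on real data as well, even though the CC inference itself is only valid under the failure hypothesis.

```python
import math

def is_prime(n):
    """Trial-division primality test; adequate for small illustrative N."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    for d in range(3, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True

def conditional_cc_count(N):
    """Conditional scan of rows (x, y), x + y = 2N, 3 <= x <= y, both odd.
    Returns (CC count, number of unique primality tests performed)."""
    tested = set()
    cc = 0
    for x in range(3, N + 1, 2):
        y = 2 * N - x
        tested.add(x)              # Step 1: LHS test
        if is_prime(x):
            continue               # Condition A: skip the RHS test for this row
        tested.add(y)              # Condition B: RHS test (the set dedupes the center row)
        if not is_prime(y):
            cc += 1                # Classification: CC pair
    return cc, len(tested)

print(conditional_cc_count(20))
```

Comparing the returned test count with the size of the odd universe \( [3, 2N - 3] \) reproduces the deficit \( \Delta_I = \pi_{LHS}(N) \) derived in Appendix B.3, for even and odd N alike.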

Appendix B.2. Derivation of Computational Counts

We now quantify the information cost of this algorithm compared to the baseline requirement for determining the Total Prime Inventory (TPI) of the unique integers in the interval.
Let \( S_{odd} \) be the set of all unique odd integers in the interval \( [3, 2N - 3] \). We introduce the parity indicator function \( \mathbb{1}_{odd}(N) \), which equals 1 if N is odd and 0 otherwise. This accounts for the center partition \( (N, N) \), where the left and right summands map to the same unique integer.
The cardinality of the unique set is:
\[ |S_{odd}| = 2 \cdot Rows(N) - \mathbb{1}_{odd}(N) \]
Next, we calculate \( T_{cond} \), the number of unique primality tests performed by the Conditional Algorithm:
  • LHS Tests: We test every unique x in the left column. Count = \( Rows(N) \).
  • RHS Tests: We test y if and only if x is composite. However, if N is odd, the center element \( y = N \) has already been tested as \( x = N \) (since N must be composite under the failure hypothesis). Therefore, we subtract the center case to avoid double-counting.
The count of unique RHS tests is the number of LHS composites minus the shared center case:
\[ \text{Unique RHS Tests} = [Rows(N) - \pi_{LHS}(N)] - \mathbb{1}_{odd}(N) \]
The total number of tests performed is:
\[ T_{cond} = Rows(N) + [Rows(N) - \pi_{LHS}(N) - \mathbb{1}_{odd}(N)] \]

Appendix B.3. The Invariant Information Deficit

Subtracting the actual tests from the total population reveals the Information Deficit \( \Delta_I \):
\[ \Delta_I = |S_{odd}| - T_{cond} = [2 \cdot Rows(N) - \mathbb{1}_{odd}(N)] - [2 \cdot Rows(N) - \pi_{LHS}(N) - \mathbb{1}_{odd}(N)] = \pi_{LHS}(N) \]
This confirms that, regardless of whether N is even or odd, the assumption \( PP(N) = 0 \) allows the global state to be determined while skipping exactly \( \pi_{LHS}(N) \) independent checks. The ability to determine the exact global inventory of primes while systematically bypassing \( \pi_{LHS}(N) \) independent checks constitutes a violation of Information Conservation. This contradicts the established property that the prime sequence possesses irreducible pseudorandomness [11,12], implying that the existence of Goldbach partitions (\( PP(N) > 0 \)) serves as the structural residual required to preserve the algorithmic incompressibility of the prime sequence relative to local modular constraints.
Corollary A1 (Extension to Prime Midpoints). The above information-theoretic contradiction extends to the case where the midpoint N is prime. If we assume that \( PP(N) = 1 \) (representing only the identity partition \( N + N \)), the conditional logic of the testing algorithm remains valid for all off-center rows. Specifically, for every prime \( x < N \), the assumption forces the complementary y to be composite. This allows the precise determination of the global prime inventory while skipping \( \pi_{LHS}(N) - 1 \) independent checks. To resolve this paradox and restore algorithmic irreducibility, the partition space must contain at least one distinct prime-prime pair, implying \( PP(N) \geq 2 \) for all prime \( N \geq 5 \).

References

  1. Oliveira e Silva, T.; Herzog, S.; Pardi, S. Empirical verification of the even Goldbach conjecture and computation of prime gaps up to 4·10^18. Math. Comp. 2014, 83, 2033–2060.
  2. Nathanson, M.B. Additive Number Theory: The Classical Bases; Graduate Texts in Mathematics, Volume 164; Springer-Verlag: New York, NY, USA, 1996.
  3. Selberg, A. Elementary Methods in the Theory of Primes. Norske Vid. Selsk. Forh., Trondheim 1947, 19, 64–67.
  4. Hardy, G.H.; Littlewood, J.E. Some problems of 'Partitio numerorum'; III: On the expression of a number as a sum of primes. Acta Math. 1923, 44, 1–70.
  5. Papadakis, I. On the Binary Goldbach Conjecture: Analysis and Alternate Formulations Using Projection, Optimization, Hybrid Factorization, Prime Symmetry and Analytic Approximation. Math. Comput. Sci. 2024, 9, 96–113.
  6. Papadakis, I.N.M. On the Universal Encoding Optimality of Primes. Mathematics 2021, 9, 3155.
  7. Papadakis, I.N.M. Algebraic Representation of Primes by Hybrid Factorization. Math. Comput. Sci. 2024, 9, 12–25.
  8. Papadakis, I. Representation and Generation of Prime and Coprime Numbers by Using Structured Algebraic Sums. Math. Comput. Sci. 2024, 9, 57–63.
  9. Hardy, G.H.; Ramanujan, S. The normal number of prime factors of a number n. Quart. J. Math. 1917, 48, 76–92.
  10. Erdős, P.; Kac, M. The Gaussian law of errors in the theory of additive number theoretic functions. Am. J. Math. 1940, 62, 738–742.
  11. Tao, T. Structure and Randomness in the Prime Numbers. In An Invitation to Mathematics; Schleicher, D., Lackmann, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 1–7.
  12. Chaitin, G.J. Algorithmic Information Theory; Cambridge University Press: Cambridge, UK, 1987.
Figure 1. \( k_{crit} \) (orange line, left axis), \( 1.2 \ln(\ln N) - 0.3 \) (dashed line, left axis), and \( k(N) \) (solid line, right axis) as \( N \to 10^6 \).
Figure 2. \( MDR(N)/N \) (solid line) and \( 0.45 (\ln N)^2 / N \) (dashed line) as \( N \to 10^6 \).
Figure 3. \( MXR(N)/N \) (solid line) and \( 1 - 0.6 \ln(2N) \ln(\ln(2N)) / N \) (dashed line) as \( N \to 10^6 \).
Figure 4. \( \lambda_L(N) \) and \( \hat{\lambda}_L(N) \) as \( N \to 10^6 \).
Figure 5. FMA RHS criteria for Tier I (green), Tier II (purple), and Tier III: \( k = 1 \) (orange); \( k = 2 \) (blue); \( k = 3 \) (red, dashed) as \( N \to 10^6 \).
Table 1. Evolution of Conservative Boundary \( \hat{\lambda}_L(N) \), Safety Margin \( \mu(N) \), and Critical Radical Density \( \gamma_{crit} \) for \( N = 10^m \) (\( m = 2, 3, \ldots, 10 \)).

m     N        λ̂_L(N)    μ(N) (%)    γ_crit
2     10^2     0.107      78.6        2.00
3     10^3     0.285      43.1        1.99
4     10^4     0.354      29.2        1.92
5     10^5     0.390      21.9        1.85
6     10^6     0.413      17.5        1.78
7     10^7     0.427      14.6        1.71
8     10^8     0.438      12.5        1.64
9     10^9     0.446      10.9        1.58
10    10^10    0.452       9.7        1.52
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.