Preprint
Article

This version is not peer-reviewed.

Tossing Coins with an NP-Machine

Submitted: 23 July 2025
Posted: 29 July 2025


Abstract
In computational complexity, a tableau represents a hypothetical accepting computation path p of a nondeterministic polynomial time Turing machine N on an input w. The tableau is encoded by the propositional logic formula Ψ, defined as Ψ = Ψ_cell ∧ Ψ_rest. The component Ψ_cell enforces the constraint that each cell in the tableau contains exactly one symbol, while Ψ_rest incorporates constraints governing the step-by-step behavior of N on w. In recent work, we reformulated a critical part of Ψ_rest as a compact Horn formula. In other work, we evaluated the cost of this reformulation, though our estimates were intentionally conservative. In this article, we provide a more rigorous analysis and derive a tighter upper bound on two enhanced variants of our original Filling Holes with Backtracking algorithm: the refined (rFHB) and the streamlined (sFHB) versions, each tasked with solving 3-SAT.

1. Introduction

Let N be a nondeterministic Turing machine (TM) that, on input w of length n, either accepts or rejects w within $n^k$ steps for some constant k. A conservative, deterministic simulation of N requires up to $2^{n^k}$ steps, with each computation path corresponding to a chronology of binary nondeterministic choices.
In prior work [1], we conveniently assumed that N’s stepwise behavior could be concisely captured by a Horn formula $\psi_{step}^{\eta}$. We subsequently confirmed this conjecture in [2]. These findings now allow us to move beyond the classical top-down chronology of N’s computation toward a non-sequential understanding of computability.
Traditionally, an accepting path of N on w can be represented by an accepting tableau—a two-dimensional matrix of cells—and formalized in propositional logic as a satisfiable 3cnf-formula $\psi$. In our approach, however, we relax the tableau structure by allowing “holes,” replacing the 3cnf-formula $\psi$ with a Horn formula $\psi_{trim}$ of size $O(n^{\kappa})$ for some constant $\kappa$. (Determining the satisfiability of Horn formulas is, to date, substantially more efficient than for genuine 3cnf-formulas [3].) While $\psi_{trim}$ is guaranteed to be satisfiable whenever $\psi$ is, the converse does not necessarily hold.
Specifically, we define:
$\psi_{trim} = \psi_{step}^{\eta} \wedge \psi_{start} \wedge \psi_{accept} \wedge \psi_{cell}' \wedge \psi_{extra}^{1} \wedge \psi_{extra}^{2}, \qquad (1)$
where each conjunct is a succinct Horn formula:
  • $\psi_{step}^{\eta}$ captures the step-by-step behavior of N on w—with the Greek letter eta ($\eta$), resembling the Latin letter h—highlighting that the formula is a Horn formula.
  • $\psi_{start}$ ensures that the initial row of the tableau encodes N’s start configuration on w.
  • $\psi_{accept}$ enforces that no cell in the tableau contains the reject state symbol $q_{reject}$.
  • $\psi_{cell}'$ ensures that at most one variable is “turned on” per cell in the tableau, where a “hole” in the tableau refers to a cell where all variables are “turned off”.
  • $\psi_{extra}^{1}$ captures the spatial dynamics of the TM’s head within the tableau (Theorem 1).
  • $\psi_{extra}^{2}$ expresses the inter-cell dependencies across distant rows (Theorem 2).
Crucially, if $\psi_{trim}$ is satisfiable with the corresponding tableau containing no holes, then the original formula $\psi$ is satisfiable too, implying that N accepts the input w.
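To make the “at most one symbol per cell” conjunct concrete, the following minimal Python sketch generates the pairwise-exclusion clauses for a single cell. The tuple encoding of literals and the function name are assumptions made for this illustration, not the paper’s notation; note that every generated clause contains only negative literals and is therefore Horn, while a hole (all variables turned off) remains permitted.

```python
from itertools import combinations

def at_most_one_clauses(symbols, i, j):
    """Horn clauses stating that cell[i, j] holds at most one symbol.

    Each clause pairs two negated variables (False = negated literal),
    so it has no positive literal and is trivially Horn; leaving every
    variable of the cell off (a hole) still satisfies all clauses.
    """
    return [((i, j, s, False), (i, j, t, False))
            for s, t in combinations(symbols, 2)]

# Example: three symbols yield the three pairwise exclusion clauses.
print(at_most_one_clauses(["a", "b", "a_q1"], 1, 1))
```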
How do the technical desiderata (1–6) relate to our prior work and to this paper? The formal definition of $\psi_{step}^{\eta*}$, which appears as the first conjunct in the following definition of $\psi_{step}^{\eta}$,
$\psi_{step}^{\eta} = \psi_{step}^{\eta*} \wedge \psi_{step}^{det}, \qquad (2)$
is detailed in [2]. Extensive commentary on both conjuncts in (2) is provided in the present paper. (Notably, the second conjunct, $\psi_{step}^{det}$, was not required in [2] due to the assumed presence of $\psi_{cell}$ rather than $\psi_{cell}'$, thereby enforcing each cell in the tableau to contain precisely one “turned on” variable.)
The definitions of components 2–5 appear in [1]. Theorems 1 and 2—related to items 5 and 6, respectively—also appear in [1]. While we shall define $\psi_{extra}^{1}$ and discuss Theorem 1 in this paper, our primary focus is on revisiting Theorem 2 and unpacking item 6, i.e., the “inter-cell dependencies across distant rows” that arise during the computation of N on input w. While the definition of $\psi_{extra}^{2}$ is outlined in [1] for a 3-SAT solver $N^*$, we provide a full, formal definition in this paper.
In essence, we offer a self-contained exposition of formula (1). This paper serves as a natural continuation of our previous works [1,2], while not subsuming them. Although certain formal definitions and proofs are deferred to those references, the present discussion remains accessible independently.

1.1. Methodology

Our approach centers on the formula $\psi_{trim}$ and a timeless tableau—a matrix with $O(n^k) \times O(n^k)$ cells. When certifying the satisfiability of $\psi_{trim}$, this tableau typically contains holes, thereby encoding an exponential number of paths.
A simplistic reliance on a timeless tableau, where each cell’s content is guessed independently, leads to a blow-up in deterministic time complexity—from $2^{n^k}$ to $2^{n^{2k}}$. To counteract this inefficiency, Theorems 1 and 2 from [1] convey two techniques that significantly reduce the number of required guesses.
  • Theorem 1: Compression via geometric constraints. By leveraging a compression result that captures the spatial dynamics of the TM’s head within the tableau, we shrink the search space for nondeterministic guesses. As a result, the deterministic time complexity is restored to the classical $2^{n^k}$ bound.
    Consider an initially empty tableau. By designating two specific cells—$c^*$ (situated above) and $c_*$ (located several rows below and, say, to the far right)—as head positions, we constrain the machine’s behavior so that one or more binary nondeterministic choices collapse into deterministic transitions. In contrast, if only one cell holds a state symbol or if two nearby cells are populated with state symbols, the tableau exhibits a broader range of nondeterministic evolutions. For instance, if only $c^*$ holds a state symbol, the machine may move freely left or right. However, if $c_*$, located far below and (say) to the right, also contains a state symbol, then some of the prior movements become restricted—only rightward transitions from $c^*$ to $c_*$ remain viable. This marks a shift from local stepwise reasoning to a more global, geometric form of constraint.
  • Theorem 2: Correlation in nondeterministic choices. A second form of compression arises by distinguishing between a pure coin-tossing machine and a 3-SAT solver modeled as a nondeterministic polynomial time TM $N^*$. Unlike the coin-tossing machine that produces independent bits, $N^*$ generates correlated bits. This correlation allows for further compression of the tableau’s nondeterministic behavior.
    For example, if the state symbol in cell $c^*$ indicates that the machine has just produced a second coin toss of 1 (in one of multiple ways) and is about to toss a third coin, this constrains the allowable state symbol in a later cell $c_*$ further down in the tableau. Due to inter-cell dependencies—embedded in the tableau’s deterministic substructure—the machine may be forced to toss the next coin to 1. These subtle, long-range interactions are crucial for compressing the overall nondeterministic behavior. Importantly, the computation of $N^*$ on input w does not unfold top-down but grows in an interleaved fashion across the rows of the tableau.
Building on the first theorem, Theorem 2 establishes that if the original formula $\psi$ is satisfiable, then it can be satisfied within $K^{f(n,k)}$ steps with our $\psi_{trim}$-based approach, where $f(n,k) \le n^{\frac{2}{3}k}$ and K is a constant. In this paper, we revisit Theorem 2 with more rigor and improved theoretical bounds.
Reflecting on Fortnow’s recent speculation about compression [4], we argue that coin tossing by some nondeterministic polynomial time TM N—even when tasked to solve a cryptographic problem—is not entirely random after all. To our knowledge, Fortnow’s exploration provides an early, if not unprecedented, framing of this topic. While the present author is familiar with the literature on Kolmogorov complexity [5], including a prior contribution to the field [6], no compelling connection is presently discernible between that work and the arguments advanced in this paper.

1.2. Task

The standard textbook formulation of an accepting tableau is given by:
$\psi = \psi_{step} \wedge \psi_{start} \wedge \psi_{accept} \wedge \psi_{cell}, \qquad (3)$
where both $\psi_{step}$ and $\psi_{cell}$ are traditionally regarded as genuine 3cnf-formulas. Recently, however, we succeeded in re-expressing $\psi_{step}$ as $\psi_{step}^{\eta*}$, a compact Horn formula [2].
This improvement leaves $\psi_{cell}$ as the sole non-Horn component, which ensures that each cell in the tableau contains exactly one variable that is “turned on.” To move toward a purely Horn-based formulation, we weaken $\psi_{cell}$ to a Horn formula $\psi_{cell}'$ and compensate by introducing three additional Horn formulas: $\psi_{step}^{det}$, $\psi_{extra}^{1}$, and $\psi_{extra}^{2}$. Together, these components yield the Horn formula $\psi_{trim}$, as defined via formulas (1) and (2).
Remark 1.
A Boolean formula in conjunctive normal form (cnf) is referred to as a 3cnf-formula if each clause consists of exactly three literals. A cnf-formula is referred to as a Horn formula if every clause contains at most one positive literal. Standard definitions appear in Appendix A.
The task laid out in this paper is to formalize each newly introduced conjunct in formula (1), to demonstrate that all components cohere and function in unison, and to establish a tight upper bound on the running time of our two $\psi_{trim}$-based algorithms: rFHB and sFHB. These represent refined and streamlined variants, respectively, of the original FHB algorithm. All three algorithms employ a standard HORNSAT solver H and incorporate external user actions, automating her interventions while embedding backtracking as an intrinsic capability.
In revisiting our initial cost analysis of rFHB, we note that Theorem 2 of [1] was proven under an unduly conservative assumption: the ratio between the entire tableau and the coin-tossing section (also known as the mini tableau) was held constant—specifically at a value of 2—rather than allowed to scale with l. This assumption will be relaxed in the present work, where we explore and leverage its dependence on l, the number of binary (nondeterministic) choices performed by the TM in question.

1.3. Results

We present and rigorously analyze two novel algorithms: rFHB and sFHB. Both are structured around the $\psi_{trim}$-based formulation given in formula (1), differing primarily in their final conjunct term, $\psi_{extra}^{2}$. Now, let $\tilde{N}$ denote an $\langle N, k \rangle$ machine, as defined in this paper, with $k \in \{1, 2\}$. The machine works with unary or binary notation, and is tasked with solving the 3-SAT problem—using l binary choices on some input w of length n. We show that the runtime $R(l)$ of both rFHB and sFHB, when applied to $\tilde{N}$ and w, admits a tight upper bound, effectively rendering the original FHB algorithm obsolete. Moreover, sFHB is significantly simpler to teach and implement than rFHB, offering conceptual clarity.

1.4. Outline

This article is structured into three primary sections: Orientation (Section 2, Section 3 and Section 4), Main Body (Section 5 and Section 6), and Final Commentary with Analytical Addenda (Section 7 and Appendix A, Appendix B, Appendix C, Appendix D, Appendix E and Appendix F).
The Orientation spans 25 pages and presents a detailed yet approachable formalization of the 3-SAT solver N * . Key contributions include:
  • Commentary on the construction of $\psi_{step}^{\eta}$ using an extended tableau (Section 2)
  • The introduction of holes within the extended tableau framework (Section 3)
  • A complete definition of the solver $N^*$ (Section 4)
The Main Body, comprising 18 pages, primarily investigates long-range inter-cell dependencies, culminating in the definition of $\psi_{extra}^{2}$ (Section 5). It also introduces and analyzes two key algorithms: rFHB and sFHB (Section 6).
The article concludes with a Final Commentary and a set of Analytical Addenda. Section 7 offers reflective insights. Appendix A presents literature-based definitions and theorems; Appendix B introduces definitions specific to $\psi_{extra}^{1}$; Appendix C illustrates a long-range top-down constraint; Appendix D and Appendix F each offer a standard solution to a recurrence relation; and Appendix E provides an alternative proof of Theorem 3, the central result of the paper.
Remark 2.
Portions of the wording in the Orientation overlap with the author’s earlier works [1,2]. The author retains full ownership — including commercial rights — of the prior content. As such, there are no legal constraints on reusing portions of those works in the current article.

2. Extending the Tableau with Labels: Explicating $\psi_{step}^{\eta}$

Consider an arbitrary nondeterministic TM N that, given an input w of length n, decides whether to accept or reject w in at most $n^k$ steps for some constant k. We contemplate the behavior of a hypothetical accepting computation path p of N on an extended input $\hat{w}$, with
$w = w_0 w_1 \cdots w_{n-1}, \qquad \hat{w} = w\,\square\,\square \cdots \square,$
where the blank symbol $\square$ occurs $n^k - n$ times.
Path p depends on the execution of instructions, which can be uniquely labeled, such as:
$t_{abc}: (q_1, a) \rightarrow \{(q_2, b, -),\ (q_3, c, +)\}.$
This nondeterministic instruction, labeled $t_{abc}$, can be split into two deterministic ones:
$t_{ab}: (q_1, a) \rightarrow (q_2, b, -), \qquad t_{ac}: (q_1, a) \rightarrow (q_3, c, +).$
Each deterministic instruction is assigned a unique label (e.g., $t_{ab}$).
Instruction $t_{ab}$ specifies that when N is in state $q_1$ and reading symbol a, the machine is supposed to transition to state $q_2$, rewrite the symbol a as b, and the tape head should move one cell to the left (−). A plus sign (+) indicates a move of one cell to the right.
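A minimal Python sketch of this determinization step follows; the tuple layout of instructions and the derived label suffixes ("_L", "_R") are assumptions made for illustration, not the paper’s labels $t_{ab}$ and $t_{ac}$.

```python
def determinize(label, instr):
    """Split a binary nondeterministic instruction into two labeled
    deterministic ones; an already-deterministic instruction passes through."""
    (state, read), branches = instr
    if len(branches) == 1:                    # already deterministic: no effect
        return {label: ((state, read), branches[0])}
    # By convention (cf. Remark 4), the first branch moves left, the second right.
    return {f"{label}_L": ((state, read), branches[0]),
            f"{label}_R": ((state, read), branches[1])}

# t_abc : (q1, a) -> {(q2, b, -), (q3, c, +)}
t_abc = (("q1", "a"), [("q2", "b", "-"), ("q3", "c", "+")])
print(determinize("t_abc", t_abc))
```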
In this section we focus on the formulation
$\psi_{step}^{\eta} = \psi_{step}^{\eta*} \wedge \psi_{step}^{det}.$
We begin in Section 2.1 with a conventional account of nondeterminism. Section 2.2 then presents our own formulation, grounded in $\psi_{step}^{\eta*}$. The core insights of this formulation are unpacked in Section 2.3. Finally, Section 2.4 explores the structure and role of $\psi_{step}^{det}$.

2.1. Textbook Approach

To capture the step-by-step behavior of N on $\hat{w}$, we focus on the aforementioned instruction $t_{abc}$ as it applies to the following TM configuration, denoted as C:
$a \;\; a_{q_1} \;\; d$
The symbol $a_{q_1}$ indicates that the machine is currently in state $q_1$, with its head oriented towards the tape cell containing the symbol a. This information can be expressed propositionally through the Boolean variable $x_{i,j,a_{q_1}}$, where indices i and j denote the row i and column j in a tableau—a matrix of $n^k$ rows and $n^k+2$ columns, as shown in Table 1.
We analyze the execution of instructions $t_{ab}$ and $t_{ac}$ separately—both outcomes are depicted on the left and right sides of Table 2, respectively—before combining them into one implication. This results in an expression of the form:
$C_1 \wedge C_2 \wedge C_3 \;\rightarrow\; T_{ab} \vee T_{ac}, \qquad (4)$
where both $T_{ab}$ and $T_{ac}$ take the form $C_1 \wedge C_2 \wedge C_3$. Ultimately, this forms a 3cnf-formula corresponding to the notion of a $2 \times 3$ window. By taking conjunctions over all $2 \times 3$ windows defined by N, and for each row and column in the tableau, we derive a 3cnf-formula $\psi_{step}$ of size $O(n^{2k})$.
To the best of our knowledge, every approach to NP-completeness ultimately hinges on the notion of “a tableau,” a concept that can be traced back to Cook’s seminal paper [7]. The work of Cook in the United States was mirrored by Levin’s concurrent developments in the Soviet Union [8,9].
Specifically, the two tableau illustrations presented in Table 2 are modified adaptations of the exemplars found in Sipser [10](p. 280). (Sipser’s textbook treatment uses “$q_1 a$” instead of “$a_{q_1}$” when referring to a $2 \times 3$ window [10](p. 280). However, this is merely a cosmetic variation on the concept at hand.) Similarly, Papadimitriou introduces the notion of a “computation table” in Section 8.2 of his work [11]. In the same spirit, Hopcroft, Motwani, and Ullman refer to a comparable structure as “an array of cell/ID facts” [12](p. 443). Aaronson also echoes this idea of a tableau, albeit using more informal language in his accessible book [13](p. 61).

2.2. Our Approach

Can the step-by-step behavior of N on $\hat{w}$ be represented using a compact Horn formula, $\psi_{step}^{\eta*}$, instead of a 3cnf-formula, $\psi_{step}$? We answer affirmatively in [2] by introducing an extended tableau with $3n^k+1$ rows and $n^k+2$ columns, explicitly storing the instruction labels, such as $t_{ab}$ and $t_{ac}$. Two parts of such a tableau are shown in Table 3, illustrating only one change occurring at a time. This contrasts with the two simultaneous changes depicted in each illustration in Table 2.
We arrived at this result by adopting Aaronson’s vision of philosophy as a “scout” that explores and maps out “intellectual terrain for science to later move in on, and build condominiums on …” [13](p. 6, original emphasis). Building on this metaphor, and in dialogue with the perspectives of Dean [14], Tall [15], and Turner [16], our investigation explores the interplay between two distinct modes of reasoning: Aristotelian, step-by-step thinking and Platonic, static reasoning—as largely formulated by Linnebo and Shapiro [17].
These contrasting perspectives are illustrated in the following two quotes:
  • Lance Fortnow as an Aristotelian:
    A Turing machine has a formal definition but that’s not how I think of it. When I write code, or prove a theorem involving computation, I feel the machine processing step by step. …I feel it humming along, updating variables, looping, branching, searching until it arrives as its final destination and gives an answer. (Quoted from Lance Fortnow’s blog post [18].)
  • Robin K. Hill as a Platonist:
    A Turing Machine is a static object, a declarative, a quintuple or septuple of the necessary components. The object δ that constitutes the transition function that describes the action is itself a set of tuples. All of this is written in appropriate symbols, and just sits there. (Quoted from Robin K. Hill’s CACM blog post [19].)
In our published work [2], we analyze these two intellectual modes in the context of nondeterministic TMs, and ultimately show how to transform the 3cnf-formula $\psi_{step}$, which captures the step-by-step behavior of N on $\hat{w}$, into the compact Horn formula $\psi_{step}^{\eta*}$.
Technically, we employ an extended tableau—also called a tableau with labels. Here, the TM configurations are represented in rows $3l-2$, where $1 \le l \le n^k+1$. The two auxiliary rows, $3l-1$ and $3l$, each contain exactly one instruction label, and row $3l$ contains precisely one $s_q$ symbol, where s is a tape symbol and q a state symbol. Corresponding to Table 3, we define in [2] the Horn formula $\psi_{step}^{\eta*}$ of size $O(n^{3k})$.
Remark 3.
The formula $\psi_{step}^{\eta*}$ is called $\phi_{step}^{\eta}$ in [2].
The innovation behind $\psi_{step}^{\eta*}$ is representing the binary choice between $t_{ab}$ and $t_{ac}$ as a conjunction of two formulas:
$\bigl( x_{3l,j,t_{ab}} \rightarrow \textstyle\bigwedge_i U_i \bigr) \;\wedge\; \bigl( x_{3l,j,t_{ac}} \rightarrow \textstyle\bigwedge_i V_i \bigr),$
yielding a Horn formula. In contrast, Sipser’s textbook treatment expresses this choice with a disjunction, recall formula (4), which necessitates a 3cnf-formula.
To be more precise, $\bigwedge_i U_i$ represents the knowledge derived from $x_{3l,j,t_{ab}}$ through both upward and downward reasoning in our extended tableau. We express this derivation with the following formula:
$x_{3l,j,t_{ab}} \;\rightarrow\; \bigwedge_{i \in \{3l-2,\, 3l-1,\, 3l+1\}} U_i,$
which is equivalent to:
$\bigl( x_{3l,j,t_{ab}} \rightarrow U_{3l-2} \bigr) \wedge \bigl( x_{3l,j,t_{ab}} \rightarrow U_{3l-1} \bigr) \wedge \bigl( x_{3l,j,t_{ab}} \rightarrow U_{3l+1} \bigr),$
where $U_i$ is a placeholder for a literal. The formula is a Horn formula. Likewise for $t_{ac}$ and derived knowledge $\bigwedge_i V_i$, which amounts to:
$x_{3l,j,t_{ac}} \;\rightarrow\; \bigwedge_i V_i,$
where the subscript i stands for: $i \in \{3l-2,\, 3l-1,\, 3l+1\}$.
To convey the essence of our prior contribution without providing formal definitions, let $K(i,j,t)$ denote the knowledge derived from $x_{i,j,t}$ for some instruction label t of machine N stored in $cell[i,j]$, with $i = 3l$. In [2], we demonstrate the construction of multiple Horn formulas, including:
$\psi_{V1} = \ldots, \quad \psi_{V2} = \bigwedge_l \bigwedge_j \bigwedge_t \bigl( x_{3l,j,t} \rightarrow K(3l,j,t) \bigr), \quad \psi_{V3} = \ldots, \quad \psi_V = \psi_{V1} \wedge \psi_{V2} \wedge \psi_{V3}, \quad \psi_H = \ldots, \quad \psi_{step}^{\eta*} = \psi_V \wedge \psi_H,$
where the latter formula represents N’s step-by-step behavior. We use “V” to denote “vertical” reasoning within the extended tableau, and “H” to signify “helicopter” reasoning across a block of rows, ranging from $3l-2$ to $3l+1$.

2.3. Explicating $\psi_{step}^{\eta*}$

More rigorously, consider an arbitrary nondeterministic polynomial time TM $\langle N, k \rangle$, or N for short. Let the tape alphabet $\Phi$, state set Q, and label set T be extracted from the specifications of machine N.
Definition 1.
A nondeterministic polynomial time Turing machine, denoted as $\langle N, k \rangle$, is defined as $N = (Q, \Gamma, \Phi, \delta, T, q_0, q_{accept}, q_{reject})$, a nondeterministic Turing machine in accordance with Definition A5, which serves as a decider with a running time of $n^k$—as specified in Definition A8, where n and k represent the length of input w and some constant, respectively.
Remark 4.
Without loss of generality, the nondeterminism associated with TM N consists solely of binary choices. For each such choice, say between instructions $t_1$ and $t_2$, the movement of $t_1$ is to the left (−), while the movement of $t_2$ is to the right (+).
Recall that the propositional formula $\psi_{trim}$ is defined as:
$\psi_{trim} = \psi_{step}^{\eta} \wedge \psi_{start} \wedge \psi_{accept} \wedge \psi_{cell}' \wedge \psi_{extra}^{1} \wedge \psi_{extra}^{2},$
with
$\psi_{step}^{\eta} = \psi_{step}^{\eta*} \wedge \psi_{step}^{det}. \qquad (5)$
To elucidate the variables within $\psi_{trim}$, we define:
$\Sigma = \Phi \,\cup\, (\Phi \times Q) \,\cup\, T.$
For each i and j ranging from 1 to $3n^k+1$ and $n^k+2$, respectively, and for every symbol s in $\Sigma$, we introduce a Boolean variable $x_{i,j,s}$. We have a total of $O(n^{2k})$ such variables.
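For illustration, the following Python sketch enumerates the Boolean variables $x_{i,j,s}$ of an extended tableau; the function name and the dictionary representation are assumptions made for this example only.

```python
def tableau_variables(n, k, sigma):
    """Enumerate the Boolean variables x[i, j, s] of the extended tableau:
    (3*n**k + 1) rows, n**k + 2 columns, one variable per symbol in sigma."""
    rows, cols = 3 * n**k + 1, n**k + 2
    return {(i, j, s): False               # all variables initially off (holes)
            for i in range(1, rows + 1)
            for j in range(1, cols + 1)
            for s in sigma}

x = tableau_variables(n=2, k=1, sigma={"a", "b", "#"})
print(len(x))   # (3*2 + 1) * (2 + 2) * 3 = 84 variables
```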
The formula
$\psi_{step}^{\eta*} = \psi_V \wedge \psi_H$
reflects the coordination between the V and H subsystems. This coordination is achieved primarily by ensuring that specific vertical symbol conversions in the extended tableau are carried out in two distinct stages.
For instance, rather than directly converting symbol $a_{q_0}$ into symbol a when traversing a column in the extended tableau top-down, the V subsystem first transforms $a_{q_0}$ into the intermediate label $t_0$, and only then into the symbol a. This deliberate two-step conversion guarantees that V produces a unique intermediate trace—namely, the instruction label $t_0$ of machine N—which can then be identified by the H subsystem. This example, involving the label $t_0$, corresponds to the following deterministic machine instruction:
$t_0: (q_0, a) \rightarrow (q_1, a, +).$
In general, however, an instruction of N is nondeterministic. For each binary choice of N, such as
$t_{abc}: (q_1, a) \rightarrow \{(q_2, b, -),\ (q_3, c, +)\},$
we must first determinize the instruction by splitting it into two distinct deterministic ones:
$t_{ab}: (q_1, a) \rightarrow (q_2, b, -), \qquad t_{ac}: (q_1, a) \rightarrow (q_3, c, +).$
Each deterministic instruction is assigned a unique label (e.g., $t_{ab}$). Notably, determinizing an instruction that is already deterministic—such as $t_0$, $t_{ab}$, or $t_{ac}$—has no effect.
After applying determinization to all uniquely labeled instructions of N, we ensure that V, when selecting any deterministic instruction label t, explicitly records the label t as an intermediate trace in the extended tableau. Examples of $t_{ab}$ and $t_{ac}$ are shown in the center column, in the left and right illustrations, respectively, in Table 3. Consequently, H reads label t from the tableau and acts accordingly. The behavior of V and H is described by Horn formulas $\psi_V$ and $\psi_H$, respectively, as formally defined in [2, Section 4].
Fundamentally, any conversion between two distinct tape symbols, say from a to b, in any column of the extended tableau, must occur through an intermediate trace. Table 4 provides an illustration, relying on the label $t_{ab}$ and, more precisely, the following instruction of machine N:
$t_{ab}: (q_3, a) \rightarrow (q_4, b, -).$
The marked symbol a in the top row in Table 4 can only change into the marked symbol b in the bottom row via an intermediate trace, such as $t_{ab}$.
A few additional clarifications regarding Table 4 are necessary. First, each symbol change from row to row is indicated with an arrow for better visualization. Second, the boxes surrounding symbols a and b are merely included to improve readability.
To summarize, the novelty of our approach in [2] is twofold. First, we introduce an extended tableau that explicitly stores instruction labels, enabling single-symbol changes between consecutive rows. Second, we analyze the tableau from both a vertical perspective ($\psi_V$) and a helicopter perspective ($\psi_H$), combining them into the succinct Horn formula $\psi_{step}^{\eta*} = \psi_V \wedge \psi_H$.

2.4. Explicating $\psi_{step}^{det}$

We now explicate the second conjunct of equation (5), which pertains exclusively to the deterministic instructions t of the nondeterministic TM under consideration; specifically, those satisfying $t \in T^{det}$, as defined in Definition A7.
We distinguish between the subsets $T^{det}_+$ and $T^{det}_-$, along with their associated formulas $\psi^{det}_+$ and $\psi^{det}_-$, respectively. The overall formula for deterministic transitions is thus expressed as:
$\psi_{step}^{det} = \psi^{det}_+ \wedge \psi^{det}_-.$
We present the explicit definition of the first conjunct, $\psi^{det}_+$, leaving the construction of $\psi^{det}_-$ to the reader by symmetry:
$\psi^{det}_+ = \bigwedge_{l} \bigwedge_{j} \bigwedge_{t \in T^{det}_+} \bigwedge_{s} \Bigl( \mathit{source}_t@(3l-2,j) \;\wedge\; s@(3l-2,j+1) \;\wedge\; t@(3l,j) \;\rightarrow\; \mathit{write}_t@(3l+1,j) \;\wedge\; s_{\mathit{target}_t}@(3l+1,j+1) \Bigr),$
where $1 \le l \le n^k+1$ and $1 \le j \le n^k+2$, with $s \in \Phi$. For precise definitions of the operators such as $\mathit{source}_t$, $\mathit{write}_t$, and $\mathit{target}_t$, see Definition A6.
The formula $\psi^{det}_+$ comprises $O(n^{2k})$ literals. An analogous construction, along with the same complexity bound, applies to $\psi^{det}_-$. Together, these formulas encapsulate traditional top-down reasoning. Here, the top is row $3l-2$, going via row $3l$ to the bottom row $3l+1$. This aligns with the established result that such reasoning can be fully captured by a Horn formula [20](p. 35).
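As an illustration of how $\psi^{det}_+$ can be generated mechanically, here is a hedged Python sketch; the operators $\mathit{source}_t$, $\mathit{write}_t$, and $\mathit{target}_t$ are modeled as entries of a plain dictionary, which is an assumption of this sketch rather than the formal content of Definition A6.

```python
def psi_det_plus(n, k, det_plus, phi):
    """Generate the implications of psi_det_plus (a sketch).

    det_plus maps a label t to (source_t, write_t, target_t): the tape-state
    symbol read, the symbol written, and the new state.  Each implication:
      source_t@(3l-2, j) and s@(3l-2, j+1) and t@(3l, j)
        ->  write_t@(3l+1, j) and s-with-target-state@(3l+1, j+1).
    """
    clauses = []
    for l in range(1, n**k + 2):
        for j in range(1, n**k + 2):
            for t, (source, write, target) in det_plus.items():
                for s in phi:
                    body = [(source, 3*l - 2, j), (s, 3*l - 2, j + 1), (t, 3*l, j)]
                    head = [(write, 3*l + 1, j), (f"{s}_{target}", 3*l + 1, j + 1)]
                    clauses.append((body, head))
    return clauses

# Tiny example with one rightward instruction t0 : (q0, a) -> (q1, a, +).
print(len(psi_det_plus(2, 1, {"t0": ("a_q0", "a", "q1")}, {"a", "b"})))
```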

3. Extended Tableau with Holes

An extended tableau is a matrix consisting of $(3n^k+1) \times (n^k+2)$ cells. It is formed by augmenting each of the $n^k$ rows of the basic tableau (except the bottom row) with two auxiliary rows. Going forward, the context will clarify which version of the tableau—extended or basic—is being referred to. The reader is expected to switch between these representations as appropriate.

Notation 

Given an extended tableau, the cell at row i and column j is denoted by $cell[i,j]$ and is intended to store a symbol $s \in \Sigma$. The contents of these cells are represented using the variables of $\psi_{trim}$, which, unless specified, is used in place of the original formula $\psi$.
When the variable $x_{i,j,s}$ is assigned the value 1, it signifies that $cell[i,j]$ in the extended tableau contains the symbol s. We also denote this situation as follows:
$s@(i,j).$
Conversely, we write
$\neg s@(i,j)$ or $\overline{s@(i,j)}$
when $x_{i,j,s}$ is assigned 0, using either notation interchangeably to improve readability.
When referring to the corresponding basic tableau, which consists of $n^k \times (n^k+2)$ cells, we use the notation
$s@(i,j)_B$
to abbreviate
$s@(3i-2,j),$
indicating that the cell at row i and column j in the basic tableau corresponds to row $3i-2$ and column j in the extended tableau. Similarly, we write
$\neg s@(i,j)_B$ or $\overline{s@(i,j)_B}$
to mean
$\overline{s@(3i-2,j)}.$
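A small helper, in the same illustrative Python style used above, captures the basic-to-extended row mapping behind the $s@(i,j)_B$ abbreviation (the function names are assumptions of this sketch):

```python
def extended_row(i_basic):
    """Row 3*i - 2 of the extended tableau corresponds to row i of the basic one."""
    return 3 * i_basic - 2

def at_basic(s, i, j):
    """s@(i, j)_B, expressed as the extended-tableau proposition s@(3i-2, j)."""
    return (s, extended_row(i), j)

print(at_basic("a_q5", 4, 12))   # -> ('a_q5', 10, 12)
```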
Definition 2.
Consider $\psi_{trim}$ and, correspondingly, its extended and basic tableaux. We say that $cell[i,j]$ in the extended tableau contains a hole iff $x_{i,j,s}$ is false for all $s \in \Sigma$. We say that $cell[i,j]$ in the basic tableau contains a hole iff $cell[3i-2,j]$ in the extended tableau contains a hole.
Definition 2 does not exclude the possibility that a specific variable $x_{i,j,s^*}$, for some $s^* \in \Sigma$, may later be “turned on,” effectively filling the hole with symbol $s^*$. For example, suppose $cell[i,j]$ initially contains a hole, but a human agent (e.g., a user of an off-the-shelf HORNSAT solver H) later assigns it the tape-state symbol $a_{q_5}$. This scenario is illustrated in:
  • Table 5 (with an extended tableau), and
  • Table 6 (with the corresponding basic tableau).
What are the implications of the proposition $a_{q_5}@(3l-2,j)$, with $1 \le l \le n^k+1$?
To address this question in the remainder of this section, we first introduce the Horn formula $\psi_{extra}^{1}$ in Section 3.1, followed by a closer examination of one of its subformulas, $\psi_{extra}^{extend}$, in Section 3.2. Then we focus on the concept of “user interaction” in Section 3.3. Finally, we present the original FHB (Filling Holes with Backtracking) algorithm in Section 3.4.

3.1. Introducing $\psi_{extra}^{1}$

The formula $\psi_{extra}^{1}$ captures global properties of a TM computation. While it is redundant in the context of $\psi$’s satisfiability, it proves useful when $\psi_{trim}$ is satisfiable in the general case—namely when the basic tableau still contains holes. To convey the meaning of $\psi_{extra}^{1}$, we begin by examining the example in Table 5.
Remark 5.
The Horn formula $\psi_{extra}^{1}$ corresponds to $\psi_{extra}^{\eta}$ from [1], with two notational differences:
  • the formula $\psi_{extra}^{\eta}$ is defined over a basic tableau, rather than an extended tableau; and
  • $\psi_{extra}^{\eta}$ uses the notation $q\,s$ (instead of $s_q$) to denote the TM’s head is scanning s in state q.
A precise translation from $\psi_{extra}^{\eta}$ to $\psi_{extra}^{1}$ is straightforward. We treat $\psi_{extra}^{1}$ informally here and provide a formal definition in Appendix B. Importantly, $\psi_{extra}^{1}$ is a Horn formula of size $O(n^{\kappa})$, where the constant $\kappa = 4k$.
If $a_{q_5}@(i,j)$ holds (where $a \in \Phi$, $q_5 \in Q$, and $i = 3l-2$), meaning $x_{i,j,a_{q_5}} = 1$, then the tape head of N in $cell[i,j]$ cannot reach any of the crossed-out cells in Table 5. This restriction follows from the fact that a TM can move its tape head by at most one cell per transition—either left or right. In terms of Table 5, this corresponds to a transition between rows $3l-2$ and $3l+1$, with $1 \le l \le n^k$—or equivalently, between any two consecutive rows marked with crosses. These transitions mirror those between consecutive rows in the basic tableau (Table 6).
Furthermore, in each column of the basic tableau (and similarly in the extended tableau), all crossed-out cells either contain or are required to contain the same tape symbol $s \in \Phi$. This constraint follows from the fact that a TM can only modify a tape symbol when its head is directly over that cell.
Therefore, filling the hole in $cell[3l-2,j]$ (Table 5) with the symbol $a_{q_5}$ effectively amounts to filling in all crossed-out cells, albeit indirectly. In other words, the condition $a_{q_5}@(3l-2,j)$ in Table 5 ensures that only the uncrossed cells in Table 6 can encode the binary choices made by N on w.
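The geometric restriction can be stated compactly in code: a head placed at basic-tableau cell $(l, j)$ can never reach a cell whose column offset exceeds its row offset. The following sketch (function names assumed for illustration) computes the corresponding crossed-out cells.

```python
def head_reachable(l, j, l2, j2):
    """In the basic tableau, a head at cell (l, j) can be at (l2, j2) only if
    the column offset does not exceed the row offset (one move per step)."""
    return abs(j2 - j) <= abs(l2 - l)

def crossed_out(l, j, rows, cols):
    """Cells that the head provably cannot occupy, given a head at (l, j)."""
    return {(r, c) for r in range(1, rows + 1) for c in range(1, cols + 1)
            if not head_reachable(l, j, r, c)}

# Example: a head guessed in the center of a 9-by-9 basic tableau.
print(len(crossed_out(5, 5, 9, 9)))
```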
Formula $\psi_{extra}^{1}$ expresses these restrictions as a conjunction of four parts:
$\psi_{extra}^{1} = \bigwedge_{1 \le l \le n^k+1}\ \bigwedge_{1 \le j \le n^k+2}\ \bigwedge_{s \in \Phi}\ \bigwedge_{q \in Q} \Bigl( \psi_{extra}^{single} \;\wedge\; \bigwedge_{s' \in \Phi} \bigl( \psi_{extra}^{left} \wedge \psi_{extra}^{right} \wedge \psi_{extra}^{extend} \bigr) \Bigr).$
The formal definition of each conjunct, provided in Appendix B, aligns with Table 7, which extends the structure shown in Table 5. In each column of Table 7, every cross represents the same tape symbol.
Each conjunct plays a distinct role:
  • Single part ($\psi_{extra}^{single}$): Ensures that each row $3l-2$ contains at most one tape-state symbol $s_q$, such as $a_{q_5}$.
  • Left part ($\psi_{extra}^{left}$): Handles the crossed-out cells to the left of $a_{q_5}@(3l-2,j)$ in Table 7.
  • Right part ($\psi_{extra}^{right}$): Covers the crossed-out cells to the right of $a_{q_5}@(3l-2,j)$ in Table 7.
  • Extend part ($\psi_{extra}^{extend}$): Introduces additional refinements not discussed in [1], as that work does not consider the $\psi_{step}^{\eta}$-based intricacies of an extended tableau.
The final point is elaborated in the following section.

3.2. Formula $\psi_{extra}^{extend}$

To illustrate the use of $\psi_{extra}^{extend}$, consider Table 3 again and suppose that proposition
$a_{q_1}@(3l-2,j)$
has already been guessed (by the human agent) for some fixed l and j. We present two clarifications.

3.2.1. Clarification 1

If it later follows that
$a_{q_2}@(3l+1,j-1)$
also holds—either due to another user guess or, more realistically, as a consequence of $cell[3l+1,j+1]$ containing a tape symbol—then
$t_{ab}@(3l,j)$
should automatically hold as well. The current state of affairs corresponds to the left illustration in Table 3.
This inference arises from implications embedded in $\psi_{extra}^{extend}$, such as:
$a_{q_1}@(3l-2,j) \;\wedge\; a_{q_2}@(3l+1,j-1) \;\rightarrow\; t_{ab}@(3l,j).$
Similarly, if $t_{ac}$ applies instead of $t_{ab}$, as shown with the right illustration in Table 3, we have:
$a_{q_1}@(3l-2,j) \;\wedge\; d_{q_3}@(3l+1,j+1) \;\rightarrow\; t_{ac}@(3l,j).$

3.2.2. Clarification 2

Suppose again that
$a_{q_1}@(3l-2,j)$
has been guessed for some fixed l and j and that, in adherence to the left illustration in Table 3, $cell[3l+1,j+1]$ contains some tape symbol s. Then the following family of inferences,
$\bigwedge_{s \in \Phi}\ \bigwedge_{s' \in \Phi} \Bigl( a_{q_1}@(3l-2,j) \;\wedge\; s@(3l+1,j+1) \;\wedge\; s'@(3l-2,j-1) \;\rightarrow\; s'_{q_2}@(3l+1,j-1) \Bigr),$
allows for an automatic derivation of
$a_{q_2}@(3l+1,j-1),$
where s and s′ stand for d and a, respectively, and $q_2$ denotes the target state of $t_{ab}$ (in our running example).
Remark 6.
The formal definition of $\psi_{extra}^{extend}$ is provided in Appendix B and incorporates both clarifications discussed above. However, Clarification 2 also hinges upon the constraints in [2, Section 4.3].

3.3. User Interaction

We assume that $\psi_{trim}$ is satisfiable and that the extended tableau reflects this condition, typically containing several holes, as illustrated in Table 7. A hole in the (extended) tableau, located at row index i and column index j, represents more than just an empty cell. To be conservative, we stipulate the following:
  • Single Hole: If $cell[i,j]$ is the only hole in the tableau, it corresponds to at most c possible accepting computation paths, where c is the cardinality of $\Sigma$. (In fact, in this scenario, it contributes to representing at most one accepting path.)
  • Two Holes: If $cell[i,j]$ is one of two holes in the tableau, it contributes to representing up to $c \times c$ possible accepting computation paths.
  • Three Holes: If $cell[i,j]$ is one of three holes, it contributes to representing up to $c \times c \times c$ possible accepting computation paths.
This pattern continues, with each additional hole multiplying the maximum number of possible accepting computation paths by c.
In the general case, the (extended) tableau, composed of a polynomial number of cells, indirectly represents an exponentially large number of paths for N on w, including paths that are syntactically inadmissible from the perspective of N’s step-by-step behavior. Among the syntactically admissible paths, there are both rejecting and accepting paths.
This flexibility is achieved by leaving most cells unfilled. The Horn clauses associated with $\psi_{trim}$ remain implicitly active in the background, waiting for an external user to fill in a hole via an additional specification, such as
$s^*_{q^*}@(i^*,j^*),$
where
$s^* \in \Phi$ and $q^* \in Q$,
$i^* = 3l^*-2$ and $1 < l^* < n^k+1$,
$1 < j^* < n^k+2$.
Consequently, the HORNSAT solver H is called upon again, now tasked with satisfying
$\psi_{trim} \;\wedge\; s^*_{q^*}@(i^*,j^*),$
which stands for
$\psi_{trim} \;\wedge\; x_{i^*,j^*,s^*_{q^*}}.$
After two more user interventions, the solver is tasked with satisfying the following type of formula:
$\psi_{trim} \;\wedge\; s^*_{q^*}@(i^*,j^*) \;\wedge\; s^{**}_{q^{**}}@(i^{**},j^{**}) \;\wedge\; s^{***}_{q^{***}}@(i^{***},j^{***}),$
where
$i^{**} = 3l^{**}-2,$
$i^{***} = 3l^{***}-2.$
However, if the user’s guess (e.g., $s^*_{q^*}@(i^*,j^*)$) leads the solver H to detect unsatisfiability, backtracking is required. In such cases, the user may revise the assignment—for instance, replacing
$s^*_{q^*}@(i^*,j^*)$ with $s^{*\prime}_{q^{*\prime}}@(i^*,j^*),$
where $(s^{*\prime}, q^{*\prime}) \ne (s^*, q^*)$. This means that $s^{*\prime} \ne s^*$ or $q^{*\prime} \ne q^*$ or both.
Fortunately, as shown in Claim 3 of [1] in the context of the basic tableau, asymptotic analysis (with $n \rightarrow \infty$) reveals that filling any hole in the center row of a convex polygon of holes reduces the space for binary choices by a factor of $\frac{1}{2}$. This effect is illustrated in two places:
  • Table 8 demonstrates the initial scaling.
  • Table 9 shows a second intervention in row 4, where enforcing $b_{q_9}@(4,12)_B$ results in 41 additional crossed-out cells.
Remark 7.
To simplify our exposition, the leftmost columns in our depicted tableaux do not contain the boundary marker ⊢. However, strictly speaking, column 1 should contain the boundary marker ⊢, while tape and tape-state symbols appear only from column 2 onward.

3.4. The FHB Algorithm

Conceptually, the FHB algorithm relies on a standard HORNSAT solver H and integrates the actions of the external user, automating her interventions and incorporating backtracking as a built-in feature. We remark that H operates in nearly linear time [21].
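For readers who want a concrete picture of what H computes, here is a minimal unit-propagation HORNSAT sketch in Python. It is a textbook, quadratic-time illustration—not the nearly linear-time solver of [21]—and its clause representation (a body set and an optional head) is an assumption of this sketch.

```python
def hornsat(clauses):
    """Unit-propagation HORNSAT (illustrative sketch).

    Each clause is (body, head): body is a set of variables that must all be
    true to force head; head is a variable, or None for a purely negative
    (goal) clause.  Returns (satisfiable, variables forced to true).
    """
    true, changed = set(), True
    while changed:
        changed = False
        for body, head in clauses:
            if body <= true:
                if head is None:
                    return False, true          # a negative clause is violated
                if head not in true:
                    true.add(head)
                    changed = True
    return True, true

# Fact x, rules x -> y and y -> z, and the goal clause not(x and z):
print(hornsat([(set(), "x"), ({"x"}, "y"), ({"y"}, "z"), ({"x", "z"}, None)]))
# propagation forces x, y, z and then violates the goal clause -> unsatisfiable
```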
Recall the definition of $\psi_{trim}$,
$\psi_{trim} = \psi_{step}^{\eta} \wedge \psi_{start} \wedge \psi_{accept} \wedge \psi_{cell}' \wedge \psi_{extra}^{1} \wedge \psi_{extra}^{2},$
which will turn out to be $O(n^{\kappa_0})$ literals long for some constant $\kappa_0$. For the time being, we substitute the truth value 1 for $\psi_{extra}^{2}$ (to be revisited in Section 5). In this context, $\psi_{extra}^{1}$ is the largest conjunct, of size $O(n^{4k})$, as follows from Appendix B.
Additionally, we will have at most $O(n^k)$ extra stipulations of the form $s_q@(3l-2,j)$, namely, one per three rows in the extended tableau. Hence, an upper bound on the total cost of our HORNSAT instance,
$\psi_{trim} \;\wedge\; s^*_{q^*}@(3l^*-2,j^*) \;\wedge\; \cdots,$
can be expressed as
$p(n) = \kappa_1 \cdot n^{\kappa_2},$
for some constants $\kappa_1$ and $\kappa_2$.
The FHB algorithm begins with $\psi_{trim}$, an instance of size less than $p(n)$ (and thus of cost at most $p(n)$), and runs the solver H on it, resulting in a trivial “satisfiable” as a tentative outcome. (If N’s computation on w is deterministic, then the outcome is permanent and either satisfiable or unsatisfiable.) Next, the algorithm selects the center row, or one of the two center rows, of the basic tableau and injects the first tape-state symbol—the first $s_q$ symbol appearing in a standard list representation of $\Phi \times Q$—into the leftmost hole in that row. If backtracking is required, subsequent iterations will use different tape-state symbols, and if this does not suffice, the next hole (from left to right) in the row will be filled instead, starting again with the first $s_q$ symbol appearing in a standard ordering of $\Phi \times Q$, and so on.
For a row containing holes, there are at most $n^k$ ways to inject some specific tape-state symbol $s_q$, with $(s,q) \in \Phi \times Q$, into that row. This leads to our key observation:
There are at most $c_0 \cdot n^k$ ways to inject any tape-state symbol $s_q$, with $(s,q) \in \Phi \times Q$, into a row, where $c_0$ denotes the cardinality of $\Phi \times Q$.
The first user intervention results in scaling the space of binary choices by $\frac{1}{2}$, shrinking from size $p(n)$ to size $\Delta \cdot p(n)$, with $\Delta = \frac{1}{2}$. In the next two interventions, our algorithm selects the middle row (or, if applicable, one of the two middle rows) of the first and second convex polygons of holes, read from top to bottom in the basic tableau. In the next four interventions, our algorithm selects the middle row (or one of two middle rows) of each of the four smaller convex polygons of holes, moving sequentially from top to bottom. This pattern continues in subsequent steps.
Immediately after each intervention, the FHB algorithm directs the solver H to check the entire (extended) tableau for unsatisfiability and, in the process, simplify the underlying Horn clauses as much as possible, taking into account all constraints specified by $\psi_{trim} \wedge \cdots$, where the dots refer to the cumulative intervention stipulations made up to that point.
After each stage of interventions—one intervention in stage 1, two interventions in stage 2, four interventions in stage 3, eight interventions in stage 4, and so on—the solver H runs on an instance that has been shrunk in size by $\Delta = \frac{1}{2}$. To be technically precise, the solver H continues operating on the entire instance, but the space of binary choices has been shrunk by a factor of $\Delta$ after each stage. As a result, we are intrinsically dealing with an instance of size $\Delta^m \cdot p(n)$ after m stages. Hence, m is bounded from above by $O(\log(n))$. Additionally, the bookkeeping for backtracking itself incurs at most a polynomial cost. A runtime stack with a constant overhead per recursive call is sufficient in practice [22].
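The bound on the number of stages can be checked with a one-line computation; the function below is only an illustration of the geometric shrinking, with $\Delta = \frac{1}{2}$ assumed and a hypothetical instance size plugged in.

```python
import math

def stages_until_trivial(p_n, delta=0.5):
    """Smallest m with delta**m * p(n) <= 1, i.e. m = O(log p(n)) = O(log n)
    for a polynomially bounded p(n)."""
    return math.ceil(math.log(p_n, 1 / delta))

print(stages_until_trivial(p_n=10**6))   # about 20 stages for a million-literal instance
```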
Remark 8.
In future work, the software engineer could reduce the exponent $\kappa_2$ in
$p(n) = \kappa_1 \cdot n^{\kappa_2},$
by considering the on-the-fly generation of the Horn constraints associated with $\psi_{step}^{\eta}$ and/or $\psi_{extra}^{1}$.
For instance, the formula $\psi_{step}^{\eta}$ could expand and contract based on the placement of the $s_q$ symbols in the tableau, rather than conservatively accounting for all possibilities in advance. Similarly, the tailored constraints related to $\psi_{extra}^{1}$ could be added only when a guess $s^*_{q^*}@(3l^*-2,j^*)$ is made, causing the formula $\psi_{extra}^{1}$ to grow incrementally with each additional guess and to shrink during backtracking.
Theorem 1
(Reappropriated from Daylight [1](p. 27).) Consider a nondeterministic polynomial time TM $\langle N, k \rangle$ that runs on an input w of length n. The runtime $R(n)$ of the FHB algorithm, applied to $\langle N, k \rangle$ and w, satisfies
$R(n) \le 2^{\kappa \cdot n^k},$
where the constant $\kappa > 1$.
Theorem 1 suggests a prohibitive runtime. However, a refinement of the method leads to a significantly tighter upper bound, as discussed in the remainder of this paper.

4. The 3-SAT Solver $N^*$

Even a devil’s advocate would have to admit that the analysis thus far is unduly pessimistic, as it assumes that every cell in the basic tableau could involve a binary nondeterministic guess. In reality, the situation is considerably more favorable. Only a portion of the basic tableau entails binary choices, and crucially, the outcome of each guess (i.e., the transition to either state $q_a$ or $q_b$) determines the presence of a specific state symbol (e.g., $q_c$) in another cell—typically further down—in the basic tableau. These observations about the basic tableau naturally carry over to the extended tableau.
To clarify why the situation is more favorable, we begin in this section by formally defining the 3-SAT solver $N^*$, supplemented by informal insights. In Section 5, we explore the inter-cell dependencies of $N^*$ across distant rows of the basic tableau. Then, in Section 6, we conclude with a presentation of two algorithms: the refined rFHB and the streamlined sFHB algorithm.

Overview

The TM $N^*$ runs on an input word w of length n, where a substring $\tilde{w}$ of input word w encodes a 3-SAT formula $\phi$ with l propositional variables ($x_1, \ldots, x_l$) and m clauses. Each clause consists of three literals—for example, $x_2 \vee \neg x_7 \vee x_{92}$. Here, $l \le m < n$.
With respect to 3-SAT itself, an informal grasp of the following stipulations suffices:
  • Each variable $x_i$ appears in at least two literals across the formula $\phi$; otherwise, such a variable (occurring only once) can be eliminated through preprocessing.
  • No clause contains the same variable x more than once—whether as $x \vee x$, $x \vee \bar{x}$, or $\bar{x} \vee \bar{x}$. Hence, $l \le m$.
  • Accordingly, we assume that as m increases sufficiently, so does l.
    - The lower bound is $m = \Omega(l)$, as each variable must be constrained in some way.
    - In a deliberately conservative upper bound, we posit $m = O(l^3)$.
TM $N^*$ is an $\langle N, 2 \rangle$ machine. Its input word w has the form:
$\tilde{w}\ \#\ \square\,\square \cdots \square\ \#,$
where the number of blank symbols ($\square$) in between the two # markers is exactly l, and the comma is included solely for readability.
The operation of $N^*$, with state set Q and tape alphabet $\Phi$, proceeds through three sequential stages:
  • Coin-Tossing Stage $\hat{S}$: To denote elements specific to this stage, we annotate instruction labels and state symbols with a “roof” symbol ^. This annotation signifies divergence—that is, the nondeterminism which arises exclusively during this stage. The coin-tossing stage uses seven machine instructions labeled $\hat{t}_1, \hat{t}_2, \ldots, \hat{t}_7$ and operates with the following state set:
    $\hat{Q} = \{\hat{q}_0, \hat{q}_1, \hat{Q}_0, \ldots\}$ with $\hat{Q} \subset Q$.
  • Updating Stage $\overleftrightarrow{S}$: Here, instruction labels and state symbols are annotated with a bidirectional arrow ↔, indicating the machine’s back-and-forth traversal across the tape. This movement is generally required to update the encoding of the formula $\phi$ (i.e., the substring $\tilde{w}$) based on the coin-toss outcomes. The updating stage employs instructions labeled $\overleftrightarrow{t}_1, \overleftrightarrow{t}_2, \ldots$, and uses the following set of states:
    $\overleftrightarrow{Q} = \{\overleftrightarrow{q}_0, \overleftrightarrow{q}_1, \overleftrightarrow{q}_2, \overleftrightarrow{Q}_0, \overleftrightarrow{U}_0, \ldots\}$ with $\overleftrightarrow{Q} \subset Q$.
  • Checking Stage $\overleftarrow{S}$: In this final stage, instruction labels and state symbols are annotated with a left arrow ←, reflecting the machine’s predominant leftward movement. While most transitions are leftward, occasional rightward steps will occur locally. The checking stage uses instructions labeled $\overleftarrow{t}_1, \overleftarrow{t}_2, \ldots$ and the state set:
    $\overleftarrow{Q} = \{\overleftarrow{q}_3, \overleftarrow{q}_0, \overleftarrow{Q}_0, \overleftarrow{q}_{reject}, \ldots\}$ with $\overleftarrow{Q} \subset Q$.
    The symbol $\overleftarrow{q}_{reject}$ serves as a surrogate for $q_{reject}$, and we adopt the latter notation in the remainder of this paper.

4.1. Coin-Tossing Stage

The machine $N^*$ stores the encoding of the formula $\phi$, represented as the string $\tilde{w}$, on its tape. The tape head is initially positioned at the first blank symbol, $\square$, right after $\tilde{w}\,\#$. More specifically, the initial tape configuration is as follows:
$\tilde{w}\ \#\ \square_{\hat{q}_1}\ \square \cdots \square\ \#.$
Here, $N^*$ is in state $\hat{q}_1$, reading the first of l blank symbols. The punctuation is included solely for readability.
The machine generates l bits, proceeding from left to right and writing each bit—either 0 or 1—into a separate tape cell. This sequence, which is supposed to represent the outcome of l independent coin tosses, is enclosed at both ends by the marker #.
One outcome of any coin toss must correspond to a rightward movement (+) and the other to a leftward movement (−). To enforce this constraint—consistent with Remark 4—we implement the following behavior, starting in the $\hat{q}_1$ tossing state:
$\hat{t}_1: (\hat{q}_1, \square) \rightarrow (\hat{q}_1, 1, +), \qquad \hat{t}_2: (\hat{q}_1, \square) \rightarrow (\hat{q}_0, 0, -).$
Among all instructions pertaining to $N^*$, only $\hat{t}_1$ and $\hat{t}_2$ involve nondeterministic choices. (A small simulation sketch of this stage follows the list below.)
  • If a coin toss yields bit 1, the machine moves its head one cell to the right and re-enters the $\hat{q}_1$ state. See instruction $\hat{t}_1$.
  • If a coin toss yields bit 0, the machine first moves its head one cell to the left and enters state $\hat{q}_0$. See instruction $\hat{t}_2$. Then the machine performs two deterministic moves to the right, ending up in state $\hat{q}_1$ again.
    - See instructions $\hat{t}_3$–$\hat{t}_5$ for the first move to the right:
      $\hat{t}_3: (\hat{q}_0, \#) \rightarrow (\hat{Q}_0, \#, +), \qquad \hat{t}_4: (\hat{q}_0, 0) \rightarrow (\hat{Q}_0, 0, +), \qquad \hat{t}_5: (\hat{q}_0, 1) \rightarrow (\hat{Q}_0, 1, +).$
    - See instruction $\hat{t}_6$ for the second move to the right:
      $\hat{t}_6: (\hat{Q}_0, 0) \rightarrow (\hat{q}_1, 0, +).$
  • Once the machine reaches the rightmost # marker (while in state $\hat{q}_1$), it moves leftward and enters state $\overleftrightarrow{q}_2$:
    $\hat{t}_7: (\hat{q}_1, \#) \rightarrow (\overleftrightarrow{q}_2, \#, -).$
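The following Python sketch simulates the coin-tossing instructions $\hat{t}_1$–$\hat{t}_7$ on the blank segment of the tape; the string representation of tape and states is an assumption made for illustration (for example, '_' stands for the blank symbol and "q2<->" for $\overleftrightarrow{q}_2$).

```python
def coin_tossing_stage(l, tosses):
    """Simulate the coin-tossing stage (illustrative sketch).

    tosses lists the l nondeterministic outcomes (0 or 1) in the order they
    are produced, i.e. left to right on the tape.  Returns the bit segment
    written between the two # markers.
    """
    tape = ["#"] + ["_"] * l + ["#"]      # '_' stands for the blank symbol
    head, state, t = 1, "q1^", iter(tosses)
    while state != "q2<->":
        sym = tape[head]
        if state == "q1^" and sym == "_":                 # toss a coin
            bit = next(t)
            if bit == 1:                                  # instruction t1^
                tape[head], head = "1", head + 1
            else:                                         # instruction t2^
                tape[head], head, state = "0", head - 1, "q0^"
        elif state == "q0^":                              # t3^, t4^, t5^
            head, state = head + 1, "Q0^"
        elif state == "Q0^" and sym == "0":               # t6^
            head, state = head + 1, "q1^"
        elif state == "q1^" and sym == "#":               # t7^
            head, state = head - 1, "q2<->"
    return "".join(tape[1:-1])

print(coin_tossing_stage(4, [1, 0, 1, 0]))   # -> '1010'
```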
Upon completing the coin-tossing process, the machine will have generated l bits,
$b_1, b_2, \ldots, b_{l-1}, b_l,$
for, respectively, the propositional variables:
$x_l, x_{l-1}, \ldots, x_2, x_1.$
In other words, the j-th coin toss from the right ($1 \le j \le l$) determines the truth assignment, 0 or 1, for propositional variable $x_j$.
Assigning the truth value 1 to the variable $x_j$ entails that, during the Updating Stage $\overleftrightarrow{S}$, the machine will set each encoded occurrence of $x_j$ in the word $\tilde{w}$ to 1, and each encoded occurrence of $\neg x_j$ to 0. (A similar remark holds for the truth value 0.)
By preprogramming the proper constraints, to be detailed in Section 5, filling holes with tape-state symbols in the coin-tossing section of the basic tableau will automatically propagate to filling corresponding holes with tape-state symbols in lower sections of the entire basic tableau. Moreover, if and when all l coins have been tossed, the remaining basic tableau—and therefore the entire basic tableau—is fully determined. (Once the basic tableau is fully determined, the extended tableau is as well.) Even a devil’s advocate would expect this property to be reflected in a worst-case analysis of the FHB algorithm or a refinement thereof.

4.1.1. Four Coin Tosses

Table 10 illustrates the structure of the coin-tossing process for l = 4 . The hyphens and dots in the illustration represent potential positions of the tape head (i.e., tape-state symbols), with the distinction between the two serving only for visual clarity. The two extreme computation runs are depicted solely with hyphens: the diagonal run at the top (consisting of five hyphens) produces all four bits as 1, while the zigzagging run takes longer to complete and results in all four bits being 0.
As Table 10 conveys, the coin-tossing process is represented by a matrix with $3l+1$ rows and $l+2$ columns. This matrix can be embedded within a basic $(3l+1) \times (3l+3)$ mini tableau. The corresponding extended $(3 \cdot (3l+1)+1) \times (3l+3)$ mini tableau is not shown here.
Theorem 1 in Section 3.4 provides a basis for analyzing the runtime associated with the mini tableau. Crucially, if the 3-SAT solver $N^*$ were solely responsible for tossing l coins, then no tighter bound than $2^l$—akin to Theorem 1’s worst-case runtime of the FHB algorithm—applies. In reality, however, the coin tosses of $N^*$ are made to correlate through the word $\tilde{w}$, which encodes the 3-SAT formula $\phi$. For not all sequences of l coin tosses are valid—if any at all.

4.1.2. Properties of Computation

To appreciate (and ultimately formalize) the correlation between the l coin tosses, we begin by presenting two basic insights regarding the computation runs of $N^*$ on w:
  • The basic mini tableau—and, more rigorously, the extended mini tableau—captures all nondeterminism (coin tossing) inherent to $N^*$, while also including rote deterministic computations.
    • Example 1: If four 1 bits are tossed consecutively, $N^*$’s tape head lands on $cell[5,6]$ of Table 10 and immediately begins rote deterministic computation from row 6.
    • Example 2: If four 0 bits are tossed instead, $N^*$ uses the entire mini tableau to complete the coin tossing, reaching $cell[13,6]$, before starting rote deterministic computation in row 14 onward.
  • The rote deterministic computation does not revisit column 6 (in Table 10) or any column to its right. More formally, the rightmost column $\hat{c}$ of the basic mini tableau, which is of length $3l+1$, contains exactly one tape-state symbol (namely, $\#_{\hat{q}_1}$). Furthermore, in the rest of the basic tableau, the same column $\hat{c}$ contains only the # tape symbol and thus no other tape-state symbols.
    (a) Although the two hyphens and three dots in column $\hat{c}$ indicate multiple possible positions for a tape-state symbol, only one tape-state symbol can appear in any particular computation. Moreover, that symbol must be $\#_{\hat{q}_1}$, implying that the following proposition holds:
    $\#_{\hat{q}_1}@(\hat{r}, \hat{c})_B,$
    for a suitable row index $\hat{r}$.
    (b) For each input formula $\phi$ and any valid placement of $\#_{\hat{q}_1}$ in the rightmost column $\hat{c}$, which amounts to specifying $\hat{r}$, we can determine (and thus preprogram) the position of the machine’s head—though not the corresponding tape-state symbol—in every subsequent row of the entire basic tableau. In other words, the implication of guessing row index $\hat{r}$—with $\hat{r} \in \{5, 7, 9, 11, 13\}$ in Table 10—is as follows:
    Apart from a substantial portion of the basic mini tableau, each row from $\hat{r}+1$ onward in the basic tableau contains exactly one uncrossed cell—indicating that the machine’s tape head is restricted from occupying the crossed-out cells.
This property is ensured through the construction of N * , detailed further in Section 4.2 and Section 4.3.

4.2. Updating Stage

Upon entering Stage $\overleftrightarrow{S}$, the machine $N^*$ has its tape head positioned at the rightmost bit, immediately before the rightmost # marker. The tape configuration is as follows:
$\&\ \$\,cl_m\,\$\ \cdots\ \$\,cl_2\,\$\,cl_1\,\$\ \#\ 1\ 0\ 0\ 0_{\overleftrightarrow{q}_2}\ \#,$
with
$0_{\overleftrightarrow{q}_2}@(\hat{r}+1, \hat{c}-1)_B.$
The machine is in state $\overleftrightarrow{q}_2$, with the head reading the bit 0 (in the current example).
Observe that the leftmost & symbol marks the beginning of the string $\tilde{w}$, and that the rightmost $ symbol marks the end of $\tilde{w}$. Moreover, each encoded clause, such as $cl_1$, is delimited by the $ marker.

4.2.1. Unary Encoding

To simplify the control flow, we adopt a unary encoding for each variable and its negation. Specifically, a variable $x_{\alpha}$ is encoded as a string of $\alpha$ copies of the symbol a:
$a\,a \cdots a.$
Similarly, the negated variable $\neg x_{\alpha}$ is encoded as a string of $\alpha$ copies of a distinct symbol $\bar{a}$:
$\bar{a}\,\bar{a} \cdots \bar{a}.$
Remark 9.
Encoding l propositional variables in binary requires $l \times (\log l)$ bits, whereas a unary encoding requires $l \times l$ bits. Although this results in an increase, it does not amount to an exponential blow-up; furthermore, it remains acceptable within the scope of computability theory [23].
Rather than stating formal desiderata, the unary encoding scheme is best illustrated through a few examples. Suppose Clause 1 is as follows:
$(x_3 \vee \neg x_2 \vee x_1).$
Its unary encoding, denoted as $cl_1$, is:
$a\,a\,a \;\vee\; \bar{a}\,\bar{a} \;\vee\; a,$
where the comma and extra spacing are included solely to enhance readability.
The subscript of the variable indicates the number of occurrences of either the symbol a or $\bar{a}$. If the literal is positive, the symbol a is used; if the literal is negative, $\bar{a}$ is used.
Now consider Clause 2:
$(\neg x_4 \vee x_2 \vee \neg x_3).$
Its unary encoding, denoted $cl_2$, is:
$\bar{a}\,\bar{a}\,\bar{a}\,\bar{a} \;\vee\; a\,a \;\vee\; \bar{a}\,\bar{a}\,\bar{a}.$
The punctuation is added solely for clarity in this exposition.
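A small Python sketch of this unary encoding scheme; the ASCII stand-ins ('A' for $\bar{a}$, 'v' for the separator $\vee$) and the signed-integer literal format are assumptions of this example, not part of the machine’s tape alphabet.

```python
def encode_literal(lit):
    """x_alpha -> 'a' * alpha; not-x_alpha -> 'A' * alpha ('A' stands for a-bar)."""
    return ("A" if lit < 0 else "a") * abs(lit)

def encode_clause(lits):
    """Unary encoding of one clause, literals joined by 'v' (for the or-symbol)."""
    return "v".join(encode_literal(lit) for lit in lits)

print(encode_clause([3, -2, 1]))    # Clause 1: (x3 or not-x2 or x1) -> 'aaavAAva'
print(encode_clause([-4, 2, -3]))   # Clause 2 -> 'AAAAvaavAAA'
```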

4.2.2. Updating a Unary Encoding: Part 1

Let us revisit Clause 1 and its unary encoding, $cl_1$:
$a\,a\,a \;\vee\; \bar{a}\,\bar{a} \;\vee\; a.$
Suppose the variable $x_1$ is assigned the truth value 1:
$x_1 = 1.$
Then, after an update operation, the corresponding string encoding should take the following form:
$?\;?\;? \;\vee\; ?\;? \;\vee\; 1,$
where each question mark (?) denotes a “don’t care” entry—a meta-symbol whose assigned value (a specific symbol from the alphabet Σ ) is inconsequential within this context.
Similarly, if
$x_2 = 1,$
then, eventually, the corresponding encoding should take the form:
$?\;?\;? \;\vee\; 0\;? \;\vee\; ?.$
Finally, if
$x_3 = 0,$
the encoding should ultimately take the form:
$0\;?\;? \;\vee\; ?\;? \;\vee\; ?,$
again with question marks indicating “don’t care” entries.
In summary, for each encoded literal, the leftmost symbol $\sigma \in \{a, \bar{a}\}$ should be overwritten with the appropriate truth value. The remaining symbols a and $\bar{a}$ in the encoded literal are superfluous.

4.2.3. Updating a Unary Encoding: Part 2

Suppose now that the machine’s tape head approaches the encoded clause $cl_1$:
$a\,a\,a \;\vee\; \bar{a}\,\bar{a} \;\vee\; a,$
from the right, under the assumption that $x_1 = 1$. The machine then traverses the entire (encoded) clause from right to left, continuing leftward across all (encoded) clauses preceding $cl_1$. During this traversal, the machine updates the first symbol $\sigma \in \{a, \bar{a}\}$ of each encoded literal (i.e., the rightmost one)—in every (encoded) clause—according to the following rule:
  • If $\sigma = a$, overwrite it with 1.
  • If $\sigma = \bar{a}$, overwrite it with 0.
Applying this rule to $cl_1$ yields the updated encoding:
$a\,a\,1 \;\vee\; \bar{a}\,0 \;\vee\; 1.$
We now proceed to the second update pass, under the assumption that $x_2 = 1$. The machine’s tape head approaches the previously updated string from the right-hand side, traversing the entire clause from right to left, as well as all (encoded) clauses to its left. In this pass, the machine updates the second symbol $\sigma \in \{a, \bar{a}\}$ of each encoded literal (counting from the right)—in every (encoded) clause—according to the following rule:
  • If $\sigma = a$, overwrite it with 1.
  • If $\sigma = \bar{a}$, overwrite it with 0.
Applying this rule, we obtain the updated string:
$a\,1\,1 \;\vee\; 0\,0 \;\vee\; 1.$
Finally, we arrive at the third update pass, under the assumption that x 3 = 0 . The machine’s tape head once again approaches the previously updated string from the right-hand side, traversing the entire clause from right to left, along with all (encoded) clauses to its left. In this pass, it updates the third symbol σ { a , a ¯ } of each encoded literal—in every (encoded) clause—according to the following rule:
  • If σ = a , overwrite it with 0.
  • If σ = a ¯ , overwrite it with 1.
Applying this rule yields the final updated string:
0 1 1 0 0 1 .
The resulting string has the desired form
0 ? ? 0 ? 1 ,
corresponding to the assignment
x 3 = 0 and x 2 = x 1 = 1 ,
which is best read from right to left, in keeping with the direction of the tape head’s movement.
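Continuing the sketch from Section 4.2.1 (same illustrative conventions), the three update passes can be mimicked in a few lines of Python; the final string reproduces the form obtained at the end of this subsection.

def update_pass(encoded_clause: str, alpha: int, b: int) -> str:
    # Pass for coin x_alpha = b: in each encoded literal, overwrite the alpha-th
    # symbol counting from the right ('a' -> b, 'A' -> 1 - b), if such a symbol exists.
    new_literals = []
    for lit in encoded_clause.split("v"):
        chars = list(lit)
        pos = len(chars) - alpha
        if pos >= 0 and chars[pos] in "aA":
            chars[pos] = str(b) if chars[pos] == "a" else str(1 - b)
        new_literals.append("".join(chars))
    return "v".join(new_literals)

cl1 = "aaavAAva"                 # (x3 ∨ ¬x2 ∨ x1), with 'A' standing in for a-bar
cl1 = update_pass(cl1, 1, 1)     # x1 = 1  ->  aa1vA0v1
cl1 = update_pass(cl1, 2, 1)     # x2 = 1  ->  a11v00v1
cl1 = update_pass(cl1, 3, 0)     # x3 = 0  ->  011v00v1
print(cl1)                       # 011v00v1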

4.2.4. The Updating Code

We shall now present the corresponding instructions of N * . Upon entering the Updating Stage  S , the machine N * has the configuration
& $ c l m $ $ c l 2 $ a a a a ¯ a ¯ a $ # 1 0 0 0 q 2 # ,
and starts with t 1 or t 2 :
t1: q2, 1 → q1, #, −,    t2: q2, 0 → q0, #, −.
In both cases, the bit (1 or 0) is overwritten with the # symbol.
In our running example, instruction t 1 does not apply, whereas t 2 does—since the scanned bit is not 1, but 0. This yields:
& $ c l m $ $ c l 2 $ a a a a ¯ a ¯ a $ # 1 0 0 q 0 # # .
Due to the symmetry in the behavior of updating a 1-bit and a 0-bit, we restrict our analysis to the second case ( t 2 ). Accordingly, the instructions and labels presented below are not exhaustive; additional instructions exist but are omitted here for brevity.

Moving Left 

Instructions t 3 and t 4 allow the machine to remain in state q 0 while traversing all remaining bits from right to left:
t3: q0, 0 → q0, 0, −,    t4: q0, 1 → q0, 1, −.
These transitions preserve the bit values and simply move the tape head to the left.
Eventually, upon encountering the leftmost # marker, the machine transitions from state q 0 to Q 0 :
t5: q0, # → Q0, #, −,
resulting in the following configuration:
& $ c l m $ $ c l 2 $ a a a a ¯ a ¯ a $ Q 0 # 1 0 0 # # .
When the machine encounters the $ symbol, or when it later visits the symbols a or a ¯ , it simply continues moving left, remaining in state Q 0 :
t6,x: Q0, x → Q0, x, −,    x ∈ { $, a, a¯ }.
The result of executing the instruction t 6 , $ for the first time is as follows:
& $ c l m $ $ c l 2 $ a a a a ¯ a ¯ a Q 0 $ # 1 0 0 # # .
The machine has now reached the first (encoded) literal of an (encoded) clause and switches from state Q 0 to state U 0 , moving left:
t7: Q0, ∨ → U0, ∨, −,
resulting in the configuration:
& $ c l m $ $ c l 2 $ a a a a ¯ a ¯ a U 0 $ # 1 0 0 # # .
While in the update state U 0 , encountering any of the symbols ∨, $, 0, or 1 does not trigger a state change; the machine’s tape head simply continues moving left:
t8,x: U0, x → U0, x, −,    x ∈ { ∨, $, 0, 1 }.
However, upon reading the symbol a (or a ¯ ), the machine carries out the update by writing the bit 0 (or 1, respectively) and returning to state Q 0 :
t9: U0, a → Q0, 0, −,    t10: U0, a¯ → Q0, 1, −.
In our running example, we obtain:
& $ c l m $ $ c l 2 $ a a a a ¯ a ¯ Q 0 0 $ # 1 0 0 # # .
At this point, also reconsider the instructions t 6 , x , where x { $ , a , a ¯ } : the machine remains in state Q 0 while scanning from right to left in search of the next ∨ symbol, if one exists. In our running example, the next ∨ symbol has already been located.
Remark 10.
As previously mentioned, we are restricting our analysis to the case of updating a 0-bit. However, for future reference, we provide the twin instructions of t 9 and t 10 , as follows:
τ9: U1, a → Q1, 1, −,    τ10: U1, a¯ → Q1, 0, −,
which update a and a ¯ to, respectively, 1 and 0.

Moving Right 

Finally, if the machine is in state U 0 or Q 0 and encounters the leftmost marker &, it transitions to state h and begins moving right:
t11: U0, & → h, &, +,
t12: Q0, & → h, &, +,
resulting in the following configuration:
& $ h c l m $ $ c l 2 $ a a 0 a ¯ 1 0 $ # 1 0 0 # # .
In state h , the machine scans rightward, searching for the leftmost occurrence of the symbol #. It continues moving right over the symbols ∨, $, 0, 1, a, and a ¯ without changing state:
t13,x: h, x → h, x, +,    x ∈ { ∨, $, 0, 1, a, a¯ }.
This ultimately brings us to, e.g., the following configuration:
& $ c l m $ $ c l 2 $ a a 0 a ¯ 1 0 $ # h 1 0 0 # # .
Upon encountering #, the machine transitions from state h to i and continues moving right:
t14: h, # → i, #, +,
which results in one of the following two types of configurations:
& $ c l m $ $ c l 2 $ 0 1 0 0 1 0 $ # # i # # # # ,
and
& $ c l m $ $ c l 2 $ a a 0 a ¯ 1 0 $ # 1 i 0 0 # # .
In the first type of configuration, when there are no remaining toss outcomes (to the right of the leftmost # symbol), the machine transitions from state i to q 3 , and moves left:
t15: i, # → q3, #, −.
This transition marks the beginning of the final phase: the Checking Stage.
However, in the second type of configuration, when one or more toss outcomes are still present, the machine continues moving right, searching for the rightmost bit. It first transitions from i to j while preserving the bit value:
t16: i, 0 → j, 0, +,    t17: i, 1 → j, 1, +.
Once in state j , the machine continues moving right through any remaining bits:
t18: j, 0 → j, 0, +,    t19: j, 1 → j, 1, +.
When the leftmost # of the remaining # symbols is encountered, the machine transitions from state j to state q 2 and moves left:
t20: j, # → q2, #, −.
At this point, the machine has returned to state q 2 , scanning the rightmost bit:
& $ c l m $ $ c l 2 $ a a 0 a ¯ 1 0 $ # 1 0 0 q 2 # # .
It is now prepared to repeat the earlier steps, thereby continuing the Updating Stage for another iteration.
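For readers who prefer executable notation, the listing above can be viewed as a partial transition table. The sketch below shows one way such a table could be represented and run in Python; only a few bit-preserving moves (t3, t4, t18, t19) are included, the remaining entries would be filled in analogously, and the dictionary layout is our own choice rather than part of the paper's construction.

# Keys: (state, scanned symbol); values: (new state, written symbol, move),
# with move = -1 (left) or +1 (right), mirroring the paper's "-" and "+".
DELTA = {
    ("q0", "0"): ("q0", "0", -1),   # t3: keep moving left over bits
    ("q0", "1"): ("q0", "1", -1),   # t4
    ("j",  "0"): ("j",  "0", +1),   # t18: keep moving right over bits
    ("j",  "1"): ("j",  "1", +1),   # t19
    # ... remaining instructions of the Updating Stage would be added analogously
}

def step(tape, head, state):
    # Execute one deterministic transition and return the updated configuration.
    new_state, written, move = DELTA[(state, tape[head])]
    tape[head] = written
    return tape, head + move, new_state

tape, head, state = list("0110"), 3, "q0"
while 0 <= head < len(tape) and (state, tape[head]) in DELTA:
    tape, head, state = step(tape, head, state)
print("".join(tape), head, state)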

4.3. Checking Stage

Once all tossed coins have been processed, N * reaches—via instruction t 15 —the following configuration:
& $ c l m $ $ c l 2 $ 0 1 0 0 1 0 $ # q 3 # # # ,
Here, N * is ready to perform a final traversal of the tape—from its current position to the left, with local left-to-right movements—to check whether each instantiated clause is satisfied.
Remark 11.
The caveat is that we want the tape movement behavior to remain invariant—that is, independent of the contents of the (encoded) instantiated clauses.
The initial phase of the right-to-left traversal is governed by the following two instructions:
t1: q3, # → q0, #, −,    t2: q0, $ → q0, $, −,
which results in the following configuration:
& $ c l m $ $ c l 2 $ 0 1 0 0 1 0 q 0 $ # # # # .
While in state q 0 , the machine switches to state Q 0 upon encountering the first ∨ symbol in the (encoded and instantiated) clause currently being scanned:
t3: q0, ∨ → Q0, ∨, −,
resulting, in our example, in the following configuration:
& $ c l m $ $ c l 2 $ 0 1 0 0 1 0 Q 0 $ # # # # .
The states Q 0 , Q 00 , and Q 000 indicate that the machine is currently processing—right to left—the first, second, and third (encoded and instantiated) literals of the clause being scanned, respectively. As we shall see shortly, transitions from these states to R 0 , R 00 , and R 000 , respectively, signify that the machine has located the leftmost bit of the corresponding literal, thereby determining its truth value.

4.3.1. First Literal

In state Q 0 , the machine continues moving left through the (encoded and instantiated) clause, remaining in the same state as it reads bits:
t4,x: Q0, x → Q0, x, −,    x ∈ { 0, 1 }.
Upon reaching the next ∨ symbol, the machine switches to the result state R 0 and moves one cell to the right:
t5: Q0, ∨ → R0, ∨, +,
resulting in the following configuration:
& $ c l m $ $ c l 2 $ 0 1 0 0 1 0 R 0 $ # # # # .
The machine can now determine whether the clause is already satisfied or not:
t6: R0, 1 → s00, 1, −,    t7: R0, 0 → q00, 0, −,
where the letter “s” in s 00 stands for “satisfied.”
In our running example, the first literal is not satisfied, resulting in the following configuration:
& $ c l m $ $ c l 2 $ 0 1 0 0 1 q 00 0 $ # # # # .
Since the first literal is not satisfied, the machine proceeds in state q 00 (rather than s 00 ). As we shall see shortly, the machine subsequently transitions from q 00 to Q 00 .

4.3.2. Second Literal: Part 1

The states q 00 and Q 00 indicate that the machine is currently processing—right to left—the second (encoded and instantiated) literal of the clause being scanned. The transition from Q 00 to R 00 signifies that the machine has located the leftmost bit of the literal, thereby determining its truth value.
The relevant transitions are as follows:
t8: q00, ∨ → Q00, ∨, −.
t9,x: Q00, x → Q00, x, −,    x ∈ { 0, 1 }.
t10: Q00, ∨ → R00, ∨, +.
Once in the result state R 00 ,
& $ c l m $ $ c l 2 $ 0 1 0 0 R 00 1 0 $ # # # # ,
the machine inspects the bit:
  • If the truth value is 1, it switches to state s 000 .
  • If the truth value is 0, it switches to state q 000 .
These transitions are captured by:
t11: R00, 1 → s000, 1, −,    t12: R00, 0 → q000, 0, −,
where the letter “s” in s 000 stands for “satisfied.”
In our running example, the second literal is not satisfied, resulting in:
& $ c l m $ $ c l 2 $ 0 1 0 q 000 0 1 0 $ # # # # .
Since the second literal is not satisfied, the machine proceeds in state q 000 (rather than s 000 ). As we shall see shortly, the machine subsequently transitions from q 000 to Q 000 .

4.3.3. Second Literal: Part 2

Similar to q 00 and Q 00 , the states s 00 and S 00 also indicate that the machine is currently processing—right to left—the second (encoded and instantiated) literal of a clause. However, in this case, the clause is already known to be satisfied:
& $ c l m $ $ c l 2 $ 0 1 0 0 1 s 00 1 $ # # # # .
In conformity with Remark 11, the transition from S 00 to T 00 indicates that the machine has located the leftmost bit of the literal.
The corresponding transitions are:
t8: s00, ∨ → S00, ∨, −.
t9,x: S00, x → S00, x, −,    x ∈ { 0, 1 }.
t10: S00, ∨ → T00, ∨, +.
Regardless of the leftmost bit’s value, the machine proceeds to state s 000 :
t11: T00, 1 → s000, 1, −,    t12: T00, 0 → s000, 0, −.

4.3.4. Third Literal: Part 1

The states q 000 and Q 000 indicate that the machine is currently processing—right to left—the third (encoded and instantiated) literal of the clause being scanned. The transition from Q 000 to R 000 signifies that the machine has located the leftmost bit of the literal, thereby determining its truth value.
The relevant transitions are as follows:
t13: q000, ∨ → Q000, ∨, −.
t14,x: Q000, x → Q000, x, −,    x ∈ { 0, 1 }.
t15: Q000, $ → R000, $, +.
Once in the result state R 000 ,
& $ c l m $ $ c l 2 $ 0 R 000 1 0 0 1 0 $ # # # # ,
the machine inspects the bit:
  • If the truth value is 1, it switches to state q 0 .
  • If the truth value is 0, it switches to state q r e j e c t .
These transitions are captured by:
t16: R000, 1 → q0, 1, −,    t17: R000, 0 → q_reject, 0, −.
In our running example, the third literal is not satisfied, and we obtain:
& $ c l m $ $ c l 2 $ q r e j e c t 0 1 0 0 1 0 $ # # # # .

4.3.5. Third Literal: Part 2

Similar to q 000 and Q 000 , the states s 000 and S 000 also indicate that the machine is currently processing—right to left—the third (encoded and instantiated) literal of a clause. However, in this case, the clause is already known to be satisfied. In conformity with Remark 11, the transition from S 000 to T 000 indicates that the machine has located the leftmost bit of the literal.
The corresponding transitions are:
t13: s000, ∨ → S000, ∨, −.
t14,x: S000, x → S000, x, −,    x ∈ { 0, 1 }.
t15: S000, $ → T000, $, +.
Regardless of the leftmost bit’s value, the machine proceeds to state q 0 :
t16: T000, 1 → q0, 1, −,    t17: T000, 0 → q0, 0, −.
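Semantically, the state families of the Checking Stage implement a right-to-left OR over the three instantiated literals: the q/Q/R states are used while the clause is not yet known to be satisfied, the s/S/T states once a leftmost bit 1 has been seen, and q_reject is reached only when all three leftmost bits are 0. A minimal Python sketch of that net effect, reusing the illustrative encoding of the earlier snippets, is:

def leftmost_bits(instantiated_clause: str):
    # The leftmost symbol of each instantiated literal carries its truth value.
    return [lit[0] for lit in instantiated_clause.split("v")]

def clause_survives(instantiated_clause: str) -> bool:
    # Mirrors the Checking Stage for one clause: survive iff some literal evaluates to 1.
    return "1" in leftmost_bits(instantiated_clause)

print(clause_survives("011v00v1"))   # True: the machine returns to q0 and moves on
print(clause_survives("010v01v0"))   # False: the running example, driven into q_reject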

4.3.6. Early Rejection

The symbol q r e j e c t must never appear in any cell of the (extended) tableau—this follows directly from the semantics of the Horn formula ψ a c c e p t (see point 2 in the introduction of this paper). Arguably for the sake of mathematical aesthetics, one could handle its hypothetical presence similarly to t 2 , using the transition:
t18: q_reject, $ → q0, $, −.
However, we prefer to discard instruction t 18 from further consideration.

4.3.7. Looping on the Left

If, and only if, all (encoded and instantiated) clauses have been examined from right to left, the tape head enters a loop confined to the two leftmost (non-blank) cells of the tape. In combination with instruction t 2 , the following instruction
t19: q0, & → q0, &, +
ensures that the tape head oscillates indefinitely between these two leftmost cells associated with the word w ˜ .

5. Long-Range Inter-Cell Dependencies: Defining ψ e x t r a 2

Consider a devil’s advocate who adheres to the formal definition of the 3-SAT solver N * presented in Section 4. She will recognize that, for a given ϕ , once the row r ^ corresponding to the guess
# q 1 ^ @ ( r ^ , c ^ ) B ,
is fixed, the positions of tape-state symbols in all subsequent rows of the basic tableau become fully determined. Furthermore, only the first r ^ rows of the basic tableau involve binary (nondeterministic) choices.
Based on this insight and the notion of “long-range inter-cell dependency,” introduced in this section, we aim to improve the prohibitive runtime established by Theorem 1, as revisited in Section 6. To this end, we first introduce the concept of a coin-tossing scenario (Section 5.1), followed by an examination of two distinct forms of long-range inter-cell dependencies within the basic tableau of N * : top-down constraints (Section 5.2) and bottom-up constraints (Section 5.3). These dual perspectives are ultimately unified in a single framework—termed correlated coin tossing (Section 5.4).

5.1. Coin Tossing Scenarios

We want to focus on declarative coin toss outcomes (on the one hand) and imperative coin-tossing scenarios (on the other hand).
Definition 3.
Given a coin x α , where l α 1 , we say that
x α = b , with b { 0 , 1 } ,
is a coin toss outcome.
Moreover, a coin-tossing scenario implementing the outcome x α = 1  is represented by a proposition of the form
q 1 ^ @ ( i , j ) B or # q 1 ^ @ ( i , c ^ ) B ,
where i and j denote suitable row and column indices, respectively.
Similarly, a coin-tossing scenario implementing the outcome x α = 0  is represented by a proposition of the form
0 Q 0 ^ @ ( i , j ) B ,
where i and j denote suitable row and column indices, respectively.
The two instances of “suitable” in Definition 3 can be formalized; however, we choose to illustrate the concept through a series of examples instead. In the following examples, note that coin x 1 is the last coin tossed, as the coins are ordered as follows:
x l , x l 1 , , x 2 , x 1 .
Example 1.
Consider l = 4 and Table 10. The coin toss outcome x 4 = 1 is exhaustively implemented by the coin-tossing scenario
q 1 ^ @ ( 2 , 3 ) B .
The TM N * can produce a result of 1 for the leftmost coin, x 4 , in exactly one coin-tossing scenario.
Example 2.
Consider l = 3 and Table 11. The coin toss outcome x 3 = 1 is exhaustively implemented by the same coin-tossing scenario as in the previous example.
Remark 12.
To facilitate Definition 3, observe that for a given coin x α and appropriately chosen indices i and j, the conjunction
¬ 0Q0^@( i + 1 , j − 1 )B ∧ q1^@( i , j )B
represents a coin-tossing scenario implementing the outcome x α = 1 . Furthermore, the first conjunct trivially follows from the second when the instructions of N * —pertaining to the Coin-Tossing Stage—are considered in conjunction with the Horn formula ψ s t e p η . Hence, the first conjunct is redundant and does not appear in Definition 3.
Example 3.
Consider l = 4 and Table 10. There are four distinct coin-tossing scenarios that exhaustively implement x 1 = 1 ; they are:
# q 1 ^ @ ( 5 , 6 ) B , # q 1 ^ @ ( 7 , 6 ) B , # q 1 ^ @ ( 9 , 6 ) B , and # q 1 ^ @ ( 11 , 6 ) B .
Example 4.
Consider l = 3 and Table 11. There are three distinct coin-tossing scenarios that exhaustively implement x 1 = 0 ; they are:
0 Q 0 ^ @ ( 5 , 4 ) B , 0 Q 0 ^ @ ( 7 , 4 ) B , and 0 Q 0 ^ @ ( 9 , 4 ) B .
Example 5.
The three coin-tossing scenarios in Example 4, which involve the coin x 1 and l = 3 (Table 11), also exhaustively implement the coin toss outcome x 2 = 0 when l = 4 (Table 10).
The crux in all these examples is that, for any given coin x α and any specific coin toss outcome x α = b , there exists exactly one column in the basic mini tableau that harbors all coin-tossing scenarios implementing x α = b .
Definition 4.
A composite coin-tossing scenario, or simply a scenario, is a conjunction of one or more coin-tossing scenarios.
For clarity, we introduce Example 6 prior to the definitions.
Example 6.
Consider l = 4 and Table 10. The scenario
q1^@(2,3)B ∧ #q1^@(5,6)B
implements the combined outcome
x 4 = 1 and x 1 = 1 .
Definition 5.
We say that c of the form
c 1 and and c n ,
with each c i of the form x α = b , is a combined outcome when any two conjuncts c i and c j ( i ≠ j ) in c pertain to different coins.
For instance, expression (8) represents a combined outcome; however,
x 4 = 1 and x 4 = 1
does not, nor does
x 4 = 1 and x 4 = 0 .
Definition 6.
We say that a scenario s of the form
s 1 s n ,
implements a combined outcome c of the form
c 1 and and c n ,
when each conjunct s i in s implements the corresponding conjunct c i of c.
Lemma 1.
Given a coin x α , with l α 1 , and an outcome b { 0 , 1 } , there are at most l distinct coin-tossing scenarios that exhaustively implement the coin toss outcome x α = b .
Proof. 
We present a geometrical argument concerning the basic mini tableau, based on the definitions provided for the Coin-Tossing Stage (Section 4.1). For any non-zero value of l, no column c to the left of the rightmost column c ^ contains more occurrences of a dot or a hyphen than c ^ does. Consequently, we present an argument for the column c ^ . (A similar argument can then be made for any other column c that contains an equal number of occurrences.)
Due to the stepwise behavior of any TM, the first l cells in column c ^ cannot contain a dot or a hyphen. Similarly, the last cell in that column—at row index 3 l + 1 —contains a hyphen (see item 1 in Section 4.1.2). Moreover, every other cell in the rightmost column, ranging from row index l + 1 to 3 l + 1 , cannot contain a dot or a hyphen, owing to the properties of TM movement. Therefore, at most x cells in column c ^ can contain a dot or a hyphen, where
x = ( ( 3 l + 1 ) − l ) / 2 = l + 1/2 .
Rounding up yields l + 1 cells. Finally, at least one of these cells does not contribute to the outcome b but solely to 1 − b , establishing the desired upper bound of l.    □
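The counting in this proof is easy to machine-check under its stated assumptions (rows 1 through l excluded, alternation between rows l + 1 and 3l + 1, and one remaining cell devoted to the opposite outcome); for instance:

import math

def max_scenarios(l: int) -> int:
    # Cells in column c-hat that may hold a dot or a hyphen:
    # rounding x = ((3l + 1) - l) / 2 up gives l + 1 candidate cells.
    candidates = math.ceil(((3 * l + 1) - l) / 2)
    # One of them contributes only to the outcome 1 - b, leaving l for outcome b.
    return candidates - 1

assert all(max_scenarios(l) == l for l in range(1, 100))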
Corollary 1.
Consider three coins x γ , x β , and x α , with l γ > β > α 1 , and three coin toss outcomes b γ , b β , b α { 0 , 1 } . There are at most l 3 distinct scenarios that exhaustively implement the combined outcome: x γ = b γ and x β = b β and x α = b α .
Definition 7.
Consider a coin toss outcome x α = b , i.e., with α and b fixed. We write
( x α = b ) @ [ β ] ,
with the parameter β ranging from 1 to l, to denote the β-th coin-tossing scenario within a fixed, standard ordering of all coin-tossing scenarios that implement x α = b . If, for sufficiently large β, expression (9) does not correspond to a coin-tossing scenario, it instead serves as a placeholder for the truth value 0, meaning false.
Remark 13.
From Lemma 1, we know that each coin toss outcome can correspond to at most l distinct coin-tossing scenarios. This explains why β in Definition 7 ranges from 1 to l.
Example 7.
Suppose l = 3 and that any scenario (cf. Definitions 4 and 6) implementing the combined outcome x 2 = 1 and x 1 = 0 (cf. Definition 5) must imply the presence of bit 1 in c e l l [ 4 , 2 ] in the basic tableau (Table 11). We formulate this constraint with the following Horn formula:
⋀_{1 ≤ β2 ≤ l} ⋀_{1 ≤ β1 ≤ l} [ ( x 2 = 1 ) @ [ β2 ] ∧ ( x 1 = 0 ) @ [ β1 ] → 1@(4,2)B ] ,
which is of complexity O ( l² ) .
In the remainder of this subsection, we examine Example 7 (Table 11), which we will need later. We start with the following syntactic surrogates (≡) from Example 4:
  • ( x 1 = 0 ) @ [ 1 ] ≡ 0Q0^@(5,4)B ,
  • ( x 1 = 0 ) @ [ 2 ] ≡ 0Q0^@(7,4)B ,
  • ( x 1 = 0 ) @ [ 3 ] ≡ 0Q0^@(9,4)B .
Likewise, the reader can verify that coin x 2 , with l = 3 , can land on 1 in either of the following two tossing scenarios:
  • ( x 2 = 1 ) @ [ 1 ] ≡ q1^@(3,4)B ,
  • ( x 2 = 1 ) @ [ 2 ] ≡ q1^@(5,4)B .
Hence, formula (10) represents 2 × 3 = 6 implications of the three-literal form:
( x 2 = 1 ) @ [ β2 ] ∧ ( x 1 = 0 ) @ [ β1 ] → 1@(4,2)B ,
as partially illustrated with the two mini tableaux in Table 12.
The left illustration in Table 12 depicts the two possible choices for the left conjunct in formula (11), while the right illustration presents the three possible choices for the right conjunct.
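For concreteness, the six implications can be enumerated mechanically from the syntactic surrogates just listed; the strings below are informal renderings of the propositions, not the paper's exact notation.

from itertools import product

# Coin-tossing scenarios for x2 = 1 and x1 = 0 in Table 11 (informal atom strings).
X2_EQ_1 = ["q1^@(3,4)B", "q1^@(5,4)B"]
X1_EQ_0 = ["0Q0^@(5,4)B", "0Q0^@(7,4)B", "0Q0^@(9,4)B"]
CONSEQUENT = "1@(4,2)B"

implications = [f"{s2} AND {s1} -> {CONSEQUENT}"
                for s2, s1 in product(X2_EQ_1, X1_EQ_0)]
assert len(implications) == 2 * 3
print("\n".join(implications))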
Among these six implications, only two remain potentially satisfiable when incorporating the constraints of ψ s t e p η (which accounts for the step-by-step behavior of N * on w) and ψ e x t r a 1 (which captures the spatial dynamics of the TM’s head within the tableau). These two remaining options are depicted in Table 13, with one option illustrated on the left and the other on the right. In both cases, the consequent
1 @ ( 4 , 2 ) B
from formula (11) is emphasized by a boldfaced 1 in the corresponding cell.
How do we transition from the left illustration (respectively the right illustration) in Table 13 to the corresponding left (respectively right) illustration in Table 14? As before, the transformation is achieved by incorporating the constraints ψ s t e p η and ψ e x t r a 1 . Moreover, only the left illustration in Table 14 remains potentially satisfiable, as the right illustration is rendered unsatisfiable due to the constraints imposed by the instructions t 2 ^ t 6 ^ in Section 4.1. Specifically, in the right illustration of Table 14, c e l l [ 3 , 2 ] must contain the tape-state symbol 0 Q 0 ^ . However, this conflicts with 1 @ ( 4 , 2 ) B and instruction  t 6 ^ .
Finally, the left illustration in Table 14 can be further refined by extending the column of 1s downward:
1@(7,2)B ∧ 1@(8,2)B .
This extension follows from ψ extra 1 and the fact that row  r ^ must have been established as r ^ = 6 at the outset of this discussion—for otherwise c e l l [ 5 , 4 ] could not contain ( x 1 = 0 ) @ [ 1 ] . Consequently,
# q 1 ^ @ ( 6 , 5 ) B
holds as a propositional fact.

5.2. Illustrating Top-Down (↓) Constraints

A central tenet in this paper is that any particular coin-tossing scenario in the basic mini tableau must determine the presence of specific tape-state symbols in lower cells, in the remainder of the basic tableau.
To illustrate, consider the coin x 2 with l = 3 (Table 11) which can land on 1 in either of the following two coin-tossing scenarios:
q 1 ^ @ ( 3 , 4 ) B q 1 ^ @ ( 5 , 4 ) B ,
as depicted in the left illustration of Table 12.
Each of these two coin-tossing scenarios necessitates the presence of tape-state symbols s q in cells located beyond the basic mini tableau but (obviously) still within the bounds of the basic tableau. This feature holds for both the Updating Stage  S and the Checking Stage  S (Section 4), giving rise to constraints (Section 5.2.1) and constraints (Section 5.2.2), respectively.
Remark 14.
We use the notation
l → c ← r
as a shorthand for the conjunction
( l → c ) ∧ ( r → c ) .

5.2.1. Updating Stage

Recall from Section 4.2.4 that the Updating Stage  S for coin x 2 begins with one of the following two types of configurations:
& $ c l m $ $ c l 2 $ a a 0 a ¯ 1 0 $ # 1 0 1 q 2 # # .
& $ c l m $ $ c l 2 $ a a 0 a ¯ 1 0 $ # 1 0 0 q 2 # # .
The first configuration type necessitates familiarity with the instructions τ 9 and τ 10 ,
τ9: U1, a → Q1, 1, −,    τ10: U1, a¯ → Q1, 0, −,
introduced in Remark 10. The second configuration type relies on:
t9: U0, a → Q0, 0, −,    t10: U0, a¯ → Q0, 1, −.
Each coin-tossing scenario implementing x 2 = 1 (high up in the basic tableau) necessarily involves either a U 1 in cell * c or a ¯ U 1 in cell c * (further down in the basic tableau), corresponding to τ 9 and τ 10 , respectively. These cases are examined in the first and second subsections, respectively. The complexity is further analyzed in a third subsection.

Case a 

Definition 8.
Consider a coin x α , with l α 1 . An (uninstantiated, encoded) literal in w ˜ that contains at least α occurrences of the symbol a is called an a-admissible literal relative to x α .
Definition 9.
Consider a coin x α , with l α 1 along with an a-admissible literal relative to x α , denoted as * l . Counting from right to left, focus on the α-th occurrence of a in * l . In the basic tableau, the a-updating cell corresponding to coin x α and literal * l  is the cell * c that contains either
a U 1 o r a U 0 ,
as prescribed by instructions τ 9 and t 9 , respectively.
The next claim follows from the construction of N * in Section 4:
Claim 1.
For each fixed row index  r ^ and coin x α , there exists a one-to-one correspondence between the literals * l and the cells * c , as established in Definition 9.
Example 8.
Referring to Table 11, let the row index r ^ be initially set to r ^ = 6 . Now, consider the coin toss outcome x 2 = 1 . Then, for each a-admissible literal * l in w ˜ relative to x 2 , the following constraints hold:
q1^@(3,4)B → aU1@(*i, *j)B ← q1^@(5,4)B ,
where c e l l [ * i , * j ] = c e l l  *c, and * c is the a-updating cell corresponding to x 2 and * l .
Remark 15.
To recapitulate, given that there are some number γ of a-admissible literals * l relative to x 2 , each of the two coin-tossing scenarios that implement x 2 = 1 necessitates the presence of a specific tape-state symbol in the corresponding γ cells * c further down in the (basic) tableau.

Case a ¯  

A similar discussion regarding x 2 = 1 and the symbol a ¯ (rather than a) brings us to the following definitions:
Definition 10.
Consider a coin x α , with l α 1 . An (uninstantiated, encoded) literal in w ˜ that contains at least α occurrences of the symbol a ¯ is called an  a ¯ -admissible literal relative to x α .
Definition 11.
Consider a coin x α , with l α 1 , along with an a ¯ -admissible literal relative to x α , denoted as l * . Counting from right to left, focus on the α-th occurrence of a ¯ in l * . In the basic tableau, the  a ¯ -updating cell corresponding to coin x α and literal l * is the cell c * that contains either
a ¯ U 1 o r a ¯ U 0 ,
as prescribed by instructions τ 10 and t 10 , respectively.
Claim 2.
For each fixed row index  r ^ and coin x α , there exists a one-to-one correspondence between the literals l * and the cells c * , as established in Definition 11.
Example 9.
Referring to Table 11, let the row index r ^ be initially set to r ^ = 6 . Now, consider the coin toss outcome x 2 = 1 . Then, for each a ¯ -admissible literal l * in w ˜ relative to x 2 , the following constraints hold:
q1^@(3,4)B → a¯U1@(i*, j*)B ← q1^@(5,4)B ,
where c e l l [ i * , j * ] = c e l l c * , and c * is the a ¯ -updating cell corresponding to x 2 and l * .
Remark 16.
To recapitulate, given that there are some number γ of a ¯ -admissible literals l * relative to x 2 , each of the two coin-tossing scenarios that implement x 2 = 1 necessitates the presence of a specific tape-state symbol in the corresponding γ cells c * further down in the (basic) tableau.

Complexity 

Generalizing from the Horn constraints (12) and (13), we take a conservative approach and assume that updating w ˜ to reflect a coin toss outcome x α = b requires modifying all 3 m encoded literals. In other words, this update necessitates performing 3 m overwrites at the α -th occurrences (counting from the right) of either a or a ¯ . Extending this to all l coins, and applying Lemma 1, we account for l coin-tossing scenarios per coin outcome, with two possible outcomes per coin. Consequently, the total number of constraints, each expressed in the two-literal Horn form
@ @ ,
is given by
3 m × l × l × 2 = O ( n 3 ) .
Remark 17.
The complexity expressed in terms of l, with m = O ( l 3 ) , is O ( l 5 ) .
Additional constraints can be introduced, yet still resulting in merely O ( n κ ) complexity, for some constant κ . Another example is provided in Appendix C. To establish our compression result (Theorem 2), it is not necessary to exhaustively list all constraints.

5.2.2. Checking Stage

Concerning the Checking Stage  S , we wish to give examples of constraints related to the outcome x 2 = 1 . However, this depends on the positioning of both x 2 and ¬ x 2 within ϕ and, in its encoded form, within w ˜ . Hence, for illustration, suppose that the formula ϕ contains x 2 only as the leftmost literal encoded in w ˜ , and ¬ x 2 only as the rightmost literal. Under this assumption, the uninstantiated word w takes the form:
& $ a a $ $ c l 2 $ a ¯ a ¯ $ # # .
Now, recall from Section 4.3 the following three instructions:
t5: Q0, ∨ → R0, ∨, +,
t15: Q000, $ → R000, $, +.
t15: S000, $ → T000, $, +.
These correspond to the following three kinds of configurations, respectively:
& $ 1 0 $ $ c l 2 $ 0 R 0 1 $ # # ,
& $ 1 R 000 0 $ $ c l 2 $ 0 1 $ # # ,
& $ 1 T 000 0 $ $ c l 2 $ 0 1 $ # # .
Focusing on the rightmost and leftmost encoded literals (cf.  R 0 on the one hand, and R 000 and T 000 on the other hand) we observe that two corresponding cells, c * and * c , contain 0 and 1, respectively:
q1^@(3,4)B → 0R0@(i*, j*)B ← q1^@(5,4)B ,
q1^@(3,4)B ∧ $Q000@(*i − 1, *j − 1)B → 1R000@(*i, *j)B ,
q1^@(3,4)B ∧ $S000@(*i − 1, *j − 1)B → 1T000@(*i, *j)B .
Three points warrant clarification. First, the cells c * and * c “contain 0 and 1, respectively,” as indicated by the use of tape-state symbols: 0 R 0 on the one hand, and 1 R 000 along with 1 T 000 on the other hand. Second, cells c * and * c are shorthand for c e l l [ i * , j * ] and c e l l [ * i , * j ] , respectively. Third, the reader will recognize that the counterparts to formulas (15) and (16), where the left conjunct is replaced by
q 1 ^ @ ( 5 , 4 ) B ,
should also be considered in our discussion. For the sake of brevity, we omit them.

Complexity 

The crux is again that every coin-tossing scenario implementing x 2 = 1 (positioned high up in the basic tableau) necessitates the presence of specific tape-state symbols appearing further down in the basic tableau. However, now we have also encountered three-literal Horn formulas (e.g., formulas (15) and (16)) instead of only two-literal Horn formulas (such as formula (14)).
Generalizing from the specific constraints illustrated in (14)–(16), we assume that checking—during the Checking Stage  S —whether the instantiated w ˜ is trivially true or trivially false requires examining each leftmost bit in all 3 m encoded and instantiated literals. Given that there are l coins, at most l coin-tossing scenarios per coin outcome, and two possible outcomes per coin toss, this results in a complexity of
3 m × l × l × 2 = O ( n 3 )
constraints, each expressed in either two-literal Horn form,
@ @ ,
or three-literal Horn form:
@ @ @ .
Remark 18.
Formula (15)—and many other formulas—can be extended as follows:
#q1^@(r^, c^)B ∧ q1^@(3,4)B ∧ $Q000@(*i − 1, *j − 1)B → 1R000@(*i, *j)B
This more elaborate formulation also aligns with our discourse, in which the guess
# q 1 ^ @ ( r ^ , c ^ ) B ,
is explicitly part of the equation. Nevertheless, we continue to assume that the Horn constraints—such as (15)—are generated dynamically, thus rendering the first conjunct in formula (17) redundant.

5.3. Illustrating Bottom-Up (↑) Constraints

Bottom-up constraints are not a new consideration in this discussion. In fact, a specific type of bottom-up constraint has already emerged in the transition from the left illustration in Table 13 to the left illustration in Table 14. This transition showcases the upward propagation of 1s, driven by ψ e x t r a 1 . However, another form of bottom-up constraint (↑) also merits attention, which we will briefly explore here.
We revisit Clause 1 from Section 4.2.1, along with its unary encoding, c l 1 :
a a a a ¯ a ¯ a .
Assuming this encoded clause is integral to w ˜ , let x 2 = 1 and x 1 = 0 .
During the Updating Stage  S of the machine N * , the corresponding string encoding ultimately takes one of the following forms:
1 1 0 0 1 0 ,
or the form
0 1 0 0 1 0 ,
depending on whether x 3 = 1 or, respectively, x 3 = 0 .
As established in Section 4.3.4, the leftmost bit in these forms corresponds respectively to the following instructions:
t16: R000, 1 → q0, 1, −,    t17: R000, 0 → q_reject, 0, −.
The first form alone does not result in the unsatisfiability of
ψ trim ∧ ⋯ ,
where the ellipsis ( ⋯ ) represents, among other factors, the guessed coin-tossing scenarios that implement x 2 = 1 and x 1 = 0 . In contrast, the second form does lead to unsatisfiability due to the q r e j e c t state symbol and the ψ a c c e p t constraint (recall Section 4.3.6).
Hence, we contend that a particular kind of bottom-up constraint (↑) is at play, extending from a lower cell c ̲ in the basic tableau—which, in adherence to t 16 and t 17 , potentially contains the 1 R 000 or 0 R 000 symbol—to the truth value b of x 3 :
b @ ( 4 , 2 ) B ,
as determined higher up, in the basic mini tableau (see the left illustration in Table 13).
A key consideration is that cell c ̲ may represent a hole in the basic tableau, preventing any symbol—including 1 R 000 and 0 R 000 —from being turned on. Now, by automatically filling it with 1 R 000 instead of 0 R 000 , the hole is resolved without user intervention. However, this does require an extra measure beyond top-down reasoning, leading us to the topic of correlated coin-tossing constraints.

5.4. Synthesis: Correlated Coin Tossing Constraints

We integrate the top-down and bottom-up perspectives on the 3-SAT solver N * within a framework that focuses exclusively on the mini tableau. To achieve this, let us contemplate an arbitrary clause in ϕ , which is typically not a Horn formula.
Consider for instance Clause 1:
x3 ∨ x̄2 ∨ x1 .
Any of the following three equivalent formulations of Clause 1—where each antecedent represents a combined toss outcome of two coins (of the three coins in question)—are also, quite evidently, non-Horn formulas:
x2 ∧ x̄1 → x3
x̄3 ∧ x̄1 → x̄2
x̄3 ∧ x2 → x1
Yet, by synthesizing insights from both top-down and bottom-up reasoning, we can reformulate each of these expressions ((19)–(21)) as a compact family of Horn clauses.
Without loss of generality, we focus on implication (19) in this section. On the one hand, we express the consequent of (19) in the form of proposition (18) with b = 1 ,
1 @ ( 4 , 2 ) B ,
thereby establishing the coin toss outcome x 3 = 1 (Table 13). On the other hand, we express the antecedent of (19) as a scenario (see Definition 4) that implements the combined outcome
x 2 = 1 and x 1 = 0 .
Naturally, we must account for all possible such scenarios:
⋀_{1 ≤ β2 ≤ l} ⋀_{1 ≤ β1 ≤ l} [ ( x 2 = 1 ) @ [ β2 ] ∧ ( x 1 = 0 ) @ [ β1 ] → 1@(4,2)B ] ,
which is Horn formula (10) from Example 7.
We have transformed the non-Horn formula (19) into the Horn formula (22). Similarly, the Horn formulas for implications (20) and (21) are, respectively:
⋀_{1 ≤ β3 ≤ l} ⋀_{1 ≤ β1 ≤ l} [ ( x 3 = 0 ) @ [ β3 ] ∧ ( x 1 = 0 ) @ [ β1 ] → 0@(5,3)B ] ,
⋀_{1 ≤ β3 ≤ l} ⋀_{1 ≤ β2 ≤ l} [ ( x 3 = 0 ) @ [ β3 ] ∧ ( x 2 = 1 ) @ [ β2 ] → 1@(8,4)B ] .
To summarize, the coins x 3 , x 2 , and x 1 are correlated via Clause 1. Initially, this correlation is expressed through non-Horn constraints (19)–(21) and, ultimately, through Horn constraints (22)–(24). Importantly, our method applies to every 3-literal clause in ϕ .
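At the level of outcomes, the step from a clause to its three implications ((19)–(21)) is purely mechanical: negate two of the literals and move them into the antecedent. The Python sketch below carries out that rewriting for an arbitrary 3-literal clause; turning each implication into a Horn constraint such as (22)–(24) then amounts to replacing the antecedent outcomes by all of their coin-tossing scenarios and the consequent by the appropriate cell proposition, as discussed above. The clause representation is our own.

def lit(var: int, positive: bool) -> str:
    return f"x{var}" if positive else f"~x{var}"

def clause_to_implications(clause):
    # (L1 ∨ L2 ∨ L3) becomes the three implications (¬Lj AND ¬Lk) -> Li.
    implications = []
    for i, head in enumerate(clause):
        body = [(v, not p) for j, (v, p) in enumerate(clause) if j != i]
        implications.append(f"{lit(*body[0])} AND {lit(*body[1])} -> {lit(*head)}")
    return implications

# Clause 1, (x3 ∨ ¬x2 ∨ x1), yields the analogues of (19)-(21):
for imp in clause_to_implications([(3, True), (2, False), (1, True)]):
    print(imp)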

Complexity 

Each correlated coin-tossing constraint (e.g. (22)) consists of O ( l 2 ) literals. Moreover, there are three such constraints per clause in ϕ (cf. (22)–(24) for Clause 1), with a total of m clauses. Consequently, the overall complexity amounts to:
O ( l 2 ) × 3 m = O ( n 3 )
literals.
Remark 19.
The complexity expressed in terms of l, with m = O ( l 3 ) , is O ( l 5 ) .
By specifying all correlated coin-tossing constraints related to ϕ , we can eliminate the constraints (Section 5.2.1) and constraints (Section 5.2.2), rendering the operation of N * beyond the mini tableau obsolete. While the Coin-Tossing Stage  S ^ remains relevant, the specifics of the Updating Stage  S and Checking Stage  S are no longer necessary. This improvement leads from the rFHB algorithm to the streamlined variant called sFHB.

6. Two Algorithms: rFHB and sFHB

We now introduce and analyze the rFHB and sFHB algorithms. Both are governed by the Horn formula ψ t r i m , defined as:
ψ trim = ψ step η ∧ ψ start ∧ ψ accept ∧ ψ cell ∧ ψ extra 1 ∧ ψ extra 2 .
The final conjunct, ψ e x t r a 2 , varies considerably, depending on the algorithm:
  • ψ e x t r a 2 [ rFHB ] includes the and constraints (Section 5.2),
  • ψ e x t r a 2 [ sFHB ] captures the correlated coin-tossing constraints (Section 5.4).
We present claims, statements that we assert to be true based on the constructive nature of our chain of reasoning. Claim 3, for instance, has already been established.
Claim 3.
In either case of rFHB and sFHB , ψ t r i m is O ( n κ 1 ) literals long, for some constant κ 1 .
The rFHB operates over the entire tableau, whereas sFHB is confined to the mini tableau. In Section 6.1, we describe rFHB with respect to the 3-SAT solver N * , i.e., with a fixed value of k = 2 and unary encoding. We also consider the alternative scenario where k = 1 and binary notation is used. In Section 6.2, by contrast, sFHB is defined independently of both k and the encoding scheme for literals, because it restricts itself to the Coin-Tossing Stage.

6.1. The rFHB Algorithm

The rFHB algorithm leverages the internal mechanics of the 3-SAT solver in question. We define rFHB in Section 6.1.1, and provide detailed explanations of steps 1 and 3 in Section 6.1.2 and Section 6.1.3, respectively. A comprehensive cost analysis is presented in Section 6.1.4.

6.1.1. Boxed Definition

To appreciate the specifics of the rFHB algorithm, recall the following points:
  • Row index r ^ in the basic mini tableau corresponds to row index 3 r ^ − 2 in the extended mini tableau (Section 3).
  • If and when all l coins have been tossed, the entire (extended) tableau is determined.
  • The original FHB algorithm, denoted as A , relies on a HORNSAT solver H (Section 3.4).
  • The inferences from tape-state symbols within the mini tableau to those outside of it must be preprogrammed. This corresponds to formally defining ψ e x t r a 2 [ rFHB ] .
[Boxed, step-by-step definition of the rFHB algorithm; rendered as a figure in the original.]

6.1.2. Elaborating Step 1

The r ^ -guess introduced in step 1 of the boxed definition conceptually implies that several cells beyond B r ^ can be immediately crossed out (see item  2 ( b ) in Section 4.1.2). Conceptually again, these cell crossings are revoked when A backtracks from the r ^ -guess. In reality, however, A does not work with crosses, nor does the solver H actually inject symbols in an actual tableau.
Recall Remark 18, where we noted that numerous formulas admit extensions. For example, formula (26) can be extended into the form of (27), shown below:
q1^@(3,4)B ∧ $Q000@(*i − 1, *j − 1)B → 1R000@(*i, *j)B ,
#q1^@(r^, c^)B ∧ q1^@(3,4)B ∧ $Q000@(*i − 1, *j − 1)B → 1R000@(*i, *j)B
Formula (27) makes the r ^ -guess explicit—through its first conjunct. However, we opt for the dynamic generation of (26) and similar constraints, rendering the first conjunct in (27) redundant.
Claim 4.
The dynamic generation of constraint (26), along with all other Horn constraints encapsulated by ψ t r i m , can be performed in O ( n κ ) time for some sufficiently large constant κ.
Claim 4 follows almost directly from Claim 3. To see this, first consider storing all constraints in a database using the format specified in (27) (after all). Then, when the r ^ -guess is made at runtime, prune the relevant constraints from the database and discard the first conjunct (which encodes the r ^ -guess) before passing them to H .
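A minimal sketch of the database strategy just described, with invented identifiers and dummy cell coordinates: every constraint is stored in the extended format (27), i.e., with the r̂-guess as an explicit first conjunct, and at runtime the constraints matching the current guess are selected and that conjunct is stripped before the result is handed to the HORNSAT solver H.

# Each stored constraint: (r-hat guess atom, remaining antecedent atoms, consequent atom).
# The atom strings and cell coordinates below are placeholders, not values from the paper.
DATABASE = [
    ("#q1^@(6,5)B", ["q1^@(3,4)B", "$Q000@(20,7)B"], "1R000@(20,8)B"),
    ("#q1^@(8,5)B", ["q1^@(3,4)B", "$Q000@(22,7)B"], "1R000@(22,8)B"),
    # ... one entry per constraint of the form (27)
]

def constraints_for_guess(guess: str):
    # Prune on the current r-hat guess and drop the (now redundant) first conjunct.
    return [(body, head) for g, body, head in DATABASE if g == guess]

horn_input = constraints_for_guess("#q1^@(6,5)B")   # ready to be passed to the solver H
print(horn_input)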

6.1.3. Elaborating Step 3

Recall Example 8 with regard to Table 11, row index r ^ = 6 , and the coin toss outcome x 2 = 1 . Then, for each a-admissible literal * l in w ˜ relative to x 2 , the following two constraints must hold:
q1^@(3,4)B → aU1@(*i, *j)B ← q1^@(5,4)B ,
where c e l l [ * i , * j ] is the a-updating cell corresponding to x 2 and * l .
Now, if in step 3 the tape-state symbol q 1 ^ is guessed for c e l l [ 3 , 4 ] in the basic mini tableau,
i.e., q1^@(3,4)B or, equivalently, q1^@( 3 · 3 − 2 , 4 ) ,
then—by virtue of the left implication in (28)—the solver H must incorporate the consequent a U 1 @ ( * i , * j ) B as a propositional fact in its subsequent satisfiability analysis.
Conceptually, the solver H injects the tape-state symbol a U 1 beyond the boundaries of T ; specifically, in c e l l [ * i , * j ] in the basic tableau. A parallel consideration applies to c e l l [ 5 , 4 ] in relation to the right implication in (28).

6.1.4. Complexity

Concerning the complexity of the rFHB algorithm, we begin with three observations. Observation 1 comprises three points:
  • The dimensions of the basic mini tableau are given by
    O ( l ) × O ( l ) = O ( l 2 ) .
    The same result holds for the extended mini tableau.
  • The rFHB algorithm runs on N * , implying k = 2 and the use of unary notation. Asymptotically, this leads to O ( l ) symbols per literal across the total 3 m literals in w ˜ , forming the horizontal dimension of the entire tableau:
    n = O ( l × m ) .
    For the vertical dimension, we focus on the Updating Stage, which asymptotically dominates the Coin-Tossing and Checking Stages. Here, we account for l iterations over the entire length of w ˜ :
    l × O ( l × m ) = O ( l 2 × m ) .
    Thus, the basic tableau, which is a quasi-square matrix, has the following dimensions:
    O ( l 2 × m ) × O ( l 2 × m ) .
    The same result holds for the extended tableau.
  • Comparing the overall tableau size ((32)) to the basic mini tableau ((29)), we obtain:
    O ( l 2 × m 2 ) .
    This indicates that as l (and thereby m) increases, the mini tableau becomes proportionally much smaller relative to the entire tableau.
Observation 2, consisting of three similar points, concerns the state of affairs had we employed a sophisticated and efficient 3-SAT solver—with k = 1 and binary notation:
  • The mini tableau remains unchanged: see (29).
  • The horizontal dimension of the entire tableau, previously given in (30), now becomes:
    n = O ( ( log l ) × m ) ,
    since only O ( log l ) symbols per literal are needed, in contrast to l symbols. The vertical dimension, updated from (31), becomes:
    O ( 1 ) × O ( ( log l ) × m ) = O ( ( log l ) × m ) ,
    reflecting the fact that the Updating Stage now operates in a single (essentially right-to-left) pass. Accordingly, the tableau forms a quasi-square matrix with dimensions:
    O ( ( log l ) × m ) × O ( ( log l ) × m ) = O ( log 2 l ) × O ( m 2 )
  • The ratio of the entire tableau ((36)) to the mini tableau ((29)) is:
    ( O ( log² l ) × O ( m² ) ) / O ( l² ) .
    A devil’s advocate will attempt to minimize this expression by assuming m = Θ ( l ) , yielding:
    Ω ( log 2 l ) ,
    which, similar to the result in (33), still confirms that the mini tableau becomes proportionally smaller as l increases.
With respect to the rFHB algorithm, Observation 3 underscores that Theorem 2 from Daylight [1]—which establishes genuine compression for the case k = 1 —is derived under a notably conservative assumption: the ratio between the entire tableau and the mini tableau is fixed at a constant value (specifically, 2), rather than allowed to grow with l, as is the case in results (37) and (33).
Theorem 2.
(Reproduced from Daylight [1](p. 30)) Let N , k be a 3-SAT solver. Then the runtime R ( n ) of the rFHB algorithm pertaining to N satisfies the upper bound:
R ( n ) ≤ K n^{0.67 k} ,
for some constant K > 0 .
Rewriting the upper bound from Theorem 2 in terms of l, under the assumption m = Θ ( l ) , yields:
K n^{0.67 k} = K · O( l^{1.34 k} ) if n = O( l² ) ;   K · O( ( l · log l )^{0.67 k} ) if n = O( l · log l ) ,
where the first and second cases correspond to unary and binary notation, respectively. Recall (30) and (34), respectively.
Corollary 2.
Let N , 1 be a 3-SAT solver, operating in binary notation. Assume m = Θ ( l ) . Then, the runtime R ( l ) of the rFHB algorithm associated with N admits the upper bound:
R ( l ) ≤ C ( l · log l )^{0.67} ,
for some constant C > 0 .
Corollary 2 reveals genuine compression when m = Θ ( l ) , suggesting that NP machines are inferior to at least one exponential time deterministic TM. Yet rather than examining this potential novelty, we stress (again) that Corollary 2 is founded on a highly conservative premise: the hole-filling region external to the mini tableau matches the mini tableau in size for all values of l.
Remark 20.
In technical terms, we highlight that Daylight’s proof outline [1] (p. 31) treats the “reduction factor,” denoted as Δ, as a constant—rather than as a function Δ ( l ) that decreases monotonically with l, such that lim l Δ ( l ) = 0 .
The asymptotic reality is that, as l grows, the contribution of the mini tableau becomes an increasingly negligible portion of the overall tableau. Recall (37) and (33). We now formalize this observation with the following theorem.
Theorem 3.
(Refinement of Theorem 2) Let N ˜ be some N , k machine that solves 3-SAT, with k { 1 , 2 } , working in unary or binary. Let l denote the number of distinct (encoded) propositional variables in the input w of N ˜ . Then, the runtime R ( l ) of the rFHB algorithm associated with N ˜ and w admits the upper bound:
R ( l ) ≤ ( l · log l )^{C} ,
where C > 0 is a constant.
Remark 21.
Theorem 3 also concerns the machine N * , as defined in Section 4.
Proof. 
We can distinguish between eight cases based on three parameters. First, we consider two values for k: either k = 1 or k = 2 . Second, the machine operates using either unary or binary notation. Third, we differentiate between m = Θ ( l ) and m = Θ ( l 3 ) . In the remainder of this proof, it suffices to analyze the gravest case, which—as the reader can verify—occurs when k = 1 , binary notation is used, and m = Θ ( l ) .
In this context, recall from (37) the following result:
ratio ( l ) = log² l ,
which represents the smallest conceivable ratio between the size of the entire tableau and that of the mini tableau.
Furthermore, we reuse the recurrence relation from Daylight [1] (p. 31), which takes the following form:
T ( p ) = κ 0 · p · T p 2 · ratio ( l ) 2 ,
where the constant κ 0 > 0 depends on the specific TM, N ˜ , under analysis. Appendix D presents a standard derivation of the solution to this recurrence relation, yielding:
T ( p ) = p O ( 1 ) .
Additionally, as noted by Daylight [1](p. 31), at the onset of the recurrence, p denotes the initial area of possible binary nondeterministic choices made by N ˜ . Consequently, we revisit equation (36), this time replacing m with l, justified by the asymptotic relationship m = Θ ( l ) . Substituting O ( log 2 l ) × O ( l 2 ) for p in (39), we derive:
( l · log l )^{C} ,
for some constant C > 0 .
Finally, we account for the polynomial overhead per step of the rFHB algorithm, which is O ( n κ ) for some constant κ —recall Claims 3 and 4. Given that n = O ( m · log l ) = O ( l · log l ) , it follows that our final solution is also in the form of (40).    □
In light of the implication of Theorem 3, we present an alternative approach that leads to the same conclusion: an iterative application of Theorem 2 outlined in Appendix E. We now turn to the streamlined variant of the rFHB algorithm.

6.2. The sFHB Algorithm

The sFHB algorithm focuses on the operation of the coin-tossing machine N ^ , which is solely tasked with executing stage S ^ of N * in O ( l ) nondeterministic time (Section 4.1). Now we define:
ψ trim = ( ψ step η ∧ ψ start ∧ ψ accept ∧ ψ cell ∧ ψ extra 1 ) ∧ ψ extra 2 [ sFHB ] ,
where each component within the parentheses is of smaller size relative to the rFHB case. In particular, ψ s t e p η encodes only the instructions of N ^ , not N * . The formula ψ s t a r t characterizes the initial coin-tossing configuration (7) in Section 4.1 (without w ˜ ). Similarly, the remaining three conjuncts within the parentheses pertain solely to the structure of the (extended) mini tableau, not the (extended) entire tableau.
We distinguish between two arrangements that the sFHB can make per visited column (Section 6.2.1). Then we specify the algorithm (Section 6.2.2) and elaborate on step 4 in the specification (Section 6.2.3). Finally, we present a cost analysis (Section 6.2.4).

6.2.1. Two Arrangements

As with rFHB, the sFHB algorithm performs an r ^ -guess expressed as
# q 1 ^ @ ( r ^ , c ^ ) B ,
from which it can subsequently backtrack.
In contrast to rFHB, which processes the tableau rowwise, sFHB works across the columns (of the mini tableau). For each visited column c in the mini tableau (see, e.g., Table 11), sFHB guesses one of two possible symbol arrangements occurring at some position r among the first r ^ candidates in that column. The first possible arrangement corresponds to a coin toss yielding a 1:
q1^@( r , c )B ∧ 1@( r + 1 , c )B ∧ ¬ 0@( r + 2 , c )B ,
while the second corresponds to a toss yielding a 0:
q1^@( r , c )B ∧ 0@( r + 1 , c )B ∧ 0Q0^@( r + 2 , c )B .
When sFHB selects an arrangement at column c and row r in the basic mini tableau, it introduces a separation of concerns between the columns to the left and those to the right of c. While subsequent hole-filling choices on either side will typically constrain the options on the other side, they will not affect the corresponding computations as such. Specifically, a commitment is made—either to the first arrangement (42) or the second arrangement (43)—that constrains the tape head of N ^ to visit column c in the following manner:
  • in the first arrangement ((42)), solely via c e l l [ r , c ] and, optionally, also via c e l l [ r + 2 , c ] ;
  • in the second arrangement ((43)), solely via both c e l l [ r , c ] and c e l l [ r + 2 , c ] .
The sFHB algorithm populates all remaining cells in column c—namely, those above c e l l [ r , c ] and below c e l l [ r + 2 , c ] , in the basic mini tableau—with designated tape symbols, in full compliance with the operation of the coin-tossing machine N ^ . Specifically, blank symbols (□) are inserted above row r in both arrangements, while the symbol 1 (respectively, 0) is written below row r + 2 in the first (respectively, second) arrangement.
To illustrate the first arrangement, recall instruction  t 1 ^ (Section 4.1) and Table 11 (Section 5.1). Suppose x 2 = 1 and specifically:
q1^@(2,3)B ∧ 1@(3,3)B ∧ ¬ 0@(4,3)B ,
where the latter conjunct implicitly refers to either 1 @ ( 4 , 3 ) B or to 1 q 0 ^ @ ( 4 , 3 ) B , depending on the toss outcome for coin x 1 . Regardless of which of these two possibilities manifests, the computations to the left of column 3 in Table 11 remain unaffected. This invariance is ensured by the first conjunct in (44), which enforces the separation of concerns.
We now redefine notation (9) from Definition 7 via two examples.
Example 10.
Based on Table 11:
  • ( x 2 = 1 ) @ [ 1 ] ≡ the contents of (44)
  • ( x 2 = 1 ) @ [ 2 ] ≡ q1^@(4,3)B ∧ 1@(5,3)B ∧ ¬ 0@(6,3)B
  • ( x 2 = 1 ) @ [ 3 ] ≡ 0 (i.e., false)
Example 11.
Based on Table 11:
  • ( x 2 = 0 ) @ [ 1 ] ≡ q1^@(2,3)B ∧ 0@(3,3)B ∧ 0Q0^@(4,3)B
  • ( x 2 = 0 ) @ [ 2 ] ≡ q1^@(4,3)B ∧ 0@(5,3)B ∧ 0Q0^@(6,3)B
  • ( x 2 = 0 ) @ [ 3 ] ≡ 0

6.2.2. Boxed Definition

[Boxed, step-by-step definition of the sFHB algorithm; rendered as a figure in the original.]

6.2.3. Elaborating Step 4

Recall the following example of a correlated coin-tossing constraint. We have transformed the non-Horn formula (45) into the Horn formula (46).
x2 ∧ x̄1 → x3
⋀_{1 ≤ β2 ≤ l} ⋀_{1 ≤ β1 ≤ l} [ ( x 2 = 1 ) @ [ β2 ] ∧ ( x 1 = 0 ) @ [ β1 ] → 1@(4,2)B ] .
Suppose the sFHB algorithm guesses ( x 2 = 1 ) @ [ 1 ] and ( x 1 = 0 ) @ [ 1 ] . Conceptually, the solver H then injects the symbol 1 into c e l l [ 4 , 2 ] of the basic tableau B .

6.2.4. Complexity

Regarding the computational complexity of the sFHB algorithm, a quick, conservative estimation of the combinatorial cost associated with the separation of concerns (Section 6.2.1) is given by:
T ( l² ) = 2 · O ( l ) · ( T ( l²/2 ) + T ( l²/2 ) ) ,
where the constant 2 refers to the two arrangements (42) and (43), and the plus sign (rather than a multiplication sign) embodies the separation of concerns.
Upon a change of variables, the recurrence relation transforms into:
T ( p ) = O ( p ) · T ( p/2 ) ,
which, by the standard derivation in Appendix F, yields the following approximate solution:
T ( p ) ≤ C^{( log p )²} = p^{O( log p )} ,
for some constant C > 0 . The result is superpolynomial but subexponential.
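A quick numerical illustration of this growth rate, using the simplified recurrence T(1) = 1 and T(p) = p · T(p/2) for powers of two (all constants suppressed): for p = 2^k one gets log₂ T(p) = k(k + 1)/2, i.e., growth like (log p)² in the exponent—superpolynomial in p, yet subexponential.

import math

def T(p: int) -> int:
    # Simplified recurrence: T(p) = p * T(p // 2), T(1) = 1 (constants suppressed).
    return 1 if p <= 1 else p * T(p // 2)

for k in (4, 8, 16, 32):
    p = 2 ** k
    # log2 T(2^k) = k + (k - 1) + ... + 1 = k(k + 1)/2, i.e. Theta((log p)^2).
    print(k, math.log2(T(p)), k * (k + 1) / 2)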
This already establishes that, at a minimum, an exponential-time deterministic TM exists that outperforms every N , k machine. Notably, this conclusion is reached independently of the cost analysis presented in [1]. Furthermore, Lemma 2 enables us to strengthen this result even further.
Lemma 2.
The runtime of the sFHB algorithm is bounded above by that of rFHB.
Proof. 
It is sufficient to demonstrate that, asymptotically, the size of ψ e x t r a 2 [ sFHB ] does not exceed that of ψ e x t r a 2 [ rFHB ] . This claim follows directly from the cost analyses presented in Section 5.2 and Section 5.4.    □
Hence, the tight upper bound for rFHB stated in inequality (38) of Theorem 3 also applies to the sFHB algorithm, which is conceptually much simpler to teach and implement.

7. Closing Remarks

A polynomially bounded tableau inherently lacks the capacity to concretely represent an exponential number of computation paths. As a result, the coin-tossing behavior of any N , k machine must exhibit significant correlation. Crucially, as the length of the input w to machine N increases, the portion of the tableau attributable to coin tosses becomes increasingly negligible in comparison to its total size.
In this article, we have demonstrated how to quantify these properties and derive theoretical insights accordingly. To ensure a level of rigor absent from our previous work [1], we have presented our analysis in great detail. A short summary of our findings will be developed in follow-up work.

Funding

This research received no funding.

Acknowledgments

Forthcoming.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

    The following abbreviations are used in this manuscript:
TM Turing Machine
NP Nondeterministic Polynomial

Appendix A.

Definition A1.
Cf. [10] (p. 259, 271). Variables that can take on the values TRUE and FALSE are called Boolean variables. We represent TRUE by 1 and FALSE by 0. The Boolean operations  AND , OR , and NOT , represented by the symbols ∧, ∨, and ¬, respectively, are described in the standard manner. We use the overbar as a shorthand for the ¬ symbol, so x ¯ means ¬ x . A Boolean formula is an expression involving Boolean variables and operations. It is satisfiable if some assignment of 0s and 1s to the variables makes the formula evaluate to 1. We say the assignment satisfies ϕ . The satisfiability problem is to test whether a Boolean formula is satisfiable. Let SAT = ϕ ϕ is a satisfiable Boolean formula , where ϕ refers to a standard encoding of ϕ.
Definition A2.
Cf. [10] (p. 273). A literal is a Boolean variable or a negated Boolean variable, as in x or x ¯ . The former is called a positive literal, while the latter is called a negative literal. A clause is several literals connected with ∨s, as in x 1 x 2 ¯ x 3 ¯ x 4 . A Boolean formula is in conjunctive normal form, called a cnf formula, if it comprises clauses with ∧s, as in x 1 x 2 ¯ x 3 ¯ x 4 x 3 x 5 ¯ x 6 x 3 x 6 ¯ . The Boolean formula is a 3cnf formula if each clause has three literals, as in x 1 x 2 ¯ x 3 ¯ x 3 x 5 ¯ x 6 x 3 x 6 ¯ x 4 . A 2cnf formula is an AND of clauses, where each clause is an OR of at most two literals.
Definition A3.
Cf. [20] (p. 34–35). A (propositional) Horn formula is a cnf formula where every disjunction contains at most one positive literal.
Also, HORNSAT = { ⟨ ϕ ⟩ ∣ ϕ is a satisfiable Horn formula } .
Remark A1.
Horn clauses can be written as implications by the following equivalence (≡):
x̄1 ∨ ⋯ ∨ x̄k ∨ x ≡ x1 ∧ ⋯ ∧ xk → x
Theorem A1.
Cf. [20] (p. 35). HORNSAT ∈ P .
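For completeness, the standard polynomial-time decision procedure behind Theorem A1 is forward chaining (unit propagation); a compact Python rendering, with our own clause representation, is:

def hornsat(clauses):
    # Each clause is (body, head): body a set of variables, head a variable or None,
    # encoding body -> head (i.e. ¬b1 ∨ ... ∨ ¬bk ∨ head; head None for a negative clause).
    true_vars = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if body <= true_vars:
                if head is None:
                    return False            # a purely negative clause is violated
                if head not in true_vars:
                    true_vars.add(head)
                    changed = True
    return True

# (x -> y), (y -> z), (true -> x), and (x AND z -> false): unsatisfiable.
print(hornsat([({"x"}, "y"), ({"y"}, "z"), (set(), "x"), ({"x", "z"}, None)]))  # False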
Definition A4.
Cf. Sipser [10] (p. 140). A deterministic Turing machine is an 8-tuple
(Q, Γ, Φ, δ, T, q₀, q_accept, q_reject), with Q, Γ, Φ, T finite sets:
  • Q is the set of states, and Γ is the input alphabet not containing the blank symbol □.
  • Φ is the tape alphabet, where □ ∈ Φ and Γ ⊆ Φ.
  • δ : Q × Φ → Q × Φ × {+, −} is the transition function.
  • Every transition in δ is accompanied by a distinct label t.
  • T is the label set, containing all such labels.
  • q₀ ∈ Q is the start state.
  • q_accept ∈ Q is the accept state. q_reject ∈ Q is the reject state, with q_reject ≠ q_accept.
As a Turing machine computes, changes occur in the current state, the current tape contents, and the current head location. A setting of these three items is called a configuration of the Turing machine. The tape of the Turing machine is one-way infinite, from left to right. Specifically, for each input w₀ w₁ ⋯ w_{n−1} of length n, machine M starts in configuration □_{q₀}, for n = 0, and in configuration w₀_{q₀} w₁ ⋯ w_{n−1}, for n > 0. In both cases, the notation s_{q₀}, with s ∈ Φ, signifies that the head is located at the tape cell containing symbol s, while the machine resides in state q₀. Machine M neither starts in q_accept nor in q_reject, nor progresses beyond either one of these states. Specifically, once M reaches q_accept, it remains active solely in that state. Likewise for q_reject. We take q_accept to be some q_m with m > 0, and similarly for q_reject. Input word w is considered accepted when M on w reaches q_accept. We write L(M) to denote the language accepted by M. We use the notation t : (q₁, x) → (q₂, y, μ) when referring to some transition in δ with label t ∈ T and movement μ ∈ {+, −}. The plus sign (minus sign) signifies a movement to the right (to the left).
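Purely for illustration, and not as part of the formal development, a configuration and a single deterministic step can be sketched in Python as follows; the names Configuration, step, and delta, as well as the underscore standing in for the blank symbol, are our own.

from dataclasses import dataclass

@dataclass
class Configuration:
    # A configuration per Definition A4: tape contents, head position, current state.
    tape: list
    head: int
    state: str

def step(conf, delta):
    # Apply one deterministic transition: delta maps (state, scanned symbol) to
    # (new state, written symbol, movement), the movement being "+" or "-".
    new_state, written, move = delta[(conf.state, conf.tape[conf.head])]
    tape = list(conf.tape)
    tape[conf.head] = written
    head = conf.head + 1 if move == "+" else conf.head - 1
    if head == len(tape):               # one-way infinite tape: grow on demand
        tape.append("_")
    return Configuration(tape, head, new_state)

# A toy machine that overwrites the scanned 0 with 1, moves right, and accepts:
delta = {("q0", "0"): ("q_accept", "1", "+")}
print(step(Configuration(["0", "1"], 0, "q0")))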
Definition A5.
Cf. Sipser [10] (p. 150). A nondeterministic Turing machine is an 8-tuple (Q, Γ, Φ, δ, T, q₀, q_accept, q_reject). At any point in a computation, the machine may proceed according to several possibilities. The transition function for the machine has the form δ : Q × Φ → P(Q × Φ × {+, −}), where P denotes the power set. The computation of the machine is a tree whose branches correspond to different possibilities for the machine. If some branch of the computation leads to the state q_accept, the machine accepts its input.
In conformity with Definition A4, each transition in set δ is accompanied by a distinct label t, and T is now called the general label set, containing all such labels. For instance, consider the notation t : (q, x) → {(q₁, y₁, μ₁), (q₂, y₂, μ₂)}, with label t ∈ T, states q, q₁, q₂ ∈ Q, symbols x, y₁, y₂ ∈ Φ, movements μ₁, μ₂ ∈ {+, −}, and with tuple (q₁, y₁, μ₁) different from (q₂, y₂, μ₂). This notation captures the nondeterministic transition encompassing the deterministic transitions t₁ : (q, x) → (q₁, y₁, μ₁) and t₂ : (q, x) → (q₂, y₂, μ₂). We define the basic label set, denoted as T[ ], as the set that encompasses the labels of all deterministic transitions, such as t₁ and t₂.
Remark A2.
In our discussion, we employ the term “nondeterministic transition” explicitly, while abbreviating “deterministic transition” and “basic label set” to simply “transition” and “label set.” We frequently omit brackets for ease of reading; e.g., we write t₁ and t₂ instead of t[1] and t[2]. The crux is that each basic label is unique.
Definition A6.
Consider an arbitrary nondeterministic Turing machine, N, and its basic label set, T[ ]. For any label t in T[ ] with corresponding signature t : (q_source, s_read) → (q_target, s_write, +) or t : (q_source, s_read) → (q_target, s_write, −), we let “N-source(t),” “N-target(t),” and “N-write(t)” stand for the symbols (s_read)_{q_source}, q_target, and s_write, respectively. When machine N is clear from the context, we shall simply note down “source(t),” “target(t),” and “write(t),” respectively.
Definition A7.
Consider an arbitrary nondeterministic Turing machine, N, and its general label set, T. We let T_det denote the subset of T containing the labels of all deterministic instructions of N. We let T_det^+, respectively T_det^−, denote the subset of T_det containing the labels of all deterministic instructions of N whose movement is to the right (+), respectively to the left (−).
Definition A8.
Let N be a nondeterministic Turing machine decider. Its running time is the function t : ℕ → ℕ, with t(n) the maximum number of steps N uses on any branch of its computation, on any input of length n, before halting.
Definition A9.
Cf. [10] (pp. 251, 258). Let t : ℕ → ℝ⁺ be a function, with ℝ⁺ denoting the set of nonnegative real numbers. Define the time complexity class TIME(t(n)) to be the collection of all languages that are decidable by an O(t(n)) time Turing machine. P is the class of languages that are decidable in polynomial time on a deterministic single-tape Turing machine. In other words, P = ⋃_k TIME(n^k).
Theorem A2.
Cf. [20] (p. 35). HORNSAT ∈ P.
Definition A10.
Cf. [10] (pp. 265–267). A verifier for a language A is an algorithm V, where A = { w | V accepts ⟨w, c⟩ for some string c }. We measure the time of a verifier only in terms of the length of w, so a polynomial time verifier runs in polynomial time in the length of w. A language A is polynomially verifiable if it has a polynomial time verifier. NP is the class of languages that have polynomial time verifiers.
The remaining three items come from [10] (pp. 266–276).
Theorem A3.
A language is in NP iff it is decided by some nondeterministic polynomial time Turing machine.
Definition A11.
NTIME(t(n)) = { L | L is decided by an O(t(n)) time nondeterministic TM }. A function f : Σ* → Σ* is a polynomial time computable function if some deterministic polynomial time Turing machine M exists that halts with just f(w) on its tape, when started on any input w. A language B is NP-complete if it satisfies two conditions: (1) B is in NP, and (2) every A in NP is polynomial time reducible to B.
Theorem A4.
If B is NP-complete and B ∈ P, then P = NP.

Appendix B.

Appendix B.1. The Single Part

We define ψ_extra^single as follows:
ψ_extra^single = ⋀ [ s_q @ (3l−2, j) → ⋀_{s′_q′} ⋀_{j′ ≠ j} ¬ s′_q′ @ (3l−2, j′) ],
where s, s′ ∈ Φ, q, q′ ∈ Q, and the column indices j, j′ range from 1 to n^k + 2. This condition ensures that if the tape-state symbol s_q is stored in cell[3l−2, j], it is the only tape-state symbol in row 3l−2. In other words, no tape-state symbol s′_q′ can be stored in any other cell within the same row.
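For concreteness, a minimal Python sketch that enumerates these implications for one row of the form 3l − 2 is given below, in the same (body, head) clause representation used in the sketch after Theorem A1; the function name and the encoding of variables as (symbol, row, column) triples are illustrative, not part of the construction.

def extra_single_clauses(tape_state_symbols, row, num_columns):
    # Horn clauses (body, head): the pair s_q@(row, j) and s'_q'@(row, j') is
    # forbidden whenever j' differs from j, which keeps at most one head per row.
    clauses = []
    cols = range(1, num_columns + 1)
    for sq in tape_state_symbols:
        for j in cols:
            for sq2 in tape_state_symbols:
                for j2 in cols:
                    if j2 != j:
                        clauses.append(({(sq, row, j), (sq2, row, j2)}, None))
    return clauses

# Two tape-state symbols and three columns already yield 2 * 3 * 2 * 2 = 24 clauses.
print(len(extra_single_clauses(["a_q5", "b_q9"], row=1, num_columns=3)))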

Appendix B.2. The Left Part

We define ψ_extra^left as follows:
ψ_extra^left = ψ₁^left ∧ ψ₂^left ∧ ψ₃^left
On the one hand, we reason from row i both towards earlier rows (−δ) and towards later rows (+δ):
ψ₁^left = ⋀_{0<m<j} [ s_q @ (3l−2, j) ∧ s @ (3l−2, j−m) → ⋀_{r(δ)} s @ ((3l−2) ± 3δ, j−m) ],
where the restriction on δ, denoted by r(δ), is defined by the following inequalities:
  • 0 ≤ δ ≤ m − 1.
  • 1 ≤ (3l−2) − 3δ ≤ (3l−2) + 3δ ≤ 3n^k + 1.
Additionally, the expression
s @ (i ± 3δ, j)
is shorthand for:
s @ (i − 3·δ_min(m), j) ∧ s @ (i − 3·δ_min(m) + 1, j) ∧ s @ (i − 3·δ_min(m) + 2, j) ∧ ⋯ ∧ s @ (i + 3·δ_max(m) − 1, j) ∧ s @ (i + 3·δ_max(m), j),
where δ_min(m) is defined as the maximum (not the minimum) of the set:
{ δ | 0 ≤ δ ≤ m − 1 and 1 ≤ i − 3δ ≤ 3n^k + 1 }.
Similarly, δ_max(m) is defined as the maximum of the set:
{ δ | 0 ≤ δ ≤ m − 1 and 1 ≤ i + 3δ ≤ 3n^k + 1 }.
Example A1.
For instance, if a_{q₅} @ (i, j) and s* @ (i, j−3) hold, with i = 3l−2, then the following must also hold:
⋀_{r(δ)} [ s* @ (i ± 3δ, j−3) ],
where r(δ) is defined by the following inequalities:
  • 0 ≤ δ ≤ 2.
  • 1 ≤ i − 3δ ≤ i + 3δ ≤ 3n^k + 1.
The reader can verify that all relevant cells range from cell[i − 3·2, j−3] at the top to cell[i + 3·2, j−3] at the bottom. These 3·2 + 1 + 3·2 = 13 cells correspond to the 13 crossed-out entries in column j−3 in Table 7.
On the other hand, we reason from the earliest row (i − 3·δ_min(m)) and, respectively, the latest row (i + 3·δ_max(m)) toward row i, with i = 3l−2:
ψ₂^left = ⋀_{0<m<j} [ s_q @ (3l−2, j) ∧ s @ ((3l−2) − 3·δ_min(m), j−m) → s @ (3l−2, j−m) ],
ψ₃^left = ⋀_{0<m<j} [ s_q @ (3l−2, j) ∧ s @ ((3l−2) + 3·δ_max(m), j−m) → s @ (3l−2, j−m) ].
The satisfiability of ψ₃^left ensures that if a specific cell, such as
cell[(3l−2) + 3·δ_max(m), j−m],
for some fixed m, is filled with the tape symbol s*, then—through a chain reaction partially facilitated by ψ₁^left—all crosses in column j−m are replaced with the symbol s*. A similar observation applies to the satisfiability of ψ₂^left, which affects the propagation from an earlier cell, i.e.,
cell[(3l−2) − 3·δ_min(m), j−m].
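The following Python sketch, again purely illustrative, computes δ_min(m) and δ_max(m) exactly as defined above and lists the rows touched in a fixed column; for m = 3 and a row i far from both borders it recovers the 13 cells of Example A1. The function names and the sample values of i and n^k are ours.

def delta_min(i, m, nk):
    # Largest delta with 0 <= delta <= m - 1 and 1 <= i - 3*delta <= 3*nk + 1.
    candidates = [d for d in range(m) if 1 <= i - 3 * d <= 3 * nk + 1]
    return max(candidates) if candidates else 0

def delta_max(i, m, nk):
    # Largest delta with 0 <= delta <= m - 1 and 1 <= i + 3*delta <= 3*nk + 1.
    candidates = [d for d in range(m) if 1 <= i + 3 * d <= 3 * nk + 1]
    return max(candidates) if candidates else 0

def affected_rows(i, m, nk):
    # Consecutive rows i - 3*delta_min(m), ..., i + 3*delta_max(m) in one column.
    return list(range(i - 3 * delta_min(i, m, nk), i + 3 * delta_max(i, m, nk) + 1))

# Example A1: m = 3 and a row far from both borders yields 3*2 + 1 + 3*2 = 13 rows.
print(len(affected_rows(i=22, m=3, nk=20)))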

Appendix B.3. The Right Part

Given the inherent symmetry of the problem, the formal definition of ψ_extra^right closely mirrors that of ψ_extra^left and is therefore omitted from this paper.

Appendix B.4. The Extend Part

ψ_extra^extend = ψ_extra^− ∧ ψ_extra^{−,−} ∧ ψ_extra^+ ∧ ψ_extra^{+,+},
where—for s, s′ ∈ Φ and q, q′ ∈ Q—we have:
ψ_extra^− = ⋀_{s_q} ⋀_{s′_q′} [ s_q @ (3l−2, j) ∧ s′_q′ @ (3l+1, j−1) → t^−(s, q) @ (3l, j) ],
ψ_extra^{−,−} = ⋀_{s_q} ⋀_{s′} [ s_q @ (3l−2, j) ∧ s′ @ (3l+1, j+1) → t^−(s, q) @ (3l, j) ],
ψ_extra^+ = ⋀_{s_q} ⋀_{s′_q′} [ s_q @ (3l−2, j) ∧ s′_q′ @ (3l+1, j+1) → t^+(s, q) @ (3l, j) ],
ψ_extra^{+,+} = ⋀_{s_q} ⋀_{s′} [ s_q @ (3l−2, j) ∧ s′ @ (3l+1, j−1) → t^+(s, q) @ (3l, j) ].
Here, t^−(s, q) and t^+(s, q) represent two distinct labels of the machine N, such that:
N-source(t^−(s, q)) = s_q = N-source(t^+(s, q)).
Recall Definition A6. The minus (plus) sign indicates that N moves to the left (right).
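A hypothetical generator of the four clause families, again using the (body, head) representation introduced earlier, might look as follows; the dictionaries t_minus and t_plus, which map a tape-state symbol to a left-moving, respectively right-moving, label with that source, are an assumption of this sketch rather than part of the paper's machinery.

def extend_clauses(tape_state_symbols, tape_symbols, t_minus, t_plus, l, j):
    # Horn clauses (body, head) for block row 3l and column j.
    top, mid, bottom = 3 * l - 2, 3 * l, 3 * l + 1
    clauses = []
    for sq in tape_state_symbols:
        if sq in t_minus:
            for sq2 in tape_state_symbols:   # head reappears one column to the left
                clauses.append(({(sq, top, j), (sq2, bottom, j - 1)}, (t_minus[sq], mid, j)))
            for s in tape_symbols:           # plain symbol left behind to the right
                clauses.append(({(sq, top, j), (s, bottom, j + 1)}, (t_minus[sq], mid, j)))
        if sq in t_plus:
            for sq2 in tape_state_symbols:   # head reappears one column to the right
                clauses.append(({(sq, top, j), (sq2, bottom, j + 1)}, (t_plus[sq], mid, j)))
            for s in tape_symbols:           # plain symbol left behind to the left
                clauses.append(({(sq, top, j), (s, bottom, j - 1)}, (t_plus[sq], mid, j)))
    return clauses

# One tape-state symbol with both a left- and a right-moving label, two plain symbols:
print(len(extend_clauses(["a_q5"], ["0", "1"], {"a_q5": "t7"}, {"a_q5": "t9"}, l=3, j=4)))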

Appendix B.5. Consolidation

In summary, ψ_extra1 is a Horn formula, with a size of O(n^{4k}), where the constant k corresponds to the running time n^k of machine N.

Appendix C.

We can readily formulate several more constraints. For instance, let us continue with l = 3 and Table 11. Suppose that, in the mini tableau, the coin toss outcome x 2 = 1 has materialized:
q̂₁ @ (3, 4)_B ∨ q̂₁ @ (5, 4)_B.
Now, consider the cell c**—positioned at row i** and column j**, beyond the confines of the first r̂ rows of the mini tableau—for which the following proposition holds:
b_{U₀} @ (i** − 1, j** + 1)_B, with b ∈ {0, 1}.
In other words, relative to the cell c**, one time step earlier and one cell to the right, the head is scanning a bit b while in state U₀. This state of affairs is exemplified by the following configuration and b = 0:
& $ cl_m $ ⋯ $ cl_2 $ a 1 0_{U₀} 0 1 0 $ # #.
In conformity with instructions t_{8,0} and t_{8,1}—recall from Section 4.2.4:
t_{8,x} : (U₀, x) → (U₀, x, −), x ∈ { , $, 0, 1}
—we then posit that
1_{U₀} @ (i**, j**)_B
must hold. Here, the tape-state symbol 1_{U₀} signifies that the TM's head is scanning—from right to left—in search of the first occurrence of a symbol σ ∈ {a, ā} located to the left of the bit 1 in cell c**. (When such a σ is found, the machine overwrites it: replacing a with 0 and ā with 1.)
Formally, these long-range dependencies are captured by the following constraints—each a Horn formula with three literals:
q̂₁ @ (3, 4)_B ∧ 1_{U₀} @ (i** − 1, j** + 1)_B → 1_{U₀} @ (i**, j**)_B,
q̂₁ @ (5, 4)_B ∧ 1_{U₀} @ (i** − 1, j** + 1)_B → 1_{U₀} @ (i**, j**)_B,
q̂₁ @ (3, 4)_B ∧ 0_{U₀} @ (i** − 1, j** + 1)_B → 1_{U₀} @ (i**, j**)_B,
q̂₁ @ (5, 4)_B ∧ 0_{U₀} @ (i** − 1, j** + 1)_B → 1_{U₀} @ (i**, j**)_B.
Conservatively, this yields a complexity of:
O(n^2) × l × l × 2 = O(n^4),
since l = O(n).
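The four constraints above can be emitted mechanically, as the following hedged Python sketch shows for arbitrary coordinates; since the superscripted indices i** and j** cannot appear in identifiers, they are rendered as i2 and j2, and the variable encodings are ours.

def long_range_clauses(coin_cells, i2, j2):
    # Horn clauses (body, head): if the coin toss x2 = 1 is witnessed at one of the
    # coin_cells and the head scans a bit in state U0 at (i2 - 1, j2 + 1), then it
    # scans the bit 1 in state U0 at (i2, j2) one step later.
    clauses = []
    for coin in coin_cells:                   # e.g. (3, 4) and (5, 4)
        for bit in ("0_U0", "1_U0"):          # bit scanned one time step earlier
            clauses.append(({("q1_hat", coin), (bit, (i2 - 1, j2 + 1))},
                            ("1_U0", (i2, j2))))
    return clauses

# Exactly the four clauses displayed above, for arbitrarily chosen coordinates:
for clause in long_range_clauses([(3, 4), (5, 4)], i2=20, j2=7):
    print(clause)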

Appendix D.

To solve the recurrence relation, we begin with the following setup:
T(p) = κ₀ · √p · T(p / (2(log l)²))², with T(1) = O(1),
where κ₀ is a constant.

Step 1: Change of Variables

Let us set:
q = log p ⟺ p = 2^q, so √p = 2^{q/2}.
Now consider the argument of T in the recurrence:
p / (2(log l)²) = 2^q / (2(log l)²) = 2^{q − log(2(log l)²)} = 2^{q − log 2 − 2 log log l}.
Let:
c = log 2 + 2 log log l = 1 + 2 log log l.
Then the recurrence becomes:
T(2^q) = κ₀ · 2^{q/2} · T(2^{q−c})².
Taking logarithms (base 2) on both sides:
log T(2^q) = log κ₀ + q/2 + 2 log T(2^{q−c}).
Let:
S(q) = log T(2^q),
so the recurrence becomes:
S(q) = log κ₀ + q/2 + 2 S(q − c).

Step 2: Solve the Linear Recurrence

Unrolling the recurrence for n steps:
S(q) = Σ_{i=0}^{n−1} 2^i ( log κ₀ + (q − i·c)/2 ) + 2^n · S(q − n·c).
To reach the base case S ( q n c ) = O ( 1 ) , choose:
n ≥ q/c ⟹ n = ⌈q/c⌉.
Then:
S(q) = O(2^n · q) = O(2^{q/c} · q).
Recalling that q = log p , we obtain:
S(q) = log T(p) = O(p^{1/c} · log p).
Exponentiating both sides:
T(p) = 2^{S(log p)} = exp₂( O(p^{1/c} · log p) ),
where:
c = 1 + 2 log log l.
Thus, the solution is:
T(p) = exp₂( O( p^{1/(1 + 2 log log l)} · log p ) ).

Asymptotic Behavior as l→∞

As l → ∞, we have:
log l → ∞ and hence log log l → ∞,
so:
1/(1 + 2 log log l) → 0.
Then:
p 1 / ( 1 + 2 log log l ) 1 ,
and thus:
T(p) = exp₂( O(log p) ) = O(p^α),
for some constant α > 0 .
Hence, asymptotically:
T(p) = p^{O(1)}
as l → ∞. This implies that T(p) is polynomial in p, and the degree becomes smaller as l increases.
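As an informal sanity check of this derivation, and not a proof, the recurrence for S can be evaluated directly and compared with the claimed bound; the parameter choices below are arbitrary.

import math

def S(q, c, kappa0=1.0):
    # S(q) = log2(kappa0) + q/2 + 2*S(q - c), with S(q) = 0 once q <= 0.
    if q <= 0:
        return 0.0
    return math.log2(kappa0) + q / 2 + 2 * S(q - c, c, kappa0)

# For l = 16 we get c = 1 + 2*log2(log2(16)) = 5.  The ratio below stays bounded
# as q grows, consistent with S(q) = O(2^(q/c) * q), i.e. log T(p) = O(p^(1/c) * log p).
c = 1 + 2 * math.log2(math.log2(16))
for q in (20, 40, 80):
    print(q, S(q, c) / (2 ** (q / c) * q))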

Appendix E.

We apply Theorem 2 iteratively to establish the validity of Theorem 3.

Appendix E.1. First Iteration

Theorem 2 conveys that any 3-SAT solver N₁, with state set Q₁ and tape alphabet Φ₁—operating with O(n) nondeterministic binary guesses followed by O(n) deterministic verification time—can be simulated deterministically by some M₁ via the rFHB algorithm.
Via a single application of Theorem 2, the resulting deterministic TM M₁ runs in at most
K₁^{n^{0.67}} = 2^{(log K₁) · n^{0.67}} = 2^{α₁ · n^{0.67}}
time, where:
  • The constant K₁ depends on the sizes of Q₁ and Φ₁.
  • The constant α₁ = log K₁ > 0.
Inspired by the existence of deterministic machine M₁, we can now construct a nondeterministic polynomial time TM N₂, with state set Q₂ and tape alphabet Φ₂, where
|Q₁| ≤ |Q₂| and Φ₁ ⊆ Φ₂.
This machine N₂ is functionally equivalent to M₁, and hence is also a 3-SAT solver.
At first glance, N₂ operates with
O(α₁ · n^{0.67}) = O(n^{0.67})
nondeterministic binary guesses, followed by
ν = O(n^{κ₁})
deterministic verification time. Recall Claim 3.
Upon closer inspection, N₂ operates with, say,
O(n^{0.70})
nondeterministic binary guesses. An arbitrarily small but nonzero increase in the number of guesses (from 0.67 to 0.70) is required here, for the following reason: N₂ guesses (and correctly so, in the satisfiable case) the guesses made by rFHB relative to N₁. Although the total number of holes is O(n^{0.67}), rather than linear in n, an additional O(log l) bits are needed to uniquely identify each guess within the tableau. Consequently, N₂ makes
O(log l) × O(n^{0.67})
binary guesses, which we simplify to O(n^{0.70}) for subsequent analysis.
To recapitulate, the functionally equivalent machines N₁ and N₂ require O(n) and O(n^{0.70}) guesses, respectively. For sufficiently large n, machine N₂ makes fewer guesses than N₁.

Appendix E.2. Second Iteration

Via a second application of Theorem 2, we may infer the existence of a deterministic machine M₂ that simulates the nondeterministic machine N₂. This machine M₂ runs in at most
2^{α₂ · (n^{0.70})^{0.67}} × ν = 2^{α₂ · n^{0.47}} × ν
time, where:
  • The constant α₂ = log K₂ > 0.
  • K₂ depends on the sizes of Q₂ and Φ₂, with K₁ < K₂.
Inspired by the existence of deterministic machine M₂, we can now construct a functionally equivalent nondeterministic polynomial time TM N₃, with state set Q₃ and tape alphabet Φ₃, where
|Q₂| ≤ |Q₃| and Φ₂ ⊆ Φ₃.
This machine N₃ operates, not with
O(n^{0.47}),
but with, say,
O(n^{0.50}),
nondeterministic binary guesses, followed by
ν + ν = 2ν
verification time. Again, the rationale for the slight increase in the exponent is due to the extra bits that are needed to uniquely identify each guess within the tableau.
To recapitulate, N₂ and N₃ require O(n^{0.70}) and O(n^{0.50}) guesses, respectively. For sufficiently large n, machine N₃ makes fewer guesses than N₂.

Appendix E.3. Third Iteration

Via a third application of Theorem 2, we may infer the existence of a deterministic TM M₃ that simulates the nondeterministic machine N₃. And so on.

Appendix E.4. The j-th Iteration

Inspired by the existence of deterministic machine M_j, we can now construct a functionally equivalent, nondeterministic polynomial time TM N_{j+1}, with state set Q_{j+1} and tape alphabet Φ_{j+1}, where
|Q_j| ≤ |Q_{j+1}| and Φ_j ⊆ Φ_{j+1}.
This machine N_{j+1} operates, not with
O(n^{(0.67)^j}),
but with, say,
O(n^{(0.70)^j})
nondeterministic binary guesses, followed by
j · ν
verification time.

Appendix E.5. Consolidation

Take j* = κ × (log n), for some sufficiently large constant κ. Then machine N_{j*+1} performs O(1) guesses, followed by
κ × (log n) × ν = O(n^{κ₁ + 1})
deterministic verification time. That is, N_{j*+1} is a deterministic polynomial time TM.
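The bookkeeping of this appendix can be mimicked numerically. In the simplified model below, where each application of Theorem 2 multiplies the guess exponent by 0.70, the number of iterations needed before the guess count drops to a constant grows very slowly with n and thus stays comfortably within the budget j* = κ × (log n) chosen above; the function name and the threshold are ours.

import math

def iterations_until_constant_guesses(n, shrink=0.70):
    # Smallest j such that n**((0.70)**j) is at most 2, i.e. O(1) guesses remain.
    j, exponent = 0, 1.0
    while exponent * math.log2(n) > 1:
        exponent *= shrink
        j += 1
    return j

for n in (10**2, 10**4, 10**8, 10**16):
    print(n, iterations_until_constant_guesses(n))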

Appendix F.

Solve the recurrence relation:
T(p) = κ · √p · T(p/2)
where κ is a constant.

Solution

Assume p = 2^n. Then the recurrence becomes:
T(2^n) = κ · √(2^n) · T(2^{n−1}) = κ · 2^{n/2} · T(2^{n−1})
Apply the recurrence repeatedly:
T ( 2 n ) = κ · 2 n / 2 · κ · 2 ( n 1 ) / 2 · T ( 2 n 2 ) = κ 2 · 2 n / 2 + ( n 1 ) / 2 · T ( 2 n 2 ) = = κ n · i = 0 n 1 2 ( n i ) / 2 · T ( 1 )
Now simplify the exponent:
Σ_{i=0}^{n−1} (n−i)/2 = (1/2) Σ_{j=1}^{n} j = (1/2) · n(n+1)/2 = n(n+1)/4
Therefore:
T(2^n) = κ^n · 2^{n(n+1)/4} · T(1)
Substituting back n = log p , we get:
T(p) = κ^{log p} · 2^{(log p)(log p + 1)/4} · T(1)

Asymptotic Form

As p → ∞,
T(p) = 2^{Θ((log p)²)} = p^{O(log p)},
which implies that the growth rate is quasi-polynomial—faster than any polynomial, yet slower than exponential.
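As a quick numerical confirmation, illustrative only, the unrolled recurrence and the closed form coincide on powers of two (up to floating-point error):

import math

def T_rec(p, kappa=2.0):
    # Direct evaluation of T(p) = kappa * sqrt(p) * T(p/2), with T(1) = 1.
    if p <= 1:
        return 1.0
    return kappa * math.sqrt(p) * T_rec(p / 2, kappa)

def T_closed(p, kappa=2.0):
    # Closed form kappa^(log2 p) * 2^(log2 p * (log2 p + 1) / 4), with T(1) = 1.
    n = math.log2(p)
    return kappa ** n * 2 ** (n * (n + 1) / 4)

# The ratio is 1.0 (up to rounding) for p = 16, 256, and 65536:
for n in (4, 8, 16):
    p = 2 ** n
    print(p, T_rec(p) / T_closed(p))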

References

  1. Daylight, E.G. Tableau with Holes: Clarifying NP-Completeness. Symmetry 2025, 17. [CrossRef]
  2. Daylight, E. Injecting Observers into Computational Complexity. Philosophies 2025, 10. [CrossRef]
  3. Dean, W. Computational Complexity Theory. The Stanford Encyclopedia of Philosophy 2016.
  4. Fortnow, L. Why Can’t We Break Cryptography? https://blog.computationalcomplexity.org/2025/04/why-cant-we-break-crytptography.html, 2025. Accessed: 2025-07-01.
  5. Li, M.; Vitanyi, P. An Introduction to Kolmogorov Complexity and Its Applications, 4 ed.; Texts in Computer Science, Springer, 2019. [CrossRef]
  6. Daylight, E.; Koolen, W.; Vitányi, P. Time-Bounded Incompressibility of Compressible Strings and Sequences. Information Processing Letters 2009, 109, 1055–1059. [CrossRef]
  7. Cook, S. The Complexity of Theorem-Proving Procedures. In Proceedings of the 3rd Annual ACM Symposium on Theory of Computing; Association for Computing Machinery, 1971; pp. 151–158.
  8. Levin, L. Universal Sorting Problems. Problems of Information Transmission 1973, 9, 265–266. English translation of original in Problemy Peredachi Informatsii.
  9. Fortnow, L.; Homer, S. A Short History of Computational Complexity. Bulletin of the European Association for Theoretical Computer Science 2003, 80, 95–133.
  10. Sipser, M. Introduction to the Theory of Computation; Thomson Course Technology, 2006.
  11. Papadimitriou, C. Computational Complexity; Addison Wesley Longman, 1994.
  12. Hopcroft, J.; Motwani, R.; Ullman, J. Introduction to Automata Theory, Languages, and Computation; Addison Wesley / Pearson Education, 2007.
  13. Aaronson, S. Quantum Computing since Democritus; Cambridge University Press, 2013.
  14. Dean, W. Algorithms and the Mathematical Foundations of Computer Science. In Gödel’s Disjunction, First ed.; Horsten, L.; Welch, P., Eds.; Oxford University Press, 2016.
  15. Tall, D. How Humans Learn to Think Mathematically: Exploring the Three Worlds of Mathematics; Cambridge University Press, 2013.
  16. Turner, R. Computational Abstraction. Entropy 2021, 23, 213. [CrossRef] [PubMed]
  17. Linnebo, Ø.; Shapiro, S. Actual and Potential Infinity. Noûs 2019, 53, 160–191. [CrossRef]
  18. Fortnow, L. Can you feel the machine? https://blog.computationalcomplexity.org/2024/03/can-you-feel-machine.html, 2024. Accessed: 2024-08-22.
  19. Hill, R.K. The Imperativity of Algorithms. https://cacm.acm.org/blogcacm/the-imperativity-of-algorithms/, 2023. Accessed: 2024-08-22.
  20. Grädel, E. Complexity Theory: WS 2009/10; Mathematische Grundlagen der Informatik, RWTH Aachen, creativecommons.org, 2009.
  21. Dowling, W.; Gallier, J. Linear-Time Algorithms for Testing the Satisfiability of Propositional Horn Formulae. J. Logic Programming 1984, 1, 267–284. [CrossRef]
  22. Daylight, E. Dijkstra’s Rallying Cry for Generalization: the Advent of the Recursive Procedure, late 1950s – early 1960s. The Computer Journal 2011, 54, 1756–1772. [CrossRef]
  23. Shapiro, S. Acceptable notation. Notre Dame Journal of Formal Logic 1982, 23, 14–20. [CrossRef]
Table 1. A tableau: an n^k × (n^k + 2) matrix. All cells in the leftmost column contain the boundary marker ⊢. Likewise for the rightmost column and the marker ⊣.
w₀_{q₀} w₁ w₂ ⋯ w_{n−1}
 
 
 
 
 
Table 2. Illustrating two 2 × 3 windows. The effect of instruction t_ab is shown on the left, while the effect of instruction t_ac is displayed on the right, with both illustrations read from top to bottom. Column indices range from j − 1 to j + 1. Each vertical arrow represents a change in precisely one symbol. Notably, a_{q₁} is considered a single symbol, not two separate symbols.
Table 3. The effect of instruction t_ab is shown on the left, while the effect of t_ac is displayed on the right. In both illustrations, the rows are arranged sequentially from the top row, indexed as 3l − 2, to the bottom row, indexed as 3l + 1. Each vertical arrow represents a change in precisely one symbol. If label t_ab is stored in cell[3l, j] of the tableau—with row index 3l and column index j, where l and j are natural numbers—then we denote this with propositional variable x_{3l, j, t_ab}. Source: [2] (p. 15).
Table 4. A conversion from symbol a (marked in the top row) to b (marked in the bottom row). Row indices range from 3l − 2 to 3l + 4. Column indices from 2 to 5. Source: [2] (p. 6, original emphasis).
Table 5. Here, the proposition a_{q₅} @ (i, j) holds, with i = 3l − 2. In words: cell[i, j] in the extended tableau contains the tape-state symbol a_{q₅}.
× × × × ×
× × × × × ×
 
 
× × × × × × × ×
 
 
i × × × × × × × × a q 5 × ×
 
 
× × × × × × × ×
 
 
× × × × × ×
 
 
× × × × ×
 
 
× × × ×
 
 
× × ×
 
 
× ×
 
 
×
j
Table 6. Here, the proposition a_{q₅} @ (i*, j)_B holds. This means that the symbol a_{q₅} appears at position (i*, j) in the basic tableau (shown below), with i* = (i + 2)/3 = ((3l − 2) + 2)/3 = l.
× × × × ×
× × × × × ×
× × × × × × × ×
i * × × × × × × × × a q 5 × ×
× × × × × × × ×
× × × × × ×
× × × × ×
× × × ×
× × ×
× ×
×
j
Table 7. Filling cells (in the extended tableau) with additional crosses as a logical consequence of the state of affairs depicted in Table 5 and ψ_extra1.
× × × × ×
× × × × ×
× × × × ×
× × × × × ×
× × × × × ×
× × × × × ×
× × × × × × × ×
× × × × × × × ×
× × × × × × × ×
i × × × × × × × × a q 5 × ×
× × × × × × × ×
× × × × × × × ×
× × × × × × × ×
× × × × × ×
× × × × × ×
× × × × × ×
× × × × ×
× × × × ×
× × × × ×
× × × ×
× × × ×
× × × ×
× × ×
× × ×
× × ×
× ×
× ×
× ×
×
×
×
j
Table 8. Basic tableau: crossing out 113 cells out of 16 × 16 = 256 cells.
1 ×
× × ×
× × × × ×
× × × × × × ×
5 × × × × × × × × ×
× × × × × × × × × × ×
× × × × × × × × × × × × ×
8 × × × × × × × × a q 5 × × × × × × ×
× × × × × × × × × × × × ×
10 × × × × × × × × × × ×
× × × × × × × × ×
× × × × × × ×
× × × × ×
× × ×
15 ×
16
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Table 9. Basic tableau: a second intervention in row 4 amounts to crossing out 41 boldfaced cells out of 63 cells in rows 1–7. The depicted scenario is ultimately unsatisfiable, as the downward movement from b_{q₉} and the upward movement from a_{q₅} will fail to converge harmoniously in the same cell.
1 × × × × × × × × ×
× × × × × × × × × × ×
× × × × × × × × × × × × ×
× × × × × × × × × × × b q 9 × × × ×
5 × × × × × × × × × × × × × ×
× × × × × × × × × × × × × ×
× × × × × × × × × × × × × ×
8 × × × × × × × × a q 5 × × × × × × ×
× × × × × × × × × × × × ×
10 × × × × × × × × × × ×
× × × × × × × × ×
× × × × × × ×
× × × × ×
× × ×
15 ×
16
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Table 10. The coin-tossing section of the basic tableau, also known as the basic mini tableau [1]: tossing four coins from left to right, resulting in the truth values for variables x₄, x₃, x₂, and x₁. The row r̂ corresponds to the state attained after all l = 4 coins have been tossed and when the rightmost column ĉ = 6 is reached. In this case, l is even. Consequently, r̂ is odd and belongs to the set {5, 7, 9, 11, 13} = {l + 1, l + 3, …, 3l + 1}.
1 # #
2
3
4
5 ·
6 ·
7 ·
8 ·
9 ·
10
11 ·
12
13
1 2 3 4 5 6
Table 11. The basic mini tableau: tossing l = 3 coins from left to right, resulting in the truth values for x₃, x₂, and x₁. The row r̂ corresponds to the state attained after all coins have been tossed and when the rightmost column ĉ = 5 is reached. In this case, l is odd. Consequently, r̂ is even and belongs to the set {4, 6, 8, 10} = {l + 1, l + 3, …, 3l + 1}.
1 # #
2
3
4
5 ·
6 ·
7
8 ·
9
10
1 2 3 4 5
                                                      
Table 12. Illustrating (x₂ = 1) @ [β₂] on the left and (x₁ = 0) @ [β₁] on the right.
1 # # 1 # #
2 2
3 ( x 2 = 1 ) @ [ 1 ] 3
4 4
5 ( x 2 = 1 ) @ [ 2 ] 5 ( x 1 = 0 ) @ [ 1 ]
6 · 6 ·
7 7 ( x 1 = 0 ) @ [ 2 ]
8 · 8 ·
9 9 ( x 1 = 0 ) @ [ 3 ]
10 10
11 # # 11 # #
1 2 3 4 5 1 2 3 4 5
Table 13. Two distinct computations in the making: one on the left and the other on the right.
1 # # 1 # #
2 2
3 ( x 2 = 1 ) @ [ 1 ] 3
4 1 4 1
5 ( x 1 = 0 ) @ [ 1 ] 5 ( x 2 = 1 ) @ [ 2 ]
6 · 6 ·
7 7 ( x 1 = 0 ) @ [ 2 ]
8 · 8 ·
9 9
10 10
11 # # 11 # #
1 2 3 4 5 1 2 3 4 5
Table 14. Two distinct computations taking shape, with only the left one remaining potentially satisfiable.
1 # # 1 # #
2 1 2
3 1 ( x 2 = 1 ) @ [ 1 ] 3
4 1 4 1
5 1 ( x 1 = 0 ) @ [ 1 ] 5 1 ( x 2 = 1 ) @ [ 2 ]
6 1 · 6 1 ·
7 7 1 ( x 1 = 0 ) @ [ 2 ]
8 · 8 1 ·
9 9
10 10
11 # # 11 # #
1 2 3 4 5 1 2 3 4 5
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.