5.1. The Relativity of Computational Complexity
It is necessary to recall the definitions of P-type and NP-type problems [2,6]: “Let P be the class of languages recognizable in polynomial time by one-tape deterministic Turing machines, and let NP be the class of languages recognizable in polynomial time by one-tape nondeterministic Turing machines.” Additionally, a theorem regarding NP-class problems states [6]: “$L \in$ NP if and only if $L$ is accepted by a nondeterministic Turing machine which operates in polynomial time.”
From these definitions and the theorem, the computational complexity of an NP-class problem is polynomial when evaluated on nondeterministic Turing machines, whereas when evaluated on deterministic Turing machines it is characterized as nondeterministic polynomial time. This demonstrates that the computational complexity of NP-class problems is relative and depends on the underlying computational model (different types of Turing machines have different computational capabilities).
Although NP-type problems exhibit divergent computational complexity characteristics under different measurement criteria, such disparities are not always evident for a specific problem. Fortunately, for the problem of computing the intersection of two sets $A$ and $B$, defining two distinct measurement benchmarks is relatively straightforward.
Existing research confirms that computing the intersection of two sets is unequivocally a P-class problem. However, it is critical to emphasize the relativity of computational complexity in the context of set intersection computation.
For instance, polynomial-time algorithms exist to determine the intersection of $A$ and $B$ when complexity is measured with respect to their cardinalities $|A|$ and $|B|$. However, if a different benchmark for measuring algorithmic complexity is employed, then by the Corollary, when the number of elements in both $A$ and $B$ is on the order of $C_n^m$, the computational complexity of finding their intersection is $O(C_n^m)$ with respect to $m$ and $n$.
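As a concrete illustration of the first benchmark (a minimal sketch; the sets below are hypothetical roster-form sets of hypercube vertices, not taken from the paper), intersecting two sets by hashing costs time linear in their cardinalities, and that cost is already exponential in $n$ when the cardinalities are on the order of $C_n^m$:

```python
from itertools import combinations

def roster_intersection(a, b):
    """Intersect two roster-form sets; cost is linear in |a| + |b|
    (hash-set membership tests), i.e. polynomial in the cardinalities."""
    small, large = (a, b) if len(a) <= len(b) else (b, a)
    return {x for x in small if x in large}

# Hypothetical example: all 0-1 vectors of length n with exactly m ones.
# Each such set has C(n, m) elements, so "linear in the cardinalities" is
# exponential in n when m is near n/2.
n, m = 6, 3
A = {tuple(1 if i in c else 0 for i in range(n))
     for c in combinations(range(n), m)}
B = {v for v in A if v[0] == 1}  # a second hypothetical roster set
print(len(roster_intersection(A, B)))  # C(5, 2) = 10 vectors
```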
5.2. The Role of $C_n^m$ in Complexity Analysis

If $m = \frac{n}{2}$, then

$$C_n^m = C_n^{n/2} = \frac{n!}{\left(\frac{n}{2}\right)!\,\left(\frac{n}{2}\right)!}.$$

Analyzing $C_n^{n/2}$ using Stirling's formula $n! \approx \sqrt{2\pi n}\left(\frac{n}{e}\right)^n$:

$$C_n^{n/2} \approx \frac{\sqrt{2\pi n}\left(\frac{n}{e}\right)^n}{\pi n\left(\frac{n}{2e}\right)^n} = \frac{2^n}{\sqrt{\pi n/2}}.$$
It can be observed that when $m$ is either extremely small or extremely large (e.g., $m = 1$ or $m = n - 1$, where $C_n^m = n$), the computational complexity $C_n^m$ exhibits approximately linear behavior. Conversely, when $m$ approaches $n/2$, $C_n^m$ grows exponentially, as shown by the Stirling analysis above. As $m$ varies, the complexity thus transitions from polynomial to non-polynomial.
In discussions of algorithmic complexity, the focus is typically on the worst-case scenario. Although both the extreme cases of $m$ and the case $m \approx n/2$ are valid, computational complexity should be assessed for the worst case $m \approx n/2$ unless explicit constraints on $m$ are imposed, such as $m \ll n$ or $m$ being a constant.
Furthermore, $2^n$ typically denotes the cardinality of the power set of a set with $n$ elements, or the number of all vertices of the hypercube in $n$-dimensional space; its practical significance often relates to complete enumeration. Although the approximation $2^n/\sqrt{\pi n/2}$ includes a term in the denominator, implying an incomplete enumeration, this expression closely approximates complete enumeration for large $n$.
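A quick numerical check of the Stirling-based estimate (a sketch; the sample values of $n$ are chosen only for illustration) shows both that the approximation tightens as $n$ grows and that $C_n^{n/2}$ stays within the polynomial factor $\sqrt{\pi n/2}$ of complete enumeration $2^n$:

```python
import math

def stirling_estimate(n):
    """Stirling-based approximation: C(n, n/2) ~ 2**n / sqrt(pi * n / 2)."""
    return 2 ** n / math.sqrt(math.pi * n / 2)

for n in (10, 20, 40):
    exact = math.comb(n, n // 2)          # exact central binomial coefficient
    gap_to_full = exact / 2 ** n          # ratio to complete enumeration 2**n
    rel_err = stirling_estimate(n) / exact - 1
    print(n, exact, round(gap_to_full, 4), round(rel_err, 4))
```

The relative error of the estimate shrinks roughly like $1/(4n)$, while the ratio to $2^n$ decays only as the slow polynomial factor $1/\sqrt{\pi n/2}$.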
5.3. 0-1 Integer Programming Admits No Polynomial-Time Solution
The scale of a 0-1 integer programming problem is generally characterized by the number of its variables and constraints, implying that the existence of polynomial-time solutions hinges on $m$ and $n$ rather than on the cardinalities of the sets involved. The core focus is whether 0-1 integer programming admits a polynomial-time solution. Hence, unless explicitly stated otherwise, all computational complexity discussions hereinafter in this paper refer to the parameters $m$ and $n$, where $n$ is a parameter that geometrically represents the scale of a linear equation with 0-1 variables.
The two 0-1 linear equations represent $A$ and $B$, respectively, and solving the system they form yields the intersection of $A$ and $B$. In other words, solving the 0-1 integer programming problem is equivalent to computing the intersection of $A$ and $B$. The specific representation of the sets $A$ and $B$ does not affect the essence of the fact that $A$ and $B$ are sets. This scenario aligns with Proposition 1 and its Corollary, from which it follows directly that the computational complexity of solving the problem is $O(C_n^m)$. This immediately implies that the problem does not have a polynomial-time solution.
To strengthen the persuasiveness of this conclusion, the computational complexity of the problem can be further elaborated as follows, as illustrated in Figure 1.
Sets are typically represented in two canonical forms: enumeration and description, commonly termed Roster Notation and Set Builder Notation, respectively. In the earlier discussion of algorithms for computing the intersection of two sets, the sets in question were implicitly assumed to be represented in Roster Notation.
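The two notations can be mirrored directly in code (a sketch; the equation coefficients are invented for illustration): Set Builder Notation corresponds to a dimension plus a membership predicate, while Roster Notation lists every element explicitly.

```python
from itertools import product

n = 4

def in_A(x):
    """Set Builder Notation: membership predicate for a hypothetical
    0-1 linear equation x1 + 2*x2 + x3 = 3 (coefficients invented)."""
    return x[0] + 2 * x[1] + x[2] == 3

# Roster Notation: enumerate every element explicitly
# (2**n candidate vertices scanned).
A_roster = {x for x in product((0, 1), repeat=n) if in_A(x)}
print(sorted(A_roster))  # four elements
```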
Given the two canonical set representation methods, there are two distinct approaches to computing the intersection of sets $A$ and $B$ (depicted in Figure 1):
Approach Ⅰ: Directly compute the intersection of $A$ and $B$ using their Roster Notation representations, as outlined in Procedure ①.
Approach Ⅱ: First convert the set representations from Roster Notation to Set Builder Notation (corresponding to Procedure ②), then solve the associated 0-1 integer programming problem (corresponding to Procedure ③) to derive the intersection of $A$ and $B$. This method can be succinctly denoted as “②+③”.
Although Approach Ⅰ and Approach Ⅱ share the same objective, they differ in their intermediate steps, representing distinct pathways to achieve the same result.
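On a small instance the agreement of the two pathways can be checked directly (a sketch; the two defining equations are hypothetical, brute-force enumeration stands in for Procedure ③):

```python
from itertools import product

n = 5

def eq_a(x):  # hypothetical 0-1 linear equation defining A
    return x[0] + x[1] + x[2] == 2

def eq_b(x):  # hypothetical 0-1 linear equation defining B
    return x[1] + x[3] + x[4] == 1

# Approach I: enumerate both roster sets, then intersect directly (Procedure ①).
A = {x for x in product((0, 1), repeat=n) if eq_a(x)}
B = {x for x in product((0, 1), repeat=n) if eq_b(x)}
direct = A & B

# Approach II: solve the combined 0-1 system (Procedure ③, brute force here).
via_programming = {x for x in product((0, 1), repeat=n) if eq_a(x) and eq_b(x)}

print(direct == via_programming, len(direct))  # → True 4
```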
For Approach Ⅱ, first, not all sets admit both Roster and Set-Builder representations; however, this paper explicitly focuses on a class of sets that can be characterized by 0-1 linear equations. Second, Procedure ② entails constructing the corresponding 0-1 linear equations from the elements (points in $n$-dimensional space) of $A$ and of $B$, respectively. Procedure ③ involves solving the resulting 0-1 integer programming problem.
In $n$-dimensional space, a hyperplane is uniquely defined by $n$ affinely independent points and can be algebraically represented as a linear equation. The process of determining such a hyperplane from $n$ points is equivalent to solving a system of linear equations. While the cardinalities of sets $A$ and $B$ are on the order of $C_n^m$, the affine independence of these points (as they correspond to vertices of the hypercube) ensures that only $n$ points from each set are required to construct the hyperplane. Notably, the hyperplane can also be generated using a constant multiple of $n$ elements (e.g., $2n$ or $3n$), with its computational complexity remaining within polynomial time.
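The polynomial cost of this step can be illustrated by recovering a hyperplane $a \cdot x = b$ from $n$ affinely independent points via Gaussian elimination (a sketch using exact rational arithmetic; the sample points are hypothetical, and the normalization $b = 1$ assumes the hyperplane avoids the origin):

```python
from fractions import Fraction

def solve_linear(M, rhs):
    """Gauss-Jordan elimination with exact rationals: solve M @ a = rhs.
    Cost is O(n**3) arithmetic operations -- polynomial in n."""
    n = len(M)
    A = [[Fraction(v) for v in row] + [Fraction(r)] for row, r in zip(M, rhs)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[pivot] = A[pivot], A[col]
        A[col] = [v / A[col][col] for v in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                factor = A[r][col]
                A[r] = [v - factor * w for v, w in zip(A[r], A[col])]
    return [A[r][n] for r in range(n)]

# Hypothetical: 3 affinely independent hypercube vertices in 3-space,
# assumed to lie on a hyperplane a . x = 1.
points = [(1, 0, 0), (0, 1, 0), (1, 1, 1)]
a = solve_linear(points, [1, 1, 1])
print(a)  # coefficients of the recovered hyperplane
```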
The constraint that the variables are 0-1 variables manifests primarily in two aspects: first, the coefficients of the resulting linear equations are all integers; second, when a variable can assume multiple non-zero values, it is consistently set to 1. By appending the 0-1 variable constraints to the two linear equations solved above, the sets $A$ and $B$ are defined in Set Builder Notation, thereby completing Procedure ②.
Approach Ⅰ involves directly computing the intersection of sets $A$ and $B$ in their Roster Notation representations, as outlined in Procedure ①. By the Corollary, the computational complexity of determining the intersection of $A$ and $B$ via this approach is $O(C_n^m)$. Since there is no constraint specifying that $m$ be a constant or significantly smaller than $n$, a computation requiring on the order of $C_n^m$ basic operations cannot be categorized as polynomial time.
When solving for the intersection of $A$ and $B$ using Approach Ⅱ, the process also begins with sets in Roster Notation and culminates in deriving their intersection. By the Corollary, the computational complexity of the combined procedures “②+③” is likewise $O(C_n^m)$.
Given that solving a system of linear equations is a known P-type problem, the computational complexity of Procedure ② is polynomial. Consequently, the non-polynomial complexity of the combined procedure “②+③” must originate from Procedure ③. In computational complexity analysis, polynomial terms are negligible compared to non-polynomial terms, so the overall complexity of “②+③” is dominated by Procedure ③. This implies that solving the 0-1 integer programming problem inherently requires on the order of $C_n^m$ operations, regardless of the specific algorithm employed for solving 0-1 integer programming. Therefore, no polynomial-time algorithm exists for computing the intersection of $A$ and $B$ with respect to $m$ and $n$ via solving the 0-1 integer programming problem.
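A brute-force solver for Procedure ③ makes the exponential operation count explicit (a sketch; the instance is hypothetical, and the counter records that every vertex of the hypercube is examined):

```python
from itertools import product

def solve_01_system(n, equations):
    """Brute-force 0-1 integer programming: test every vertex of the
    n-dimensional hypercube, i.e. exactly 2**n candidate evaluations."""
    checked = 0
    solutions = []
    for x in product((0, 1), repeat=n):
        checked += 1
        if all(eq(x) for eq in equations):
            solutions.append(x)
    return solutions, checked

def eq1(x):  # hypothetical first 0-1 linear equation
    return sum(x) == 2

def eq2(x):  # hypothetical second 0-1 linear equation
    return x[0] + x[-1] == 1

sols, checked = solve_01_system(8, [eq1, eq2])
print(len(sols), checked)  # checked == 2**8 regardless of the instance
```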
Thus, 0-1 integer programming admits no polynomial-time solution.