Computer Science and Mathematics


Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Frank Vega

Abstract: The Maximum Independent Set (MIS) problem, a core NP-hard problem in graph theory, seeks the largest subset of vertices in an undirected graph $G = (V, E)$ with $n$ vertices and $m$ edges such that no two vertices are adjacent. We present a hybrid approximation algorithm that combines a vertex-cover complement approach with greedy selections based on minimum and maximum degrees, plus a low-degree induced subgraph heuristic, implemented using NetworkX. The algorithm preprocesses the graph to handle trivial cases and isolates, computes exact solutions for bipartite graphs via Hopcroft-Karp matching and König's theorem, and, for non-bipartite graphs, iteratively processes connected components by computing $2$-approximate minimum vertex covers whose complements serve as independent set candidates, refined by a gadget-graph technique and extended greedily to maximality. It also constructs independent sets by selecting vertices in increasing and decreasing degree orders and by restricting to a low-degree induced subgraph, returning the largest of the four candidates. An efficient $O(m)$ independence check ensures correctness. The algorithm provably achieves a $2$-approximation ratio for bipartite graphs (optimal) and for graphs where $\mathrm{OPT} \geq 2n/3$ (via a tight vertex-cover complement bound), and attains a maximum observed ratio of $1.833$ across all $37$ DIMACS benchmark instances, well within the $2$-approximation guarantee. Its time complexity is $O(nm)$, making it suitable for small-to-medium graphs. Its simplicity, correctness, and robustness make it ideal for scheduling, network design, and combinatorial optimization education, with potential for enhancement via parallelization.
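The vertex-cover complement candidate described in this abstract can be sketched in NetworkX. This is an illustrative reimplementation, not the paper's code: a maximal matching yields a 2-approximate vertex cover whose complement is independent, the complement is extended greedily to maximality, and independence is verified with a single O(m) edge scan.

```python
# Illustrative sketch (not the paper's implementation): one candidate of the
# hybrid scheme, i.e. the complement of a 2-approximate vertex cover, extended
# greedily to a maximal independent set, plus the O(m) independence check.
import networkx as nx

def vc_complement_candidate(G):
    matching = nx.maximal_matching(G)        # matched endpoints = 2-approx cover
    cover = {v for e in matching for v in e}
    candidate = set(G) - cover               # complement of a cover is independent
    for v in sorted(G, key=G.degree):        # greedy min-degree extension
        if v not in candidate and not any(u in candidate for u in G[v]):
            candidate.add(v)
    return candidate

def is_independent(G, S):
    # O(m): a single scan over every edge
    return not any(u in S and v in S for u, v in G.edges())

G = nx.petersen_graph()
S = vc_complement_candidate(G)
```

On the Petersen graph the candidate is a maximal independent set of size 3 or 4 (the optimum is 4), depending on which maximal matching is found.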

Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Nedelcho Ganchovski

,

Oscar Smith

,

Christopher Rackauckas

,

Lachezar Tomov

,

Alexander Traykov

Abstract: The modified Anderson-Björck method [1] is a new robust and efficient bracketing root-finding algorithm. It combines bisection with the Anderson-Björck method to achieve both fast performance and worst-case optimality. It relies on linearity-check criteria for switching methods and uses Anderson-Björck corrections to overcome the fixed-endpoint issue of the false-position method. Initial benchmarks of this method have shown certain performance advantages compared to other methods such as Ridders, Brent, and ITP. In this paper, we propose further improvements of the method and perform additional analysis and benchmarks of its behavior and performance.
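A minimal sketch of the Anderson-Björck false-position step referenced above, with a generic bisection safeguard; the linearity-check switching criteria of the modified method are not reproduced here, so this illustrates only the classical correction that avoids the fixed-endpoint stagnation of plain false position.

```python
# Sketch of the Anderson-Björck step (illustrative, not the paper's algorithm):
# when the same endpoint would be retained twice, its function value is scaled
# by m = 1 - f(c)/f(b) (falling back to 0.5 when m <= 0, the Illinois rule).
def anderson_bjorck(f, a, b, tol=1e-12, max_iter=100):
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root must be bracketed"
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)      # false-position (secant) point
        if not (min(a, b) < c < max(a, b)):   # safeguard: fall back to bisection
            c = 0.5 * (a + b)
        fc = f(c)
        if fc == 0.0 or abs(b - a) < tol:
            return c
        if fb * fc < 0:                        # sign change: plain endpoint swap
            a, fa = b, fb
        else:                                  # retained endpoint: scale fa
            m = 1.0 - fc / fb
            fa *= m if m > 0 else 0.5          # Anderson-Björck correction
        b, fb = c, fc
    return c
```

For example, `anderson_bjorck(lambda x: x * x - 2.0, 1.0, 2.0)` converges to the square root of 2 in a handful of iterations.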

Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Kittipol Wisaeng

,

Thongchai Kaewkiriya

Abstract: This study examines the interrelationships among AI Competency (AIC), Soft-Skill Competency (SSC), Strategic Intelligence (SI), and Innovative University Competency (IUC) within the context of Thailand’s higher education transformation. Grounded in Dynamic Capability Theory (DCT), Human Capital Theory (HCT), and the Strategic Intelligence Framework (SIF), the study explores how technological and human-centered capabilities collectively enhance institutional innovation. A quantitative explanatory research design was employed, and data were collected from 475 academic and administrative staff across six faculties at Mahasarakham University. Structural Equation Modeling (SEM) was used to test the hypothesized causal relationships and mediating effects among the constructs. The findings reveal that all proposed hypotheses were supported. AI Competency exerted the strongest total effect on IUC, indicating its pivotal role in driving innovation both directly and indirectly through Strategic Intelligence and Soft-Skill Competency. SSC also demonstrated a significant total effect, underscoring the importance of collaboration, communication, adaptability, and problem-solving in fostering innovative ecosystems. Strategic Intelligence emerged as a key mediating mechanism, transforming technological and human capabilities into innovative outcomes through analytical foresight, evidence-based judgment, and organizational agility. The model demonstrated excellent goodness-of-fit indices, confirming both theoretical rigor and empirical robustness. The study contributes to the literature by integrating digital capability, human adaptability, and strategic cognition into a unified framework for university innovation. In practice, the results emphasize that sustainable innovation in higher education requires the synergistic development of AI literacy, soft skills, and strategic foresight.

Essay
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Qixiang Nie

,

Guangxun Wang

,

Xinxing Shi

,

Xuechen Liang

Abstract: To address the issues of insufficient convergence performance and high sensitivity to local optima in the traditional Whale Optimization Algorithm (WOA) when handling 3D path planning tasks for unmanned aerial vehicles (UAVs), this paper proposes an improved UAV path planning algorithm based on the Whale Optimization Algorithm (R*WOA). Firstly, the global search capability and path optimisation mechanism of the Rapidly-exploring Random Tree Star (RRT*) algorithm are utilised to generate a high-quality initial population, thereby enhancing population diversity and the algorithm’s global exploration capability. Secondly, the linear convergence factor of the traditional WOA is replaced with a non-linear dynamic adjustment strategy based on the cosine function, enhancing global search capability in the early stages of iteration and local search capability in the later stages. Simultaneously, a non-linear inertial weight is employed to modulate the position update mechanism of individuals, further enhancing the algorithm’s local optimisation accuracy and convergence stability in the later stages of iteration. Finally, comparative experimental results on a basic test function set and in scenarios constructed using Digital Elevation Models (DEMs) demonstrate that R*WOA exhibits stable optimisation performance and is capable of planning safer paths that are shorter in length and smoother in trajectory.
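The cosine-based convergence factor can be illustrated as follows. The exact formula used in R*WOA is not given in the abstract, so the schedule below is one plausible form of such a decay, shown against the standard linear schedule of the WOA parameter a.

```python
# Hedged sketch: a cosine-based nonlinear convergence factor decaying from 2
# to 0 over T iterations, versus the standard linear WOA schedule. The exact
# R*WOA formula may differ; both names here are illustrative.
import math

def a_linear(t, T):
    return 2.0 * (1.0 - t / T)

def a_cosine(t, T):
    return 2.0 * math.cos(math.pi * t / (2.0 * T))

# Early iterations: the cosine schedule stays near 2 longer (stronger global
# search); late iterations: it drops toward 0 more steeply (stronger local
# search), matching the behaviour the abstract describes.
```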

Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Zlatko Pangarić

Abstract: This paper introduces the formal framework of Symbolic Structures of Differences (SSD) as a novel approach to the analysis of seismic time series, aiming to provide early warning prior to the occurrence of a main shock. Unlike classical early warning systems based on P-wave detection, the SSD methodology identifies changes in the local geometry of geological deformation through the symbolic encoding of three-point differential structures. Each sample triplet (x_k, x_{k+1}, x_{k+2}) is assigned a symbolic structure based on the signs of the first and second differences, generating a space of 27 possible local geometries. From the distribution of these structures, the following metrics are derived: SSD entropy (E_SSD), symbolic space activity (κ), transition entropy (ε), and the Relational SSD Coefficient (RSC). Preliminary retrospective analysis of five significant seismic events (Parkfield 2004, M6.0; L'Aquila 2009, M6.3; Tohoku 2011, M9.0; and the Ridgecrest 2019 sequence, M6.4/M7.1) shows statistically significant changes in SSD parameters within a time window of 47 to 89 seconds before the arrival of the P-wave. Hybrid systems combining SSD detection with classical P-wave analysis potentially offer superior warning time and accuracy compared to traditional approaches. We caution that the presented numerical results are based on a preliminary analysis of a small sample and require validation on an expanded dataset before any potential operational application.

Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Salvador Bermúdez Gómez

Abstract: Estimating the fractal dimension Df of a complex network is fundamental to understanding its self-similar structure, yet existing methods based on box-covering are sensitive to hub concentration and provide no way to incorporate node-property information. We introduce AFRACT (Autocorrelation-aware Fractal dimension for Complex neTworks), a ball-mass scaling algorithm that addresses both limitations. The core idea is simple: instead of counting nodes within a growing ball, AFRACT weights each node by a function of its property value and its spatial autocorrelation with the centre, capturing how local order decays with topological distance. We establish a complete axiomatic framework for the weighting function, prove that property-weighted and unweighted estimates converge to the same Df asymptotically (power-law preservation theorem), and show that the weight matrix is circulant on vertex-transitive graphs, enabling an exact FFT-based computation that achieves a 471× speedup over the direct method. On four deterministic networks with analytically known Df (Sierpiński gasket and carpet, 2D and 3D lattices, N up to 3375), AFRACT achieves R² ≥ 0.998. We derive a universal finite-size correction law Df ≈ D̂f + c_G/N^(1/Df), confirmed across three graph classes with R² > 0.99. Against Compact Box Burning (CBB) on five diverse networks, AFRACT achieves higher goodness-of-fit, 4.7× better noise robustness, and avoids the hub-induced overestimation that causes CBB to return D̂f = 3.98 on Barabási–Albert networks.
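The unweighted ball-mass scaling that AFRACT builds on can be sketched as follows (the property weighting, axiomatic framework, and FFT path are not reproduced): count the nodes within topological radius r of a centre and fit the log-log slope of M(r) ~ r^Df. On a finite 2D lattice the estimate comes out somewhat below 2, consistent with the finite-size correction the abstract describes.

```python
# Illustrative unweighted ball-mass dimension estimate (not AFRACT itself):
# average ball mass M(r) over centres, then least-squares slope of
# log M(r) versus log r.
import math
import networkx as nx

def ball_mass_dimension(G, radii, centres=None):
    centres = centres or list(G)
    masses = []
    for r in radii:
        total = 0
        for c in centres:
            # nodes within graph distance r of centre c (BFS with cutoff)
            total += len(nx.single_source_shortest_path_length(G, c, cutoff=r))
        masses.append(total / len(centres))
    xs = [math.log(r) for r in radii]
    ys = [math.log(m) for m in masses]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    return sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
           sum((x - xbar) ** 2 for x in xs)

# 2D lattice: slope approaches Df = 2 as radii grow; at these finite radii
# it lands near 1.9, illustrating the finite-size bias.
G = nx.grid_2d_graph(41, 41)
Df = ball_mass_dimension(G, radii=[5, 10, 15, 20], centres=[(20, 20)])
```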

Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Yosef Akhtman

Abstract: The Finite Ring Continuum (FRC) models structure through finite arithmetic shells. At symmetry-complete prime checkpoints, a quadratic extension introduces a second involution alongside spatial reversal, and the coexistence of these symmetries generates a finite shell-transfer problem. This paper studies that problem in terms of existential admissibility: the question is which structural updates are internally allowed, not how an external agent might operate on the shells. We prove that every nondegenerate admissible innovation decomposes into four-element Klein packets, so primitive shell updates occur in multiples of four and have plus four as their elementary admissible increment. We then define orbit-level invariant extraction and show that every finite invariant family admits an explicit code in the elementary receiving shell with a minimal symbol count, namely the least number of receiving-shell symbols needed to encode the invariant alphabet. In addition, every finite extractor admits an exact fiber decomposition that records how many primitive states consolidate to each invariant value. A fully explicit example at prime 13 exhibits a four-element innovation packet, a one-symbol recoding into the receiving shell of order 17, and the norm profile with one singleton fiber and twelve fibers of size fourteen. In this paper the norm is used only as an exact algebraic packet-invariant; any identification of such invariants with specific physical observables is deferred to follow-up work. The result reframes innovation-consolidation in FRC as a finite problem of packet formation, invariant extraction, consolidation profile, and shell coding, with larger updates understood as coarse-grained bundles of the same primitive admissible step.

Essay
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Ruixue Zhao

Abstract: This paper presents a polynomial-time algorithm for 0-1 matrix isomorphism. Since 0-1 matrix isomorphism is equivalent to graph isomorphism, this algorithm can solve graph isomorphism in polynomial time with a time complexity of O(n^4). I also prove that counting the number of mappings between two graphs is a #P-complete problem. In addition, this paper proposes a general algorithm for rapidly generating all N × N Latin squares, together with its precise counting framework, a polynomial-time algorithm for (quasigroup) isomorphism, and a method for the polynomial-time reduction of Latin square isomorphism to 0-1 matrix isomorphism. Efficient algorithms for solving Latin square filling problems are also introduced. Numerous combinatorial isomorphism problems, including Steiner triple systems, Mendelsohn triple systems, 1-factorization, networks, affine planes, and projective planes, can be reduced to Latin square isomorphism. Since groups are a proper subclass of quasigroups and group isomorphism is a subproblem of quasigroup isomorphism, group isomorphism naturally becomes a P-problem. A Latin square of order N is an N×N matrix in which each of N distinct symbols appears exactly once in every row and every column. A matrix derived from such a multiplication table forms an N-order Latin square. In contrast, a binary operation derived from an N-order Latin square used as a multiplication table constitutes a pseudogroup over the set Q. I discovered four new algebraic structures that remain invariant under permutation of rows and columns, known as quadrilateral squares. All N×N Latin squares can be constructed using three or all four of these quadrilateral squares. Leveraging the algebraic properties of quadrilateral squares that remain unchanged by permutation, we designed an algorithm to generate all N×N Latin squares without repetition under permutation, resulting in the first universal and non-repetitive algorithm for Latin square generation.
Building on this, we established a precise counting framework for Latin squares. The generation algorithm further reveals deeper structural aspects of Latin squares (pseudogroups). Through studying these structures, we derived a crucial theorem: two Latin squares are isomorphic if their subline modularity structures are identical. Based on this key theorem, combined with other structural connections discussed in this paper, a polynomial-time algorithm for Latin square isomorphism has been successfully designed. This algorithm can also be directly applied to solving quasigroup isomorphism, with a time complexity of (5/16)(n^5 − 2n^4 − n^3 + 2n^2) + 2n^3. Furthermore, more symmetrical properties of Latin squares (pseudogroups) were uncovered. The problem of filling a Latin grid is a classic NP-complete problem. Solving a fillable Latin grid can be viewed as generating grids that satisfy constraints. By leveraging the connections between parametric group algebra structures revealed in this paper, we have designed a fast and accurate algorithm for solving fillable Latin grids. I believe the ultimate solution to NP-complete problems lies within these connections between parametric group algebra structures, as they directly affect both the speed of solving fillable Latin grids and the derivation of precise counting formulas for Latin grids.

Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Piotr Masierak

Abstract: We study the canonical string-based Assembly Index (ASI), defined as the minimum number of binary concatenations needed to construct a target word under full reuse. NP-completeness of ASI-DEC over general finite alphabets and an equivalence between ASI plans and straight-line programs (SLPs) under the same size convention have been established. We emphasize that all transfers between decision variants are effected by explicit polynomial-time mappings and (where needed) an explicit reparameterization of the threshold by an absolute constant or a simple affine function. The remaining technical obstacle for the binary alphabet is that a naive encoding reduction may allow an optimizer to exploit “cross-boundary” substrings created by overlaps of codewords. We give a fully self-contained binary-alphabet proof: we construct an explicit self-synchronizing (comma-free) codebook of 17 fixed-length binary codewords and prove a boundary-normalization lemma showing that optimal plans can be assumed aligned to codeword boundaries. This yields a polynomial reduction from fixed-alphabet ASI-DEC to binary ASI-DEC, proving NP-completeness over {0, 1}. Using the recalled ASI–SLP equivalence (with a short proof for completeness), we obtain NP-completeness of binary SLP-DEC. We additionally provide an explicit, fully formal translation between our binary-rule counting convention and the standard SGP size measure (sum of right-hand side lengths), showing that the NP-completeness classification transfers to common one-string SGP/SLP decision variants over {0, 1}.

Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Vincenzo De Leo

,

Michelangelo Puliga

,

Martina Erba

,

Cesare Scalia

,

Andrea Filetti

,

Alessandro Chessa

Abstract: In this work, we inspected the friendship network on Twitter (recently rebranded as X), concentrating on individuals and organizations intertwined with the energy field. We particularly focus on seasoned professionals, corporate entities, and domain specialists, all connected through `following’ relationships. By meticulously examining these ties, we uncover several distinct groupings within the network, each defined by the unique roles its members occupy. Our analysis demonstrates that the natural emergence of such clusters on social platforms exerts a profound influence on public discourse regarding energy and other critical matters, including climate change. Furthermore, we reveal that the ever-changing interplay of misleading information catalyzes the formation of ideologically divided factions, which often leads to reduced engagement in online conversations. These emergent clusters, characterized by their shared communication styles, form relatively compact communities where the exchange of information is infrequent compared to larger networks and is usually confined to accounts created for specific commercial objectives. Additionally, by leveraging a machine learning approach, we are able to pinpoint pivotal actors within these niche segments and elucidate the mechanisms that sustain their connectivity. This method provides novel insights into how corporate communication unfolds on social media, offering a refreshed perspective on professional networking. Ultimately, our findings highlight the ways in which companies within the energy sector take advantage of Twitter to coordinate their initiatives, with key institutions serving as central nodes in maintaining the organization of these networks.

Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Frank Vega

Abstract: We present a polynomial-time algorithm for Minimum Vertex Cover achieving an approximation ratio strictly less than 2 for any finite undirected graph with at least one edge, thereby disproving the Unique Games Conjecture. The algorithm reduces the problem to a minimum weighted vertex cover on a degree-1 auxiliary graph using weights \( 1/d_v \), solves it optimally via Cauchy-Schwarz-balanced selection, and projects the solution back to a valid cover. Correctness and the strict sub-2 ratio are rigorously proved. Runtime is \( O(|V|+|E|) \), confirming practical scalability and opening avenues for revisiting UGC-dependent hardness results across combinatorial optimization.

Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Frank Vega

Abstract: The triangle finding problem is a cornerstone of complex network analysis, serving as the primitive for computing clustering coefficients and transitivity. This paper presents \( \texttt{Aegypti} \), a practical algorithm for triangle detection and enumeration in undirected graphs. By combining a descending degree-ordered vertex iterator with a hybrid strategy that adapts to graph density, \( \texttt{Aegypti} \) achieves a worst-case runtime of \( \mathcal{O}(m^{3/2}) \) for full enumeration, which matches the bound established by Chiba and Nishizeki for arboricity-based listing algorithms. For the detection variant (\( \texttt{first_triangle}=\text{True} \)), we prove that sorting by non-increasing degree enables early termination in \( \mathcal{O}(n\log n + d_{\max}^2) \) worst-case time when the maximum-degree vertex participates in a triangle, where the quadratic factor in \( d_{\max} \) reduces to \( \mathcal{O}(d_{\max}/C(v_{\max})) \) in expectation when the local clustering coefficient \( C(v_{\max}) > 0 \). Experiments on complement graphs of DIMACS maximum-clique benchmark instances confirm that detection terminates in sub-millisecond time on the majority of instances, while the matrix-multiplication baseline requires substantially more time on the same inputs.
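The degree-ordered detection idea can be sketched as follows (an illustrative reimplementation, not the Aegypti code): vertices are scanned in non-increasing degree order and pairs of neighbours are tested for adjacency, so if the maximum-degree vertex lies in a triangle the search never leaves its neighbourhood and terminates early.

```python
# Illustrative degree-ordered early-termination detection (not Aegypti):
# probe neighbour pairs of the highest-degree vertices first.
import networkx as nx

def first_triangle(G):
    adj = {v: set(G[v]) for v in G}                  # adjacency sets for O(1) tests
    for v in sorted(G, key=G.degree, reverse=True):  # non-increasing degree order
        nbrs = list(adj[v])
        for i, u in enumerate(nbrs):
            for w in nbrs[i + 1:]:
                if w in adj[u]:
                    return (v, u, w)                 # triangle found: stop here
    return None                                      # graph is triangle-free
```

On a triangle-free graph the scan falls through all vertices; on a dense graph such as a clique it returns after inspecting a single neighbourhood.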

Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Frank Vega

Abstract: We present the Hvala algorithm, an ensemble approximation method for the Minimum Vertex Cover problem that combines graph reduction techniques, optimal solving on degree-1 graphs, and complementary heuristics (local-ratio, maximum-degree greedy, minimum-to-minimum). The algorithm processes connected components independently and selects the minimum-cardinality solution among five candidates for each component. \textbf{Empirical Performance:} Across 233+ diverse instances from four independent experimental studies---including DIMACS benchmarks, real-world networks (up to 262,111 vertices), NPBench hard instances, and AI-validated stress tests---the algorithm achieves approximation ratios consistently in the range 1.001--1.071, with no observed instance exceeding 1.071. \textbf{Theoretical Analysis:} We prove optimality on specific graph classes: paths and trees (via Min-to-Min), complete graphs and regular graphs (via maximum-degree greedy), skewed bipartite graphs (via reduction-based projection), and hub-heavy graphs (via reduction). We demonstrate structural complementarity: pathological worst-cases for each heuristic are precisely where another heuristic achieves optimality, suggesting the ensemble's minimum-selection strategy should maintain approximation ratios well below $\sqrt{2} \approx 1.414$ across diverse graph families. \textbf{Open Question:} Whether this ensemble approach provably achieves $\rho < \sqrt{2}$ for \textit{all possible graphs}---including adversarially constructed instances---remains an important theoretical challenge. Such a complete proof would imply P = NP under the Strong Exponential Time Hypothesis (SETH), representing one of the most significant breakthroughs in mathematics and computer science. We present strong empirical evidence and theoretical analysis on identified graph classes while maintaining intellectual honesty about the gap between scenario-based analysis and complete worst-case proof. 
The algorithm operates in $\mathcal{O}(m \log n)$ time with $\mathcal{O}(m)$ space and is publicly available via PyPI as the Hvala package.

Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Costas Panagiotakis

Abstract: In this paper, we propose a metaheuristic optimization algorithm called Adaptive Gold Rush Optimizer (AGRO), a substantial evolution of the original Gold Rush Optimizer (GRO). Unlike the standard GRO, which relies on fixed probabilities in the strategy selection process, AGRO utilizes a novel adaptive mechanism that prioritizes strategies improving solution quality. This adaptive component, which can be applied to any optimization algorithm with fixed probabilities in strategy selection, adjusts the probabilities of the three core search strategies of GRO (Migration, Collaboration, and Panning) in real time, rewarding those that successfully improve solution quality. Furthermore, AGRO introduces fundamental modifications to the search equations, eliminating the inherent attraction towards the zero coordinates while explicitly incorporating objective function values to guide prospectors towards promising regions. Experimental results demonstrate that AGRO outperforms ten state-of-the-art algorithms on the twenty-three classical benchmark functions, the CEC2017, and the CEC2019 datasets. The source code of the AGRO algorithm is publicly available at https://sites.google.com/site/costaspanagiotakis/research/agro.

Concept Paper
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

José Vicente Quiles Feliu

Abstract: We present Model G, a mathematical formalization of information spaces where coherence is an intrinsic property guaranteed by algebraic construction. We define the global space G through a triaxial structure (Attribute, Key, Connection) and a coherence operator Φ that filters the managed universe Ω. Four fundamental axioms establish existence by coherence, location uniqueness, acyclicity of the dependency graph, and determinism through the propagation vector Π and the determinant δ. We extend relational normal forms with five semantic-temporal normal forms (SRGD-FN1 to FN5). The SRGD implementation materializes the model through a three-layer stateless architecture. Experimental validation confirms the impossibility of incoherent states and O(|Π|) complexity in operations. This work was initiated in December 2025 and the initial version was published on January 6, 2026, temporally coinciding with independent advances such as DeepSeek’s Engram (January 12, 2026).

Review
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Soponloe Sovann

Abstract: Modern versions of the game use the “7-bag” randomization technique to soften the sense of bias inherent in extreme piece “droughts,” where a required tetromino is not generated for a long time. This study compares uniform random number generation (RNG) and 7-bag randomization in two experiments. In the first experiment, we inspected 100,000 generated pieces for severity and homogeneity of distribution. The findings revealed that 7-bag randomization reduced maximum I-piece droughts by 71% relative to uniform RNG. Both techniques maintained the expected piece frequencies in the long run. In the second experiment, we measured the effect on gameplay via controlled single-player experiments. Games employing 7-bag randomness raised the average number of cleared lines by 14%, with slightly lower variance, although this was not found to be statistically significant because of the small sample size. In general, the results support the quantitative effectiveness of 7-bag randomization in new designs for Tetris.
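The two generators compared in this study are straightforward to sketch; the helper names below are illustrative. A 7-bag sequence deals out a shuffled bag of all seven tetrominoes before refilling, which bounds the I-piece drought at 12 intervening pieces, whereas uniform RNG has no such bound.

```python
# Illustrative sketch of the two piece generators and the drought metric:
# the longest run of non-I pieces between consecutive I pieces.
import random

PIECES = "IJLOSTZ"

def uniform_sequence(n, rng):
    return [rng.choice(PIECES) for _ in range(n)]

def seven_bag_sequence(n, rng):
    seq = []
    while len(seq) < n:
        bag = list(PIECES)     # one of each tetromino per bag
        rng.shuffle(bag)
        seq.extend(bag)
    return seq[:n]

def max_drought(seq, piece="I"):
    longest = run = 0
    for p in seq:
        run = 0 if p == piece else run + 1
        longest = max(longest, run)
    return longest
```

Worst case for 7-bag: an I at the start of one bag and at the end of the next gives 12 intervening pieces; uniform sequences of this length routinely exceed that.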

Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Manolya Kavakli-Thorne

Abstract: Digital Twin (DT) technology has become a critical component of smart city evolution, providing real-time analytics, predictive modelling, and operational efficiency enhancements. The goal of this paper is to explore the opportunities and barriers for a city-wide data exchange platform and to establish the principles for Federated DT (FDT) development that integrates sector-specific DT applications into a cohesive urban intelligence framework. The paper investigates the topic of federated data exchange in a smart city context and how interoperability among the use cases of DTs can be achieved. Two system architectures for a Data Exchange Platform have been explored, including layered and composable FDT approaches. The composable architecture has been chosen for the platform implementation to ensure interoperability, scalability, and security in real-time data exchange. The composable architecture is essentially a microservices-driven framework with self-contained components that have clear functionalities, and it provides the greatest flexibility for future development of the FDT Data Exchange Platform. By employing a range of microservices, the composable architecture can ensure modularity, scalability, and flexibility, making it easier to manage, update, and extend the platform to accommodate additional DTs for evolving city needs, urban management, and decision-making. However, this comes at the cost of increased issues around security and governance of interfaces. The platform has been tested with 5 DTs designed by 4 universities located in Birmingham, UK and Ulsan, South Korea. For the design of the platform, nine common elements have been identified as “building blocks” by analysing the DT use cases in a sandbox environment called the Diatomic Azure DT Development Platform.
These common building blocks are 3D Visualisation, Asset Management, Predictive Analytics, Artificial Intelligence, Machine Learning, Authorisation Methods, Access Control, API, and Sensors. These building blocks were later validated by the use cases presented by 8 SMEs developing DTs. The initial results confirm that the identified building blocks are sufficient for the development of DTs to create a generic city-wide data exchange platform. These results provide insights for DT adoption by de-risking investment and identifying the resources required for smart city development. The scope of this paper is limited to smart city applications, but the proposed FDT system architecture should be applicable to any domain.

Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Gerald Baulig

,

Jiun-In Guo

Abstract: This study introduces a hybrid point cloud compression method that transitions from octree nodes to voxel occupancy estimation to approach its lower-bound bitrate using a Binary Arithmetic Range Coder. In previous attempts, we have shown that our entropy compression model based on index convolution achieves promising performance while maintaining low complexity. However, our previous model lacks an autoregressive approach, which appears indispensable for competing with the current state of the art in compression performance. Therefore, we adapt an autoregressive grouping method that iteratively populates, explores, and estimates the occupancy of 1-bit voxel candidates in a more discrete fashion. Furthermore, we refactored our backbone architecture by adding a distiller layer to each convolution, forcing every hidden feature to contribute to the final output. Our proposed model extracts local features using lightweight 1D convolution applied in varied orderings and analyzes causal relationships by optimizing the cross-entropy. This approach efficiently replaces the voxel convolution techniques and attention models used in previous works, providing significant improvements in both time and memory consumption. The effectiveness of our model is demonstrated on three datasets, where it outperforms recent deep learning-based compression models in this field.

Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Yu-Min Wei

Abstract: Cognitive bias introduces structural imbalance in exploration and exploitation within adaptive decision systems, yet existing approaches emphasize outcome accuracy or bias reduction while offering limited explanation of how internal decision structures regulate distortion during iterative search. This study develops a metaheuristic-based bias intervention module as a computational artifact for examining symmetry regulation at the process level of biased decision-making. Using controlled computational experiments, the study compares baseline, conventional metaheuristic, and intervention configurations through structural indicators that characterize decision accuracy, convergence stability, symmetry regulation, and bias reduction. The results show that adaptive decision coherence emerges through regulated structural adjustment rather than symmetry maximization. Across evaluated configurations, systems that maintain intermediate symmetry exhibit stable convergence and effective bias regulation, whereas configurations that preserve higher symmetry display structural rigidity and weaker regulation despite high outcome accuracy. These findings reposition cognitive bias as a structural force shaping adaptive rationality in algorithmic decision systems and advance design science research by expressing cognitive balance as measurable computational indicators for process-level analysis of regulated decision dynamics.

Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Erlan Zhaparov

,

Burul Shambetova

Abstract: The Minimum Vertex Cover (MVC) problem is NP-hard even on unit disk graphs (UDGs), which model wireless sensor networks and other geometric systems. This paper presents an experimental comparison of three greedy algorithms for MVC on UDGs: degree-based greedy, edge-based greedy, and the classical 2-approximation based on maximal matching. Our evaluation on randomly generated UDGs with up to 500 vertices shows that the degree-based heuristic achieves approximation ratios between 1.636 and 1.968 relative to the maximal matching lower bound, often outperforming the theoretical 2-approximation bound in practice. However, it provides no worst-case guarantee. In contrast, the matching-based algorithm consistently achieves the proven 2-approximation ratio while offering superior running times (under 11 ms for graphs with 500 vertices). The edge-based heuristic demonstrates nearly identical performance to the degree-based approach. These findings highlight the practical trade-off between solution quality guarantees and empirical performance in geometric graph algorithms, with the matching-based algorithm emerging as the recommended choice for applications requiring reliable worst-case bounds.
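The classical matching-based 2-approximation evaluated in this comparison can be sketched in a few lines (an illustrative version, not the authors' implementation): greedily build a maximal matching and take both endpoints of every matched edge.

```python
# Illustrative matching-based 2-approximation for Minimum Vertex Cover:
# every edge with two uncovered endpoints is added to the matching, and both
# of its endpoints join the cover.
import networkx as nx

def matching_vertex_cover(G):
    cover = set()
    for u, v in G.edges():
        if u not in cover and v not in cover:   # edge not yet covered: match it
            cover.update((u, v))
    return cover

# Every edge gains a covered endpoint (the matching is maximal), and
# |cover| <= 2 * OPT because the matched edges are vertex-disjoint, so any
# cover must use at least one vertex per matched edge.
```

For example, on a 5-cycle the greedy pass matches two edges, yielding a cover of four vertices against an optimum of three, within the factor-2 guarantee.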



Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2026 MDPI (Basel, Switzerland) unless otherwise stated