Computer Science and Mathematics


Article
Computer Science and Mathematics
Computational Mathematics

Kumudu Harshani Samarappuli

,

Iman Ardekani

,

Mahsa Mohaghegh

,

Abdolhossein Sarrafzadeh

Abstract: This paper presents a framework for synthesizing bee bioacoustic signals associated with hive events. While existing approaches like WaveGAN have shown promise in audio generation, they often fail to preserve the subtle temporal and spectral features of bioacoustic signals critical for event-specific classification. The proposed method, MCWaveGAN, extends WaveGAN with a Markov Chain refinement stage, producing synthetic signals that more closely match the distribution of real bioacoustic data. Experimental results show that this method captures signal characteristics more effectively than WaveGAN alone. Furthermore, when integrated into a classifier, synthesized signals improved hive status prediction accuracy. These results highlight the potential of the proposed method to alleviate data scarcity in bioacoustics and support intelligent monitoring in smart beekeeping, with broader applicability to other ecological and agricultural domains.
Article
Computer Science and Mathematics
Computational Mathematics

Zhisong Li

Abstract: The solution of the incompressible Navier-Stokes equations with a pressure Poisson equation has been an area of intensive research and extensive application. Nevertheless, the proper formulation of pressure boundary conditions for solid walls has remained theoretically unestablished, since the existing specifications are unable to generate a truly solenoidal velocity field (i.e., a velocity field with zero divergence). Some prior studies have claimed divergence-free boundary conditions, but these claims lack rigorous justification. In this study, we clarify a long-standing misconception and point out that the usual pressure boundary condition for solid walls makes the system ill-posed and non-divergence-free, because in theory the system does not need any pressure boundary condition at all. However, solving the pressure Poisson equation still requires a boundary closure. To resolve this contradiction, we propose a new pressure Poisson equation and an associated Neumann boundary condition based on the corrective pressure. They are derived in detail, examined for pressure compatibility, analyzed for numerical stability with the normal mode method, and validated successfully on test problems. The new formulation produces strictly divergence-free velocity solutions with enhanced accuracy and stability due to its theoretical soundness.
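
For context on the setting this abstract revisits, the conventional pressure Poisson formulation (not the paper's new corrective-pressure version) is obtained by taking the divergence of the incompressible momentum equation; a minimal sketch:

```latex
% Conventional pressure Poisson background (illustrative; the paper's
% corrective-pressure formulation differs).
\begin{aligned}
&\mathbf{u}_t + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\nabla p + \nu\,\nabla^2\mathbf{u},
\qquad \nabla\cdot\mathbf{u} = 0,\\[2pt]
&\text{taking the divergence and using } \nabla\cdot\mathbf{u}=0:\qquad
\nabla^2 p = -\nabla\cdot\big[(\mathbf{u}\cdot\nabla)\mathbf{u}\big],\\[2pt]
&\text{usual wall closure (normal momentum at a no-slip wall):}\qquad
\frac{\partial p}{\partial n} = \nu\,\mathbf{n}\cdot\nabla^2\mathbf{u}.
\end{aligned}
```

It is exactly this Neumann closure that the abstract argues does not yield a solenoidal velocity field, motivating the corrective-pressure formulation.
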
Article
Computer Science and Mathematics
Computational Mathematics

Jean Chien

,

Lily Chuang

,

Eric Lee

Abstract: Nanoimprint lithography (NIL) master fidelity is governed by coupled variations beginning with resist spin-coating, proceeding through electron-beam exposure, and culminating in anisotropic etch transfer. We present an integrated, physics-based simulation chain. First, it includes a spin-coating thickness model that combines Emslie–Meyerhofer scaling with a Bornside edge correction. The simulated wafer-scale map at 4000 rpm exhibits the canonical center-rise and edge-bead profile with a 0.190–0.206 μm thickness range, while the locally selected 600 nm × 600 nm tile shows <0.1 nm variation, confirming an effectively uniform region for downstream analysis. Second, it couples an e-beam lithography (EBL) module in which column electrostatics and trajectory-derived spot size feed a hybrid Gaussian–Lorentzian proximity kernel; under typical operating conditions ( ≈ 2–5 nm), the model yields low CD bias (ΔCD = 2.38/2.73 nm), controlled LER (2.18/4.90 nm), and stable NMSE (1.02/1.05) for isolated versus dense patterns. Finally, the exposure result is passed to a level-set reactive ion etching (RIE) model with angular anisotropy and aspect-ratio-dependent etching (ARDE), which reproduces density-dependent CD shrinkage trends (4.42% versus 7.03%) consistent with transport-limited profiles in narrow features. Collectively, the simulation chain accounts for stage-to-stage propagation—from spin-coating thickness variation and EBL proximity to ARDE-driven etch behavior—while reporting OPC-aligned metrics such as NMSE, ΔCD, and LER. In practice, mask process correction (MPC) is necessary rather than optional: the simulator provides the predictive model, metrology supplies updates, and constrained optimization sets dose, focus, and etch set-points under CD/LER requirements.
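
A minimal sketch of the first stage of such a chain, assuming the Emslie-Bonner-Peck spin-thinning law and a simple Gaussian edge-bead correction (the material constants and the correction form are illustrative, not the authors' calibrated model):

```python
# Hedged sketch of the spin-coating stage: Emslie-Bonner-Peck thinning with an
# assumed radial edge-bead enhancement.  All constants are placeholders.
import numpy as np

def ebp_thickness(h0, omega, t, rho=1000.0, eta=0.05):
    """Emslie-Bonner-Peck solution of dh/dt = -2*rho*omega^2*h^3/(3*eta)."""
    return h0 / np.sqrt(1.0 + 4.0 * rho * omega**2 * h0**2 * t / (3.0 * eta))

def wafer_map(rpm=4000, h0=1e-6, t=30.0, R=0.05, n=201):
    """Radial thickness map with an assumed Gaussian edge-bead enhancement."""
    omega = 2.0 * np.pi * rpm / 60.0          # spin speed in rad/s
    r = np.linspace(0.0, R, n)                # wafer radius coordinate [m]
    h_flat = ebp_thickness(h0, omega, t)      # uniform EBP thickness
    edge_bead = 1.0 + 0.05 * np.exp(-((r - R) / (0.1 * R))**2)  # assumption
    return r, h_flat * edge_bead

if __name__ == "__main__":
    r, h = wafer_map()
    print(f"thickness range: {h.min()*1e6:.3f}-{h.max()*1e6:.3f} um")
```
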
Article
Computer Science and Mathematics
Computational Mathematics

Huda Naji Hussein

,

Dhiaa Halboot Muhsen

Abstract: Metaheuristic algorithms have become the gold standard for problems whose complexity and non-linearity place them beyond standard computational methods. However, the global performance of these algorithms is strongly linked to how the population is structured and to the mechanism for replacing the worst solutions within the population. In this paper, an Adaptive Artificial Hummingbird Algorithm (AAHA), a new version of the basic AHA, is introduced; it is designed to enhance performance by studying the impact of different population initialization methods within a broad and continual migration scheme. For the initialization phase, four methods, the Gaussian chaotic map, the Sinus chaotic map, opposition-based learning (OBL), and the Diagonal Uniform Distribution (DUD), are proposed as alternatives to random population initialization. A new strategy is also proposed for replacing the worst solution in the migration phase: the worst solution is replaced by the best solution combined with a simple and effective local search. The proposed strategy stimulates exploitation and exploration through the best solution and the local search, respectively. The proposed AAHA algorithm is tested on various benchmark functions with different characteristics under a range of statistical indices and tests. Additionally, the AAHA results are benchmarked against other optimization algorithms to assess its effectiveness. The proposed AAHA algorithm outperformed the alternatives in both speed and reliability, and DUD-based initialization enabled the fastest convergence and optimal solutions. These findings underscore the significance of initialization in metaheuristics and highlight the efficacy of the AAHA algorithm for complex continuous optimization problems.
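
A minimal sketch of one of the listed initializers, opposition-based learning (OBL); the toy bounds and fitness are assumptions, and the chaotic-map and DUD initializers would simply replace the random candidate generation:

```python
# Hedged sketch of OBL initialization: generate random candidates, add their
# opposites lb + ub - x, and keep the fittest half as the starting population.
import numpy as np

def obl_init(pop_size, dim, lb, ub, fitness, rng=None):
    rng = rng or np.random.default_rng(0)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    rand_pop = lb + rng.random((pop_size, dim)) * (ub - lb)   # random candidates
    opp_pop = lb + ub - rand_pop                              # their opposites
    merged = np.vstack([rand_pop, opp_pop])
    scores = np.apply_along_axis(fitness, 1, merged)
    return merged[np.argsort(scores)[:pop_size]]              # keep best half

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x**2))                    # toy objective
    pop = obl_init(pop_size=30, dim=5, lb=[-10]*5, ub=[10]*5, fitness=sphere)
    print("best initial fitness:", sphere(pop[0]))
```
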
Article
Computer Science and Mathematics
Computational Mathematics

Klaus Röbenack

,

Daniel Gerbet

Abstract: Bounds for positive definite sets such as attractors of dynamical systems are typically characterized by Lyapunov-like functions. These Lyapunov functions and their time derivatives must satisfy certain definiteness conditions, whose verification usually requires considerable experience. If the system and a Lyapunov-like candidate function are polynomial, the definiteness conditions lead to Boolean combinations of polynomial equations and inequalities with quantifiers that can be formally solved using quantifier elimination. Unfortunately, the known algorithms for quantifier elimination require considerable computing power, meaning that many problems cannot be solved within a reasonable amount of time. In this context, it is particularly important to find a suitable mathematical formulation of the problem. This article develops a method that reduces the expected computational effort required for the necessary verification of definiteness conditions. The approach is illustrated using the example of the Chua system with cubic nonlinearity.
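
As a worked illustration of the kind of quantified definiteness condition handed to quantifier elimination (the paper's exact formulation may differ), for a polynomial system with right-hand side f and a polynomial candidate V:

```latex
% Illustrative quantified formula for certifying a sublevel set of V as a
% bound for the attractor of \dot{x} = f(x); the free variable is c.
\exists\, c > 0 \;\; \forall x \in \mathbb{R}^n:\quad
  V(x) \ge c \;\Longrightarrow\; \dot V(x) = \nabla V(x)\cdot f(x) < 0 .
% Eliminating the quantified x leaves a polynomial condition on c (and on any
% free coefficients of V) that certifies {x : V(x) <= c} as an absorbing set.
```
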
Article
Computer Science and Mathematics
Computational Mathematics

Ricardo Adonis Caraccioli Abrego

Abstract: This paper presents an analytic and computational framework to study noise in the distribution of prime numbers through layers based on multi-step prime gaps. Given the n-th prime p_n, we consider the differences g(k,n) := p_{n+k} − p_n, group these distances for different values of k and normalize them by an appropriate logarithmic scale. This produces, for each k, a discrete-time signal that can be analysed with tools from signal processing and control theory (Fourier analysis, resonators, RMS sweeps, and linear combinations). The main conceptual point is that all these layers should inherit the same underlying “noise” coming from the nontrivial zeros of the Riemann zeta function, up to deterministic filtering factors depending on k. We make this idea precise in a heuristic spectral inheritance hypothesis, motivated by the explicit formula and the change of variables t = log x. We then implement the construction computationally for hundreds of thousands of primes and several layers k, and we measure the response of bandpass resonators as a function of their central frequency. The numerical results show three phenomena: (i) the magnitude spectra of the layers share prominent peaks at the same frequencies, (ii) the RMS responses of resonators for different layers display aligned “hills” in the frequency domain, and (iii) a nearly perfect flattening is obtained by taking an appropriate linear combination of layers obtained via singular value decomposition. These findings support the picture that the same spectral content is present across layers and that the differences lie mainly in deterministic gain factors. The framework is deliberately modest from the point of view of number theory: we do not attempt to prove new theorems about primes, but rather to provide a coherent signal-processing view in which standard tools from electrical engineering can be used to explore the noisy side of the prime distribution in a structured way.
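
A minimal sketch of the layer construction, assuming normalization by the expected gap k·log(p_n) (the paper's normalization and resonator bank may differ):

```python
# Hedged sketch: build the k-step gap layers g(k, n) = p_{n+k} - p_n,
# normalize them to mean ~ 1, and look at their magnitude spectra.
import numpy as np
from sympy import prime

def layers(num_primes=2000, ks=(1, 2, 3)):
    p = np.array([prime(i) for i in range(1, num_primes + 1)], dtype=float)
    out = {}
    for k in ks:
        g = p[k:] - p[:-k]                    # k-step gaps
        signal = g / (k * np.log(p[:-k]))     # assumed logarithmic normalization
        out[k] = signal - signal.mean()       # remove DC before the FFT
    return out

if __name__ == "__main__":
    for k, s in layers().items():
        spectrum = np.abs(np.fft.rfft(s))
        peak = np.argmax(spectrum[1:]) + 1    # skip the zero-frequency bin
        print(f"layer k={k}: dominant frequency bin {peak}")
```
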
Article
Computer Science and Mathematics
Computational Mathematics

Efthimios Providas

,

Ioannis N. Parasidis

,

Jeyhun E. Musayev

Abstract: Boundary value problems for systems of integrodifferential equations appear in all branches of science and engineering. Accuracy in modeling complex processes requires the specification of nonlocal boundary conditions, including multipoint and integral conditions. These kinds of problems are even harder to solve. In this paper, we present solvability criteria and a direct operator method for constructing the exact solution to systems of linear ordinary integrodifferential equations with general nonlocal boundary conditions. Several examples are solved to demonstrate the effectiveness of the method. The results equally apply to nonlocal boundary value problems for systems of ordinary differential, loaded differential, and loaded integrodifferential equations.
Article
Computer Science and Mathematics
Computational Mathematics

Yueshui Lin

Abstract: This paper proposes a novel reversible arithmetic framework that addresses the irreversible information loss problem in traditional arithmetic systems caused by specific operations such as multiplication by zero and division by zero. The system extends each numerical value into a mathematical object containing both the current value and complete computational history, redefining the four fundamental arithmetic operations to ensure traceability of computational processes. We establish the axiomatic foundation of this system, prove its key properties, and demonstrate its practical application value through error source tracing in differential equation solving, financial modeling, and machine learning. Theoretical analysis shows that while maintaining computational reversibility, the system controls space complexity within acceptable limits through optimization techniques.
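
A minimal sketch of the value-plus-history idea, in which a multiplication by zero remains undoable because the operand log can be replayed; the actual axioms and optimizations in the paper may differ:

```python
# Hedged sketch: each number carries its operation log, so even y = x * 0
# can be reversed by replaying the history from the original value.
from dataclasses import dataclass, field

@dataclass
class RevNum:
    value: float
    history: list = field(default_factory=list)   # [(op, operand), ...]

    def __post_init__(self):
        self._origin = self.value                  # starting value for replay

    def add(self, k):
        self.history.append(("add", k)); self.value += k; return self

    def mul(self, k):
        self.history.append(("mul", k)); self.value *= k; return self

    def undo(self):
        """Recover the pre-operation value, even after multiplication by zero."""
        op, k = self.history.pop()
        if op == "add":
            self.value -= k
        elif k != 0:
            self.value /= k
        else:                                      # value was lost; replay the log
            self.value = self._replay()
        return self

    def _replay(self):
        v = self._origin
        for op, k in self.history:
            v = v + k if op == "add" else v * k
        return v

if __name__ == "__main__":
    x = RevNum(7.0).add(3.0).mul(0.0)
    print(x.value)          # 0.0 after multiplying by zero
    print(x.undo().value)   # 10.0, recovered from the history
```
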
Article
Computer Science and Mathematics
Computational Mathematics

Piotr Sulewski

,

Damian Stoltmann

Abstract: The first (main) aim of the article is to define and practically apply the parameterized Kolmogorov-Smirnov goodness-of-fit test for normality. These modifications consist in varying the formula for calculating the empirical distribution function (EDF). The second contribution is to expand the EDF family with four new proposals. The third contribution is to create a family of alternative distributions, consisting of both older and newer distributions that, thanks to their flexibility, belong to all groups of skewness and kurtosis signs. Critical values are obtained using 10^6 order statistics for sample sizes n = 10, 20 and at a significance level α = 0.05. The fourth contribution is to calculate the power of the analyzed tests for alternative distributions based on 10^5 values of test statistics, with parameters selected so that they are similar to the normal distribution in various ways. The effectiveness of the analyzed tests is illustrated by analyzing real datasets.
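
A minimal sketch of a parameterized-EDF Kolmogorov-Smirnov statistic; the plotting-position form (i − c)/(n + 1 − 2c) stands in for the paper's EDF family, which may differ:

```python
# Hedged sketch: KS-type distance to a fitted normal CDF using a
# parameterized empirical distribution function (assumed form).
import numpy as np
from scipy.stats import norm

def ks_normality(x, c=0.0):
    x = np.sort(np.asarray(x, float))
    n = len(x)
    z = (x - x.mean()) / x.std(ddof=1)        # standardize the sample
    F = norm.cdf(z)                            # fitted normal CDF
    i = np.arange(1, n + 1)
    edf = (i - c) / (n + 1 - 2 * c)            # parameterized EDF
    return np.max(np.abs(edf - F))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sample = rng.normal(size=20)
    for c in (0.0, 0.3, 0.5):                  # c = 0.5 gives (i - 0.5)/n
        print(f"c={c:.1f}: D = {ks_normality(sample, c):.4f}")
```
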
Article
Computer Science and Mathematics
Computational Mathematics

Venelin Todorov

,

Ivan Dimov

Abstract: This paper presents a comprehensive stochastic framework for solving Fredholm integral equations of the second kind, combining biased and unbiased Monte Carlo approaches with modern variance-reduction techniques. Starting from the Liouville-Neumann series representation, several biased stochastic algorithms are formulated and analyzed, including the classical Markov Chain Monte Carlo (MCM), Crude Monte Carlo (CMCM), and quasi-Monte Carlo methods based on modified Sobol sequences (MCA-MSS-1, MCA-MSS-2, and MCA-MSS-2-S). Although these algorithms achieve substantial accuracy improvements through stratification and symmetrisation, they remain limited by the systematic truncation bias inherent in finite-series approximations. To overcome this limitation, two unbiased stochastic algorithms are developed and investigated: the classical Unbiased Stochastic Algorithm (USA) and a new enhanced version, the Novel Unbiased Stochastic Algorithm (NUSA). The NUSA method introduces adaptive absorption control and kernel-weight normalization, yielding significant variance reduction while preserving exact unbiasedness. Theoretical analysis and numerical experiments confirm that NUSA provides superior stability and accuracy, achieving uniform sub-10^-3 relative errors in both one- and multi-dimensional problems, even for strongly coupled kernels with ∥K∥_{L2} ≈ 1. Comparative results show that the proposed NUSA algorithm combines the robustness of unbiased estimation with the efficiency of quasi-Monte Carlo variance reduction. It offers a scalable, general-purpose stochastic solver for high-dimensional Fredholm integral equations, making it well-suited for advanced applications in computational physics, engineering analysis, and stochastic modeling.
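
A minimal sketch of the classical unbiased random-walk (collision) estimator that such methods build on; the absorption probability is fixed here, whereas NUSA adapts it and normalizes the kernel weights:

```python
# Hedged sketch: unbiased collision estimator for a 1-D Fredholm equation of
# the second kind, u(x) = f(x) + ∫_0^1 K(x,y) u(y) dy (textbook scheme only).
import numpy as np

def walk_estimate(x0, K, f, absorb=0.3, rng=None):
    """One realization of the Neumann-series estimator started at x0."""
    rng = rng or np.random.default_rng()
    x, w, total = x0, 1.0, f(x0)
    while rng.random() > absorb:              # survive with probability 1 - absorb
        y = rng.random()                      # next state ~ Uniform(0, 1)
        w *= K(x, y) / (1.0 - absorb)         # importance weight update
        total += w * f(y)
        x = y
    return total

if __name__ == "__main__":
    K = lambda x, y: 0.5 * x * y              # ||K|| < 1, so the series converges
    f = lambda x: x
    # Exact solution for this kernel: u(x) = 1.2 x, hence u(0.5) = 0.6.
    rng = np.random.default_rng(0)
    est = np.mean([walk_estimate(0.5, K, f, rng=rng) for _ in range(100000)])
    print(f"MC estimate u(0.5) ≈ {est:.4f} (exact 0.6)")
```
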
Article
Computer Science and Mathematics
Computational Mathematics

Ricardo Adonis Caraccioli Abrego

Abstract: We present a framework to implement Boolean logic using only smooth functions (sums, products, and low-cost nonlinearities) without hard thresholds or explicit comparators, while exactly preserving truth tables on the Boolean vertices {0,1}. The core is a family of polynomial gate primitives (AND, OR, XOR, NAND, etc.) that are exact at the vertices and are interleaved with a smooth booleanizer B_n(x) = x^n / (x^n + (1−x)^n) that stabilizes trajectories toward the attractors 0 and 1 without discretization. We prove (i) exactness on the Boolean vertices; (ii) existence of attractors with local contraction for B_n; (iii) a block-contraction condition that yields stability in deep cascades; and (iv) a phase/harmonic variant via the encoding s = 2b − 1 ∈ {−1, +1} (equivalently s = cos(π(1 − b))). We synthesize a continuous half-adder and a continuous SR latch (smooth ODEs), and we outline an analog-hardware route using standard op-amp adders/scalers, analog multipliers, and exponential (log-antilog) cells. We discuss sensitivity, cost, speed, and robustness under variation, and we identify research directions linking continuous logic to neuromorphic and AI-centric analog-digital co-design.
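
A minimal sketch using the standard multilinear gate polynomials together with the booleanizer B_n; the paper's gate primitives and cascade analysis may differ:

```python
# Hedged sketch: multilinear polynomial gates that are exact on {0,1},
# composed with the booleanizer B_n(x) = x^n / (x^n + (1-x)^n).
AND = lambda a, b: a * b
OR  = lambda a, b: a + b - a * b
XOR = lambda a, b: a + b - 2 * a * b
NOT = lambda a: 1 - a

def booleanize(x, n=4):
    """B_n pushes values in (0,1) toward the attractors 0 and 1."""
    return x**n / (x**n + (1 - x)**n)

def half_adder(a, b, n=4):
    """Continuous half-adder: sum = XOR, carry = AND, re-booleanized."""
    return booleanize(XOR(a, b), n), booleanize(AND(a, b), n)

if __name__ == "__main__":
    for a in (0, 1):                      # exact on the Boolean vertices
        for b in (0, 1):
            print(a, b, half_adder(a, b))
    print(half_adder(0.9, 0.15))          # noisy inputs pulled back toward {0,1}
```
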
Article
Computer Science and Mathematics
Computational Mathematics

Anshuman Sahoo

,

Raghunath Rout

Abstract: Reconstructing the continuous, three-dimensional (3D) distribution of intracranial electric potential from sparse, non-invasive scalp electroencephalography (EEG) is a central inverse problem in computational neuroimaging. This work introduces the Electro-Diffusion Physics-Informed Neural Network (ED-PINN), a coordinate-based neural representation that enforces the governing quasi-static Maxwellian electro-diffusion equation, ∇ · (σ(x)∇ϕ(x)) = −I(x), as a soft constraint during training. By parameterizing the potential field ϕ(x) as a continuous function, ED-PINN integrates sparse electrode measurements, Dirichlet/Neumann boundary conditions, and collocation-based PDE residuals into a single, unified objective function. This mesh-free approach enables the reconstruction of physically consistent, differentiable volumetric fields without the need for explicit domain meshing required by traditional methods like FEM or BEM. We demonstrate the efficacy of this approach on a canonical three-layer spherical head model with realistic tissue conductivities and synthetic Gaussian sources. We present a quantitative and qualitative evaluation, analyze primary error sources, and outline a clear roadmap for extensions to anatomically realistic geometries and anisotropic conductivity tensors derived from diffusion MRI. The experiments show that ED-PINN produces smooth, differentiable potential fields and localizes sources to sub-centimeter accuracy under the studied conditions. The paper includes detailed implementation notes, training recipes suitable for Colab/CPU environments, and a curated bibliography to ensure reproducibility.
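
A minimal sketch (PyTorch) of the loss structure described above, with a placeholder network, homogeneous conductivity, and a synthetic Gaussian source standing in for the paper's configuration:

```python
# Hedged sketch: penalize the divergence-form residual div(sigma*grad(phi)) + I
# at collocation points alongside a data-fit term at electrode locations.
import torch

phi_net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
sigma = lambda x: torch.ones(x.shape[0], 1)                         # placeholder conductivity
source = lambda x: torch.exp(-50.0 * (x**2).sum(1, keepdim=True))   # placeholder Gaussian I(x)

def pde_residual(x):
    x = x.clone().requires_grad_(True)
    phi = phi_net(x)
    grad_phi = torch.autograd.grad(phi.sum(), x, create_graph=True)[0]
    flux = sigma(x) * grad_phi                          # sigma * grad(phi)
    div = 0.0
    for i in range(3):                                  # divergence, term by term
        div = div + torch.autograd.grad(flux[:, i].sum(), x, create_graph=True)[0][:, i:i+1]
    return div + source(x)                              # should vanish in the volume

def loss(x_colloc, x_elec, v_elec):
    pde = (pde_residual(x_colloc) ** 2).mean()
    data = ((phi_net(x_elec) - v_elec) ** 2).mean()
    return data + 1e-2 * pde                            # soft-constraint weighting (assumed)

if __name__ == "__main__":
    opt = torch.optim.Adam(phi_net.parameters(), lr=1e-3)
    x_c = torch.rand(256, 3) * 2 - 1                    # collocation points in [-1, 1]^3
    x_e = torch.rand(32, 3); v_e = torch.zeros(32, 1)   # dummy electrode data
    for step in range(200):
        opt.zero_grad(); l = loss(x_c, x_e, v_e); l.backward(); opt.step()
    print("final loss:", float(l))
```
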

Article
Computer Science and Mathematics
Computational Mathematics

Dongxing Wang

,

Xiaoxiao Qian

Abstract: To improve the classification accuracy of the K-means model for coal-gangue images, this paper proposes a Coal-Gangue-Image Classification Method (AF-WPOA-Kmeans-CCM) based on K-means and an improved wolf pack optimization algorithm using an Adaptive Adjustment Factor. Firstly, an improved wolf pack optimization algorithm (AF-WPOA) is proposed that uses an Adaptive Adjustment Factor strategy (AF) to dynamically adjust the population update amount, maintaining the genetic diversity of the population while accelerating convergence, together with ASGS, which enhances the algorithm's global exploration capability in the hunting mechanism and its local exploitation power in the siege mechanism. Secondly, different weights are assigned to each feature dimension to more accurately reflect the varying importance of the feature dimensions in the classification results, while a Hilbert-space mapping is used to overcome the low feature dimensionality of the images, yielding higher classification accuracy in the high-dimensional space. Finally, AF-WPOA-Kmeans-CCM is formed by using AF-WPOA to optimize the K-means clustering model for accurate classification of coal-gangue images. Experimental results show that AF-WPOA outperforms GA, PSO and LWCA, while AF-WPOA-Kmeans-CCM performs better than Kmeans-CCM, GA-Kmeans-CCM, PSO-Kmeans-CCM, and LWCA-Kmeans-CCM, achieving 93.03% and 91.55% classification accuracy on the training and testing sets, respectively. However, coal-gangue images often have diverse sources and significant feature differences, which makes it hard for fixed cluster centers to fully represent the numerous, multi-source, and highly variable unknown samples to be predicted.
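
A minimal sketch of the feature-weighted K-means assignment that the metaheuristic would tune; the weights and data here are toy values, and the kernel (Hilbert-space) mapping is omitted:

```python
# Hedged sketch: K-means with per-feature weights in the distance; in the
# full method the weights (and centers) would be optimized by AF-WPOA.
import numpy as np

def weighted_assign(X, centers, w):
    """Assign each sample to the nearest center under a weighted metric."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2 * w).sum(axis=2)
    return d2.argmin(axis=1)

def weighted_kmeans(X, k, w, iters=50, rng=None):
    rng = rng or np.random.default_rng(0)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = weighted_assign(X, centers, w)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers, labels

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
    w = np.array([2.0, 1.0, 0.5, 0.5])        # toy per-feature importance weights
    centers, labels = weighted_kmeans(X, 2, w)
    print("cluster sizes:", np.bincount(labels))
```
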
Communication
Computer Science and Mathematics
Computational Mathematics

Philippe Sainty

Abstract: In this article, a proof of the binary Goldbach conjecture via Chen's weak conjecture is established (any integer greater than three is the sum and the difference of two positive primes). To this end, a "localised" algorithm is developed for the construction of two recurrent sequences of extreme Goldbach decomponents (U_2n) and (V_2n), with (U_2n) dependent on (V_2n), verifying: for any integer n ≥ 2, U_2n and V_2n are positive primes and U_2n + V_2n = 2n. To form them, a third sequence of primes (W_2n) is defined for any integer n ≥ 3 by W_2n = sup{p ∈ P : p ≤ 2n − 3}, P denoting the set of positive primes. The Goldbach conjecture has been verified for all even integers 2n between 4 and 4·10^18 and in the neighbourhood of 10^100, 10^200 and 10^300 for intervals of amplitude 10^9. The table of extreme Goldbach decomponents, compiled using the programs in Appendix 15 and written with the Maxima and Maple scientific computing software, as well as files from ResearchGate, Internet Archive, and the OEIS, reaches values of the order of 2n = 10^5000. Algorithms for locating Goldbach decomponents for very large values of 2n are also proposed. In addition, a global proof by strong recurrence (a "finite ascent and descent" method) on all the Goldbach decomponents is provided by using sequences of primes (Wq_2n) defined by Wq_2n = sup{p ∈ P : p ≤ 2n − q} for any odd positive prime q, and a further proof by Euclidean divisions of 2n by its two assumed extreme Goldbach decomponents is announced by identifying uniqueness, coincidence and consistency of the two operations. Next, a majorization of U_2n by n^0.525, by 0.7 ln^2.2(n) with probability one, and by 5 ln^1.3(n) on average for any sufficiently large integer n is justified. Finally, the Lagrange-Lemoine-Levy (3L) conjecture and its generalization, called the "Bachet-Bézout-Goldbach" (BBG) conjecture, are proven by the same type of method. In additional notes, we provide heuristic estimates for Goldbach's comet and present a graphical synthesis using a reversible Goldbach tree (parallel algorithm).
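
A minimal sketch of locating the extreme decomponents U_2n, V_2n and the auxiliary prime W_2n for a given even number, using sympy (illustrative only; the paper's localized algorithm is more elaborate):

```python
# Hedged sketch: smallest prime u with 2n - u also prime, the matching
# v = 2n - u, and W_2n = max{p prime : p <= 2n - 3}.
from sympy import isprime, prevprime, nextprime

def extreme_decomponents(m):
    """Return (u, v) with u + v = m, both prime, u minimal (m even, m >= 6)."""
    u = 2
    while not (isprime(u) and isprime(m - u)):
        u = nextprime(u)
    return u, m - u

def W(m):
    return prevprime(m - 2)          # largest prime <= m - 3

if __name__ == "__main__":
    for m in (20, 128, 10**6):
        u, v = extreme_decomponents(m)
        print(f"2n={m}: U={u}, V={v}, W={W(m)}")
```
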

Article
Computer Science and Mathematics
Computational Mathematics

Khidir Shaib Mohamed

,

Ibrahim M.A. Suliman

,

Abdalilah Alhalangy

,

Alawia Adam

,

Muntasir Suhail

,

Habeeb Ibrahim

,

Mona Ahmed Mohamed

,

Sofian A. A. Saad

,

Yousif Shoaib Mohammed

Abstract: Although wavelet neural networks (WNNs) combine the expressive capability of neural models with multiscale localization, there are currently few theoretical guarantees for their training. We investigate the optimization dynamics of gradient descent (GD) with weight decay (L2 regularization) for WNNs. First, in the fixed-feature regime, when the wavelet atoms are fixed and only the linear head is trained, we demonstrate global linear convergence to the unique ridge solution, with explicit rates controlled by the spectrum of the regularized Gram matrix. Second, for fully trainable WNNs, we demonstrate linear rates in regions satisfying a Polyak–Łojasiewicz inequality and establish convergence of GD to stationary points under standard smoothness and boundedness of the wavelet parameters; weight decay enlarges these regions by suppressing flat directions. Third, we characterize the implicit bias in the over-parameterized (NTK) regime: with L2 regularization, GD converges to the minimum-RKHS-norm interpolant associated with the WNN kernel. We supplement the theory with an empirical assessment on synthetic regression and denoising tasks, ablations across λ and step size, and practical recommendations on initialization, step-size schedules, and regularization scales. Together, our findings give a principled prescription for dependable training with broad applicability to signal processing, and shed light on when and why L2-regularized GD is stable and fast for WNNs.
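
A minimal sketch of the first (fixed-atoms) result: with frozen Mexican-hat atoms, gradient descent with weight decay on the linear head converges to the closed-form ridge solution; the dictionary, λ, and step size below are illustrative choices:

```python
# Hedged sketch: fixed wavelet features turn the head-only training problem
# into ridge regression, and GD with weight decay contracts geometrically
# toward the closed-form ridge solution.
import numpy as np

def mexican_hat(t, center, scale):
    u = (t - center) / scale
    return (1.0 - u**2) * np.exp(-u**2 / 2.0)

rng = np.random.default_rng(0)
t = np.linspace(-3, 3, 200)
y = np.sin(2 * t) + 0.1 * rng.normal(size=t.size)             # toy regression target

centers = np.linspace(-3, 3, 15)                               # fixed wavelet dictionary
Phi = np.stack([mexican_hat(t, c, 0.5) for c in centers], 1)   # feature matrix (200, 15)

lam = 0.1
G = Phi.T @ Phi + lam * np.eye(Phi.shape[1])                   # regularized Gram matrix
w_ridge = np.linalg.solve(G, Phi.T @ y)                        # closed-form ridge solution

eta = 1.0 / np.linalg.eigvalsh(G).max()                        # stable step size 1/L
w = np.zeros(Phi.shape[1])
for _ in range(30000):                                         # GD with weight decay
    w -= eta * (Phi.T @ (Phi @ w - y) + lam * w)

print("||w_GD - w_ridge|| =", np.linalg.norm(w - w_ridge))     # geometric decay toward 0
```
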
Article
Computer Science and Mathematics
Computational Mathematics

Yuri Dimitrov

,

Slavi Georgiev

,

Radan Miryanov

,

Venelin Todorov

Abstract: Difference schemes for the numerical solution of fractional differential equations rely on discretizations of the fractional derivative. In earlier work \cite{d24}, we constructed approximations of the first derivative and applied them to fractional derivatives. In this paper, we extend the method from \cite{d24} to develop parameter-dependent approximations of the second derivative and second-order approximations of the fractional derivative based on the weights of the L1 scheme. We derive the second-order expansion formula of the L1 approximation and show that the coefficient of the second derivative is asymptotically equal to a value of the zeta function, as suggested by the generating function. Using this expansion, we construct a second-order approximation of the fractional derivative and the corresponding asymptotic approximation by a suitable choice of parameter. Examples illustrating the application of these approximations to the numerical solution of ordinary differential equations and fractional differential equations are presented. Both approximations of the fractional derivative are shown to yield second-order numerical methods. Numerical experiments are also provided, confirming the theoretical predictions for the accuracy order of the methods.
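
For reference, a minimal sketch of the classical L1 approximation of the Caputo derivative whose weights the construction builds on, checked against a function with a known fractional derivative:

```python
# Hedged sketch: L1 approximation of the Caputo derivative of order alpha in
# (0,1) with weights b_k = (k+1)^(1-alpha) - k^(1-alpha), tested on y(t) = t^2,
# whose Caputo derivative is 2 t^(2-alpha) / Gamma(3-alpha).
import numpy as np
from math import gamma

def l1_caputo(y, tau, alpha):
    """L1 approximation of the Caputo derivative at t_n = n*tau for all n >= 1."""
    n = len(y) - 1
    k = np.arange(n)
    b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)       # L1 weights
    dy = np.diff(y)                                     # y_{j+1} - y_j
    out = np.empty(n)
    for m in range(1, n + 1):                           # sum_{k=0}^{m-1} b_k * dy_{m-1-k}
        out[m - 1] = np.dot(b[:m], dy[m - 1::-1])
    return out * tau ** (-alpha) / gamma(2 - alpha)

if __name__ == "__main__":
    alpha, T = 0.5, 1.0
    for N in (40, 80, 160):
        t = np.linspace(0.0, T, N + 1)
        approx = l1_caputo(t**2, T / N, alpha)
        exact = 2 * t[1:] ** (2 - alpha) / gamma(3 - alpha)
        print(f"N={N:4d}  max error = {np.abs(approx - exact).max():.2e}")
```
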
Article
Computer Science and Mathematics
Computational Mathematics

Clara Dupont

,

Hugo Bernard

,

Saidi Kareem

,

Emilie Garnier

Abstract: Establishing a unified representational ground between vision and language remains one of the most persistent challenges in artificial intelligence. Existing image–text alignment paradigms primarily depend on explicit instance-level correspondences derived from paired data, which capture surface associations but neglect the deeper web of conceptual regularities that guide human understanding. In human cognition, interpretation is not a direct reflection of observation but a synthesis of shared experience, a consensus of how objects, actions, and contexts interrelate. The absence of such cognitive consensus in current systems hinders robustness, interpretability, and adaptability across domains. To address this limitation, we propose SYMBOL (Semantic–sYnthesis via Multimodal Behavioral knOwledge Linking), a new framework that aligns visual and linguistic semantics through collective concept integration. SYMBOL constructs a Cognitive Consensus Graph (CCG) derived from large-scale multimodal corpora, encoding co-occurrence regularities that emerge from collective human annotations and narrative descriptions. By propagating conceptual relationships across this graph, SYMBOL enriches conventional instance-level representations with consensus-driven knowledge priors. The resulting embedding space jointly models explicit perceptual alignment and implicit conceptual reasoning, enabling more resilient and cognitively coherent multimodal understanding. Comprehensive evaluations on multiple retrieval benchmarks demonstrate that SYMBOL significantly outperforms contemporary systems in bidirectional retrieval and cross-domain adaptation. Beyond quantitative gains, SYMBOL illustrates a new principle: integrating consensus-based semantic synthesis can transform multimodal learning from mere correlation fitting into cognitively grounded reasoning.
Article
Computer Science and Mathematics
Computational Mathematics

Rafael Garcia-Sandoval

Abstract: In this article, the logical concepts that underpin the translation of logic circuits are defined and presented. The intention is to apply these concepts directly to hardware and to the management of language codes in computer systems. The quaternary system has a wide range of applications, including data transmission, for example the 2B1Q code utilized in ISDN devices, and the analysis of Hilbert curves. Notably, the quaternary system plays a crucial role in the digitization of the human genome due to its direct relationship with the binary system, which is analogous to the base pairs found in DNA: the pair [0, 3] corresponds to the paired nucleotides {AT}, and the pair [1, 2] corresponds to the paired nucleotides {CG}. The base-four system bears a certain relation to the ternary system (0, 1, 2); the balanced ternary system has seen more advanced development, particularly with regard to NMAX and NMIN logic gates, which are similar to the binary NOR and NAND gates respectively. The balanced ternary system (-1, 0, +1) has played a substantial role in the development of computers, as evidenced by its application in the Setun computers at Moscow State University in 1958 and 1970, and more recently at Peking University, Beijing, China, where research was conducted on applying the ternary system to digital gates built from CNT-SGT transistors in MVL architectures and to ternary neural network (TNN) applications. It is therefore necessary to continue researching materials such as carbon nanotubes (CNTs) and new programming languages. This article describes technical proposals for the base-four system and introduces a new approach to a mixed-balanced quaternary system and its application to a new coding system based on the theory of Banach field functions. This approach complements intermediate values between true and false and has potential applications in quantum computing; the mixed-balanced quaternary system is an appropriate tool here, as the collapse of a wave function can take any intermediate value, not just zero or one. The investigation of this topic has also led to the quinary number system, which has great potential in this area and is introduced in the final pages of this study.
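
A minimal sketch of the DNA/quaternary correspondence stated above; the digit assignment within each complementary pair is an arbitrary choice:

```python
# Hedged sketch: bases map to quaternary digits (two bits each), and
# Watson-Crick complementation is the involution d -> 3 - d under the
# pairing {0,3} = {A,T}, {1,2} = {C,G}.
BASE_TO_DIGIT = {"A": 0, "C": 1, "G": 2, "T": 3}
DIGIT_TO_BASE = {v: k for k, v in BASE_TO_DIGIT.items()}

def dna_to_quaternary(seq):
    return [BASE_TO_DIGIT[b] for b in seq]

def complement(seq):
    """Watson-Crick complement via the quaternary involution d -> 3 - d."""
    return "".join(DIGIT_TO_BASE[3 - BASE_TO_DIGIT[b]] for b in seq)

def to_bits(seq):
    """Each quaternary digit is exactly two bits, as in 2B1Q-style coding."""
    return "".join(f"{d:02b}" for d in dna_to_quaternary(seq))

if __name__ == "__main__":
    s = "GATTACA"
    print(dna_to_quaternary(s))   # [2, 0, 3, 3, 0, 1, 0]
    print(complement(s))          # CTAATGT
    print(to_bits(s))             # 10001111000100
```
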
Article
Computer Science and Mathematics
Computational Mathematics

Mengyu Zhang

,

Qiang Ma

,

Qinglin Tang

Abstract: This work presents a comprehensive framework for the multiscale characterization of advanced composite structures, focusing on Mindlin plates and Timoshenko beams with periodic microstructures. Using a second-order two-scale (SOTS) method, we uniformly derive asymptotic expansions for deflection and rotation to formulate effective homogenized models. The developed finite element procedures combine techniques to overcome shear locking, enabling efficient and accurate structural application. Numerical simulations verified the convergence of the model and demonstrated its superiority in capturing the rapid oscillation of local stress distribution, an important aspect for the design of reliable composites. By varying the thickness parameter, our model is shown to connect thin and thick theories for composite beam and plate problems. It thus provides a practical tool for the design and analysis of tailored composite structures, contributing to the development of innovative materials.
Review
Computer Science and Mathematics
Computational Mathematics

John Abela

,

Ernest Cachia

,

Colin Layfield

Abstract: We revisit the P ? = NP question through the joint lenses of time-relative description complexity and automated discovery. Our premise is epistemic rather than ontological: even if polynomial-time algorithms for NP-complete problems exist, they may have very high Kolmogorov (description) complexity and thus be undiscoverable by unaided humans. Formal barriers (relativization, natural proofs, and algebrization) already suggest that familiar techniques are insufficient to separate or collapse P and NP. Regardless of the ultimate truth of P ? = NP, we argue that systematic search in high-K code spaces is valuable today: it may yield stronger heuristics, tighter exponential bases, improved approximation schemes, and fixed-parameter runtimes. To make such artifacts scientifically credible, we advocate a certificate-first workflow ([9,78]) that couples (i) polytime-by-construction skeletons with (ii) machine-checkable evidence (e.g., DRAT/FRAT logs; LP/SDP duals) and (iii) non-uniform search distilled into uniform algorithms. We also note empirical motivation from large language models: scaling laws and energy budgets indicate that high capacity often unlocks new emergent behaviors, while internal mappings remain complex and opaque. The overarching message is pragmatic: capacity (high descriptive complexity) plus certification may provide a principled path to better algorithms and clearer limits without presuming a resolution of P ? = NP. This paper is best described as a position/expository essay. We synthesize existing work from complexity theory, Kolmogorov complexity, and algorithmic discovery, and offer a rational justification for a shift in emphasis: from the elusive goal of discovering polynomial-time algorithms for NP-complete problems to the tractable and fruitful pursuit of discovering high-performance heuristics and approximation methods via automated search and learning.
