Computer Science and Mathematics


Article
Computer Science and Mathematics
Computational Mathematics

Samiksha B. C., Mohammad I. Merhi, Eric Raymond, Tika Puri

Abstract: We show that quantum phase estimation (QPE) resolution is a linear rescaling of the spectral gap of a unitary operator, making the correlation between these two quantities mathematically guaranteed. Using this observation, we demonstrate that spectral properties alone can reliably predict quantum algorithm performance without executing full circuit simulations. Through analysis of 400 randomly generated two-qubit unitaries and six standard quantum gates, we confirm that the spectral gap and QPE resolution exhibit a perfect Pearson correlation (r = 1.0000, p < 10⁻⁸), a relationship that persists across diverse gate families and extends to three-qubit systems. Building on this foundation, we introduce four novel spectral metrics and develop a machine learning framework that predicts algorithmic performance with 99.48% accuracy while achieving thousand-fold speedups over traditional simulation. Statistical validation includes bootstrap confidence intervals and Bonferroni correction for multiple comparisons. These results establish spectral analysis as an efficient and generalizable approach to quantum gate characterization, with immediate applications in gate library optimization, quantum compiler design, and algorithm–hardware co-design for near-term quantum devices.
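The claimed linear relation between spectral gap and QPE resolution can be illustrated with a short numpy sketch (not the authors' code; the Haar-random sampling via QR and the resolution proxy gap/(2π) are illustrative assumptions):

```python
import numpy as np

def random_unitary(n, rng):
    # Haar-distributed unitary via QR of a complex Ginibre matrix
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix column phases for Haar measure

def spectral_gap(u):
    # minimal circular distance between adjacent eigenphases
    phases = np.sort(np.angle(np.linalg.eigvals(u)) % (2 * np.pi))
    gaps = np.diff(np.append(phases, phases[0] + 2 * np.pi))
    return gaps.min()

rng = np.random.default_rng(42)
gap = np.array([spectral_gap(random_unitary(4, rng)) for _ in range(400)])
resolution = gap / (2 * np.pi)          # assumed proxy: phase precision QPE must resolve
r = np.corrcoef(gap, resolution)[0, 1]  # Pearson r is exactly 1 for a linear rescaling
```

Because the proxy is a fixed rescaling of the gap, the correlation is 1 by construction, which is precisely the abstract's point that the relationship is mathematically guaranteed rather than empirical.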

Article
Computer Science and Mathematics
Computational Mathematics

Abdelmajid Benahmed

Abstract: Prabhakar fractional PDEs require discretization of singular Mittag-Leffler kernels. This paper establishes three results: (1) a lemma proving that singularity extraction preserves O(∆x⁴) convergence via an analytical log-integral and Simpson quadrature; (2) a theorem with composite error bound O(∆t² + ∆x⁴ + exp(−ρn_max)); (3) stability validation across 125 parameter combinations (α × β × γ grid). Tests include grid refinement (Zeng comparison < 0.1%), discontinuous data (2nd-order), and 20 manufactured solutions across parameter space. Independent verification of the Karimov et al. (2025) Green’s function via the classical heat equation limit (agreement 4.0 × 10⁻¹⁰) confirms the construction.
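The Mittag-Leffler kernels at the heart of Prabhakar operators can be evaluated directly from their defining series for moderate positive arguments; this minimal sketch (not the paper's fourth-order singularity-extraction scheme) uses log-gamma to avoid overflow and checks two classical special cases:

```python
import math

def mittag_leffler(z, alpha, beta=1.0, kmax=200):
    # E_{alpha,beta}(z) = sum_k z^k / Gamma(alpha*k + beta), assuming z > 0;
    # terms computed via exp(k*log z - lgamma(...)) so large-k Gamma never overflows
    total = 0.0
    for k in range(kmax):
        total += math.exp(k * math.log(z) - math.lgamma(alpha * k + beta))
    return total

# classical special cases: E_{1,1}(z) = exp(z), E_{2,1}(z) = cosh(sqrt(z))
```

For large or complex arguments the series is inadequate and one would switch to integral representations, which is part of why careful quadrature of these kernels matters.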

Article
Computer Science and Mathematics
Computational Mathematics

Serge T. Rwego

Abstract: Separable polynomial constraints arise naturally in numerous engineering applications, including sensor positioning systems, robotic kinematics, and regularized optimization. While general-purpose iterative solvers are widely available, their computational overhead and non-deterministic convergence behavior can be limiting factors in real-time and resource-constrained environments. This paper presents the Parametric K-Formula (PK-Formula), a straightforward closed-form method for solving a specific but practically important class of separable polynomial equations. The method achieves O(n) computational complexity through parametric decomposition, offering 50–120× speedups over Newton-Raphson methods in benchmark tests while maintaining numerical accuracy comparable to iterative approaches. We provide complete implementation details, compare performance against standard solvers (MATLAB’s fsolve, Python’s scipy.optimize), and demonstrate practical applications in trilateration systems, regularized optimization, and trajectory control. The method’s simplicity enables straightforward implementation on embedded systems and microcontrollers, where computational resources are limited. Open-source implementations in MATLAB, Python, and C are provided in an accompanying repository. This work demonstrates that for the specific problem class of separable polynomial constraints, substantial practical benefits can be achieved through direct analytical approaches rather than general-purpose iterative methods.
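The PK-Formula itself is not reproduced in the abstract. As a generic illustration of the closed-form-versus-iterative trade-off it describes, consider a separable constraint Σᵢ aᵢxᵢ² = c under a parametric ansatz xᵢ = k·bᵢ (the ansatz, names, and data below are illustrative assumptions, not the paper's method):

```python
import numpy as np

def closed_form_k(a, b, c):
    # parametric decomposition x_i = k*b_i collapses sum_i a_i x_i^2 = c
    # into one scalar equation k^2 * S = c with S = sum_i a_i b_i^2
    return np.sqrt(c / np.sum(a * b**2))

def newton_k(a, b, c, k0=1.0, tol=1e-12, itmax=50):
    # scalar Newton iteration on phi(k) = k^2*S - c, for comparison
    s = np.sum(a * b**2)
    k = k0
    for _ in range(itmax):
        step = (k * k * s - c) / (2.0 * k * s)
        k -= step
        if abs(step) < tol:
            break
    return k

a = np.array([2.0, 1.0, 3.0])
b = np.array([1.0, 2.0, 0.5])
c = 10.0
kc = closed_form_k(a, b, c)
kn = newton_k(a, b, c)
```

Both routes agree, but the closed form costs one reduction and one square root, which is the kind of constant-factor advantage that matters on microcontrollers.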

Article
Computer Science and Mathematics
Computational Mathematics

Khidir Shaib Mohamed, Sofian A. A. Saad, Osman Osman, Naglaa Mohammed, Mona A. Mohamed, Alawia Adam, Yousif Shoaib Mohammed

Abstract: This paper examines the convergence of an online gradient learning method for training Pi–Sigma higher-order neural networks under a smoothed L1 regularization framework with adaptive momentum. Although Pi–Sigma networks can effectively represent high-order nonlinear interactions, nonconvexity, parameter coupling, and noise sensitivity make them difficult to train in online settings. To overcome these problems, we use a differentiable approximation of the L1 penalty, which promotes sparsity while avoiding the nondifferentiability of the traditional L1 norm. Furthermore, an adaptive momentum term is added to accelerate convergence and stabilize weight updates. Under mild assumptions on the activation functions, data sequence, and learning parameters, we develop a unified mathematical model for the proposed learning rule and establish key lemmas describing the behavior of the smoothed regularizer and the momentum dynamics. These results allow us to prove that the online method guarantees boundedness of the weight sequence, convergence of the gradients to zero, and monotonic decrease of the associated energy function. Numerical studies show that, compared with existing models, the proposed approach yields stable convergence, enhanced sparsity, and reduced gradient oscillation. Empirical plots of the loss evolution, gradient norms, and weight norms validate the theoretical results. The agreement between theory and experiment highlights the robustness of the proposed learning framework and its suitability for nonlinear function approximation and online learning tasks in higher-order neural networks.
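A minimal numpy sketch of the kind of training rule described above, assuming a single Pi–Sigma output unit, the common smoothing √(w² + ε) for |w|, a fixed (non-adaptive) momentum coefficient, and gradient clipping for stability; all of these simplifications are implementation assumptions, not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_units = 3, 2
W = 0.3 * rng.normal(size=(n_units, n_in))   # summing-layer weights
X = rng.normal(size=(200, n_in))
y = np.tanh(X @ rng.normal(size=n_in))       # toy regression target

def forward(W, x):
    return np.prod(W @ x)                    # Pi-Sigma: product of linear sums

def objective(W, lam=1e-3, eps=1e-4):
    mse = np.mean([(forward(W, x) - t)**2 for x, t in zip(X, y)])
    return 0.5 * mse + lam * np.sqrt(W**2 + eps).sum()   # smoothed L1 penalty

eta, mu, lam, eps = 0.005, 0.5, 1e-3, 1e-4
V = np.zeros_like(W)
loss0 = objective(W)
for _ in range(20):                          # online (per-sample) epochs
    for x, t in zip(X, y):
        s = W @ x
        err = np.prod(s) - t
        G = np.empty_like(W)
        for j in range(n_units):             # d(prod)/dW[j] = (prod of other sums) * x
            G[j] = err * np.prod(np.delete(s, j)) * x
        G += lam * W / np.sqrt(W**2 + eps)   # gradient of the smoothed L1 term
        G = np.clip(G, -1.0, 1.0)            # crude stabilizer, an implementation choice
        V = mu * V - eta * G                 # momentum update
        W = W + V
loss1 = objective(W)
```

The smoothed penalty's gradient w/√(w² + ε) tends to sign(w) as ε → 0, which is how the scheme recovers L1-style sparsity pressure while remaining differentiable everywhere.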

Article
Computer Science and Mathematics
Computational Mathematics

Asif Ullah, Muhammad Shuaib, Bakhtawar Shah

Abstract: Artificial neural networks (ANNs) are powerful models inspired by the structure and function of the human brain and are widely used for tasks such as classification, prediction, and pattern recognition. This study examines the dynamic behavior, synchronization, and stability of fractional-order neural networks with time delays. Synchronization and stability are two key aspects of the dynamic behavior of delayed neural network models. For a computed fractional order, the state variables wi(t) synchronize with one another. Weight synchronization of wi (i = 1, 2, 3, . . . , 6) provides coherent updates during training, helping neural networks learn stable models. Incommensurate fractional orders correspond to a system in which each dynamic component evolves with a different order, i.e., qi ≠ qj for i ≠ j. These fractional orders are computed from the system’s eigenvalues and their singular points within the stability region defined by the Matignon criterion. As the time delay decreases, more activation functions are induced, and the state variable w3(t) requires longer relaxation times to stabilize than the state variable w4(t). The Grünwald–Letnikov method is used to solve the fractional neural network system numerically and to handle the fractional derivatives effectively. This approach enables a more accurate simulation of memory effects in neural networks.
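The Grünwald–Letnikov discretization mentioned above approximates D^α f(tₙ) by h^(−α) Σⱼ wⱼ f(tₙ₋ⱼ), with binomial weights generated recursively. A minimal sketch (generic GL, not the paper's delayed network system), validated against the known result D^0.5 t = t^0.5/Γ(1.5):

```python
import numpy as np
from math import gamma

def gl_weights(alpha, n):
    # w_j = (-1)^j * binom(alpha, j), via the standard recursion
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def gl_derivative(f_vals, alpha, h):
    # Grunwald-Letnikov approximation of D^alpha f at the last grid point
    n = len(f_vals) - 1
    w = gl_weights(alpha, n)
    return (h ** -alpha) * np.dot(w, f_vals[::-1])

h = 1e-3
t = np.arange(0.0, 1.0 + h / 2, h)     # grid on [0, 1]
approx = gl_derivative(t, 0.5, h)      # f(t) = t, evaluated at t = 1
exact = 1.0 / gamma(1.5)               # D^0.5 t = t^0.5 / Gamma(1.5) at t = 1
```

The full history in the weighted sum is exactly the "memory" the abstract refers to: every past state contributes to the current derivative.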

Article
Computer Science and Mathematics
Computational Mathematics

Zhazgul Ablakeeva, Burul Shambetova

Abstract: This extensive review provides a thorough examination of first-order ordinary differential equations (ODEs), covering fundamental theoretical concepts, diverse analytical solution techniques, stability analysis methods, numerical approximation algorithms, and interdisciplinary applications. The paper systematically explores classical models including exponential growth, logistic dynamics, and cooling laws, while extending to advanced topics such as bifurcation analysis, stochastic extensions, and modern computational approaches. Special attention is given to the interplay between analytical and numerical methods, with practical examples drawn from ecology, physics, engineering, and biomedical sciences. The work serves as both an educational resource for students and a reference for researchers and practitioners working with dynamical systems across scientific domains.
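As a concrete instance of the analytical–numerical interplay the review emphasizes, the logistic equation x′ = r x (1 − x/K) has the closed form x(t) = K / (1 + (K/x₀ − 1) e^(−rt)), which can be checked against a forward-Euler approximation (parameter values below are arbitrary):

```python
import numpy as np

def logistic_exact(t, x0, r, K):
    # closed-form solution of x' = r*x*(1 - x/K)
    return K / (1.0 + (K / x0 - 1.0) * np.exp(-r * t))

def logistic_euler(x0, r, K, T, h):
    # forward Euler with step h on [0, T]
    n = int(round(T / h))
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = x[i] + h * r * x[i] * (1.0 - x[i] / K)
    return x

r, K, x0, T, h = 1.0, 10.0, 0.5, 10.0, 1e-3
num = logistic_euler(x0, r, K, T, h)
t = np.linspace(0.0, T, num.size)
err = np.max(np.abs(num - logistic_exact(t, x0, r, K)))
```

Both trajectories saturate at the carrying capacity K, and halving h roughly halves the error, the first-order behavior expected of Euler's method.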

Article
Computer Science and Mathematics
Computational Mathematics

Mahmoud Almasri, Tareq Alhmiedat, Mohammed Namazi, Mohamed Ershath

Abstract: Hand planting is slow, variable, and risky in windy, rocky coastal or bushland terrain. We reframe aerial seeding as an environment-aware optimization that selects drone height, airspeed, and heading to maximize per-cell hit rate while maintaining throughput over large areas. Central to our approach is a physics-based drop model: we compute the exact fall time under quadratic drag, predict mean horizontal drift by coupling vehicle motion with wind at release height, and model random spread using an Ornstein-Uhlenbeck turbulence process whose time scale increases with altitude. We then couple the latter with an error budget that accounts for timing, heading, and wind-direction uncertainty, yielding finite-time dispersion that is operationally realistic. Compared with altitude-only rules or heading-agnostic heuristics common in the restoration literature, our formulation makes the environment explicit (wind speed, wind direction, and turbulence intensity enter transparently) and produces reproducible, parameter-tuned operating points rather than ad-hoc settings. We provide equation-level details for local calibration and a lightweight dashboard that visualizes recall, drift distance, and recommended settings balancing hit rate and coverage speed in real time. In this scheme, the optimizer selects height, speed, and heading of the drone to center the expected impact on the target while reducing variance under the prevailing turbulence and relative wind angle.
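The exact fall time under quadratic drag follows from the classical closed form y(t) = (v_t²/g)·ln cosh(g t / v_t), with terminal velocity v_t = √(mg/c). A sketch with assumed seed parameters (mass, drag coefficient, and drop height below are illustrative), cross-checked by direct integration:

```python
import numpy as np

def fall_time(h, m, c, g=9.81):
    # invert y(t) = (vt^2/g) * ln(cosh(g*t/vt)) for the drop height h
    vt = np.sqrt(m * g / c)                      # terminal velocity
    return (vt / g) * np.arccosh(np.exp(g * h / vt**2))

def fall_time_numeric(h, m, c, g=9.81, dt=1e-4):
    # Euler check on v' = g - (c/m) v^2, y' = v, starting from rest
    y = v = t = 0.0
    while y < h:
        v += dt * (g - (c / m) * v * v)
        y += dt * v
        t += dt
    return t

m, c, h = 0.005, 5e-4, 30.0     # assumed: 5 g seed, quadratic drag coeff, 30 m drop
t_exact = fall_time(h, m, c)
t_num = fall_time_numeric(h, m, c)
```

Multiplying this fall time by the wind speed at release height gives the mean horizontal drift term, which is exactly the vehicle-wind coupling the abstract describes.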

Article
Computer Science and Mathematics
Computational Mathematics

Bouchaib Bahbouhi

Abstract: Goldbach’s conjecture asserts that every even integer greater than two can be expressed as the sum of two prime numbers. Despite its elementary formulation and extensive numerical verification, the conjecture has resisted proof for nearly three centuries. Classical analytic approaches have focused primarily on prime density and average distribution, beginning with the Prime Number Theorem and culminating in refined correlation results. While these methods successfully describe global behavior, they have struggled to guarantee simultaneous primality in symmetric additive configurations, a difficulty often associated with parity and covariance obstructions. In this work, we introduce a dynamic and variational framework that reformulates the problem in terms of symmetric motion rather than static inspection. The approach is based on a two-ball model in which symmetric integers around a central point are explored dynamically within an invariant admissible window. This window is shown empirically and structurally to persist under the growth of prime gaps, either as a continuous interval or as a symmetric split domain whose total extent remains invariant. A central role is played by a logarithmic density proxy, denoted λ, which induces a symmetry defect measuring imbalance between symmetric configurations. The evolution of this defect under motion defines a variational landscape whose stable minima correspond to prime–prime configurations, while composite–composite configurations are shown to be dynamically unstable. This mechanism provides a structural explanation for why composite obstructions cannot persist and why symmetric prime pairs are dynamically selected. The results integrate classical analytic insights on prime density and gaps with ideas from dynamical systems and stability theory. Goldbach’s statement emerges not as an assumed condition but as a consequence of symmetry, invariance, and variational stability.
While the work does not claim a traditional formal proof in the axiomatic sense, it clarifies the underlying architecture of the problem and offers a coherent pathway toward its analytic resolution.

Review
Computer Science and Mathematics
Computational Mathematics

Bouchaib Bahbouhi

Abstract: Goldbach’s strong conjecture, asserting that every even integer greater than two can be expressed as the sum of two prime numbers, remains one of the oldest unresolved problems in mathematics. Despite overwhelming numerical verification and powerful partial results, a complete analytic proof has remained elusive. At the same time, extensive computations have revealed a striking empirical phenomenon known as Goldbach’s comet: the rapidly growing number of Goldbach representations as a function of the even integer E, forming a characteristic comet-like structure when plotted. This review article provides a comprehensive synthesis of classical analytic number theory, modern distributional results on primes, and recent structural insights in order to explain the existence, shape, and persistence of Goldbach’s comet. We introduce and develop a unified framework based on three complementary quantities: the dominance ratio Ω(E), measuring the growth of available prime density relative to local obstructions; the density field λ, encoding the smooth asymptotic behavior of primes; and the obstruction constant Κ, bounding the maximal effect of local gaps and covariance. We show that Ω(E) diverges, reflecting a fundamental scale separation between global prime density and local irregularities, and that λ-weighted obstructions remain bounded while density grows without bound. This framework explains why no gap-based or covariance-based mechanism can suppress Goldbach representations and why the number of representations necessarily increases. We argue that Goldbach’s conjecture is thereby reduced to a single, well-identified uniform realization problem within existing analytic methods. Rather than claiming a final proof, this article aims to clarify the conceptual structure underlying Goldbach’s conjecture, explain Goldbach’s comet as a necessary consequence of prime density dominance, and position the conjecture within a sharply defined analytic frontier.
The result is a coherent, literature-grounded explanation of why Goldbach’s conjecture must be true and what precise technical step remains to complete its proof.
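The comet itself is easy to reproduce: the representation count g(E) = #{(p, q) : p ≤ q, p + q = E, both prime} grows with E, with the characteristic banding driven by the small prime factors of E. A minimal sieve-based sketch (a standard computation, independent of the Ω, λ, Κ framework above):

```python
import numpy as np

def goldbach_counts(e_max):
    # Eratosthenes sieve up to e_max
    sieve = np.ones(e_max + 1, dtype=bool)
    sieve[:2] = False
    for i in range(2, int(e_max**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = False
    # count unordered prime pairs p + q = E with p <= q
    counts = {}
    for E in range(4, e_max + 1, 2):
        counts[E] = sum(1 for p in range(2, E // 2 + 1) if sieve[p] and sieve[E - p])
    return counts

g = goldbach_counts(1000)
```

Plotting g.values() against the even integers produces the comet; even integers divisible by 3 form the upper band, as the classical singular-series heuristics predict.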

Article
Computer Science and Mathematics
Computational Mathematics

Valery Y. Glizer, Vladimir Turetsky

Abstract: An infinite-horizon H∞ linear-quadratic control problem is considered. This problem has the following features: (i) the quadratic form of the control in the integrand of the cost functional has a positive small multiplier (small parameter), meaning that the control cost is much smaller than the state cost; (ii) the current cost of the fast state variable in the cost functional is a positive semi-definite (but non-zero) quadratic form. These features require a significantly novel approach to the asymptotic solution of the matrix algebraic Riccati equation associated with the considered H∞ problem, together with its solvability conditions. Using this solution, an asymptotic analysis of the H∞ problem is carried out. This analysis yields parameter-free solvability conditions for the problem and a simplified controller solving it. An illustrative example is presented.

Article
Computer Science and Mathematics
Computational Mathematics

Bouchaib Bahbouhi

Abstract: This work develops an analytic framework for Goldbach’s strong conjecture based on symmetry, modular structure, and density constraints of odd integers around the midpoint of an even number. By organizing integers into equidistant pairs about this midpoint, a tripartite structural law emerges in which every even integer admits representations as composite–composite, prime–composite, or prime–prime sums. This triadic balance acts as a stabilizing mechanism that prevents the systematic elimination of prime–prime representations as the even number grows. The analysis introduces overlapping density windows, DNA-inspired mirror symmetry of primes, and modular residue conservation to show that destructive configurations cannot persist indefinitely. As a result, the classical obstruction known as the covariance barrier is reduced to a narrowly defined analytic condition. The paper demonstrates that Goldbach’s conjecture is structurally enforced for all sufficiently large even integers and that the remaining difficulty is confined to a minimal analytic refinement rather than a combinatorial or probabilistic gap. This places the conjecture within reach of a complete unconditional resolution.

Review
Computer Science and Mathematics
Computational Mathematics

Jiri Kroc

Abstract: This position paper offers scientists from all disciplines a quick, concise, and easy-to-understand primer on the basic principles of the design and applications of massively-parallel computations and models. Its thesis is: “What is the estimated direction of development of massively-parallel computing techniques utilizing emergents as observed in all scientific disciplines?” The birth of massively-parallel computations (MPCs) in the 1940s is closely related to the development of both early computers and simulations of the nuclear processes operating within matter. It will be demonstrated that, after a long delay forced by the previous inaccessibility of MPC computers, MPCs are becoming front and center in many research areas. Another impetus to the development of MPCs came in the form of the generalization of mathematical descriptions of observed natural phenomena using differential equations; more specifically, discretization schemes of differential equations were implemented in MPC simulations. Those achievements opened doors to the development of advanced descriptions of natural phenomena. Currently, there exists a number of MPC techniques that are gaining increasing influence on the description of self-organization, emergence, replication, self-replication, and error-resilience within living and nonliving systems. A promising class of cellular automata, capable of describing a wide range of processes observed within natural phenomena, is called emergent information processing (EIP). The EIP approach is opening doors towards future descriptions of many observed biological phenomena that notoriously resist mathematical and computational description. Fixed and ad hoc networks, which are observed in living and non-living systems, can be interpreted using the EIP methodology; this offers an opportunity for computational studies of the emergent processes operating within them. The demonstrated approaches represent a toolkit that can be applied in all areas of research dealing with living systems. The usefulness of the EIP approach is demonstrated on a special case of emergent synchronization simulation, which could be applied in the design of artificial, biocompatible pacemaker implants and of computers utilizing emergent computations.
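As a generic toy of emergent synchronization (a mean-field Kuramoto model, not the EIP cellular-automaton machinery of the paper), identical phase oscillators under weak coupling self-organize from a disordered initial state into a coherent one:

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, dt, steps = 50, 1.0, 0.05, 2000
theta = rng.uniform(0.0, 2 * np.pi, N)       # random initial phases

def order_parameter(theta):
    # r = |<e^{i theta}>|: 0 = disorder, 1 = full synchrony
    return np.abs(np.exp(1j * theta).mean())

r0 = order_parameter(theta)
for _ in range(steps):
    z = np.exp(1j * theta).mean()
    r, psi = np.abs(z), np.angle(z)
    theta += dt * K * r * np.sin(psi - theta)  # mean-field coupling, identical frequencies
r1 = order_parameter(theta)
```

No oscillator is told the global phase; coherence emerges purely from local update rules, which is the qualitative point of emergent-synchronization models such as pacemaker tissue.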

Article
Computer Science and Mathematics
Computational Mathematics

Ta Van Tu

Abstract: This paper deals with finding all possibly efficient solutions of an interval multiple objective linear programming (IMOLP) problem with interval coefficients in the objective functions, the constraint matrix, and the right-hand side vector. To date, no known method can find all possibly efficient solutions of an IMOLP problem with interval coefficients in the objective functions and the right-hand side vector; we propose such a method here. Some sufficient conditions for obtaining all possibly efficient solutions of an IMOLP problem in the general case are also given.

Article
Computer Science and Mathematics
Computational Mathematics

Pedro Brandão, Oscar Garcia Pañella, Carla Silva

Abstract: Detecting anomalous events in high-dimensional behavioral data is a fundamental challenge in modern cybersecurity, particularly in scenarios involving stealthy advanced persistent threats (APTs). Traditional anomaly-detection techniques rely on heuristic notions of distance or density, yet rarely offer a mathematically coherent description of how sparse events can be formally separated from the dominant behavioral structure. This study introduces a density-metric manifold framework that unifies geometric, topological, and density-based perspectives into a single analytical model. Behavioral events are embedded in a five-dimensional manifold equipped with a Euclidean metric and a neighborhood-based density operator. Anomalies are formally defined as points whose local density falls below a fixed threshold, and we prove that such points occupy geometrically separable regions of the manifold. The theoretical foundations are supported by experiments conducted on openly available cybersecurity datasets, including ADFA-LD and UNSW-NB15, where we demonstrate that low-density behavioral patterns correspond to structurally rare attack configurations. The proposed framework provides a mathematically rigorous explanation for why APT-like behaviors naturally emerge as sparse, isolated regions in high-dimensional space. These results offer a principled basis for high-dimensional anomaly detection and open new directions for leveraging geometric learning in cybersecurity.
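The density-threshold definition of an anomaly can be sketched in a few lines of numpy (illustrative only: Euclidean metric in a 5-D embedding, density as inverse mean distance to the k nearest neighbors, and synthetic data standing in for ADFA-LD/UNSW-NB15 features):

```python
import numpy as np

rng = np.random.default_rng(1)
normal = rng.normal(size=(300, 5))            # dense behavioral cluster in 5-D
outliers = rng.normal(loc=8.0, size=(5, 5))   # sparse, APT-like events far from it
X = np.vstack([normal, outliers])

def knn_density(X, k=10):
    # local density = 1 / (mean distance to the k nearest neighbors)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # exclude self-distance
    kth = np.sort(d, axis=1)[:, :k]
    return 1.0 / kth.mean(axis=1)

dens = knn_density(X)
tau = np.quantile(dens, 0.05)                 # fixed low-density threshold
flags = dens < tau                            # anomalies: points below the threshold
```

The five injected outliers fall in a region of the embedding where local density collapses, so a fixed threshold separates them geometrically from the dominant cluster, mirroring the paper's separability claim.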

Article
Computer Science and Mathematics
Computational Mathematics

Kumudu Harshani Samarappuli, Iman Ardekani, Mahsa Mohaghegh, Abdolhossein Sarrafzadeh

Abstract: This paper presents a framework for synthesizing bee bioacoustic signals associated with hive events. While existing approaches like WaveGAN have shown promise in audio generation, they often fail to preserve the subtle temporal and spectral features of bioacoustic signals critical for event-specific classification. The proposed method, MCWaveGAN, extends WaveGAN with a Markov Chain refinement stage, producing synthetic signals that more closely match the distribution of real bioacoustic data. Experimental results show that this method captures signal characteristics more effectively than WaveGAN alone. Furthermore, when integrated into a classifier, synthesized signals improved hive status prediction accuracy. These results highlight the potential of the proposed method to alleviate data scarcity in bioacoustics and support intelligent monitoring in smart beekeeping, with broader applicability to other ecological and agricultural domains.
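A Markov-chain refinement stage can be illustrated generically: quantize a waveform's amplitudes into discrete states, estimate a first-order transition matrix, and sample a new sequence from it (a sketch of the general technique on a synthetic tone, not the MCWaveGAN pipeline or real hive audio):

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0.0, 1.0, 4000)
signal = np.sin(2 * np.pi * 220 * t) + 0.1 * rng.normal(size=t.size)  # toy waveform

n_states = 16
edges = np.linspace(signal.min(), signal.max(), n_states + 1)
states = np.clip(np.digitize(signal, edges) - 1, 0, n_states - 1)

# estimate the first-order transition-probability matrix
counts = np.zeros((n_states, n_states))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1.0
T = counts + 1e-3                 # tiny smoothing keeps every row a valid distribution
T /= T.sum(axis=1, keepdims=True)

def sample_chain(T, s0, n, rng):
    # generate a state sequence by repeated sampling from the transition rows
    out = [s0]
    for _ in range(n - 1):
        out.append(rng.choice(len(T), p=T[out[-1]]))
    return np.array(out)

refined = sample_chain(T, states[0], 1000, rng)
```

Because the chain is fitted to short-range amplitude transitions, sequences drawn from it preserve local temporal statistics, which is the intuition behind using it to nudge generated audio toward the real data distribution.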

Article
Computer Science and Mathematics
Computational Mathematics

Zhisong Li

Abstract: The solution of the incompressible Navier-Stokes equations with a pressure Poisson equation has been an area of intensive research and extensive application. Nevertheless, the proper formulation of pressure boundary conditions for solid walls has remained theoretically unestablished, since the existing specifications are unable to generate a truly solenoidal velocity field (i.e., a velocity field with zero divergence). Some prior studies have claimed divergence-free boundary conditions, but these claims lack rigorous validity. In this study, we clarify a long-standing misconception and point out that the regular pressure boundary condition for solid walls makes the system ill-posed and non-divergence-free, because in theory the system does not need any pressure boundary condition. However, solving the pressure Poisson equation still requires a boundary closure. To resolve this contradiction, we propose a new pressure Poisson equation and an associated Neumann boundary condition based on the corrective pressure. They are derived in detail, examined for pressure compatibility, analyzed for numerical stability with the normal-mode method, and validated successfully on test problems. The new formulation produces strictly divergence-free velocity solutions with enhanced accuracy and stability owing to its theoretical soundness.
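The compatibility issue for a pure-Neumann pressure Poisson problem is visible already in one dimension: u″ = f with u′(0) = u′(1) = 0 is solvable only if ∫f = 0, and the solution is unique only up to an additive constant. A small numpy sketch (illustrative of the compatibility/null-space structure, not the paper's corrective-pressure formulation):

```python
import numpy as np

n = 200
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.cos(2 * np.pi * x)        # zero mean: satisfies the compatibility condition

# second-order 1-D Laplacian with homogeneous Neumann BCs (ghost-point reflection)
A = np.diag(-2.0 * np.ones(n + 1)) + np.diag(np.ones(n), 1) + np.diag(np.ones(n), -1)
A[0, 1] = 2.0
A[n, n - 1] = 2.0
A /= h * h

# A is singular: constants span its null space, so the pressure-like solution is
# determined only up to a constant; least squares picks one representative
u = np.linalg.lstsq(A, f, rcond=None)[0]
u_exact = -np.cos(2 * np.pi * x) / (4 * np.pi**2)
err = np.max(np.abs((u - u.mean()) - (u_exact - u_exact.mean())))
```

If f is replaced by a function with nonzero mean, no exact discrete solution exists, which is the 1-D shadow of the pressure-compatibility requirement in the Navier-Stokes setting.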

Article
Computer Science and Mathematics
Computational Mathematics

Jean Chien, Lily Chuang, Eric Lee

Abstract: Nanoimprint lithography (NIL) master fidelity is governed by coupled variations beginning with resist spin-coating, proceeding through electron-beam exposure, and culminating in anisotropic etch transfer. We present an integrated, physics-based simulation chain. First, it includes a spin-coating thickness model that combines Emslie–Meyerhofer scaling with a Bornside edge correction. The simulated wafer-scale map at 4000 rpm exhibits the canonical center-rise and edge-bead profile with a 0.190–0.206 μm thickness range, while the locally selected 600 nm × 600 nm tile shows <0.1 nm variation, confirming an effectively uniform region for downstream analysis. Second, it couples an e-beam lithography (EBL) module in which column electrostatics and trajectory-derived spot size feed a hybrid Gaussian–Lorentzian proximity kernel; under typical operating conditions ( ≈ 2–5 nm), the model yields low CD bias (ΔCD = 2.38/2.73 nm), controlled LER (2.18/4.90 nm), and stable NMSE (1.02/1.05) for isolated versus dense patterns. Finally, the exposure result is passed to a level-set reactive ion etching (RIE) model with angular anisotropy and aspect-ratio-dependent etching (ARDE), which reproduces density-dependent CD shrinkage trends (4.42% versus 7.03%) consistent with transport-limited profiles in narrow features. Collectively, the simulation chain accounts for stage-to-stage propagation—from spin-coating thickness variation and EBL proximity to ARDE-driven etch behavior—while reporting OPC-aligned metrics such as NMSE, ΔCD, and LER. In practice, mask process correction (MPC) is necessary rather than optional: the simulator provides the predictive model, metrology supplies updates, and constrained optimization sets dose, focus, and etch set-points under CD/LER requirements.
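The Emslie–Meyerhofer scaling referenced above comes from dh/dt = −(2ρω²/3μ)·h³, which integrates to h(t) = h₀ / √(1 + 4ρω²h₀²t/(3μ)). A small sketch of the base scaling (the fluid parameters below are illustrative, and the Bornside edge correction used in the paper is omitted):

```python
import numpy as np

def em_thickness(h0, t, omega, rho=1000.0, mu=0.003):
    # Emslie-Meyerhofer thin-film thinning: dh/dt = -(2*rho*omega^2/(3*mu)) * h^3
    return h0 / np.sqrt(1.0 + 4.0 * rho * omega**2 * h0**2 * t / (3.0 * mu))

h0 = 1e-6                                   # assumed 1 um initial resist film
rpm = np.array([2000.0, 4000.0, 8000.0])
omega = rpm * 2.0 * np.pi / 60.0            # spin speeds in rad/s
h = em_thickness(h0, t=30.0, omega=omega)   # thickness after a 30 s spin
```

Higher spin speed monotonically thins the film, and in the long-spin limit h scales like 1/ω, the classical result; the edge-bead profile in the abstract is exactly what the Bornside correction adds on top of this radially uniform model.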

Article
Computer Science and Mathematics
Computational Mathematics

Huda Naji Hussein, Dhiaa Halboot Muhsen

Abstract: Due to their complexity and non-linearity, many problems cannot be solved by standard computational methods, and metaheuristic algorithms have become the tool of choice for them. However, the global performance of these algorithms is strongly linked to the population structure and to the mechanism for replacing the worst solutions within the population. In this paper, an Adaptive Artificial Hummingbird Algorithm (AAHA), a new version of the basic AHA, is introduced, designed to enhance performance by studying the impact of different population initialization methods together with a broad and continual migration scheme. For the initialization phase, four methods are proposed as alternatives to random population initialization: the Gaussian chaotic map, the Sinus chaotic map, opposition-based learning (OBL), and the diagonal uniform distribution (DUD). A new strategy is proposed for replacing the worst solution in the migration phase: the worst solution is replaced by the best solution combined with a simple and effective local search. The proposed strategy stimulates exploitation and exploration through the best solution and the local search, respectively. The proposed AAHA algorithm is tested on various benchmark functions with different characteristics under many statistical indices and tests. Additionally, the AAHA results are benchmarked against other optimization algorithms to assess its effectiveness. The proposed AAHA algorithm outperformed the alternatives in both speed and reliability, and DUD-based initialization enabled the fastest convergence and optimal solutions. These findings underscore the significance of initialization in metaheuristics and highlight the efficacy of the AAHA algorithm for complex continuous optimization problems.
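Opposition-based learning, one of the four initializers studied, is simple to state: for each random candidate x in [lb, ub], also evaluate its opposite lb + ub − x and keep the better half of the combined pool. A sketch on the sphere benchmark (bounds and sizes are illustrative; the AAHA migration strategy itself is not reproduced):

```python
import numpy as np

def sphere(X):
    # classic benchmark: f(x) = sum x_i^2, global minimum 0 at the origin
    return np.sum(X**2, axis=1)

rng = np.random.default_rng(11)
lb, ub, dim, pop = -2.0, 8.0, 10, 30
X = rng.uniform(lb, ub, size=(pop, dim))  # plain random initialization
X_opp = lb + ub - X                       # opposition-based candidates

both = np.vstack([X, X_opp])
fit = sphere(both)
keep = both[np.argsort(fit)[:pop]]        # select the better half as the initial population
```

Since the selected population is the best of a strict superset of the random one, its best fitness can never be worse, which is the cheap guarantee that makes OBL initialization attractive.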

Article
Computer Science and Mathematics
Computational Mathematics

Klaus Röbenack, Daniel Gerbet

Abstract: Bounds for positive definite sets such as attractors of dynamic systems are typically characterized by Lyapunov-like functions. These Lyapunov functions and their time derivatives must satisfy certain definiteness conditions, whose verification usually requires considerable experience. If the system and a Lyapunov-like candidate function are polynomial, the definiteness conditions lead to Boolean combinations of polynomial equations and inequalities with quantifiers that can be formally solved using quantifier elimination. Unfortunately, the known algorithms for quantifier elimination require considerable computing power, meaning that many problems cannot be solved within a reasonable amount of time. In this context, it is particularly important to find a suitable mathematical formulation of the problem. This article develops a method that reduces the expected computational effort required for the necessary verification of definiteness conditions. The approach is illustrated using the example of the Chua system with cubic nonlinearity.

Article
Computer Science and Mathematics
Computational Mathematics

Ricardo Adonis Caraccioli Abrego

Abstract: This paper presents an analytic and computational framework to study noise in the distribution of prime numbers through layers based on multi-step prime gaps. Given the n-th prime pn, we consider the differences g(k,n) := p(n+k) − p(n), group these distances for different values of k and normalize them by an appropriate logarithmic scale. This produces, for each k, a discrete-time signal that can be analysed with tools from signal processing and control theory (Fourier analysis, resonators, RMS sweeps, and linear combinations). The main conceptual point is that all these layers should inherit the same underlying “noise” coming from the nontrivial zeros of the Riemann zeta function, up to deterministic filtering factors depending on k. We make this idea precise in a heuristic spectral inheritance hypothesis, motivated by the explicit formula and the change of variables t = log x. We then implement the construction computationally for hundreds of thousands of primes and several layers k, and we measure the response of bandpass resonators as a function of their central frequency. The numerical results show three phenomena: (i) the magnitude spectra of the layers share prominent peaks at the same frequencies, (ii) the RMS responses of resonators for different layers display aligned “hills” in the frequency domain, and (iii) a nearly perfect flattening is obtained by taking an appropriate linear combination of layers obtained via singular value decomposition. These findings support the picture that the same spectral content is present across layers and that the differences lie mainly in deterministic gain factors. The framework is deliberately modest from the point of view of number theory: we do not attempt to prove new theorems about primes, but rather to provide a coherent signal-processing view in which standard tools from electrical engineering can be used to explore the noisy side of the prime distribution in a structured way.
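The layer construction is straightforward to reproduce: sieve primes, form g(k,n) = p(n+k) − p(n), normalize by the expected size k·log p(n), and inspect the spectra of the centered signals (a sketch of the construction only; the resonator sweeps and the SVD flattening step are not reproduced):

```python
import numpy as np

def primes_up_to(n):
    sieve = np.ones(n + 1, dtype=bool)
    sieve[:2] = False
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = False
    return np.flatnonzero(sieve)

p = primes_up_to(1_000_000).astype(float)
layers, spectra = {}, {}
for k in (1, 2, 3):
    g = p[k:] - p[:-k]                      # multi-step gaps g(k, n)
    layers[k] = g / (k * np.log(p[:-k]))    # normalized: mean ~ 1 by the PNT
    v = layers[k] - layers[k].mean()        # zero-mean "noise" signal for layer k
    spectra[k] = np.abs(np.fft.rfft(v))
```

Each `spectra[k]` is one layer's magnitude spectrum; the paper's inheritance hypothesis amounts to the claim that the prominent peaks of these arrays align across k up to deterministic gain factors.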



Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2026 MDPI (Basel, Switzerland) unless otherwise stated