Computer Science and Mathematics


Article
Computer Science and Mathematics
Computational Mathematics

Ibar Federico Anderson

Abstract: This paper consolidates, corrects, and extends a research programme on the shifted-prime problem p = q + r - 1 with p, q, r prime, and its connections to the binary Goldbach conjecture and the non-trivial zeros of the Riemann zeta function zeta(s). We collect unconditionally proved results, computationally verified findings with correct methodology, and explicit corrections of errors identified in prior versions. The principal corrections are: (i) the k=3 existence problem is equivalent to the binary Goldbach conjecture and is therefore open, not proved as originally claimed; (ii) scripts 6.py through 8.py contained a critical permutation-test bug that artificially inflated detection counts to 199/200; the corrected count is 116/200 zeros at p < 0.01 using 709,562 primes in [10^6, 1.2 x 10^7]. New results established in this work include: (iii) the transfer operator ratio lambda1/lambda2 = 182.63 with n = 892,206 primes, nearly three times the value of 62.86 reported previously, confirming systematic spectral strengthening with sample size; (iv) a corrected decay slope of -1.1262, consistent with the theoretical prediction of approximately -1 under RH, measured consistently across nine independent sample sizes up to n = 1,310,763 primes in [10^6, 2.2 x 10^7], with 129/200 non-trivial Riemann zeros detected at p < 0.01; (v) three new spectral methods, namely autocorrelation in delta-log-p with fundamental period T = 2*pi/gamma1 approximately 0.4445, the Lomb-Scargle periodogram adapted to non-uniform prime spacing, and principal eigenvector analysis of the transfer operator; (vi) three independent analyses, p-adic correlation rho = +0.729, optimal subsequence amplification, and RSA spectral classification, all identify l=3 as the uniquely special prime in the singular factor S_infinity; (vii) the Generalised Amplification Conjecture (GAC) is not yet verifiable in the computational range explored (n <= 1.07 x 10^6 primes), with residual error near 60%.
The analytical core, covering absolute convergence of S_infinity, the explicit formula for Psi*(x), equivalence of k=3 to binary Goldbach, and proof of existence for k >= 4 via Helfgott's theorem, remains unconditionally valid and unaffected by the computational corrections. None of these results constitutes a proof of RH. They constitute empirical evidence from an arithmetic direction independent of all classical zero-verification approaches.

Article
Computer Science and Mathematics
Computational Mathematics

Ricardo Adonis Caraccioli Abrego

Abstract: We describe a decimal–hexadecimal block encoding for primality over a finite stored range. Since every prime greater than 5 must lie in one of the residue classes 1, 3, 7, 9 (mod 10), each decimal block of size ten can be encoded by a 4-bit word indicating which of the candidates 10k + 1, 10k + 3, 10k + 7, and 10k + 9 are prime. This yields a nibble-based storage scheme supporting exact primality queries and exact recovery of the prime-counting function π(x) by cumulative popcount. We then establish a structural theorem arising from the congruence 10 ≡ 1 (mod 3): for k ≡ 0 (mod 3) the candidates 10k + 3 and 10k + 9 are always composite, and for k ≡ 2 (mod 3) the candidates 10k + 1 and 10k + 7 are always composite. This partitions the nibble alphabet into three classes of sizes 4, 16, and 4, reducing the Shannon entropy from 4 bits to 2.42 bits per nibble and yielding a lossless compression of 39.4% over the original encoding with O(1) decode complexity. We present data structures, Python routines, and experimental validation up to 300,000.

Review
Computer Science and Mathematics
Computational Mathematics

John Constantine Venetis

Abstract: The numerical simulation of incompressible viscous flows remains a central pillar of modern computational fluid dynamics (CFD). Over the past decades, a wide spectrum of numerical methodologies has been developed, reflecting fundamentally different mathematical formulations and discretization philosophies. Among these, domain-based approaches—such as finite difference, finite element, finite volume, and meshfree methods—have emerged as versatile and general-purpose frameworks, while boundary element methods provide efficient alternatives for problems governed by linear physics, particularly in unbounded domains. This review presents a comprehensive examination of the historical development, mathematical foundations, and computational characteristics of these approaches for Newtonian incompressible flows. Emphasis is placed on the conceptual distinctions between boundary-integral and domain-based formulations, their applicability to internal and external flow regimes, and their compatibility with turbulence modeling strategies, including Reynolds-averaged Navier–Stokes (RANS), large-eddy simulation (LES), and direct numerical simulation (DNS). The intention is to provide a unified perspective that clarifies the strengths and limitations of the principal CFD methodologies and offers guidance on their suitability for different classes of flow problems.

Article
Computer Science and Mathematics
Computational Mathematics

Ibar Federico Anderson

Abstract: The classical Goldbach conjecture asks whether every even integer n >= 4 can be written as n = q + r with q, r prime. This paper studies a structurally distinct variant: for a prime p >= 2, let N(p) = #{ (q, r) subset of primes: q <= r, q + r = p + 1 } count the Goldbach representations of p + 1 when p itself is prime. The additional triple‑primality constraint – p, q, r all simultaneously prime – produces an arithmetic profile governed by a new constant S_∞ != 2 C_2. Proved (unconditional). The Euler product: S_∞ = ∏_{ℓ > 2, ℓ prime} ( 1 + 1/((ℓ-1)(ℓ-2)) ) = 1.74272535... converges absolutely and equals the limiting Cesàro mean of S(p+1) over shifted primes (Theorem 3.5). Two congruence theorems for Mirror and Anchor‑3 primes are proved (Theorems 4.3 and 4.6). The equivalence α_∞ = 1 / S_∞ ⇔ Ĉ(x) → 2 C_2 is established unconditionally (Proposition 5.4). Three analytic gaps in the Goldbach‑Riemann bridge for Ψ*(x) are closed unconditionally (Theorems 6.5, 6.6, 6.9), yielding the explicit formula (Theorem 6.10). Conditional (GRH, GRH+HL‑B): Under the Generalised Riemann Hypothesis the convergence rate |S̄(x) – S_∞| = O( S_∞ log x / √x ) is established (Theorem 3.6). Under GRH and Hardy‑Littlewood Conjecture B, N(p) >= 1 for every prime p > 11 is proved completely (Theorem 10.1). The parity obstruction is identified as the precise barrier to proving N(p) >= 2 by current sieve methods (Proposition 10.3). Computationally verified: N(p) >= 2 for every prime 11 < p < 6.79×10^7 (4,000,000 primes, zero violations, exhaustive Sieve of Eratosthenes). Probabilistically extended to 1000 randomly sampled 127‑bit primes p ~ 10^38 via Miller‑Rabin (k = 10 rounds), and independently to 100 samples at 512 bits (p ~ 10^154).
Discrete Mellin transform experiments detect 72 of the first 100 non‑trivial Riemann zeros in the residuals ε(p) = (N(p) – N̂₃(p)) / N̂₃(p) at significance level p < 0.01 (range p in [10^6, 2×10^6], n = 70,435 primes, 200 permutations), improving the previous 21/50 result by a factor of 3.4. Class‑fraction experiments at RSA‑1024 (p ~ 10^309) and RSA‑2048 (p ~ 10^617) confirm Orphan fractions of 98% and 100% respectively, consistent with predicted density‑zero behaviour of Mirror and Anchor‑3 primes. Epistemic status: All claims carry explicit labels: [PROVED], [COND. PROVED], [COMP. VERIF.], [CONJECTURE], [NEW], [CORRECTED]. No claim is presented without its status. The central conjectures are open.
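The Euler product for S_∞ quoted above converges fast enough to check numerically, since the ℓ-th factor is 1 + O(1/ℓ²). A small sketch (our code, not the paper's; the truncation bound is illustrative):

```python
# Partial Euler product for S_infinity = prod over odd primes ell > 2 of
# (1 + 1/((ell-1)(ell-2))); the tail beyond 10^5 contributes < 1e-5.

def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    is_p = [False, False] + [True] * (n - 1)
    for i in range(2, int(n ** 0.5) + 1):
        if is_p[i]:
            is_p[i * i :: i] = [False] * len(is_p[i * i :: i])
    return [i for i, flag in enumerate(is_p) if flag]

def s_infinity(bound):
    """Truncated product over primes ell with 2 < ell <= bound."""
    prod = 1.0
    for ell in primes_up_to(bound):
        if ell > 2:
            prod *= 1.0 + 1.0 / ((ell - 1) * (ell - 2))
    return prod
```

Truncating at 10^5 already reproduces the quoted value 1.74272535 to about five decimal places, which is consistent with the absolute-convergence claim of Theorem 3.5.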

Article
Computer Science and Mathematics
Computational Mathematics

Basker Palaniswamy

,

Paolo Palmieri

Abstract: Cryptographic security proofs are the invisible backbone of modern digital systems, yet they remain fragmented across multiple paradigms—game-based proofs, Universal Composability (UC), formal verification, and ad hoc insecurity arguments—each with its own language, assumptions, and limitations. This article introduces the Market-Theoretic Security Framework (MTSF), a unified paradigm that reinterprets all security proofs as economic markets. In this view, the defender acts as a seller offering security goods (such as confidentiality or unforgeability), while the adversary acts as a buyer bidding computational resources to break them. Security emerges naturally as market equilibrium, where no efficient adversary can afford to win, while insecurity is characterized as market collapse, where attacks succeed at negligible cost. For cryptographers, MTSF provides a rigorous and expressive framework that unifies four major proof paradigms into a single formal language. It introduces key technical innovations such as the extended difference lemma for handling multiple simultaneous failure events, bidding-based reductions that explicitly model adversarial strategies, a dual methodology that treats proofs and disproofs symmetrically within the same structure, and a session pinging mechanism for unbounded session verification. The framework seamlessly extends to classical and post-quantum primitives, real-world protocols (including TLS 1.3 and Signal), and even quantum-adversarial settings, while preserving quantitative security bounds and composability guarantees. MTSF offers an intuitive, accessible, and powerful meta-model: security is like a marketplace where attackers try to "buy" a break, and defenders ensure the price is prohibitively high. Each proof becomes a sequence of small price adjustments, and each attack corresponds to a failed or successful bid.
By combining mathematical rigor with economic intuition, MTSF transforms security proofs from opaque technical artifacts into transparent, auditable, and universally understandable arguments, enabling both experts and practitioners to reason about security with clarity and confidence.

Article
Computer Science and Mathematics
Computational Mathematics

Dmytro Topchyi

Abstract: In this paper, we consider the properties of the following objects: the plafal and the geo-space (a general overview). As an application of the created theory, a proof of the equality of the complexity classes P and NP is given. The geo-plafal is the kernel (computational template) of the proof; the constructive theory of serendipity approximations, the Stepanets school, and the Bogolyubov principle of the decay of correlations for infinite systems (dim = 3) form the shell.

Article
Computer Science and Mathematics
Computational Mathematics

Nguyet Nguyen

Abstract: Insurance pricing plays a central role in risk management and financial decision-making, as accurate premium estimation directly impacts portfolio stability and profitability. This study investigates insurance pure premium estimation by integrating classical actuarial models with modern machine learning techniques. We compare the traditional frequency–severity decomposition framework with direct modeling approaches, including XGBoost and Tweedie models. For claim frequency, we evaluate Poisson-based models, generalized additive models, and XGBoost. For claim severity, we compare a Gamma generalized linear model with XGBoost. The results show that XGBoost significantly improves predictive performance for both components. Within the decomposition framework, the XGBoost–XGBoost model achieves the best overall prediction accuracy. However, lift-based analysis reveals that the XGBoost–Gamma model provides superior risk segmentation, highlighting a trade-off between prediction accuracy and risk ranking. Direct modeling approaches, while competitive, do not outperform the decomposition framework. Overall, the findings demonstrate that machine learning enhances predictive performance, but its effectiveness is maximized within the frequency–severity framework. The results further indicate that claim frequency is the primary driver of risk differentiation, while claim severity contributes more to prediction accuracy. These findings have important implications for risk management and pricing strategies in insurance portfolios.
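The frequency–severity decomposition compared above rests on the identity E[loss] = E[N]·E[X] for independent claim counts and severities. A toy simulation (synthetic Poisson/Gamma data with illustrative parameters, not the study's dataset or models) shows the decomposition estimate agreeing with the direct pure-premium estimate:

```python
# Frequency-severity decomposition on synthetic data: the pure premium
# estimated as (mean frequency) * (mean severity per claim) matches the
# direct per-policy mean loss. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_policies = 100_000
lam = 0.10                      # Poisson claim rate per policy
sev_shape, sev_scale = 2.0, 500.0   # Gamma severity, mean 1000

counts = rng.poisson(lam, n_policies)                       # claim frequency
losses = np.array([rng.gamma(sev_shape, sev_scale, c).sum() # total loss
                   for c in counts])                        # per policy

freq_hat = counts.mean()                          # frequency component
sev_hat = losses.sum() / counts.sum()             # severity per claim
pure_premium = freq_hat * sev_hat                 # decomposition estimate
direct = losses.mean()                            # direct (Tweedie-style) target
```

With these plain sample means the two estimates coincide by construction; the study's point is that they diverge once each component is fitted by a different model (GLM, GAM, XGBoost), which is where the accuracy-versus-segmentation trade-off appears.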

Article
Computer Science and Mathematics
Computational Mathematics

Montchai Pinitjitsamut

Abstract: Natural rubber price forecasting is inherently difficult due to nonlinear, non-stationary dynamics driven by supply fundamentals, cross-market signals, exchange rate movements, and speculative trading. This study proposes VMD–Hybrid BiLSTM–Transformer, a dual-pathway framework integrating Variational Mode Decomposition (VMD) with a Bidirectional LSTM encoder and a Transformer encoder for daily RSS3 FOB price change forecasting. Rather than forecasting each intrinsic mode function independently, all five VMD components are appended directly to the economic feature matrix — preserving multi-scale frequency information within a single forward pass and avoiding the variance collapse observed in conventional decompose-then-forecast approaches (StdR = 0.04–0.15). On a 237-observation held-out test set (September 2025–February 2026), the model achieves a Pearson correlation of 0.812, directional accuracy of 67.1%, and StdR of 0.819, outperforming ARIMA by 0.662 in correlation and by 37.3% in MAE, with predictive skill confirmed up to five days ahead. These results demonstrate that directional accuracy alone is insufficient for evaluating differenced commodity price models, and that jointly integrating multi-scale decomposition, bidirectional learning, and global attention is essential for reliable agricultural price forecasting.

Article
Computer Science and Mathematics
Computational Mathematics

Muhammad Bilal

,

Muhammad Sabeel Khan

Abstract: The flow past two cylinders in tandem arrangement is of fundamental importance in engineering applications such as heat exchangers, offshore structures, and power transmission lines. This study presents a complete open‑source simulation pipeline using Gmsh for mesh generation and OpenFOAM for the finite‑volume solver, combined with a long short‑term memory (LSTM) neural network surrogate for fast predictions. A distance‑based refinement strategy resolves the flow accurately, with the smallest characteristic mesh sizes concentrated around the cylinders. The methodology is validated against the classical Schäfer–Turek single‑cylinder benchmark at Re = 100, showing satisfactory agreement for force coefficients and Strouhal number. The main analysis focuses on a tandem configuration at Re = 1.0×10^5 with unequal diameters (D1 = 0.1 m, D2 = 0.15 m) spaced 1.0 m centre‑to‑centre. The results reveal strong wake interaction: the downstream cylinder experiences higher mean drag (C̄_D = 0.997) and significantly larger lift fluctuations (C_L′ = 0.340) than the upstream cylinder (C̄_D = 0.947, C_L′ = 0.129). Both cylinders shed vortices at the same frequency f = 2.041 Hz, yielding Strouhal numbers St_A = 0.204 and St_B = 0.306. An LSTM neural network trained on the force coefficient time series achieves near‑perfect predictions of the downstream lift and correctly reproduces the shedding frequency, providing a fast and accurate surrogate model. The fully reproducible open‑source workflow, including all CFD setup files and the neural network training code, is made available, enabling future studies on bluff‑body interactions and facilitating the adoption of data‑driven methods in fluid mechanics.
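The reported shedding data can be cross-checked with the Strouhal relation St = fD/U. The abstract does not quote the free-stream velocity, so U = 1.0 m/s below is our inference, chosen because it reproduces both reported Strouhal numbers from the shared shedding frequency and the two diameters:

```python
# Consistency check of the reported shedding data via St = f * D / U.
# U is not stated in the abstract; U = 1.0 m/s is an assumed value that
# makes both quoted Strouhal numbers come out right.
f = 2.041           # shared vortex-shedding frequency, Hz
D1, D2 = 0.10, 0.15 # cylinder diameters, m
U = 1.0             # assumed free-stream velocity, m/s

St_A = f * D1 / U   # upstream cylinder
St_B = f * D2 / U   # downstream cylinder
```

This also explains why a single frequency yields two different Strouhal numbers: St scales with the diameter used as the reference length.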

Article
Computer Science and Mathematics
Computational Mathematics

Zhiqiang Luo

Abstract: This paper applies physics-informed neural networks (PINNs) to laterally excited liquid sloshing in a two-dimensional rectangular tank, where near-resonant forcing (ωe/ω1 = 0.9) produces a multi-frequency beating response with a period of approximately 10T1. Linearised potential flow theory governs the problem; the network learns the velocity potential φ(x,z,t) while the free-surface elevation η is injected analytically. Two training obstacles specific to forced sloshing are analysed. First, a zero-solution trap arises because the trivial solution φ̂ = 0 satisfies all equations except the free-surface conditions, whose residuals are roughly 10^−4 times the size of the Laplace residual; characteristic scale normalisation combined with loss weighting (λD = λK = 100) breaks this trap. Second, spectral bias prevents standard MLPs from resolving the three co-existing frequencies (ω1, ωe, ∆ω); a Fourier time embedding that augments the input from 3 to 9 dimensions overcomes this limitation. Two additional techniques further reduce errors: a hard wall boundary condition enforced exactly via a cos(πx/B) spatial embedding, which eliminates wall collocation points; and a gradient-enhanced Laplace regulariser (∥∇(∇²φ̂)∥²) that constrains velocity smoothness through third-order automatic differentiation. An ablation study shows that these four techniques progressively reduce the horizontal velocity error from εu = 12.46% to 0.84%. Results are validated against a viscous finite-difference benchmark. Over one beating cycle the errors are εη = 0.15%, εu = 0.84%, and εw = 1.65%. A frequency parameter study across ωe/ω1 = 0.5–1.1 gives εη < 0.25% and εu < 2.3% for all near-resonance cases. For long-time simulation, a time-domain decomposition strategy with transfer learning partitions the domain into one-beat windows; extending to five beating cycles (50T1) yields εu = 3.43% and εη = 0.30% with no monotonic error accumulation across windows.
The methodology is then extended to a three-dimensional rectangular tank (B × W × H) with bi-directional lateral excitation. The 3-D formulation introduces the y-dimension into the Laplace equation (∇²φ = φxx + φyy + φzz = 0), adds transverse wall boundary conditions (∂φ/∂y = 0) enforced exactly via a cos(πy/W) embedding, and extends the Fourier time embedding from 9 to 16 dimensions to accommodate six physical frequencies. The bi-directional excitation excites both (m,0) and (0,n) modal families, producing a genuinely three-dimensional beating response. Results demonstrate that the proposed techniques transfer effectively to 3-D, with errors εη = 0.24%, εu = 1.31%, εv = 1.78%, and εw = 2.32% over one beating cycle (2,499 s training time).
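The Fourier time embedding used against spectral bias can be sketched directly. This is our reconstruction with placeholder frequency values, showing how the 2-D input (x, z, t) grows to 9 dimensions via sin/cos pairs at the three physical frequencies:

```python
# Sketch of a Fourier time embedding for the 2-D sloshing PINN: the raw
# input (x, z, t) is augmented with sin/cos pairs at (omega_1, omega_e,
# delta_omega), giving 3 + 2*3 = 9 input dimensions so the network can
# resolve all three tones. Frequency values below are placeholders.
import numpy as np

def fourier_time_embed(x, z, t, omegas):
    """Map (x, z, t) -> (x, z, t, sin(w t), cos(w t) for each w)."""
    feats = [x, z, t]
    for w in omegas:
        feats.append(np.sin(w * t))
        feats.append(np.cos(w * t))
    return np.stack(feats, axis=-1)

omega1 = 1.0             # first natural sloshing frequency (placeholder)
omega_e = 0.9 * omega1   # near-resonant excitation, omega_e/omega_1 = 0.9
domega = omega1 - omega_e  # slow beat frequency

t = np.linspace(0.0, 10.0, 5)
emb = fourier_time_embed(np.zeros_like(t), np.zeros_like(t), t,
                         (omega1, omega_e, domega))
```

The 3-D extension in the paragraph above works the same way, adding the y coordinate and three more frequency pairs to reach 16 input dimensions.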

Article
Computer Science and Mathematics
Computational Mathematics

Júlio Rocha

,

Salviano Soares

,

António Valente

,

Filipe Cabral Pinto

Abstract: This study presents an approach to forecasting water consumption using the ARIMA (Autoregressive Integrated Moving Average) method, with an additional comparison to the Holt-Winters method [1], [2], [3], [4], [5]. The work was based on a set of historical data representing the monthly water consumption of a specific area in the parish of Cambra, municipality of Vouzela, Portugal, covering a period of five years (2018-2022). Initially, the natural logarithmic transformation was applied to normalise the data [6], followed by the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test to check the stationarity of the time series [7]. Differencing was applied to achieve the necessary stationarity. The Auto-ARIMA method was used to determine the optimal parameters (p,d,q) based on the Akaike Information Criterion (AIC) [8], [9]. In addition, the Holt-Winters method was implemented directly, taking advantage of its ability to deal with non-stationary and non-normally distributed series. This method was applied with additive components and Box-Cox transformation [10], automatically incorporating the transformation and adjustment processes for seasonality and trend. Both methods were used to forecast water consumption for the 12 months of 2023. After applying Auto-ARIMA, the transformations were inverted: the differencing was reversed and the series exponentially transformed to return to the original values. The performance of both methods was assessed comparatively, using the Mean Absolute Error as a metric [11], [12]. This study contributes to the efficient management of water resources by providing a robust methodology for forecasting water consumption, with an emphasis on the detailed application of ARIMA and a complementary comparison with Holt-Winters. Throughout this study, both ARIMA and Holt-Winters are treated as statistical methods that generate models for forecasting data.
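The transform/inverse-transform pipeline described above (log-transform, first-difference, then undo both to return forecasts to the original scale) can be illustrated with synthetic data; this is our sketch, not the study's series or its Auto-ARIMA model:

```python
# Log-transform and first-difference a series for stationarity, then
# invert: rebuild levels by cumulative summation from the first observed
# log value and exponentiate. Synthetic monthly volumes, illustrative only.
import numpy as np

y = np.array([120.0, 135.0, 150.0, 142.0, 160.0, 171.0])  # monthly volumes

log_y = np.log(y)            # variance-stabilising transform
d_log_y = np.diff(log_y)     # first difference -> series the model fits

# Inversion: undo the differencing (anchor at the first log value,
# cumulative sum) and undo the log (exponentiate).
rebuilt = np.exp(np.concatenate(([log_y[0]],
                                 log_y[0] + np.cumsum(d_log_y))))
```

In the study, the same inversion is applied to Auto-ARIMA forecasts of the differenced log series rather than to the historical values themselves, with the last observed level as the anchor.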

Article
Computer Science and Mathematics
Computational Mathematics

Rakhal Das

,

Satyendra Narayan

Abstract: In this paper, we introduce and systematically study the class of interpolative Geraghty-type contractive mappings within the framework of complete bicomplex valued metric spaces (bi-CVMS). We prove seven new results: (i) a fixed point theorem for a single interpolative Geraghty contraction; (ii) a common fixed point theorem for a pair of such mappings; (iii) a fixed point theorem for interpolative Reich–Rus–Ćirić type contractions in bi-CVMS; (iv) a coincidence point and common fixed point theorem for weakly compatible maps; (v) a fixed point theorem for Jaggi-type hybrid contractions in bi-CVMS; (vi) a stability result for the Picard iteration associated with the main contraction; and (vii) an application theorem establishing the existence and uniqueness of solutions to a boundary value problem governed by a Caputo fractional differential equation. All results are furnished with complete proofs and non-trivial illustrative examples. Several well-known theorems — including those of Banach, Kannan, Reich, Geraghty, and their complex-valued analogues — follow as special cases. The paper significantly advances the fixed point theory in bicomplex valued metric spaces.

Article
Computer Science and Mathematics
Computational Mathematics

Thien Binh Nguyen

,

Nguyen Minh Hieu Pham

Abstract: This article proposes a novel conservative ConRKDG method for one-dimensional hyperbolic conservation laws with applications in computational fluid dynamics simulations. A DG local solution is reconstructed over each element based on the sub-cell solution averages with a newly proposed set of shape functions. In this way, the conservation property of the problem is naturally imposed on the numerical DG solution. In addition, the availability of finite-volume sub-cell solution averages without any DG-to-FV transformation or vice versa facilitates a direct and robust technique for detecting troubled elements, in which the unlimited DG local solution is deemed unstable. A new WENO-type smoothness measurement based on sub-cell solution averages is introduced to assess whether a DG local solution is admissible or unstable, thereby determining whether an element is good or troubled. For the latter case, a secondary finite-volume WENO method is invoked in an a posteriori phase to recalculate the sub-cell averages to sustain numerical stability by essentially suppressing non-physical spurious oscillations in the vicinity of shocks or discontinuities at troubled elements. The performance of the ConRKDG method with different secondary finite-volume WENO methods is compared for both problems with smooth solutions and those with shocks and discontinuities.

Article
Computer Science and Mathematics
Computational Mathematics

Mohammad Abu-Ghuwaleh

Abstract: We develop a new calculus for improper integrals in which the decisive object is not the meromorphic structure of the integrand in the physical variable, but a spectral measure μ generating the kernel through a Laplace orbit and a Mellin symbol Γ(z)Zμ(z;ω). The first main result is an exact phase-lifted solver for oscillatory and non-oscillatory improper integrals. The second is the Abu-Ghuwaleh hyper-residue theorem, which yields meromorphic continuation for gapped spectra and explicit pole coefficients at the negative integers. The third is a finite-part regularization theorem that turns divergent endpoint integrals into regular values of the same symbol. The framework then extends to discrete entire kernels, branch and logarithmic kernels, multiscale analytic germs, fractional lattices, continuous spectral densities, and annihilator theorems that convert spectral support constraints into ordinary or infinite-order differential equations. A benchmark section evaluates hard examples one by one, including essential-singularity kernels, oscillatory u^−1 problems, finite-part u^−2 integrals, branch kernels, nonlinear phases, Mittag–Leffler and Airy kernels, polylogarithmic kernels, and algebraic continuous-spectrum kernels. The resulting method is exact, constructive, and specifically adapted to a large family of improper integrals for which contour geometry is not the natural language.

Article
Computer Science and Mathematics
Computational Mathematics

Xuan Deng

,

Yuepeng Wang

Abstract: Sparse regularization methods play an important role in inverse problems for extracting key features of underlying parameters and have attracted increasing attention in meteorological data assimilation. However, when the condition number of the background error covariance matrix is extremely large (e.g., 10^12), the instability of the inverse problem makes accurate reconstruction difficult. To address this issue, a gradient operator is incorporated into the sparse regularization term of the cost function, and a Kalman filter (KF) algorithm is developed within a majorization–minimization (MM) framework to solve the resulting optimization problem. The problem is reformulated as a weighted least-squares problem via the MM strategy and further decomposed into two subproblems in the null space and its oblique complementary space through oblique projection, which are then solved using the KF method. This approach avoids the use of an adjoint model typically required in four-dimensional variational data assimilation (4D-Var). In addition, a modified f-slope strategy with a constrained search interval is introduced to adaptively select the regularization parameter during computation. Numerical experiments on the initial-condition inversion of the advection–diffusion equation demonstrate that the proposed method achieves more accurate reconstruction of key features than the l1-norm regularized 4D-Var method. The inversion errors remain low even when the condition number ranges from 10^8 to 10^14, with relative MSE and MAE below 0.01 and relative bias below 0.005, indicating improved robustness and reconstruction accuracy.

Article
Computer Science and Mathematics
Computational Mathematics

Yazeed Mohammed Al-Olofi

Abstract: We present a unified hierarchical theory of brain dynamics derived entirely from first principles. The foundation is a geometric principle: any self‑similar hierarchical system seeking maximal harmony must satisfy Euclid's equation, whose unique solution is the golden ratio Φ ≈ 1.618. This geometric principle is embodied biologically in an efficiency functional balancing information transfer, spectral interference, and dynamical stability, which also yields Φ as the optimal frequency spacing between adjacent bands. From this single seed we sequentially derive eleven theorems that together form a complete mathematical pyramid. Theorem 0 establishes the Euclidean geometric principle. Theorem 1 proves the optimality of Φ in the biological context. Theorem 2 determines the number of frequency bands N = 7 from the biological range (0.5–200 Hz) and stability analysis. Theorem 3 introduces the control parameter β ∈ [0,1] regulating information flow direction, with critical values Φ⁻¹ ≈ 0.618 and Φ⁻² ≈ 0.382 from bifurcation analysis. Theorem 4 derives the optimal coupling coefficients κ₀ = ½Φ⁻¹ ≈ 0.309 from an information‑energy trade‑off. Theorem 5 gives the optimal phase shifts ϕ↑ = π/4, ϕ↓ = –π/4 from time‑reversal symmetry and interference minimization. Theorem 6 reveals 28 attractors (4 per band) with elementary geometric forms (cube, hexagon, pentagon, square, triangle, spiral, point) via group‑theoretic analysis. Theorem 7 provides analytical phase‑amplitude coupling (PAC) values as simple functions of Φ. Theorem 8 establishes the linear correlation between mean PAC and Φ‑coherence. Theorem 9 derives the temporal decrease of PA‑FCI before acute events from critical transition theory. Theorem 10 yields the universal warning threshold PA‑FCIₜₕ = 0.55 from critical slowing‑down analysis. Theorem 11 gives the linear PA‑FCI formula with theoretically derived weights. 
Numerical simulations of the full nonlinear system confirm all derivations with deviations below 0.3%. The model unifies geometry, physics, and biology, demonstrating that the brain's hierarchical organization follows from a single geometric principle.
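The "Euclid's equation" of Theorem 0 presumably refers to division in extreme and mean ratio (Euclid, Elements VI, Def. 3); on that reading, the derivation of Φ is elementary:

```latex
% Division in extreme and mean ratio: the whole is to the larger part
% as the larger part is to the smaller, (a+b)/a = a/b =: \Phi.
\frac{a+b}{a} = \frac{a}{b} = \Phi
\;\Longrightarrow\; 1 + \frac{1}{\Phi} = \Phi
\;\Longrightarrow\; \Phi^2 - \Phi - 1 = 0
\;\Longrightarrow\; \Phi = \frac{1+\sqrt{5}}{2} \approx 1.618.
```

The critical values cited in Theorem 3 then follow algebraically: Φ⁻¹ = Φ − 1 ≈ 0.618 and Φ⁻² = 1 − Φ⁻¹ ≈ 0.382.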

Article
Computer Science and Mathematics
Computational Mathematics

Miguel Angel Larruz-Medina

,

José Rigoberto Gabriel-Argüelles

,

Eloisa Benitez-Mariño

,

Alberto Santamaria-Pang

Abstract: In this paper, we present a scheme for approximating the support and optimum value of an optimal measure for the Monge-Kantorovich (MK) mass transfer problem. Obtaining exact solutions to the MK problem is difficult; such solutions are only found in a few specific cases. Using an algorithm to approximate the optimum value is computationally expensive, particularly in high-dimensional or large-scale scenarios. To address this challenge, we developed an innovative method that integrates wavelet theory and multiresolution analysis. This method uses wavelet-based techniques to approximate the support of an optimal measure, further reducing the number of variables in the linear programs and thus decreasing the dimensionality and computational complexity of each step of the scheme. This method generates a sequence of transport problems with optimal measures of finite support. We then demonstrate that the optimum values of the transport problems converge to the optimum value of the MK problem and that the supports of the finite optimal measures converge to the support of an optimal measure for the MK problem. We present some numerical experiments to demonstrate the efficiency of the scheme. We observe that the method has potential applications in various fields, such as image processing, economics, resource allocation, and machine learning, where finding efficient solutions to large-scale optimal transportation problems is essential.

Article
Computer Science and Mathematics
Computational Mathematics

Mohammad Abu-Ghuwaleh

,

Samir Brahim Belhaouari

Abstract: We study learnable spectral layers whose feature family is generated by an entire function, motivated by the Master Integral Transform (MIT) of [1]. In a periodic discrete model on T, we define an oversampled multi-β analysis operator A(M) g,K built from an entire generator g and show that, on the one-sided bandlimited subspace PWK (T), it is a tight frame with an explicit inverse, sharp frame bounds, and noise-stability constants governed solely by the Taylor coefficients of g. For general (non-bandlimited) signals we derive exact aliasing identities and quantify two de-aliasing mechanisms: a deterministic multi-resolution cancellation scheme and a randomized rotation estimator with unbiasedness, MSE = O(1 /m), and high-probability bounds. We extend the discrete theory to Td, allowing general multivariate entire generators G(z) = ∑α∈Nd cαzα, and obtain exact inversion and conditioning bounds on tensor bands PW(d) K (Td) with explicit constants. To connect discrete layers to continuum MIT injectivity, we formalize density control of active Taylor indices. We prove that bounded gaps imply positive one-sided interior Beurling–Malliavin density (hence, in particular, positive lower counting density), closing an end-to-end bridge from gap-regularized learning to the injectivity theorem of [1]. For bounded-gap sequences we also give a weighted-series characterization of strong a-regularity, yielding computable surrogate penalties. Finally, we prove two injectivity mechanisms that do not rely on density: (i) a β-analytic injectivity theorem (access to multiple β-channels near 0) for any nonconstant entire kernel, and (ii) a finite-band generic-shift result ensuring invertibility on PWK for nonpolynomial generators. Full-data experiments illustrate conditioning collapse without coefficient floors and confirm the predicted 1 /m de-aliasing variance decay. Contributions (take-home). 
(i) Certified band-invertible spectral layers: exact inversion + frame bounds + noise stability on PW_K(T) and PW^(d)_K(T^d) with constants controlled by Taylor coefficients; (ii) Provable de-aliasing: deterministic multi-resolution cancellation and randomized rotation Monte Carlo with unbiasedness, MSE = O(1/m), and high-probability bounds; (iii) A closed discrete-to-continuum bridge: bounded-gap activity ⇒ DBM > 0 and hence the lower counting density required by MIT injectivity; (iv) Beyond-density injectivity + nonharmonic structure: two density-free injectivity mechanisms and a monomial rigidity principle. Applications. These results yield MIT-certified modules for machine learning and scientific computing: invertible FFT-like embeddings for grid data, learnable positional encodings, and drop-in replacements for Fourier blocks in neural operators with explicit conditioning control and provable de-aliasing.
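The abstract's MSE = O(1/m) claim for the randomized rotation estimator rests on the standard Monte Carlo argument: averaging m i.i.d. unbiased draws divides the variance by m. The sketch below is a generic illustration of that scaling with a toy Gaussian-noise estimator, not the paper's actual rotation scheme; `noisy_estimate` and its noise model are assumptions for demonstration only.

```python
import random
import statistics

def noisy_estimate(true_value, noise_scale=1.0, rng=random):
    # One unbiased draw: the true value plus zero-mean Gaussian noise.
    return true_value + rng.gauss(0.0, noise_scale)

def averaged_estimator(true_value, m, rng=random):
    # Average of m i.i.d. unbiased draws; its variance scales as 1/m.
    return sum(noisy_estimate(true_value, rng=rng) for _ in range(m)) / m

def empirical_mse(true_value, m, trials=2000, seed=0):
    # Monte Carlo estimate of the mean squared error at averaging level m.
    rng = random.Random(seed)
    errs = [(averaged_estimator(true_value, m, rng) - true_value) ** 2
            for _ in range(trials)]
    return statistics.fmean(errs)

if __name__ == "__main__":
    for m in (1, 4, 16, 64):
        # MSE shrinks roughly like 1/m as m grows.
        print(m, empirical_mse(2.5, m))
```

With unit noise, the empirical MSE sits near 1/m for each m, matching the predicted de-aliasing variance decay.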

Article
Computer Science and Mathematics
Computational Mathematics

Basker Palaniswamy

Abstract: In 1972, computer scientist Richard Karp made a remarkable discovery: twenty-one very different problems—from routing networks and planning schedules to packing items efficiently—are all equally difficult in a deep mathematical sense. These problems are now called NP-complete, and for more than fifty years researchers have shown their connection by carefully transforming one problem into another step by step. While this approach proves that the problems are related, it often hides the bigger picture of why they share the same level of difficulty. This paper proposes a new way of understanding these problems through geometry. We introduce the Karp Algebraic Reduction Manifold Architecture (KARMA), a framework that places all 21 problems inside a single mathematical “landscape.” In this landscape, each problem describes a different region of the same terrain of computational difficulty, and moving from one problem to another becomes like traveling smoothly across this terrain. The framework naturally groups the problems into three families—graph-theoretic, set-theoretic, and number-theoretic problems. In this geometric interpretation, distances represent how difficult it is to transform one problem into another, while the curvature of the landscape reflects their inherent computational hardness. By revealing this hidden geometric structure, the KARMA framework provides a new perspective on computational complexity. Instead of studying hard problems individually, researchers can explore the entire landscape of computational difficulty at once, potentially inspiring new algorithms, better hardness predictions, and intelligent systems that can automatically reason about problem transformations.
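The step-by-step transformations the abstract refers to are Karp reductions. A classic example from Karp's original list, not specific to the KARMA framework, is that CLIQUE and INDEPENDENT-SET map onto each other via the graph complement: G has a k-clique exactly when the complement of G has a k-element independent set. A minimal brute-force check on a toy graph (the graph `G` below is an illustrative assumption):

```python
from itertools import combinations

def is_clique(graph, nodes):
    # Every pair in `nodes` must be adjacent.
    return all(v in graph[u] for u, v in combinations(nodes, 2))

def is_independent(graph, nodes):
    # No pair in `nodes` may be adjacent.
    return all(v not in graph[u] for u, v in combinations(nodes, 2))

def complement(graph):
    # Flip every non-loop edge/non-edge.
    verts = set(graph)
    return {u: (verts - {u}) - set(graph[u]) for u in graph}

def has_clique(graph, k):
    return any(is_clique(graph, c) for c in combinations(graph, k))

def has_independent_set(graph, k):
    return any(is_independent(graph, c) for c in combinations(graph, k))

# Toy graph: triangle 0-1-2 plus a pendant vertex 3 attached to 2.
G = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}

# Karp reduction: G has a k-clique  <=>  complement(G) has a k-independent-set.
for k in range(1, 5):
    assert has_clique(G, k) == has_independent_set(complement(G), k)
```

Here the reduction is a polynomial-time rewriting of one instance into another; KARMA's proposal is to view the whole family of such rewritings geometrically rather than one pair at a time.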

Article
Computer Science and Mathematics
Computational Mathematics

Basem Ajarmah, Saber Syouri

Abstract: Managing inventory for perishable goods remains a persistent operational challenge, largely because conventional exponential decay models struggle to capture the irregular deterioration patterns observed in practice. This paper develops the Reliable Fractional Derivative (RFD) framework, which incorporates memory effects into the modeling of product decay through a time-shifted kernel. Unlike standard approaches that assume constant deterioration, this formulation accommodates both accelerating and decelerating patterns depending on product characteristics and storage conditions. We derive closed-form expressions for optimal ordering quantities under both deterministic and stochastic demand, then test the framework's performance through numerical experiments spanning two thousand parameter combinations. The analysis reveals that RFD models deliver the greatest improvements when deterioration rates are steep, holding costs are substantial, or storage horizons are extended—conditions under which switching from conventional methods yields average cost reductions approaching nineteen percent, with substantially larger gains in certain cases. A pharmaceutical application confirms savings between 3.6 and 9.1 percent relative to misspecified traditional models. These findings connect with recent industry movements toward more sophisticated safety-stock practices, offering managers a principled basis for selecting inventory policies aligned with actual product behavior rather than assuming decay conforms to simpler theoretical forms.
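The contrast the abstract draws, constant-rate exponential decay versus memory-dependent deterioration, can be illustrated with a standard fractional-calculus substitute: replacing exp(-λt) by the Mittag-Leffler function E_α(-λ t^α), which recovers the exponential model at α = 1. This is a generic one-parameter sketch of a memory kernel, not the paper's specific RFD time-shifted kernel; the parameter values are assumptions.

```python
import math

def mittag_leffler(z, alpha, terms=60):
    # Truncated series E_alpha(z) = sum_k z**k / Gamma(alpha*k + 1);
    # adequate for moderate |z| and 0 < alpha <= 1.
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

def inventory_level(initial, lam, t, alpha=1.0):
    # Stock surviving to time t under memory-kernel decay.
    # alpha = 1 recovers the classical model: initial * exp(-lam * t).
    return initial * mittag_leffler(-lam * t ** alpha, alpha)

if __name__ == "__main__":
    for t in (0.5, 1.0, 2.0):
        classical = inventory_level(100, 0.3, t, alpha=1.0)
        memory = inventory_level(100, 0.3, t, alpha=0.7)
        print(f"t={t}: exponential={classical:.2f}  Mittag-Leffler(0.7)={memory:.2f}")
```

Fitting α to observed spoilage data, rather than fixing α = 1, is the kind of model flexibility that drives the cost reductions the abstract reports for steep-deterioration, high-holding-cost regimes.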



Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2026 MDPI (Basel, Switzerland) unless otherwise stated