Computer Science and Mathematics

Article
Computer Science and Mathematics
Mathematics

Rômulo Damasclin Chaves dos Santos

,

Jorge Henrique de Oliveira Sales

Abstract: We develop a unified and mathematically rigorous framework for Fractional Spectral Degeneracy Operators (FSDOs), a broad class of anisotropic non-local operators that combine fractional diffusion with spatially dependent degeneracy of variable strength. This formulation generalizes classical Spectral Degeneracy Operators by allowing degeneracy exponents θ_i ∈ (0, 2), thereby capturing a continuum of diffusion regimes ranging from mildly singular behavior to near-critical ultra-degeneracy. Motivated by applications in anomalous transport, intermittent turbulence, and heterogeneous or fractal media, we introduce weighted fractional Sobolev spaces tailored to the anisotropic metric generated by the degeneracy. Within this setting, we establish fractional Hardy and Poincaré inequalities that guarantee coercivity and control of the associated bilinear forms. Building on these foundations, we prove essential self-adjointness, compact resolvent, and a complete spectral decomposition for FSDOs. A detailed heat kernel parametrix is constructed using anisotropic pseudo-differential calculus, yielding sharp small-time asymptotics and, through a Tauberian argument, a fractional Weyl law whose exponent depends explicitly on the dominant degeneracy direction. We further obtain Bessel-type expansions for eigenfunctions near the singular locus and derive fractional Landau inequalities that encode an uncertainty principle adapted to the weighted fractional geometry. As an application, we introduce Fractional SDO-Nets, a class of neural operators whose layers incorporate the inverse FSDO. These architectures inherit stability, non-locality, and anisotropic scaling from the underlying operator and provide a principled mechanism for learning fractional and degenerate diffusion phenomena from data.
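As a purely illustrative companion to the Fractional SDO-Net idea (not the authors' construction), the Python sketch below applies the inverse of a toy one-dimensional weighted fractional operator inside a neural-network-style layer. The grid size, the fractional exponent s, and the degeneracy weight w(x) are placeholder assumptions; a genuine FSDO is anisotropic and multi-dimensional.

```python
import numpy as np

# Toy stand-in for an inverse-FSDO layer: L = w^{1/2} (-Delta_h)^s w^{1/2} on a 1-D grid.
n, s = 64, 0.75                       # grid size and fractional exponent (assumed values)
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

# Discrete Dirichlet Laplacian and its fractional power via eigendecomposition.
lap = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
evals, evecs = np.linalg.eigh(lap)
frac_lap = evecs @ np.diag(evals**s) @ evecs.T

# Spatially dependent degeneracy weight (placeholder choice vanishing near x = 1/2).
w = np.abs(x - 0.5) ** 0.4 + 1e-3
L = np.diag(np.sqrt(w)) @ frac_lap @ np.diag(np.sqrt(w))

def inverse_fsdo_layer(u, weight_matrix, bias):
    """Apply L^{-1} (a non-local smoothing step), then an affine map and a nonlinearity."""
    z = np.linalg.solve(L, u)
    return np.tanh(weight_matrix @ z + bias)

rng = np.random.default_rng(0)
u0 = rng.standard_normal(n)
out = inverse_fsdo_layer(u0, rng.standard_normal((n, n)) / np.sqrt(n), np.zeros(n))
print(out.shape)
```

A genuine Fractional SDO-Net would replace the toy operator with the anisotropic FSDO of the paper and learn the affine parameters from data.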
Article
Computer Science and Mathematics
Mathematics

Junyan Huang

Abstract: In recent years, deep convolutional neural networks (DCNNs) have demonstrated remarkable success in approximating functions with multiple features. However, several challenges remain unresolved, including the approximation of target functions in Sobolev spaces defined on the unit sphere and the extension of the admissible types of intrinsic functions. To address these issues, we propose a DCNN architecture with multiple downsampling layers to approximate multi-feature functions in Sobolev spaces on the unit sphere. Our method facilitates automatic feature extraction without requiring prior knowledge of the underlying composite structure and alleviates the curse of dimensionality in function approximation by extracting general smooth and spherical polynomial features. Compared with previous approaches, the proposed DCNN architecture is more effective in capturing a variety of features.
Article
Computer Science and Mathematics
Mathematics

Cheng-Ting Wang

Abstract: In this paper, we show that if a, b, c are positive integers such that c = a + b, gcd(a, b, c) = 1, and rad(abc) < c, then c cannot be square-free, at most one of a and b is square-free, and the square-free element must be the smallest member of the triple. We also establish an explicit upper bound for the quality of abc triples and discuss some basic properties of the radical function in general.
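To fix notation, here is a short Python sketch of the radical function and the usual quality measure q(a, b, c) = log c / log rad(abc) from the abc literature; the precise bound proved in the paper is not reproduced, and the quality definition is assumed rather than quoted.

```python
from math import gcd, log

from sympy import factorint

def rad(n):
    """Radical of n: the product of its distinct prime factors."""
    r = 1
    for p in factorint(n):
        r *= p
    return r

def quality(a, b, c):
    """Standard quality q(a, b, c) = log c / log rad(abc) of an abc triple."""
    return log(c) / log(rad(a * b * c))

# A classic example: 1 + 8 = 9 with rad(1 * 8 * 9) = 6 < 9 and quality ~ 1.2263.
a, b, c = 1, 8, 9
assert c == a + b and gcd(gcd(a, b), c) == 1
print(rad(a * b * c), quality(a, b, c))
```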
Article
Computer Science and Mathematics
Mathematics

Agon Mahmuti

,

Amela Muratović Ribić

,

Xhevdet Thaqi

Abstract: This study investigates the teaching and learning of numerical sequences in upper secondary education by addressing students’ difficulties in identifying patterns, constructing general terms, and linking abstract ideas to meaningful contexts. Conducted with 48 informatics-profile students at IAAP “Andrea Durrsaku” in Kamenica, the research evaluates whether integrating real-life contextual tasks, Python programming, and AI-assisted code generation can enhance conceptual understanding. A quasi-experimental mixed-methods design was used, involving an experimental group (n = 25) that engaged with contextual problems supported by Python visualizations and AI tools, and a control group (n = 23) that followed traditional textbook-based instruction. Data from pre-tests and post-tests were analyzed using SPSS, applying Paired Samples t-tests to measure within-group progress, Independent Samples t-tests to compare the two groups, and ANCOVA to control for baseline differences and examine the effect of the intervention. Results indicate that traditional teaching often limits students’ reasoning due to the abstract presentation of sequences. In contrast, students in the experimental group showed significant improvement in recognizing patterns, verifying results, and understanding sequence structures. They also reported higher motivation, clearer visualization, and increased confidence when supported by programming and AI. Overall, the findings demonstrate that combining contextualized tasks with computational tools provides an effective approach for strengthening mathematical understanding while developing essential digital competencies.
Article
Computer Science and Mathematics
Mathematics

Sanjar M. Abrarov

,

Rehan Siddiqui

,

Rajinder Kumar Jagpal

,

Brendan M. Quine

Abstract: In this work, we consider four theorems that can be used to prove the irrationality of $\pi$. These theorems are related to nested radicals with roots of $2$ of the kind $c_k = \sqrt{2 + c_{k - 1}}$ with $c_0 = 0$. Sample computations are presented, showing how the rational approximations tend to $\pi$ as the integer $k$ increases.
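The radicals satisfy c_k = 2 cos(π/2^{k+1}), so a standard way to extract approximations of π from them is 2^{k+1}·√(2 − c_k) → π. Whether this matches the paper's specific rational approximants is not stated in the abstract, so the Python sketch below is only an illustrative stand-in.

```python
from decimal import Decimal, getcontext

getcontext().prec = 50

def pi_from_nested_radicals(k):
    """Since c_k = 2*cos(pi / 2^(k+1)) with c_0 = 0, we have 2^(k+1) * sqrt(2 - c_k) -> pi."""
    c = Decimal(0)
    for _ in range(k):
        c = (2 + c).sqrt()
    return Decimal(2) ** (k + 1) * (2 - c).sqrt()

for k in (5, 10, 20, 30):
    print(k, pi_from_nested_radicals(k))   # approximations improve as k grows
```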
Article
Computer Science and Mathematics
Mathematics

Yue Guan

,

Yaoshun Fu

,

Xiangtao Meng

Abstract: Formal verification has achieved remarkable outcomes in both theory advancement and engineering practice, with the formalization of mathematical theories serving as its foundational cornerstone, making this process particularly critical. Axiomatic set theory underpins modern mathematics, providing the rigorous basis for constructing almost all theories. Landau's Foundations of Analysis starts from purely logical axioms of set theory, does not rely on geometric intuition, strictly constructs the number systems, and is a benchmark for axiomatic analysis in modern mathematics. In this paper, we first develop a machine proof system for axiomatic set theory rooted in the Morse-Kelley (MK) system. This system encompasses proof automation, scale simplification, and specialized handling of the Classification Axiom for ordered pairs. We then prove the Transfinite Recursion Theorem, leveraging it to further prove the Recursion Theorem for natural numbers, the key result for defining natural number operations. Finally, we detail the implementation of a machine proof system for analysis, which adopts MK as its description language and adheres to Landau's Foundations of Analysis. This formalization can be ported relatively seamlessly to type theory-based real analysis. Implemented using the Rocq proof assistant, the formalization has undergone verification to ensure completeness and consistency. This work holds broader applicability: it can be extended to the formalization of point-set topology and abstract algebra, while also serving as a valuable resource for teaching axiomatic set theory and mathematical analysis.
Article
Computer Science and Mathematics
Mathematics

Yuxuan Zhang

,

Weitong Hu

,

Wei Zhang

Abstract: We construct a finite-dimensional Z3-graded Lie superalgebra of dimensions (12,4,3), featuring a grade-2 sector that obeys a cubic bracket relation with the fermionic sector. This induces an emergent triality symmetry cycling the three components. The full set of graded Jacobi identities is verified analytically in low dimensions and numerically in a faithful 19-dimensional matrix representation, with residuals ≤ 8 × 10⁻¹³ over 10⁷ random tests. Explicit quadratic and cubic Casimir operators are computed, with proofs of centrality, and the adjoint representation is shown to be anomaly-free. The algebra provides a minimal, closed extension beyond conventional Z2 supersymmetry and may offer an algebraic laboratory for models with ternary symmetries.
Article
Computer Science and Mathematics
Mathematics

Victor Ilyutko

,

Dmitrii Kamzolkin

,

Vladimir Ternovski

Abstract: We study a minimum-time (time-optimal) control problem for a nonlinear pendulum-type oscillator, in which the control input is the system’s natural frequency constrained to a prescribed interval. The objective is to transfer the oscillator from a given initial state to a prescribed terminal state in the shortest possible time. Our approach combines Pontryagin’s maximum principle with Bellman’s principle of optimality. First, we decompose the original problem into a sequence of auxiliary problems, each corresponding to a single semi-oscillation. For every such subproblem, we obtain a complete analytical solution by applying Pontryagin’s maximum principle. These results allow us to reduce the global problem of minimizing the transfer time between the prescribed states to a finite-dimensional optimization problem over a sequence of intermediate amplitudes, which is then solved numerically by dynamic programming. Numerical experiments reveal characteristic features of optimal trajectories in the nonlinear regime, including a non-periodic switching structure, non-uniform semi-oscillation durations, and significant deviations from the behavior of the corresponding linearized system. The proposed framework provides a basis for the synthesis of fast oscillatory regimes in systems with controllable frequency, such as pendulum and crane systems and robotic manipulators.
Article
Computer Science and Mathematics
Mathematics

Raoul Bianchetti

Abstract: We propose a mathematical framework for Viscous Time Theory (VTT), exploring how spacetime geometry could emerge from discrete informational events. We formulate a variational principle for an informational coherence field Φ, constrained by discrete informational anchors, and derive a Lagrangian governing coherence propagation and dissipation. From its second variation, we introduce an informational coherence tensor ΔC_{μν} that is symmetric and positive-semidefinite, enabling the definition of a candidate emergent metric g_{μν}^{(C)} without presupposing geometry. Within this framework, curvature is interpreted as a response to coherence gradients, and a Raychaudhuri-type focusing identity is obtained, parameterized by informational viscosity η. A preliminary discrete-to-continuum numerical demonstration supports convergence of the coherence field and the appearance of non-trivial curvature patterns under sparse informational constraints. These results suggest that classical geometric structure may be understood as a stable organizational property of informational interactions. Possible implications – including informational interpretations of curvature phenomena and future coupling to biological and physical systems – are outlined as directions for further investigation.
Article
Computer Science and Mathematics
Mathematics

Kazuhito Owada

Abstract: The Collatz Conjecture remains one of the most enduring unsolved problems in mathematics, despite being based on an extraordinarily simple rule. Given any natural number n, the conjecture posits that repeatedly applying the operation (dividing by 2 if even, or multiplying by 3 and adding 1 if odd) will eventually result in the number 1. This paper develops a structural perspective by proposing the Collatz Tree as a framework to organize and visualize natural numbers. Each branch is the geometric ray {k·2^b} for an odd core k, and the trunk is the ray from 1. We introduce a trunk–branch indexing that bijects N with Z≥0 × Z≥0. Algebraically, we encode Collatz steps as affine maps and prove the absence of nontrivial finite cycles for a three-way map T. Through a bridge theorem, this implies the same for the standard accelerated map A(n) = (3n+1)/2^ν₂(3n+1) on odd integers. Thus, the global Collatz convergence reduces to an independent pillar: the coverage (reachability) of the inverse tree rooted at 1, isolating cycle-freeness from coverage and reducing the conjecture to the remaining reachability problem. This framework provides a unified algebraic and graph-theoretic foundation for future Collatz research.
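For concreteness, the following Python sketch implements the accelerated map A(n) = (3n+1)/2^{ν₂(3n+1)} on odd integers and the odd-core decomposition n = k·2^b used for the branches; the paper's specific trunk–branch bijection is not reproduced here.

```python
def nu2(m):
    """2-adic valuation: the largest e with 2^e dividing m."""
    e = 0
    while m % 2 == 0:
        m //= 2
        e += 1
    return e

def accelerated(n):
    """Accelerated Collatz map A(n) = (3n + 1) / 2^{nu2(3n + 1)} on odd n."""
    m = 3 * n + 1
    return m >> nu2(m)

def odd_core_and_height(n):
    """Write n = k * 2^b with k odd: k is the branch's odd core, b its position on the ray."""
    b = nu2(n)
    return n >> b, b

n, orbit = 27, [27]
while n != 1:
    n = accelerated(n)
    orbit.append(n)
print(orbit[:10], "... reaches 1 in", len(orbit) - 1, "accelerated steps")
print(odd_core_and_height(96))   # -> (3, 5) since 96 = 3 * 2^5
```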

Review
Computer Science and Mathematics
Mathematics

Shizhan Lu

Abstract: Hesitant fuzzy set theory serves as a valuable framework that has been extensively applied across various domains, including decision-making, attribute reduction, and linguistic perception, among others. Hesitant fuzzy elements are discrete arrays, and the intersection and union operations for hesitant fuzzy sets differ from those defined for fuzzy sets. Consequently, certain erroneous propositions have emerged in the literature on hesitant fuzzy sets. This review examines some incorrect propositions found in studies related to hesitant fuzzy topological spaces, hesitant fuzzy approximation spaces, and hesitant fuzzy algebra, and provides a corresponding counterexample for each incorrect proposition. The advancement of a mathematical knowledge system must be free from errors, as inaccuracies can compromise the integrity of the theoretical framework. It is essential that researchers rigorously scrutinize the flawed propositions identified in this work when further investigating hesitant fuzzy sets and their mathematical structures, thereby promoting the robust development of hesitant fuzzy set theory.
Article
Computer Science and Mathematics
Mathematics

Michael Cody

Abstract: Horizontal monotonicity of the Riemann ξ-function is established for almost every ordinate. The principal result (Theorem A) proves, unconditionally, that for any fixed ε ∈ (2/3, 1), the modulus |ξ(σ + it)| attains its global minimum at σ = 1/2 within a corridor of width c/(log t)^{1−ε} for all sufficiently large t. Under a thin-strip zero-density hypothesis, the corridor narrows to the microscopic scale c/log t while maintaining density-one coverage (Theorem B). Under the stronger DZ(α) hypothesis, monotonicity extends globally, and by the Sondow–Dumitrescu equivalence this entails the Riemann Hypothesis (Theorem C). The underlying mechanism is a persistent sign barrier in ∂_σ log |ξ|: on-line zeros generate drift ≍ ∆ log t, whereas off-line zeros contribute O(∆(log t)^{1−κ}) conditionally or are suppressed by e^{−A(log t)^ε} unconditionally. High-precision computations at 80 digits confirm that all analytic predictions hold across the tested range.
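Readers who want to probe the claim numerically can evaluate the completed xi-function ξ(s) = ½ s(s−1) π^{−s/2} Γ(s/2) ζ(s) with mpmath and scan |ξ(σ + it)| along a horizontal segment, as in the sketch below; the corridor widths and the 80-digit computations reported in the paper are not reproduced. By the functional equation ξ(s) = ξ(1 − s), the scanned profile is symmetric about σ = 1/2.

```python
import mpmath as mp

mp.mp.dps = 30

def xi(s):
    """Completed Riemann xi-function: xi(s) = 1/2 * s * (s - 1) * pi^(-s/2) * Gamma(s/2) * zeta(s)."""
    return 0.5 * s * (s - 1) * mp.pi ** (-s / 2) * mp.gamma(s / 2) * mp.zeta(s)

t = 1000.0
sigmas = [0.30 + 0.01 * j for j in range(41)]              # scan sigma over [0.30, 0.70]
mods = [abs(xi(mp.mpc(sig, t))) for sig in sigmas]
best = sigmas[mods.index(min(mods))]
print("minimum of |xi(sigma + 1000i)| on the grid is attained near sigma =", best)
```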
Article
Computer Science and Mathematics
Mathematics

Noboru Sagae

Abstract: We present a geometric formulation of entropic free-energy minimization as Riemannian gradient descent on Lie-group orbits endowed with the Fisher information metric. This approach reveals how symmetry structures constrain the dynamics of information and entropy reduction, linking variational inference to geometric thermodynamics. We establish well-posedness, Lyapunov monotonicity, and convergence theorems, and derive a second-variation criterion explaining entropic symmetry breaking and bifurcations. Examples on Gaussian families under translations and rotations illustrate the interplay between group invariance and adaptive stability. The results provide a unified view connecting information geometry, thermodynamics, and the Free Energy Principle through a group-theoretic lens.
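A minimal numerical sketch, assuming a one-dimensional Gaussian family N(μ, σ²): natural-gradient (Fisher-metric) descent on the KL divergence to a fixed target Gaussian. It is meant only to illustrate the kind of Fisher–Riemannian gradient flow the abstract refers to, not the paper's Lie-group orbit construction.

```python
import math

def kl_gauss(mu, sigma, m, s):
    """KL( N(mu, sigma^2) || N(m, s^2) ), the free-energy-like objective being minimized."""
    return math.log(s / sigma) + (sigma**2 + (mu - m)**2) / (2 * s**2) - 0.5

def natural_gradient_step(mu, sigma, m, s, eta=0.1):
    """One Fisher-metric step: the Fisher matrix in (mu, sigma) coordinates is diag(1/sigma^2, 2/sigma^2)."""
    g_mu = (mu - m) / s**2                   # Euclidean gradient wrt mu
    g_sigma = -1.0 / sigma + sigma / s**2    # Euclidean gradient wrt sigma
    mu -= eta * sigma**2 * g_mu              # premultiply by the inverse Fisher metric
    sigma -= eta * (sigma**2 / 2) * g_sigma
    return mu, sigma

mu, sigma = 4.0, 3.0      # initial Gaussian
m, s = 0.0, 1.0           # target Gaussian (the minimizer)
for _ in range(200):
    mu, sigma = natural_gradient_step(mu, sigma, m, s)
print(mu, sigma, kl_gauss(mu, sigma, m, s))   # converges toward (0, 1) with KL -> 0
```

For a small enough step size the KL objective decreases monotonically along these iterations, which mirrors the Lyapunov monotonicity the abstract establishes in the general setting.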
Article
Computer Science and Mathematics
Mathematics

Michael Aaron Cody

Abstract: The divisibility of integer shifts by multiplicative functions has long been a central topic in analytic number theory, originating from Lehmer’s study of φ(n) | n − 1 and later refinements on φ(n) | n + a. Alford, Granville, and Pomerance (1994) noted that divisibility relations of the form λ(n) | n + a for fixed a > 1 remained beyond the reach of existing methods. This paper addresses this open problem, presenting strong computational and partial analytic evidence that such n form a set of density zero unconditionally, and proving finiteness under standard equidistribution assumptions (e.g., the Elliott–Halberstam conjecture). For every fixed integer a ≥ 2, the relation λ(n) | n + a holds for only finitely many positive integers n under these assumptions. The proof combines analytic and structural techniques, using the Bombieri–Vinogradov theorem to control the distribution of small prime powers in p_i − 1, together with a valuation obstruction forcing non-divisibility for all sufficiently large ω(n). A novel exceptional schema describes the only remaining possibility, where all primes p_i satisfy p_i ≡ 1 (mod M) with M | (a + 1) and p_i − 1 squarefree, which is proven to yield at most one prime for each modulus. Together these results yield a complete classification of λ(n) | n + a across integer shifts, including the contrasting abundance of the a = 1 case and the existence of infinite families for negative shifts. The methods extend naturally to other multiplicative functions, suggesting a broader framework for shifted divisibility phenomena.
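A brute-force Python sketch of the relation λ(n) | n + a, with the Carmichael function λ assembled directly from the prime factorization (via sympy's factorint); the search bounds and the shifts shown are arbitrary illustrative choices.

```python
from math import lcm

from sympy import factorint

def carmichael(n):
    """Carmichael function lambda(n): lcm of lambda(p^k) over the prime powers dividing n."""
    parts = []
    for p, k in factorint(n).items():
        if p == 2 and k >= 3:
            parts.append(2 ** (k - 2))
        else:
            parts.append(p ** (k - 1) * (p - 1))
    return lcm(*parts) if parts else 1

def hits(a, limit):
    """All n <= limit with lambda(n) | n + a."""
    return [n for n in range(1, limit + 1) if (n + a) % carmichael(n) == 0]

print(hits(1, 200))       # the a = 1 case is comparatively abundant
print(hits(2, 10_000))    # for a >= 2 the paper predicts only finitely many such n
```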
Article
Computer Science and Mathematics
Mathematics

B. B. Upadhyay

,

Arnav Ghosh

,

I. M. Stancu-Minasian

,

Andreea Mădălina Rusu-Stancu

Abstract: In this article, we investigate nonsmooth multiobjective mathematical programming problems with equilibrium constraints (NMMPEC) in the framework of Hadamard manifolds. Corresponding to (NMMPEC), the generalized Guignard constraint qualification (GGCQ) is introduced in the Hadamard manifold setting. Further, Karush-Kuhn-Tucker (KKT) type necessary criteria of Pareto-efficiency are derived for (NMMPEC). Subsequently, we introduce several (NMMPEC)-tailored constraint qualifications. We establish several interesting interrelations between these constraint qualifications. Moreover, we deduce that these constraint qualifications are sufficient conditions for (GGCQ). We have furnished non-trivial numerical examples in the setting of some well-known manifolds to illustrate the significance of our results. To the best of our knowledge, constraint qualifications and optimality conditions for (NMMPEC) have not yet been studied in the Hadamard manifold setting.
Article
Computer Science and Mathematics
Mathematics

Yueshui Lin

Abstract: This paper introduces Jiuzhang Constructive Mathematics (JCM), a novel mathematical framework that systematically incorporates finite approximation and computational realizability as foundational principles. The framework addresses the disconnect between classical mathematics with its reliance on actual infinity and computational practice with its finite resource constraints. JCM is built upon three carefully formulated axioms: the Finite Approximation Axiom ensuring effective Cauchy convergence, the Computable Operations Axiom requiring uniform polynomial-time computability with consistent encoding schemes, and the Categorical Realizability Axiom providing semantic interpretation in a rigorously constructed enriched realizability topos. We construct the JCM universe J as a locally Cartesian closed category supporting intuitionistic higher-order logic, with detailed proofs of all categorical properties. A key technical contribution is the resolution of encoding size consistency between approximation sequences and complexity classes through careful design of finite structure representations. The framework provides faithful embeddings of Bishop’s constructive analysis while maintaining explicit computational content. We establish comprehensive complexity theory with precise relationships between JCM complexity classes and their classical counterparts, and discuss limitations regarding non-polynomial-time computable functions and classical non-constructive principles.
Article
Computer Science and Mathematics
Mathematics

Michael Aaron Cody

Abstract: Lehmer’s Totient Conjecture states that no composite integer n satisfies φ(n) | n − 1. This paper establishes the conjecture’s closure in two stages. The first stage eliminates 14-prime composites through 2-adic valuation analysis, verified by explicit computation across all tested primes below one million. The second stage introduces the bounded-catch framework, a q-adic inequality describing overflow thresholds for higher prime clusters. Across more than two thousand configurations with fifteen to twenty primes, 89 percent triggered overflow by q ≤ 97, and all random tuples overflowed by q ≤ 13. The bounded-catch framework is new to the literature. Under the Generalized Riemann Hypothesis, the same inequality proves that the proportion of surviving configurations tends to zero as prime size increases. Lehmer’s conjecture therefore holds for 14-prime composites with odd parity unconditionally, and for almost all larger constructions under GRH. The remaining safe-prime survivors form a finite, measure-zero set, identified explicitly within the data, without affecting the general truth of the conjecture. All proofs are self-contained and rely only on standard analytic number theory.
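A direct computational check of Lehmer's condition, using sympy, simply searches for composite n with φ(n) | n − 1 up to a small bound; the 2-adic and bounded-catch arguments of the paper are not reproduced here.

```python
from sympy import isprime, totient

def lehmer_counterexamples(limit):
    """Composite n <= limit with phi(n) | n - 1; Lehmer's conjecture asserts there are none."""
    return [n for n in range(2, limit + 1)
            if not isprime(n) and (n - 1) % totient(n) == 0]

# phi(n) | n - 1 holds trivially for n = 1 and for every prime; the open question
# is whether any composite n qualifies.  A small search finds nothing, as expected.
print(lehmer_counterexamples(50_000))   # -> []
```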
Article
Computer Science and Mathematics
Mathematics

Julio Rives

Abstract: The Newcomb-Benford Law (NBL) suggests that the smaller digits of significands represented in place-value notation are more likely to appear in real-life numerical datasets. We propose that similar laws exist regarding the prime factorization of these significands. By the fundamental theorem of arithmetic, we can express a natural number as an ordinal-ascending sequence of ordinal-multiplicity pairs representing the prime factors by which N is divisible. We refer to this as the Standard Ordinal-Exponent representation (SOE). The costs of the positional and SOE representations interconnect through the double logarithmic scale; the size of a number written in positional notation has the same order of growth as the exponential of the SOE sequence length. Based on the SOE representation, we submit a battery of laws exhibiting the prevalence of the minor prime powers across the natural numbers, to wit, the probability of a prime relative to the factorization set, the probability and possibility of the smallest prime ordinal, the probability of the number of participants in an interaction (regarding and disregarding multiplicity), the probability and possibility of a prime divisor with multiplicity, the probability of a prime exponent, and the probability of the largest prime exponent. Then, we factorize two NBL-compliant datasets to investigate key properties of primality: a 300-entry dataset comprising mathematical and physical constants (CT), and another containing 1,080 entries of world population data (WP). For both, we examine the energy function E(N)=p_N/N, the omega functions ω(N) (number of distinct prime factors) and Ω(N) (total number of prime factors), the divisor functions d(N) (number of divisors) and σ(N) (sum of divisors), as well as the share of rough-smooth numbers, the growth of highly composite numbers, and the prime-counting π(N) and totient φ(N) functions. Besides, we confirm compliance with the laws above and analyze the internal count of primes, the density of the largest prime ordinal, the internal growth of totatives and non-totatives, the density of k-almost primes, and the distribution of the pairwise greatest common divisor. CT and WP are chunks of nature. Indeed, we can identify natural datasets by testing their conformance to NBL or to any of the criteria we postulate. We also emphasize that the artanh function prominently appears throughout our analysis, suggesting that the concept of conformality governs our perception of the external world, bridging information between the harmonic scale (global) and our logarithmic scale (local).
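A small Python sketch of the Standard Ordinal-Exponent (SOE) representation and a few of the factorization statistics named in the abstract (ω, Ω, d, σ), using sympy; the probability laws themselves and the CT/WP datasets are not reproduced.

```python
from sympy import factorint, primepi

def soe(n):
    """Standard Ordinal-Exponent representation: ascending (prime ordinal, multiplicity) pairs,
    e.g. 360 = 2^3 * 3^2 * 5 -> [(1, 3), (2, 2), (3, 1)]."""
    return sorted((int(primepi(p)), e) for p, e in factorint(n).items())

def factor_stats(n):
    """(omega, Omega, d, sigma): distinct prime factors, prime factors with multiplicity,
    number of divisors, and sum of divisors."""
    f = factorint(n)
    omega = len(f)
    big_omega = sum(f.values())
    d, sigma = 1, 1
    for p, e in f.items():
        d *= e + 1
        sigma *= (p ** (e + 1) - 1) // (p - 1)
    return omega, big_omega, d, sigma

print(soe(360))            # [(1, 3), (2, 2), (3, 1)]
print(factor_stats(360))   # (3, 6, 24, 1170)
```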
Article
Computer Science and Mathematics
Mathematics

Amir Hameed Mir

Abstract: This paper introduces Concentric Number Theory (CNT), a novel mathematical framework that provides geometric interpretations of number theoretical concepts through concentric ring systems. We establish rigorous axiomatic foundations for CNT and develop systematic methodologies for prime number analysis, including spatial distribution across rings, geometric complexity measures, and clustering patterns. The framework reveals fundamental mathematical properties including the Halving Principle, Perfect Prime Symmetry, and establishes a Prime Complexity Index that correlates with traditional cryptographic strength measures. Through extensive computational experiments analyzing the first 500 primes, we demonstrate that CNT offers new insights into prime distribution, classification, and geometric organization that complement traditional number theoretical approaches. The methodology formalized in this work enables systematic geometric analysis of prime numbers with applications in cryptography, computational mathematics, and prime number theory. All code and data are openly available under CC BY 4.0 license.
Article
Computer Science and Mathematics
Mathematics

Michael Cody

Abstract: The sum-of-divisors function σ(n) has been studied since antiquity, most often in connection with perfect and abundant numbers, yet its divisibility behavior under integer shifts has never been classified. The paper asks a simple question. For which integers a does σ(n) divide n + a? The result completes the trilogy on multiplicative divisibility, following the earlier analyses of φ(n) and λ(n). The proof establishes that for every fixed integer a ≥ 2, only finitely many positive integers n satisfy σ(n) | n + a. The argument combines a 2-adic wall limiting the number of odd prime factors, an overflow lemma that forces q-adic excess for large ω(n), and a finiteness schema constraining the remaining primes to a fixed modulus set. Special cases include a = 1, where σ(n) | n + 1 holds only for n = 1 and for prime n, a = 0, which yields the classical perfect numbers satisfying σ(n) = 2n, and a < 0, where no infinite families exist since σ(m) ∤ m for all m > 1. The result marks the terminus of multiplicative coherence, the point at which arithmetic structure collapses into absolute finiteness.
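A brute-force sketch of the relation σ(n) | n + a using sympy's divisor_sigma, checking the quoted a = 1 case (only n = 1 and the primes) and listing the sparse solutions for a few shifts a ≥ 2; the search bounds are arbitrary.

```python
from sympy import divisor_sigma, isprime

def hits(a, limit):
    """All n <= limit with sigma(n) | n + a."""
    return [n for n in range(1, limit + 1) if (n + a) % divisor_sigma(n) == 0]

# a = 1: only n = 1 and the primes qualify (sigma(p) = p + 1), matching the abstract.
print(all(n == 1 or isprime(n) for n in hits(1, 5_000)))   # True

# a >= 2: the paper asserts only finitely many n; a small search shows how sparse they are.
for a in (2, 3, 4):
    print(a, hits(a, 50_000))
```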
