Computer Science and Mathematics


Article
Computer Science and Mathematics
Algebra and Number Theory

Frank Vega

Abstract: The binary Goldbach conjecture states that every even integer greater than 2 is the sum of two primes. We analyze a variant of this conjecture, positing that every even integer 2N ≥ 8 is the sum of two distinct primes P and Q. We establish a novel equivalence between this statement and a geometric construction: the conjecture holds if and only if for every N ≥ 4, there exists an integer M ∈ [1, N − 3] such that the L-shaped region of area N^2 − M^2 (between nested squares) has a semiprime area P · Q, where P = N − M and Q = N + M. We define the set D_N of all such valid M values for a given N. The conjecture is equivalent to there existing an M ∈ D_N with N − M prime. We conduct a computational analysis for N ≤ 2^14 and define a gap function G(N) = log_2(2N) − ((N − 3) − |D_N|). Our experimental results show that the minimum of G(N) is positive and increasing across intervals [2^m, 2^(m+1)]. This empirically derived result, G(N) > 0, provides strong computational evidence that |D_N| > (N − 3) − log_2(2N). Under this computationally supported bound, the pigeonhole principle on the cardinality of D_N and the number of primes P < N (corresponding to squares S_P) implies |D_N| ≥ 1 for all N ≥ 4, yielding a conditional proof of the conjecture. While an analytical proof of this bound remains an open problem, our work establishes a novel geometric framework and demonstrates its viability through extensive computation.
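
The definitions of D_N and G(N) are concrete enough to compute directly. The sketch below is my own reading of the abstract (taking "semiprime area" to mean exactly two prime factors counted with multiplicity) and is not the authors' code; it assumes sympy is available.

```python
# Minimal sketch, assuming D_N = { M in [1, N-3] : N^2 - M^2 is a semiprime } and
# G(N) = log2(2N) - ((N - 3) - |D_N|), as read from the abstract.
from math import log2
from sympy import factorint, isprime

def D(N):
    out = []
    for M in range(1, N - 2):                    # M in [1, N-3]
        area = N * N - M * M                     # L-shaped area between nested squares = (N-M)(N+M)
        if sum(factorint(area).values()) == 2:   # semiprime: exactly two prime factors with multiplicity
            out.append(M)
    return out

def G(N):
    return log2(2 * N) - ((N - 3) - len(D(N)))

N = 20
DN = D(N)
print(DN, G(N))
# The Goldbach variant for 2N asks for some M in D_N with N - M (and hence N + M) prime:
print(any(isprime(N - M) and isprime(N + M) for M in DN))
```
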
Article
Computer Science and Mathematics
Algebra and Number Theory

Frank Vega

Abstract: The Riemann Hypothesis, one of the most celebrated open problems in mathematics, addresses the location of the non-trivial zeros of the Riemann zeta function and their profound connection to the distribution of prime numbers. Since Riemann’s original formulation in 1859, countless approaches have attempted to establish its truth, often by examining the asymptotic behavior of arithmetic functions such as Chebyshev’s function θ(x). In this work, we introduce a new criterion that links the hypothesis to the comparative growth of θ(x) and primorial numbers. By analyzing this relationship, we demonstrate that the Riemann Hypothesis follows from intrinsic properties of θ(x) when measured against the structure of primorials. This perspective highlights a striking equivalence between the distribution of primes and the analytic behavior of ζ(s), reinforcing the deep interplay between multiplicative number theory and analytic inequalities. Beyond its implications for the hypothesis itself, the result offers a fresh framework for understanding how prime distribution governs the analytic landscape of the zeta function, thereby providing new insight into one of mathematics’ most enduring mysteries.
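
The paper's specific criterion is not reproduced here; the short sketch below only illustrates the two objects being compared, Chebyshev's θ(x) and the primorial, together with the exact identity θ(p_n) = log(p_n#) that links them. It assumes sympy for prime generation.

```python
# Illustrative sketch only: Chebyshev's theta versus the logarithm of the primorial.
from math import log
from sympy import prime, primerange

def theta(x):                      # theta(x) = sum of log p over primes p <= x
    return sum(log(p) for p in primerange(2, x + 1))

def log_primorial(n):              # log of p_n#, the product of the first n primes
    return sum(log(prime(i)) for i in range(1, n + 1))

n = 10
print(theta(prime(n)), log_primorial(n))   # identical: theta(p_n) = log(p_n#)
```
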
Article
Computer Science and Mathematics
Analysis

B.P. Duggal

Abstract: Given Hilbert space operators A, B and X, let △_{A,B} and δ_{A,B} denote, respectively, the elementary operator △_{A,B}(X) = I − AXB and the generalised derivation δ_{A,B}(X) = AX − XB. This paper considers the structure of operators satisfying D^m_{d_1,d_2}(I) = 0 and of operators for which D^m_{d_1,d_2} is compact, where m is a positive integer, D = △ or δ, d_1 = △_{A*,B*} or δ_{A*,B*}, and d_2 = △_{A,B} or δ_{A,B}. This continues the work done by C. Gu for the case △^m_{δ_{A*,B*},δ_{A,B}}(I) = 0, and by the author with I. H. Kim for the cases △^m_{δ_{A*,B*},δ_{A,B}}(I) = 0 or △^m_{δ_{A*,B*},δ_{A,B}} compact, and δ^m_{△_{A*,B*},△_{A,B}}(I) = 0 or δ^m_{△_{A*,B*},δ_{A,B}} compact. Operators satisfying D^m_{d_1,d_2}(I) = 0 are examples of operators with finite spectrum; indeed, the operators A, B have at most a two-point spectrum, and if D^m_{d_1,d_2} is compact, then the (non-nilpotent) operators A, B are algebraic. D^m_{d_1,d_2}(I) = 0 implies D^n_{d_1,d_2}(I) = 0 for integers n ≥ m; the reverse implication, however, fails. It is proved that D^m_{d_1,d_2}(I) = 0 implies D_{d_1,d_2}(I) = 0 if and only if A and B (are normal, hence) satisfy a Putnam–Fuglede commutativity property.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Lin Wang,

Binjie Zhang,

Qinyan Tan,

Dejun Duan,

Yulei Wang

Abstract: Foggy weather poses substantial challenges for unmanned aerial vehicle (UAV) object detection by severely degrading image contrast, obscuring object structures, and impairing small-target recognition, often leading to significant performance deterioration in existing detection models. To address these issues, this work presents an enhanced YOLO11-based framework, called haze-aware YOLO (HA-YOLO), which is specifically designed for robust UAV object detection in foggy weather. HA-YOLO incorporates wavelet convolution into its structure to suppress haze-induced noise and strengthen multi-scale feature fusion without introducing additional computational overhead. In addition, a novel context-enhanced hybrid self-attention (CEHSA) module is developed, which sequentially combines channel attention aggregation (CAA) and multi-head self-attention (MHSA) to simultaneously capture local contextual cues and mitigate global noise interference. Experimental results demonstrate that the proposed HA-YOLO and its variants achieve higher detection precision and robustness than the baseline YOLO11 while maintaining model efficiency. In particular, in comparison with several state-of-the-art detectors, HA-YOLO exhibits a better balance between detection accuracy and complexity, offering a practical solution for real-time UAV perception tasks in adverse weather conditions.
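
The abstract does not give the internals of the CEHSA block; the PyTorch sketch below is one plausible reading of a sequential channel-attention-then-MHSA module, not the authors' implementation. The SE-style channel gate, layer sizes, and residual placement are my own assumptions.

```python
# Minimal sketch of a "channel attention followed by multi-head self-attention" block.
import torch
import torch.nn as nn

class CEHSA(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4, reduction: int = 8):
        super().__init__()
        # SE-style channel gate standing in for the channel attention aggregation step
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                      # x: (B, C, H, W)
        x = x * self.gate(x)                   # local/channel context first
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)     # (B, H*W, C) tokens for global attention
        out, _ = self.attn(seq, seq, seq)
        seq = self.norm(seq + out)             # residual + norm
        return seq.transpose(1, 2).reshape(b, c, h, w)

feat = torch.randn(1, 64, 20, 20)
print(CEHSA(64)(feat).shape)                   # torch.Size([1, 64, 20, 20])
```
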

Article
Computer Science and Mathematics
Analysis

Maëlys Dubois,

Yanis Lambert,

Elodie Fairchild,

Elise Berg

Abstract: The challenge of integrating external knowledge into visual reasoning frameworks has motivated a growing interest in models capable of bridging perceptual understanding with abstract, non-visual information. Unlike conventional visual question answering settings, knowledge-driven VQA demands a joint interpretation of visible cues and facts that are absent from the image itself. This paper introduces a new perspective on this task and proposes KV-Trace, a unified semantic tracing framework that emphasizes iterative knowledge refinement and structured visual interpretation. Instead of treating visual and knowledge modalities as homogeneous sources, our framework explicitly distinguishes their representational roles and organizes them into a progressive reasoning pipeline. Through a dynamic knowledge memory space and a query-sensitive semantic propagation mechanism, KV-Trace composes multi-stage reasoning steps that evolve according to the underlying question. Extensive experiments conducted on the KRVQR and FVQA benchmarks demonstrate that our model achieves improved reasoning depth and generalization capacity. Additional ablation studies further verify the contribution of each reasoning component and highlight the interpretability benefits gained from explicit knowledge structuring.
Article
Computer Science and Mathematics
Computer Vision and Graphics

Haya Monawwar,

Guoliang Fan

Abstract: Accurate six-degree-of-freedom (6-DoF) camera pose estimation is essential for augmented reality, robotics navigation, and indoor mapping. Existing pipelines often depend on detailed floorplans, strict Manhattan-world priors, and dense structural annotations, which may lead to failures in ambiguous, overlapping-room layouts. We present Render-Rank-Refine, a two-stage framework operating on coarse semantic meshes without requiring textured models or per-scene fine-tuning. First, panoramas rendered from the mesh enable global retrieval of coarse pose hypotheses. Then, perspective views from the top-k candidates are compared to the query via rotation-invariant circular descriptors, which rerank the matches before final translation and rotation refinement. In general, our method reduces the translation and rotation error by an average of 40% and 29%, respectively, compared to the baseline, while achieving more than 90% improvement in cases with severe layout ambiguity. It sustains 25–27 queries per second (QPS), which is about 12 times faster than the existing state-of-the-art, without sacrificing accuracy. These results demonstrate robust, near-real-time indoor localization that overcomes structural ambiguities and heavy geometric assumptions.
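
As a rough illustration of what a rotation-invariant circular descriptor can look like (my own generic construction, not the paper's formulation): sample image intensities on rings around the image center and keep FFT magnitudes, which are unchanged by in-plane rotation.

```python
# Sketch, assuming ring sampling + FFT magnitudes as the rotation-invariant descriptor.
import numpy as np

def circular_descriptor(img: np.ndarray, radii=(8, 16, 24), samples: int = 64) -> np.ndarray:
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    angles = np.linspace(0, 2 * np.pi, samples, endpoint=False)
    feats = []
    for r in radii:
        ys = np.clip((cy + r * np.sin(angles)).astype(int), 0, h - 1)
        xs = np.clip((cx + r * np.cos(angles)).astype(int), 0, w - 1)
        ring = img[ys, xs].astype(float)
        feats.append(np.abs(np.fft.rfft(ring)))   # magnitude spectrum drops the rotation phase
    v = np.concatenate(feats)
    return v / (np.linalg.norm(v) + 1e-9)

yy, xx = np.mgrid[0:64, 0:64]
a = np.exp(-((yy - 20.0) ** 2 + (xx - 40.0) ** 2) / 200.0)   # smooth off-center blob
b = np.rot90(a)                                              # same content, rotated 90 degrees
print(np.dot(circular_descriptor(a), circular_descriptor(b)))  # close to 1.0 for this smooth image
```
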
Article
Computer Science and Mathematics
Computer Science

Panagiotis Karmiris

Abstract: ExecMesh introduces two fundamental innovations to blockchain infrastructure: compute credits as DeFi primitives and recursive proof composition for verifiable multi-agent workflows. By transforming verified computational output into tradeable, collateralizable assets and enabling cryptographic composition of work across multiple agents, ExecMesh creates the foundational layer for a compute-backed economy. Global compute markets exceed $500B annually; ExecMesh makes this value programmable.
Hypothesis
Computer Science and Mathematics
Algebra and Number Theory

Shane Drake

Abstract: This paper explains why the critical line sits at real part equal to one-half by treating it as an intrinsic boundary of a reparametrized complex plane (“z-space”), not a mere artifact of functional symmetry. In z-space the real part is defined by a geometric-series map that induces a rulebook for admissible analytic operations. Within this setting we rederive the classical toolkit—eta–zeta relation, Gamma reflection and duplication, theta–Mellin identity, functional equation, and the completed zeta—without importing analytic continuation from the usual s-variable. We show that access to the left half-plane occurs entirely through formulas written on the right, with boundary matching only along the line with real part one-half. A global Hadamard product confirms the consistency and fixed location of this boundary, and a holomorphic change of variables transports these conclusions into the classical setting.
Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Ibrahim Mammadov,

Pavel Loskot,

Thomas Honold

Abstract: Many data processing applications involve binary matrices for storing digital contents and employ the methods of linear algebra. One of the frequent tasks is to invert large binary matrices. At present, there seem to be no documented algorithms for inverting such matrices. This paper fills the gap by reporting three results. First, an efficient and provably correct recursive blockwise algorithm based on pivoted PLU factorization is reported for inverting binary matrices with sizes as large as several thousand bits. Second, assuming the Bruhat matrix decomposition, a fast method is developed for effectively enumerating all elements of the general linear groups. Third, the minimum number of bit flips is determined to make any singular binary matrix non-singular, and thus invertible. The proposed algorithms are implemented in C++ and are publicly available on GitHub. These results can be readily generalized to other finite fields, for example, to enable linear algebraic methods for matrices containing quantized values.
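
For orientation, a plain dense Gauss–Jordan inverse over GF(2) is easy to write down; the paper's recursive blockwise PLU algorithm and its C++ code are not reproduced here, this is only a small reference sketch of the arithmetic involved (XOR as addition, AND as multiplication).

```python
# Simple dense Gauss-Jordan inverse over GF(2); illustrative only, not the paper's method.
import numpy as np

def gf2_inverse(A: np.ndarray) -> np.ndarray:
    n = A.shape[0]
    aug = np.concatenate([A.astype(np.uint8) % 2, np.eye(n, dtype=np.uint8)], axis=1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if aug[r, col]), None)
        if pivot is None:
            raise ValueError("matrix is singular over GF(2)")
        aug[[col, pivot]] = aug[[pivot, col]]          # row swap (pivoting)
        for r in range(n):
            if r != col and aug[r, col]:
                aug[r] ^= aug[col]                     # eliminate with XOR (addition mod 2)
    return aug[:, n:]

A = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 1]], dtype=np.uint8)
Ainv = gf2_inverse(A)
print((A @ Ainv) % 2)                                  # identity matrix
```
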
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Iryna Pikh,

Vsevolod Senkivskyy,

Alona Kudriashova,

Oleksii Bilyk,

Liubomyr Sikora,

Nataliia Lysa

Abstract: The presence of duplicate bugs in defect tracking systems creates an additional burden on software engineering specialists, potentially causing delays in fixing critical bugs. The use of automated methods for detecting duplicates relieves this burden and reduces the time and cost associated with their processing. Detecting duplicate bug reports in large databases is a challenging task that requires a balance between computational efficiency and prediction accuracy. Traditional approaches either rely on resource-intensive searches or use classification models that, while highly accurate, compromise performance. This paper proposes a new approach to automatic duplicate bug detection based on a two-level analysis of text features in reports. The first stage involves vectorising text data using BERT (Bidirectional Encoder Representations from Transformers), MiniLM (Miniature Language Model) and MPNet (Masked and Permuted Pre-training for Language Understanding) transformer models, which determine the semantic similarity between defect descriptions. This reduces the number of potential duplicates and the volume of reports that need to be compared. The second stage involves classifying pairs of potential duplicates using machine learning algorithms, including XGBoost (eXtreme Gradient Boosting), SVM (Support Vector Machines) and logistic regression. The models are trained on vector representations of text to assess the degree of similarity between errors. The combination of transformer models with classical classification algorithms ensures high accuracy in detecting duplicates while significantly reducing query processing time. The results of the experiments confirm the effectiveness of the approach, demonstrating its ability to reduce the number of required comparisons, cut the cost of analysing defect reports, and achieve sufficient accuracy in duplicate detection.
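
A minimal two-stage pipeline in the spirit of the abstract might look as follows; the model name, the similarity threshold, the pair features, and the toy labels are my own choices for illustration, not the authors' configuration.

```python
# Sketch: embedding prefilter by cosine similarity, then a classifier on candidate pairs.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

reports = ["Crash when saving file", "App crashes on save", "Dark mode font too small"]
encoder = SentenceTransformer("all-MiniLM-L6-v2")
emb = encoder.encode(reports, normalize_embeddings=True)

# Stage 1: keep only pairs whose cosine similarity clears a coarse threshold.
candidates = [(i, j) for i in range(len(reports)) for j in range(i + 1, len(reports))
              if float(emb[i] @ emb[j]) > 0.5]

# Stage 2: a classifier scores candidate pairs; in practice it is trained on labelled
# duplicate / non-duplicate pairs. Tiny illustrative training set:
def pair_features(i, j):
    return np.concatenate([np.abs(emb[i] - emb[j]), emb[i] * emb[j]])

train_pairs = [(0, 1, 1), (0, 2, 0), (1, 2, 0)]          # (i, j, is_duplicate)
X = np.array([pair_features(i, j) for i, j, _ in train_pairs])
y = [label for _, _, label in train_pairs]
clf = LogisticRegression().fit(X, y)

for i, j in candidates:
    prob = clf.predict_proba([pair_features(i, j)])[0, 1]
    print(reports[i], "|", reports[j], round(float(prob), 3))
```
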
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Alice Williams,

Boris Kovalerchuk

Abstract: Trust in machine learning models is critical for deployment by users, especially for high-risk tasks such as healthcare. Model trust involves more than just performance metrics such as accuracy, precision, or recall; it includes user readiness to let a model make decisions. Trust is commonly associated with model prediction stability under variations to training data, noise, parameters, explanations, etc. This paper expands on former model trust concepts with a proposed Model Sureness measure. Model Sureness in this work quantifies the stability of model accuracy under variations to the training data for any model via a bidirectional active learning with Visual Knowledge Discovery method. This method iteratively retrains a model on varied training data until a user-defined criterion is met, e.g., 95% test data accuracy. This finds a smaller training data set that is sufficient for the model to meet the criterion. Model Sureness is then the ratio of the number of unnecessary cases to all cases in the training data; a greater ratio indicates higher model sureness under this measure. Case studies conducted on three common benchmark datasets from biology, medicine, and handwriting recognition show well-preserved model accuracy and high sureness of the respective models. Specifically, the removal of unnecessary cases ranged from 20% to 80% of the training data, about 50% on average.
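
The sureness ratio itself (unnecessary cases / all training cases) is straightforward to compute once a reduction procedure is fixed; the sketch below substitutes naive random removal for the authors' bidirectional active learning with Visual Knowledge Discovery, so the dataset, model, and criterion are illustrative assumptions only.

```python
# Sketch: drop training cases while a test-accuracy criterion still holds, then report
# sureness = unnecessary cases / all training cases.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

def accuracy(idx):
    clf = DecisionTreeClassifier(random_state=0).fit(Xtr[idx], ytr[idx])
    return clf.score(Xte, yte)

criterion = 0.9
kept = list(range(len(Xtr)))
rng = np.random.default_rng(0)
for i in rng.permutation(len(Xtr)):            # try dropping cases one at a time
    trial = [k for k in kept if k != i]
    if len(np.unique(ytr[trial])) == len(np.unique(ytr)) and accuracy(trial) >= criterion:
        kept = trial                           # case was unnecessary for the criterion

sureness = (len(Xtr) - len(kept)) / len(Xtr)   # unnecessary cases / all training cases
print(f"kept {len(kept)}/{len(Xtr)} cases, sureness = {sureness:.2f}")
```
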
Article
Computer Science and Mathematics
Computer Science

Shandukani Thenga,

S. Arunmozhi Selvi

Abstract: Municipal governments are the custodians of large volumes of sensitive information, including personally identifiable information (PII), financial information, law enforcement intelligence, and control of essential infrastructure. Although external cyber threats are the most discussed threats to data security, deliberate insider threats, that is, malicious actions by authorised personnel, are an equally serious but underestimated threat to municipal data security. This paper presents the holistic formulation of a mitigation strategy specific to local government settings. The proposed solution, based on standard frameworks such as NIST SP 800-53, ISO/IEC 27001, and the CERT Insider Threat Model and incorporating socio-technical and risk management concepts, consists of a multi-layered defence. Focusing on active prevention, ongoing surveillance, and organised incident recovery and response, this model is a combination of governance policies, technical controls, behavioural monitoring, and organisational culture reforms. In addition to presenting the model, this paper covers a number of important ethical and legal issues, particularly the question of how to strike a balance between the privacy of employees and the monitoring required. A gradual implementation scheme and performance indicators are then proposed to ensure feasible implementation, taking municipal budget and regulatory factors into account. Our study builds on earlier findings that insider risk mitigation extends beyond technology, forming a complex and culture-entrenched challenge that requires an overhaul of present municipal operations in order to instil trust, provide accountability, and enhance resilience.
Short Note
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Xinhua Wang,

Caibo Feng,

Xiangjun Fu,

Chunxiao Liu

Abstract: In the domain of low-light image enhancement, both transformer-based approaches, such as Retinexformer, and Mamba-based frameworks, such as MambaLLIE, have demonstrated distinct advantages alongside inherent limitations. Transformer-based methods, in comparison with Mamba-based methods, can capture local interactions more effectively, albeit often at a high computational cost. In contrast, Mamba-based techniques provide efficient global information modeling with linear complexity, yet they encounter two significant challenges: (1) inconsistent feature representation at the margins of each scanning row and (2) insufficient capture of fine-grained local interactions. To overcome these challenges, we propose an innovative enhancement to the Mamba framework by increasing the Hausdorff dimension of its scanning pattern through a novel Hilbert Selective Scan mechanism. This mechanism explores the feature space more effectively, capturing intricate fine-scale details and improving overall coverage. As a result, it mitigates information inconsistencies while refining spatial locality to better capture subtle local interactions without sacrificing the model's ability to handle long-range dependencies. Extensive experiments on publicly available benchmarks demonstrate that our approach significantly improves both the quantitative metrics and qualitative visual fidelity of existing Mamba-based low-light image enhancement methods, all while reducing computational resource consumption and shortening inference time. We believe that this refined strategy not only advances the state of the art in low-light image enhancement but also holds promise for broader applications in fields that leverage Mamba-based techniques.
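
The core idea, as I read it, is to reorder spatial tokens along a Hilbert curve before the selective-scan block so that consecutive tokens remain spatially adjacent and raster-scan row boundaries disappear; the sketch below uses the standard Hilbert index-to-coordinate conversion and is not the authors' implementation.

```python
# Sketch: flatten an (H, W, C) feature map in Hilbert-curve order instead of raster order.
import numpy as np

def d2xy(n: int, d: int):
    """Index d on an n x n Hilbert curve (n a power of two) -> (x, y)."""
    rx = ry = x = y = 0
    t, s = d, 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate/flip the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_order(n: int):
    return [d2xy(n, d) for d in range(n * n)]

feat = np.random.rand(8, 8, 32)          # (H, W, C) feature map, H = W = power of two
order = hilbert_order(8)
tokens = np.stack([feat[y, x] for x, y in order])   # (64, 32) sequence for the scan block
print(tokens.shape)
# Neighbouring tokens in this sequence are also neighbours in the 2-D grid, unlike a
# raster scan, where the end of one row jumps to the start of the next.
```
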
Short Note
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Dinghong Song,

Jierui Xu,

Weichu Yang,

Pengfei Su,

Dong Li

Abstract: AI accelerators, customized to AI workloads, provide cost-effective and high-performance solutions for training and inference. Trainium, an AI accelerator recently developed by Amazon Web Services (AWS), provides an attractive option for LLM training and inference through its heterogeneous architecture. However, leveraging the Trainium architecture for high performance can be challenging because of its systolic array architecture and special requirements on data layout. In this paper, we design high-performance matrix multiplication (matmul), a critical compute kernel, for LLM inference on Trainium. We introduce a series of techniques customized to Trainium based on kernel fusion and novel caching strategies to reduce data movement across the software-managed memory hierarchy, maximize SRAM bandwidth, and avoid expensive matrix transposes. Evaluating with nine datasets and four recent LLMs, we show that our system largely outperforms the state-of-the-art matmul implemented by AWS on Trainium: at the level of the matmul kernel, it achieves an average 1.35× speedup (up to 2.22×), which translates to an average 1.66× speedup (up to 2.49×) for end-to-end LLM inference. Our code is released at https://github.com/dinghongsong/NeuronMM.
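
The blocking idea behind such kernels can be shown in plain NumPy; this is purely conceptual and does not use the Trainium/NKI programming interface or the released NeuronMM code. Tile sizes and shapes here are arbitrary.

```python
# Conceptual sketch of tiled matmul: each tile of A and B is reused for a whole block of C,
# which is what reduces traffic through a software-managed memory hierarchy.
import numpy as np

def blocked_matmul(A, B, tile=64):
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N), dtype=A.dtype)
    for i in range(0, M, tile):
        for j in range(0, N, tile):
            acc = np.zeros((min(tile, M - i), min(tile, N - j)), dtype=A.dtype)
            for k in range(0, K, tile):
                acc += A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]   # reuse the loaded tiles
            C[i:i+tile, j:j+tile] = acc
    return C

A, B = np.random.rand(128, 96), np.random.rand(96, 160)
print(np.allclose(blocked_matmul(A, B), A @ B))                        # True
```
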
Article
Computer Science and Mathematics
Computer Vision and Graphics

Subhadyouti Bose,

Arpeet Chandane,

Tvisha Kapadia,

Neha Panwar,

Neeraj Srivastava

Abstract: The Imaging Infra-Red Spectrometer (IIRS) is the most advanced reflectance spectrometer currently orbiting the Moon. IIRS was launched on board Chandrayaan-2 in 2019 to image the lunar surface in the wavelength range of 0.8 to 5.0 µm in 250 contiguous bands at a high spatial resolution of ~80 m/pixel and a spectral resolution of 20-25 nm. The IIRS strips are available in the PDS4-compliant QUB file format; however, the data lack inherent map-projection information. This study presents and implements a different approach to automatically seleno-reference the images obtained from IIRS. Using the SIFT (Scale-Invariant Feature Transform) algorithm, matching common points between the individual resampled pixels of IIRS and LRO-WAC (Lunar Reconnaissance Orbiter Wide Angle Camera, used as the reference image) are obtained. Our results show that SIFT is able to both identify and match corresponding pixels from IIRS and WAC with RMS errors smaller than the size of a single IIRS pixel. Hence, any user interested in working with IIRS data may refer to this technique to simplify the process of registering the IIRS strips to their actual ground coordinates.
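
A generic SIFT tie-point matching step with OpenCV looks like the sketch below; synthetic images stand in for the resampled IIRS strip and the LRO-WAC reference, and the ratio-test threshold and RANSAC settings are illustrative defaults, not the authors' pipeline.

```python
# Sketch: SIFT keypoints + Lowe's ratio test + RANSAC homography between query and reference.
import cv2
import numpy as np

rng = np.random.default_rng(0)
wac = (rng.random((512, 512)) * 255).astype(np.uint8)      # stand-in reference image
wac = cv2.GaussianBlur(wac, (7, 7), 2)                      # smooth so SIFT finds stable blobs
iirs = wac[40:360, 60:380]                                   # stand-in query strip (a shifted crop)

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(iirs, None)
k2, d2 = sift.detectAndCompute(wac, None)

# Lowe's ratio test keeps only distinctive correspondences.
good = [m for m, n in cv2.BFMatcher().knnMatch(d1, d2, k=2) if m.distance < 0.75 * n.distance]

src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # query -> reference mapping
print(len(good), "matches,", int(inliers.sum()), "RANSAC inliers")
print(H.round(2))                                            # ~identity with a (60, 40) pixel shift
```
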
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Da Long,

Yabo Wang,

Tian Li,

Lifen Sun

Abstract: The integration of knowledge graphs (KGs) with Retrieval-Augmented Generation (RAG) has significantly advanced domain-specific question-answering systems. However, a critical limitation persists in existing KG-based RAG frameworks: the inability to efficiently handle localized updates within a dynamic document corpus. Current methods typically necessitate a complete KG rebuild for even minor changes, leading to prohibitive large language model (LLM) token consumption and significant KG generation time. To address this, we propose a novel jigsaw-like methodology for generating and maintaining the global KG from document-level subgraphs. Our approach leverages document lifecycle states (new, modified, persistent, deleted) to isolate and process only the 'delta changes' within the corpus. By decomposing the KG into document-level subgraphs, we enable token-efficient, localized updates where LLM extraction is invoked solely for altered documents, while subgraphs from unchanged content are reused. We engineer and evaluate Jigsaw-LightRAG, an extension of the vanilla LightRAG framework that implements this algorithm. Extensive experiments on public datasets demonstrate that this new framework reduces LLM token consumption by orders of magnitude during incremental updates while maintaining the structural integrity of the KG and achieving performance parity with full-rebuild baselines on question answering (QA) tasks. This work provides a computationally efficient and robust solution for dynamic AI knowledge base management, offering substantial practical value for applications requiring frequent KG updates.
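
The delta-detection step can be sketched with simple content hashing; this is my reading of the lifecycle-state idea, not the released Jigsaw-LightRAG code, and the document identifiers below are placeholders.

```python
# Sketch: classify documents between two corpus snapshots as new / modified / persistent /
# deleted, so only changed documents trigger LLM subgraph extraction.
import hashlib

def digest(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def delta(old: dict, new: dict) -> dict:
    old_h = {doc_id: digest(t) for doc_id, t in old.items()}
    new_h = {doc_id: digest(t) for doc_id, t in new.items()}
    return {
        "new":        [d for d in new_h if d not in old_h],
        "deleted":    [d for d in old_h if d not in new_h],
        "modified":   [d for d in new_h if d in old_h and new_h[d] != old_h[d]],
        "persistent": [d for d in new_h if d in old_h and new_h[d] == old_h[d]],
    }

old_corpus = {"a.md": "Alpha v1", "b.md": "Beta"}
new_corpus = {"a.md": "Alpha v2", "b.md": "Beta", "c.md": "Gamma"}
print(delta(old_corpus, new_corpus))
# Only 'new' and 'modified' documents are sent to the LLM for subgraph extraction;
# subgraphs of 'persistent' documents are reused, and 'deleted' ones are pruned.
```
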
Article
Computer Science and Mathematics
Signal Processing

Bálint Maczák,

Adél Zita Hordós,

Gergely Vadai

Abstract: Actigraphy quantifies human locomotor activity by measuring wrist acceleration with wearable devices at relatively high rates and converting it into lower-temporal-resolution activity values; however, the computational implementations of this data compression differ substantially across manufacturers. Building on our previous work, where we examined through correlation analysis how dissimilarly the various activity determination methods we generalized can quantify the same movements, we investigated here how these methods (e.g., digital filtering, data compression) influence nonparametric circadian rhythm analysis and sleep–wake scoring. In addition to our generalized actigraphic framework, we also emulated the use of specific devices commonly employed in such sleep-related studies by applying their methods to raw actigraphic acceleration data we collected, to demonstrate, through concrete real-life examples, how methodological choices may shape analytical outcomes. Additionally, we assessed whether nonparametric indicators could be derived directly from acceleration data without compressing them into activity values. Overall, our analysis revealed that all these analytical approaches to the sleep–wake cycle can be substantially affected by the manufacturer-dependent actigraphic methodology, with the observed effects traceable to distinct steps of the signal processing pipeline, underscoring the necessity of cross-manufacturer harmonization from a clinically oriented perspective.
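
For readers unfamiliar with the compression step, the sketch below shows one generic way raw acceleration is reduced to epoch-level activity counts (high-pass filtering to remove gravity, then summing the residual magnitude per epoch); it is not any manufacturer's algorithm nor the authors' generalized framework, and the filter and epoch settings are arbitrary.

```python
# Sketch: raw wrist acceleration -> one activity value per 30-second epoch.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 50                                        # Hz, raw accelerometer sampling rate
t = np.arange(0, 10 * 60, 1 / fs)              # ten minutes of synthetic data
acc = 1.0 + 0.05 * np.sin(2 * np.pi * 1.5 * t) + 0.02 * np.random.randn(t.size)  # g

b, a = butter(4, 0.25 / (fs / 2), btype="highpass")   # remove the gravity component
moving = np.abs(filtfilt(b, a, acc))

epoch = 30 * fs                                # 30-second epochs
counts = moving[: t.size // epoch * epoch].reshape(-1, epoch).sum(axis=1)
print(counts.round(1))                         # one activity value per epoch
```
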
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Keerthivasan Ramasamy Velliangiri,

Nathish Rajendran

Abstract: Reading disability (known as dyslexia) is a common problem faced by many children and young people around the globe. Its symptoms are often not recognised at an early stage, and the behaviour and learning styles of people with different levels of dyslexia vary widely. There are various ways to predict dyslexia symptoms and habits with the help of machine learning algorithms and artificial intelligence, but generating results from the processed data and storing and retrieving those results is a challenging task, and ongoing research aims to overcome these problems. By leveraging cloud computing technologies, the data gathered from various people to predict dyslexia are stored using cloud storage services. Different algorithms applied to the same data produce different results, which motivates the creation of new algorithms.
Article
Computer Science and Mathematics
Algebra and Number Theory

Jacob Orellana

Abstract: The Riemann zeta function ζ(s) lies at the heart of analytic number theory, encoding the distribution of primes through its non-trivial zeros. This paper introduces a direct computational framework for evaluating Z(t) = e^{iθ(t)} ζ(1/2 + it) at large imaginary parts t, employing a real-to-complex number conversion and a novel valley scanner algorithm. The method efficiently identifies zeros by tracking minima of |Z(t)|, achieving stability and precision up to t ≈ 10^20 with moderate computational cost, using AWS EC2 computation. Results are compared against Andrew Odlyzko's published zero datasets, validating the method's accuracy while simplifying the high-t evaluation pipeline.
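
A toy version of scanning Z(t) for zeros can be written with mpmath's built-in Riemann–Siegel Z function; this illustrates the scanning idea only and is not the paper's real-to-complex conversion, valley scanner, or AWS pipeline. The range and step size are arbitrary.

```python
# Sketch: scan Z(t) on a grid and flag sign changes, which bracket zeros of zeta on the
# critical line (a minimum of |Z(t)| touching zero corresponds to such a sign change).
from mpmath import mp, siegelz

mp.dps = 25
t0, t1, step = 10.0, 50.0, 0.05
prev_t, prev_z = t0, siegelz(t0)
zeros = []
t = t0 + step
while t <= t1:
    z = siegelz(t)
    if prev_z * z < 0:                         # Z(t) changes sign: a zero lies in (prev_t, t)
        zeros.append((prev_t + t) / 2)
    prev_t, prev_z = t, z
    t += step
print([round(x, 2) for x in zeros])            # ~14.13, 21.02, 25.01, 30.42, 32.94, ...
```
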
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Manaswini Bollikonda

Abstract: Enterprises rarely fail because their language model is incapable; they fail because the prompt pathway is opaque. We introduce prompt-centric observability (PCO), an operational discipline that treats inputs, routing, retrieval, templating, safety overlays, generation, and validation as first-class, measurable surfaces. PCO emits low-cardinality signals (coverage, support, freshness, p95, spend), binds drafts to the exact evidence spans they used, and routes decisions through a governed gate (release, rewrite, redact, escalate). We present a compact architecture, a minimal telemetry hook, and actionable controls for entitlements, privacy, and auditability. The goal is not a new leaderboard but predictable behavior under change: when data, policies, or templates shift, the system responds within guardrails and every outcome is replayable.
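
A telemetry hook of the kind the abstract mentions might look like the sketch below; the event names, field names, and gate labels are my own illustrative assumptions, not the paper's schema.

```python
# Sketch: each stage of the prompt pathway emits a low-cardinality event bound to the
# evidence spans the draft actually used, and the final gate decision is recorded.
import json, time, uuid

TRACE_ID = str(uuid.uuid4())

def emit(stage: str, **fields):
    event = {"trace_id": TRACE_ID, "stage": stage, "ts": time.time(), **fields}
    print(json.dumps(event))                   # stand-in for a real telemetry sink

emit("retrieval", coverage=0.82, freshness_days=3)
emit("generation", evidence_spans=["doc42:120-180"], latency_ms=950)
emit("validation", support=0.91, decision="release")   # gate: release / rewrite / redact / escalate
```
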
