Computer Science and Mathematics

Article
Computer Science and Mathematics
Applied Mathematics

Hanbing Li,

Kexin Qiao,

Ye Xu,

Changhai Ou,

An Wang

Abstract: Algebraic Persistent Fault Analysis (APFA) combines algebraic analysis with persistent fault analysis, providing a novel approach for examining block cipher implementation security. Since its introduction, APFA has attracted considerable attention. Traditionally, APFA has assumed that fault injection occurs solely within the S-box during the encryption process. Yet, algorithms like PRESENT and AES also utilize S-boxes in the key scheduling phase, sharing the same S-box implementation as encryption. This presents a previously unaddressed challenge for APFA. In this work, we extend APFA’s fault injection and analysis capabilities to encompass the key scheduling stage, validating our approach on PRESENT. Our experimental findings indicate that APFA remains a viable approach. However, due to faults arising during the key scheduling process, the number of feasible candidate keys does not converge. To address this challenge, we expand the depth of our fault analysis without increasing the number of faulty ciphertexts, effectively narrowing the key search space to near-uniqueness. By employing a compact S-box modeling approach, we construct more concise algebraic equations, with solving-efficiency improvements ranging from tens to hundreds of times for the PRESENT, SKINNY, and CRAFT block ciphers. The efficiency gains become even more pronounced as the depth of the fault leakage increases, demonstrating the robustness and scalability of our approach.
Article
Computer Science and Mathematics
Algebra and Number Theory

Triston Miller

Abstract: The search for a deterministic law governing prime number emergence has challenged mathematicians for centuries. In this study, we present a novel approach rooted in Symbolic Field Theory (SFT), which models irreducible numbers as emergent structures within a multidimensional symbolic curvature field. Using four key projection functions—Euler’s totient function, the Möbius function, the divisor count function, and the prime sum function—we define symbolic curvature, force, mass, and momentum to capture the structural dynamics underlying prime and non-prime numbers. By analyzing the collapse behavior of these projections across the integers, we identify emergent collapse zones that predict prime positions with high accuracy. A symbolic regression model, trained on multidimensional collapse scores, demonstrates over 97% accuracy in discriminating primes from non-primes. This paper introduces the field-invariant collapse equation, which generalizes symbolic curvature across multiple arithmetic functions, offering a unified framework for understanding and predicting irreducible emergence in number theory. The method provides a new perspective on the deterministic dynamics governing the distribution of primes and other irreducible numbers.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Ebtesam Alomari

Abstract: Chronic diseases place a significant burden on healthcare systems due to the need for long-term treatment. Early diagnosis is critical for effective management and minimizing risk. Traditional diagnostic approaches face various challenges in terms of efficiency and cost. Digitized healthcare offers several opportunities for reducing human errors, improving clinical outcomes, tracing data, etc. Artificial Intelligence (AI) has emerged as a transformative tool in healthcare, and the evolution of Generative AI represents a new wave. Large Language Models (LLMs), such as ChatGPT, are promising tools for enhancing diagnostic processes, but their potential in this domain remains underexplored. This study represents the first systematic evaluation of ChatGPT's performance in chronic disease prediction, specifically targeting heart disease and diabetes. It aims to compare the effectiveness of zero-shot, few-shot, and CoT reasoning with feature selection techniques and prompt formulations in disease prediction tasks. The two latest versions of GPT-4 (GPT-4o and GPT-4o-mini) are tested, and the results are evaluated against the best models from the literature. The results indicate that GPT-4o significantly outperformed GPT-4o-mini in all scenarios in terms of accuracy, precision, and F1-score. Moreover, a 5-shot learning strategy demonstrates superior performance compared to zero-shot, the other few-shot settings (3-shot, 10-shot), zero-shot CoT reasoning, 3-shot CoT reasoning, and the proposed knowledge-enhanced CoT. It achieved an accuracy of 77.07% in diabetes prediction on the Pima Indian Diabetes Dataset and 75.85% on the Frankfurt Hospital Diabetes Dataset, as well as 83.65% in heart disease prediction. Refining prompt formulations resulted in notable further improvements, particularly for the heart dataset, emphasizing the importance of prompt engineering.
The clarification of column names and categorical values contributed to a 5% performance increase when using GPT-4o. In addition, the proposed knowledge-enhanced 3-shot CoT demonstrated notable improvements in diabetes prediction over plain CoT, while its effectiveness in heart disease prediction was limited. The reason could be that heart disease is influenced by a more complex combination of features, which underlines the importance of carefully designing reasoning strategies based on the specific characteristics of the disease. Even though ChatGPT does not outperform traditional machine learning and deep learning models, these findings highlight its potential as a complementary tool in disease prediction. It demonstrates promising results, particularly with refined prompt designs and feature selection, providing insights for future research to improve the model’s performance.
Short Note
Computer Science and Mathematics
Probability and Statistics

Yudong Tang

Abstract: This article studies the terminal distribution of a multivariate Brownian motion whose correlations are not constant. In particular, under the assumption that the correlation function is driven by a single factor, we develop PDEs that quantify the moments of the conditional distribution of the other factors. Using normal distributions and moment matching, we find a good approximation to the true Fokker-Planck solution; the method offers good analytic tractability and fast performance owing to the low dimensionality of the PDEs to be solved. It can be applied to model the correlation-skew effect in quantitative finance, or in other settings where a non-constant correlation is desired when modelling a multivariate distribution.

Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Frank Vega

Abstract: The Dominating Set problem, a fundamental challenge in graph theory and combinatorial optimization, seeks a subset of vertices such that every vertex in a graph is either in the subset or adjacent to a vertex in it. This paper introduces a novel 2-approximation algorithm for computing a dominating set in general undirected graphs, leveraging a bipartite graph transformation. Our approach first handles isolated nodes by including them in the solution set. For the remaining graph, we construct a bipartite graph by duplicating each vertex into two nodes and defining edges to reflect the original graph's adjacency. A greedy algorithm then computes a dominating set in this bipartite graph, selecting vertices that maximize the coverage of undominated nodes. The resulting set is mapped back to the original graph, ensuring all vertices are dominated. We prove the algorithm's correctness by demonstrating that the output set is a valid dominating set, and we establish the 2-approximation ratio by adapting a charging scheme to our bipartite construction. Specifically, we show that each vertex in an optimal dominating set is associated with at most two vertices in our solution, guaranteeing the size bound. This method extends the applicability of approximation techniques to general graphs, offering a practical and theoretically sound solution for applications in network design, resource allocation, and social network analysis, where efficient domination is critical.
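The greedy step the abstract describes (pick the vertex covering the most still-undominated vertices, after taking isolated vertices first) can be sketched as follows. This is a minimal illustration only: it omits the bipartite duplication and the charging-scheme analysis, and the function and variable names are illustrative, not the authors'.

```python
def greedy_dominating_set(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    undominated = set(adj)
    dom_set = set()
    # Isolated vertices can only dominate themselves, so take them first.
    for v in adj:
        if not adj[v]:
            dom_set.add(v)
            undominated.discard(v)
    while undominated:
        # Coverage of v = v itself plus its neighbours, restricted to
        # the vertices that are still undominated.
        best = max(adj, key=lambda v: len(({v} | adj[v]) & undominated))
        dom_set.add(best)
        undominated -= {best} | adj[best]
    return dom_set
```

On a path 0-1-2-3-4 plus an isolated vertex 5, the sketch first takes 5, then greedily covers the path, and every vertex ends up dominated.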
Communication
Computer Science and Mathematics
Mathematics

Frank Trefoily

Abstract: This study focused on the transformation of the divergent function sin(R ln(x)) into a convergent one by its complementary power function x^t, in such a manner that the sizes of the positive and negative areas under the sine curve are the same. The transformation endows the entire sine function with self-compensatory behavior. The exponent's value was computed and found to equal -1/2, which is the only exponent for which the entire product converges to zero (the sum of the areas over the positive real numbers and the sum of the products over the natural numbers). The exponent -1/2 is algebraically and geometrically inevitable for the function x^t sin(R ln(x)) to converge to zero. The result directly affects the position of the critical line in the Euler-Riemann zeta function.
Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Satyadhar Joshi

Abstract: Large Language Models (LLMs) have revolutionized various domains, including finance, medicine, and education. This review paper provides a comprehensive survey of the key metrics and methodologies employed to evaluate LLMs. We discuss the importance of evaluation, explore a wide range of metrics covering aspects such as accuracy, coherence, relevance, and safety, and examine different evaluation frameworks and techniques. We also address the challenges in LLM evaluation and highlight best practices for ensuring reliable and trustworthy AI systems. This survey draws upon a wide range of recent research and practical insights to offer a holistic view of the current state of LLM evaluation. We surveyed a comprehensive evaluation framework integrating quantitative metrics such as entropy-based stability measures and domain-specific scoring systems for medical diagnostics and financial analysis, while addressing persistent challenges including hallucination rates (28% of outputs in current research) and geographical biases in model responses. The study proposes standardized benchmarks and hybrid human-AI evaluation pipelines to enhance reliability, supported by algorithmic innovations in training protocols and RAG architectures. Our findings underscore the necessity of robust, domain-adapted evaluation methodologies to ensure the safe deployment of LLMs in high-stakes applications. Through a systematic analysis of more than 70 studies, this paper finds that while LLMs achieve near-human performance in structured tasks such as certification exams, they exhibit critical limitations in open-ended reasoning and output consistency. Our analysis covers foundational concepts in prompt engineering, evaluation methodologies from industry and academia, and practical tools for implementing these assessments.
The paper examines key challenges in LLM evaluation, including bias detection, hallucination measurement, and context retention, while proposing standardized approaches for comparative analysis. We demonstrate how different evaluation frameworks can be applied across domains such as technical documentation, creative writing, and factual question answering. The findings provide practitioners with a structured approach to selecting appropriate evaluation metrics based on use case requirements, model characteristics, and desired outcomes.
Article
Computer Science and Mathematics
Computational Mathematics

Shang-Kuan Chen,

Gen-Han Wu,

Yu-Hsuan Wu

Abstract: In this study, twelve modified differential evolution algorithms with memory properties and adaptive parameters were proposed to solve optimization problems. In the experimental process, these modified differential evolution algorithms were applied to 23 continuous test functions. Experiments show that MBDE2 and IHDE-BPSO3 are superior to the original differential evolution algorithm and its extended variants, and that they find the best solutions on most of the problems. The results indicate that the proposed improved differential evolution algorithms can adapt to most problems and obtain better results, and that adding the concept of memory greatly improves their capability.
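For context, the variants above build on the classic DE/rand/1/bin scheme. A minimal sketch of that baseline is below; the parameter values (population size, F, CR, generation count) are illustrative, not those used in the paper, and none of the twelve proposed modifications are reproduced here.

```python
import random

def de_rand_1_bin(f, bounds, pop_size=20, F=0.5, CR=0.9, gens=200, seed=1):
    """Classic DE/rand/1/bin: mutate with one scaled difference vector,
    binomial crossover, greedy selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = [
                pop[a][d] + F * (pop[b][d] - pop[c][d])
                if (rng.random() < CR or d == jrand) else pop[i][d]
                for d in range(dim)
            ]
            # Clamp the trial vector back into the search box.
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= fit[i]:  # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

On a simple sphere function, this baseline converges to near-zero fitness within a few thousand evaluations; the paper's memory-based variants aim to improve on exactly this kind of run.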
Article
Computer Science and Mathematics
Applied Mathematics

Sanjar M. Abrarov,

Rehan Siddiqui,

Rajinder K. Jagpal,

Brendan M. Quine

Abstract: In this work, we develop a method of rational approximation of the Fourier transform (FT) based on the real and imaginary parts of the complex error function \[ w(z) = e^{-z^2}(1 - {\rm{erf}}(-iz)) = K(x,y) + iL(x,y), \quad z = x + iy, \] where $K(x,y)$ and $L(x,y)$ are known as the Voigt and imaginary Voigt functions, respectively. In contrast to our previous rational approximation of the FT, the expansion coefficients in this method do not depend on the values of the sampled function. Since the set of Voigt/complex error function values remains the same, this approach provides rapid computation. Mathematica codes with some examples are presented.
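As a quick sanity check of the definition above (standard properties of $w(z)$, not specific to the authors' approximation): on the imaginary axis, $z = iy$, the complex error function reduces to the real-valued scaled complementary error function, $w(iy) = e^{y^2}\,\mathrm{erfc}(y)$, so $K(0,y)$ can be evaluated with stdlib tools alone.

```python
import math

# For z = iy (x = 0): w(iy) = exp(y^2) * erfc(y), a real number,
# so K(0, y) = exp(y^2) * erfc(y)  (this is erfcx(y), the scaled
# complementary error function).
def K_on_imaginary_axis(y):
    return math.exp(y * y) * math.erfc(y)
```

For example, K(0, 0) = w(0) = 1, and K(0, y) decays monotonically toward 0 as y grows, consistent with the asymptotic behavior of erfcx.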
Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Bogdan-Iulian Ciubotaru

Abstract: This systematic review examines the evolution, technical architecture, applications, limitations, and future directions of generative artificial intelligence (AI) and large language models (LLMs). Through a comprehensive analysis of the scientific literature, we traced the development of these technologies from early linguistic theories to modern transformer-based architectures. The findings presented in this review article reveal the transformative impact of LLMs across diverse domains including healthcare, education, software development, and creative industries. Significant technical limitations were identified, including hallucinations, context window constraints, and reasoning deficiencies, alongside ethical concerns regarding bias, privacy, and environmental impact. The review concludes by exploring emerging trends in model architecture, efficiency improvements, and ethical frameworks that will shape future development. This work provides researchers, practitioners, and policymakers with a comprehensive understanding of the current state and future trajectory of generative AI and LLMs.
Article
Computer Science and Mathematics
Computer Vision and Graphics

Matteo Fincato,

Roberto Vezzani

Abstract: Multi-person pose estimation is the task of detecting and regressing the keypoint coordinates of multiple people in a single image. Significant progress has been achieved in recent years, especially with the introduction of transformer-based end-to-end methods. In this paper, we present DualPose, a novel framework that enhances multi-person pose estimation by leveraging a dual-block transformer decoding architecture. Class prediction and keypoint estimation are split into parallel blocks so each sub-task can be separately improved and the risk of interference is reduced. This architecture improves the precision of keypoint localization and the model's capacity to accurately classify individuals. To improve model performance, the keypoint block processes its self-attention layers in parallel, a novel strategy that further sharpens keypoint localization. Additionally, DualPose incorporates a contrastive denoising (CDN) mechanism, leveraging positive and negative samples to stabilize training and improve robustness. Thanks to CDN, a variety of training samples is created by introducing controlled noise into the ground truth, improving the model's ability to discern between valid and incorrect keypoints. DualPose achieves state-of-the-art results, outperforming recent end-to-end methods, as shown by extensive experiments on the MS COCO and CrowdPose datasets. The code and pretrained models are publicly available.
Article
Computer Science and Mathematics
Applied Mathematics

Álvaro Presno-Vélez,

Z. Fernández-Muñiz,

Juan Luis Fernández-Martínez

Abstract: This study investigates the use of Deep Belief Networks (DBNs) for classifying structural states in Structural Health Monitoring (SHM) systems. Dimensionality reduction techniques—such as Principal Component Analysis (PCA) and t-SNE—enabled a clear separation between pre- and post-retrofitting conditions, emphasizing the DBN’s ability to capture relevant classification features. Compared to unsupervised approaches such as K-means and PCA, DBNs demonstrate superior discrimination and generalization capabilities by leveraging multiple layers of Restricted Boltzmann Machines (RBMs) to extract richer data representations. Experimental results confirm the model’s ability to detect complex patterns in large datasets, achieving median cross-validation accuracies of 98.04% for ambient data and 96.96% for train data, with low variability. While DBNs perform slightly below the Random Forest model in ambient data classification (99.19%), they surpass it when handling more complex signals such as train data (95.91%). Noise robustness analysis indicates acceptable tolerance, maintaining over 90% accuracy for ambient data at noise levels σ_noise ≤ 0.5, though accuracy declines significantly under extreme noise conditions. Comparisons with previous SHM research confirm the competitive performance of DBNs, often exceeding results from CNN and LSTM models. These findings support DBNs as a promising and reliable approach for SHM applications. Future research should aim to enhance noise resilience and improve performance under imbalanced data conditions, further promoting DBNs as a robust tool for data-driven structural assessment and infrastructure monitoring.
Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Mehdi Imani,

Majid Joudaki,

Ali Beikmohamadi,

Hamid Reza Arabnia

Abstract: Customer churn poses a significant challenge across various sectors, resulting in considerable revenue losses and increased customer acquisition costs. Machine Learning (ML) and Deep Learning (DL) have emerged as transformative approaches in churn prediction, significantly outperforming traditional statistical methods by effectively analyzing high-dimensional and dynamic customer datasets. This literature review systematically examines recent advancements in churn prediction methodologies based on 240 peer-reviewed studies published between 2020 and 2024 across diverse domains such as telecommunications, retail, banking, healthcare, education, and insurance. It examines the evolution, strengths, and limitations of conventional ML techniques—such as Decision Trees, Random Forests, and boosting algorithms—and advanced DL methods, including convolutional neural networks (CNNs), long short-term memory (LSTM) networks, Transformers, and hybrid models. Key emerging trends identified are the increasing adoption of ensemble models, profit-driven frameworks, sophisticated DL architectures, and attention-based mechanisms, coupled with a stronger emphasis on Explainable AI (XAI) and adaptive learning strategies. Despite significant progress, the review highlights persistent challenges like class imbalance, interpretability issues associated with DL's black-box nature, and difficulties addressing concept drift in dynamic customer behaviors. This study categorizes predominant methodologies, compares model performances, and identifies critical gaps such as limited consideration of real-world deployment constraints and business-oriented metrics. By addressing these gaps, the review provides actionable insights to develop robust, interpretable, economically beneficial churn prediction models, emphasizing alignment with business goals and guiding future research toward improved accuracy, adaptability, and practical deployment.
Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Paschalina Lialiou,

Ilias Maglogiannis

Abstract: (1) Background: Smartwatches and other wearable devices are now used not only in everyday life but also play a dynamic role in the early detection of many behavioral patterns of their users. This systematic literature review emphasizes the role of AI-enabled wearable devices in the early detection of burnout symptoms in the student population. (2) Methods: A systematic literature review was designed based on the PRISMA guidelines, with the aim of gathering all current research evidence on the effectiveness of wearable devices in the student population. (3) Results: The reviewed studies document the importance of physiological monitoring and AI-driven predictive models, in combination with self-reported scales, for assessing mental well-being. Stress is the most frequently studied burnout-related symptom, and heart rate (HR) and heart rate variability (HRV) are the most commonly used biomarkers for early burnout detection. (4) Conclusions: Despite the promising potential of these technologies, several challenges and limitations must be addressed to enhance their effectiveness and reliability.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Jiaxin Lu,

Siyue Li

Abstract: The prevalence of hallucinations in responses generated by large language models (LLMs) poses significant challenges for the reliability of natural language processing applications. This study addresses the detection of such hallucinations through an enhanced RoBERTa-base model, specifically targeting hallucinated responses produced by the Mistral 7B Instruct model. By implementing Low-Rank Adaptation (LoRA) for fine-tuning and incorporating hierarchical multi-head attention and multi-level self-attention weighting mechanisms, we aim to improve both the accuracy of hallucination detection and the interpretability of the model's decisions. Our experimental results demonstrate that the proposed model significantly outperforms baseline models across various metrics, including accuracy, precision, recall, and area under the curve (AUC). Future research directions will explore the integration of larger-scale models and additional fine-tuning techniques to further bolster the model’s capacity for detecting hallucinations, thereby enhancing the reliability of LLM outputs.
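The LoRA mechanism mentioned above, independent of the authors' specific detector, freezes a weight matrix W and learns a low-rank update, so the effective weight becomes W + (α/r)·B·A with B of shape d×r, A of shape r×k, and r much smaller than d and k. A minimal pure-Python sketch of that composition (shapes and scaling follow the original LoRA formulation; nothing here is specific to the paper's pipeline):

```python
def matmul(X, Y):
    """Plain list-of-lists matrix product."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_effective_weight(W, A, B, alpha):
    """Return W + (alpha / r) * (B @ A), where r is the LoRA rank."""
    r = len(A)               # rank = number of rows of A
    scale = alpha / r
    BA = matmul(B, A)        # (d x r) @ (r x k) -> (d x k)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, BA)]
```

Because only A and B are trained, the number of trainable parameters drops from d·k to r·(d + k), which is what makes fine-tuning a RoBERTa-scale model tractable.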
Review
Computer Science and Mathematics
Information Systems

Tanishq Chauhan,

Sivam Visnu,

Sandeep Kumar

Abstract: Digital literacy and technology-enabled education have immense potential to transform rural communities by addressing long-standing challenges such as limited access to quality education, lack of skilled teachers, and infrastructural constraints. Leveraging e-learning and Learning Management Systems (LMS) can provide innovative solutions to these problems, offering tailored learning experiences, access to diverse educational resources, and skill development opportunities. LMS tools, integrated with localized content and multilingual support, can bridge geographical and cultural divides, enabling equitable access to education and vocational training. Despite their transformative potential, implementing e-learning and LMS in rural settings faces numerous obstacles, including inadequate digital infrastructure, low internet penetration, limited affordability of digital devices, and a lack of digital literacy among both learners and educators. Furthermore, community reluctance to adopt modern educational technologies and the need for region-specific customization present additional hurdles. Addressing these challenges requires a multi-stakeholder approach involving government agencies, non-profit organizations, and technology providers to ensure sustainable adoption and meaningful impact. This review paper examines the role of digital literacy and e-learning in rural education, explores successful case studies of LMS adoption in resource-constrained environments, and analyzes the challenges of implementing these technologies effectively. It also provides actionable recommendations for scaling e-learning initiatives, such as improving digital infrastructure, conducting community-focused training programs, and offering online, low-bandwidth-compatible LMS solutions. The findings underscore the critical role of digital literacy and LMS in bridging the rural-urban educational divide, enhancing skill development, and promoting socio-economic progress.
The paper concludes with policy recommendations and future directions for research to ensure the scalability and inclusivity of digital learning systems, ultimately contributing to the vision of equitable and effective education for all learners.
Article
Computer Science and Mathematics
Computer Networks and Communications

Qutaiba Ibrahim,

Sohaib Awad

Abstract: Efficient network management by SDN controllers is challenging in dynamic and high-traffic environments. Traditional controllers like POX l2_learning rely on static algorithms, limiting adaptability and scalability, so AI solutions are crucial to achieving optimal performance in complex networks. This work improves the POX l2_learning controller to optimize its performance under dynamic, high-traffic networks and then incorporates machine learning on the same platform. The improvements include real-time congestion metrics, adaptive timeouts, and load balancing, leading to better scalability, stability, and congestion management. In addition, an XGBoost machine learning model was incorporated to classify network states and improve routing decisions in real time. The proposed method achieved a marked improvement in overall system performance and network control, including a stable latency of 3.52 ms, zero packet loss, and a slight improvement in throughput to 9.56 Mbps. The lightweight XGBoost model, with a compact size of 140 KB, is delivered for optimal realization of real-time SDN applications, offering effective and dynamic network adaptation. This resulted in an overall accuracy of 99.67%, with balanced precision, recall, and F1-score at 99%. These experimental results outperform recent SDN approaches in adaptability and performance and show that the system is reliable, able to make proactive predictive decisions, and able to optimize resource usage, making the proposed framework relevant to SDN application development.
Article
Computer Science and Mathematics
Analysis

Gabriel Alberto Santana,

José Benito Hernández

Abstract: This research introduces the concept of harmonically m-convex set-valued functions, combining harmonically m-convex functions and set-valued mappings. We establish fundamental properties and derive a Hermite-Hadamard-type inequality for these functions, generalizing classical results in convex analysis. The study provides a theoretical foundation with potential applications in optimization, variational analysis, and mathematical economics, where set-valued mappings are essential. This work advances the understanding of harmonic convexity in the context of set-valued analysis, offering new insights for both theoretical and applied mathematics.
Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Olamilekan Shobayo,

Reza Saatchi

Abstract: Deep learning has revolutionized medical image analysis, offering the possibility of automated, efficient, and highly accurate diagnostic solutions. This article explores recent developments in deep learning techniques applied to medical imaging, including Convolutional Neural Networks (CNNs) for classification and segmentation, Recurrent Neural Networks (RNNs) for temporal analysis, Autoencoders for feature extraction, and Generative Adversarial Networks (GANs) for image synthesis and augmentation. Additionally, U-Net models for segmentation, Vision Transformers (ViTs) for global feature extraction, and hybrid models integrating multiple architectures are explored. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) process was followed, using the PubMed, Google Scholar, and Scopus databases. The findings highlight key challenges such as data availability, interpretability, overfitting, and computational requirements. While deep learning has demonstrated significant potential in enhancing diagnostic accuracy across multiple medical imaging modalities—including MRI, CT, and X-ray—factors such as model trust, data privacy, and ethical considerations remain ongoing concerns. The study underscores the importance of integrating multimodal data, improving computational efficiency, and advancing explainability to facilitate broader clinical adoption. Future research directions emphasize optimizing deep learning models for real-time applications, enhancing interpretability, and integrating deep learning with existing healthcare frameworks for improved patient outcomes.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Izak Tait

Abstract: This paper examines whether GPT-4, a Generative Pre-Trained Transformer model developed by OpenAI, possesses a 'self' and whether it is aware of it. It employs the Structures Theory and evaluates GPT-4 against five critical structures deemed essential for self-awareness: unified consciousness, volition, a Theory of Others, self-awareness, and personal identity. While GPT-4 demonstrates capabilities in four of these areas, it conspicuously lacks unified consciousness. This absence decisively negates GPT-4's present self-awareness and its classification as having a "self." Nevertheless, if each instance or session of GPT-4 were viewed as a separate entity, then there might be potential for unified consciousness (should it be demonstrated that GPT-4 is conscious). The paper argues that GPT-4's cognitive architecture requires no modification for self-awareness except for the attainment of consciousness. It highlights the necessity for further research into technologies that could endow GPT-4 with consciousness and explores potential behavioural indications of self-awareness and its implications for society. The findings suggest that because the leap to self-awareness hinges solely on its capacity for consciousness, there is a need for significant philosophical and regulatory debates about the nature and rights of self-aware AI entities.


Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.

© 2025 MDPI (Basel, Switzerland) unless otherwise stated