Computer Science and Mathematics

Brief Report
Computer Science and Mathematics
Logic

Marija Srećković

Abstract: In this note, we discuss the relationship between the weak dictatorship axiom, introduced by A. Mas-Colell and H. Sonnenschein, and the vetoer axiom, given by P. C. Fishburn, in the wider context of traditional Social Choice Theory (SCT). Namely, we prove that these two axioms are equivalent. The note can serve as a good formal reasoning exercise for students in the field of preference logic. It is written as an outline for the 'second' lesson in SCT for non-mathematicians, containing elementary formal logical argumentation, to be placed after the basics of preference logic and before the extensive and complicated treatment of impossibility theorems.
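A minimal formal sketch of the claimed equivalence, under one standard pair of definitions (an assumption; the note's exact axioms may differ):

```latex
% One standard formalization (assumed; the note's exact axioms may differ).
% P_i: strict preference of voter i; R, P: social weak / strict preference.
\begin{align*}
  \text{(weak dictator)} &\quad \forall x,y:\; x\,P_i\,y \;\Rightarrow\; x\,R\,y \\
  \text{(vetoer)}        &\quad \forall x,y:\; x\,P_i\,y \;\Rightarrow\; \neg\,(y\,P\,x)
\end{align*}
% Completeness of the social preference gives x\,R\,y \iff \neg(y\,P\,x),
% so each axiom implies the other.
```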
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Antonio J. Rodriguez-Almeida,

Carmelo Betancort,

Ana M. Wägner,

Gustavo M. Callico,

Himar Fabelo

Abstract: More than 14% of the world’s population suffered from diabetes mellitus in 2022. This metabolic condition is defined by increased blood glucose concentration. Among the different types of diabetes, type 1 diabetes, caused by a lack of insulin secretion, is particularly challenging to treat. In this regard, automatic glucose level estimation based on Continuous Glucose Monitoring (CGM) devices has shown positive therapeutic outcomes. However, AI-based glucose prediction has commonly followed a deterministic approach, usually lacking interpretability, and therefore does not provide enough information for critical decision-making scenarios such as those in the medical field. This work aims to provide accurate, interpretable, and personalized glucose prediction using the Temporal Fusion Transformer (TFT), including uncertainty estimation. The TFT was trained on two databases: an in-house collected dataset and the OhioT1DM dataset, commonly used for glucose forecasting benchmarking. For both datasets, the set of input features used to train the model was varied to assess its impact on model interpretability and prediction performance. Models were evaluated using common prediction metrics, diabetes-specific metrics, uncertainty estimation, and model interpretability, including feature importance and attention. The results show that the TFT outperforms existing methods on both datasets, improving RMSE by at least 13%.
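The TFT is typically trained with a quantile (pinball) loss, which is what yields the prediction intervals behind the uncertainty estimates mentioned above. A minimal sketch, with tensor shapes assumed for illustration:

```python
# Minimal sketch of the quantile (pinball) loss used to train quantile
# forecasters such as the TFT; dataset handling and the model itself are
# omitted, and tensor shapes are illustrative assumptions.
import torch

def quantile_loss(y_pred: torch.Tensor, y_true: torch.Tensor,
                  quantiles=(0.1, 0.5, 0.9)) -> torch.Tensor:
    """y_pred: (batch, horizon, n_quantiles); y_true: (batch, horizon)."""
    losses = []
    for i, q in enumerate(quantiles):
        err = y_true - y_pred[..., i]
        # Pinball loss penalizes under- and over-prediction asymmetrically,
        # so each output head converges to its target quantile.
        losses.append(torch.max(q * err, (q - 1) * err).mean())
    return torch.stack(losses).sum()
```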
Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Seema Shivapur,

Syed Mehfooz C S,

Syed Mansoor,

V Mohammed Hanzala,

Syed Fahad

Abstract: Artificial intelligence (AI) advances have spurred intelligent voice assistants (IVAs), significantly transforming human-machine interaction paradigms. This paper surveys the evolution and future directions of this field, motivating the development of next-generation systems. We focus on designs integrating machine learning (ML), natural language processing (NLP), and speech recognition to deliver highly responsive, interactive, and adaptive user experiences. Unlike basic IVAs limited to predefined commands, advanced systems utilize modular architectures enabling dynamic task execution, including application management, system control, real-time web/multimedia search, and versatile content generation (in formats like PDF, Word, Excel, PPT, TXT). This work aims to contribute toward IVAs capable of autonomous reasoning, making AI interaction profoundly more intuitive, efficient, and human-centric.
Review
Computer Science and Mathematics
Computer Networks and Communications

Adel A. Ahmed

Abstract: Nowadays, Internet connectivity suffers from instability and slowness due to attacks on optical fiber cables laid across seas and oceans. A promising solution to this problem is the Low Earth Orbit (LEO) satellite network, which can resolve the problems of Internet connectivity and reachability, and which has the potential to bring real-time, reliable, low-latency, high-bandwidth, cost-effective Internet access to urban and rural areas in any region of the Earth. However, satellite orbital placement and navigation must be carefully designed to reduce signal impairments. The challenges of orbital satellite placement for LEO include constellation development, satellite parameter optimization, bandwidth optimization, consideration of signal impairments, and coverage area diameters. This paper presents a comprehensive review of satellite orbital placement, coverage optimization, and prevalent issues affecting LEO Internet connectivity; it evaluates existing solutions and suggests novel solutions to address these challenges. Furthermore, it recommends a machine learning-based solution for coverage optimization and satellite orbital placement that can efficiently enhance Internet reliability and reachability for LEO satellite networks. This survey opens the way to developing an optimal solution for global Internet connectivity and reachability.
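As a hedged illustration of one quantity named above, the ground-coverage diameter of a single LEO satellite follows from standard spherical geometry; the altitude and minimum elevation angle below are example values, not figures from the paper:

```python
# Ground-coverage diameter of one LEO satellite from standard geometry.
# Inputs are illustrative assumptions, not values from the review.
import math

R_E = 6371.0  # mean Earth radius, km

def coverage_diameter_km(altitude_km: float, min_elevation_deg: float) -> float:
    eps = math.radians(min_elevation_deg)
    # Earth central angle subtended by the visibility circle at elevation eps.
    lam = math.acos((R_E / (R_E + altitude_km)) * math.cos(eps)) - eps
    return 2.0 * R_E * lam  # arc length across the footprint

print(f"{coverage_diameter_km(550, 25):.0f} km")  # e.g., a 550 km shell
```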
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Fnu Sheza Abdul Subhan

Abstract: Periodontal disease is a common, progressive condition that leads to tooth loss and contributes to systemic issues such as diabetes and cardiovascular disease. Although prior studies have linked smoking, obesity, and diabetes to periodontitis, few have leveraged explainable machine learning models to provide transparent, personalized risk predictions. In this study, I used the 2013–2014 National Health and Nutrition Examination Survey (NHANES) dataset (n = 3,720) to develop and compare two classifiers, Random Forest and XGBoost, on a stratified 60%/20%/20% train–validation–test split repeated across multiple seeds and trials. To make the models’ decisions interpretable, I applied SHAP (SHapley Additive exPlanations) to quantify each feature’s contribution to the prediction of severe periodontitis. SHAP identified age, body mass index, and systolic blood pressure as the strongest drivers of risk, with additional insights from smoking status, diastolic blood pressure, diabetes status, and gender. A SHAP dependence analysis further revealed that advancing age increases predicted risk more steeply for males than for females. By combining robust model evaluation with patient-level explanations, this approach supports early identification of high-risk individuals and enhances patient-provider communication and targeted prevention in dental practice and public health.
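A minimal runnable sketch of the SHAP workflow the abstract describes, using synthetic stand-ins for the NHANES features (column names and the risk construction are illustrative assumptions, not the study's data):

```python
# SHAP attributions for an XGBoost classifier, on synthetic stand-in data.
import numpy as np
import pandas as pd
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
n = 600
X = pd.DataFrame({
    "age": rng.uniform(20, 80, n),
    "bmi": rng.uniform(18, 45, n),
    "systolic_bp": rng.uniform(90, 180, n),
    "smoker": rng.integers(0, 2, n).astype(float),
})
risk = 0.04 * X.age + 0.05 * X.bmi + 0.01 * X.systolic_bp + X.smoker
y = (risk + rng.normal(0, 1, n) > risk.median()).astype(int)  # toy labels

model = xgb.XGBClassifier(n_estimators=200, max_depth=3).fit(X, y)
explainer = shap.TreeExplainer(model)           # exact SHAP for tree ensembles
sv = explainer.shap_values(X)                   # per-patient attributions
shap.summary_plot(sv, X, show=False)            # global ranking of risk drivers
shap.dependence_plot("age", sv, X, show=False)  # age effect, as in the paper
```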
Article
Computer Science and Mathematics
Computer Networks and Communications

Ioannis Konstantoulas,

Iliana Loi,

Dimosthenis Tsimas,

Kyriakos Sgarbas,

Apostolos Gkamas,

Christos Bouras

Abstract: Fifth-generation (5G) networks deal with dynamic fluctuations in user traffic and in the demands of each connected user and application. This creates a need to optimize resource allocation to reduce network congestion in densely populated urban centers and to ensure Quality of Service in 5G environments. To address this issue, we present a framework for both predicting user traffic and allocating users to base stations in 5G networks using neural network architectures. The framework consists of a hybrid approach utilizing a Long Short-Term Memory (LSTM) network or a Transformer architecture for user traffic prediction in base stations, as well as a Convolutional Neural Network (CNN) to allocate users to base stations in a realistic scenario. The models show high accuracy in the tasks performed, especially in the user traffic prediction task, where they achieve an accuracy of over 99%. Overall, our framework is capable of capturing long-term temporal features and spatial features from 5G user data, taking a significant step towards a holistic approach to data-driven resource allocation and traffic prediction in 5G networks.
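A minimal sketch of an LSTM traffic forecaster of the kind the framework describes; layer sizes and the synthetic input are illustrative assumptions, not the authors' configuration:

```python
# Per-base-station traffic forecasting with a small LSTM (PyTorch).
import torch
import torch.nn as nn

class TrafficLSTM(nn.Module):
    def __init__(self, n_features=1, hidden=64, horizon=1):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, horizon)

    def forward(self, x):              # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # predict the next traffic value(s)

model = TrafficLSTM()
window = torch.randn(32, 48, 1)        # 32 base stations, 48 past time slots
print(model(window).shape)             # -> torch.Size([32, 1])
```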
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Rogério Figurelli

Abstract: This paper introduces EDL (Epistemic Description Language), a domain-specific declarative language designed to configure, orchestrate, and trace epistemic agents operating within symbolic cognitive architectures grounded in Heuristic Physics. Rather than solving algorithmic problems, Heuristic Physics (hPhy) provides a theoretical substrate where cognition is modeled as the compression, recombination, and persistence of symbolic structures under epistemic drift. From this substrate emerge architectures—cognitive fields—capable of sustaining agentic interpretation despite structural mutation, contradiction, or collapse. EDL is designed for one such system: a seventh-generation symbolic architecture that supports contradiction-tolerant reasoning, semantic recomposition, and adaptive survivability. Within this architecture, agents are not programmed but declared—defined through symbolic schemas capable of surviving meaning degradation. EDL provides a grammar of epistemic roles, mutation boundaries, heuristic strategies, and traceable inheritance logic. All EDL declarations are embedded within the eXtended Content Protocol (XCP), a semantic-first communication protocol that enables symbolic continuity across heterogeneous agents and transport layers. XCP serves as a concrete instance of a cross-cognitive protocol—one that does not assume shared infrastructure, schema alignment, or ontological stability, but instead encodes messages to be reconstructable under symbolic loss and structural asymmetry. EDL and XCP together instantiate a cross-cognitive design paradigm: one in which cognitive architectures operate not through rigid determinism, but through symbolic negotiation. This paradigm supports emergent applications including heuristic modeling of the P versus NP boundary, swarm-based agent recomposition, distributed privacy enforcement, and AGI bootstrapping under semantic entropy. This paper positions EDL not only as a language, but as a formal epistemic infrastructure for the engineering of cognition in collapse-prone, multi-agent environments.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Jermaine E. Le Grand

Abstract: The rise of online gambling has increased concern around identifying behavioral addiction in digital environments. Current predictive systems offer limited interpretability and justification for individual-level risk assessments as they often operate as black boxes. This study proposes a hybrid framework that combines a traditional machine learning model (XGBoost) with a language-based Retrieval-Augmented Generation (RAG) system to combat these limitations. Using user-level behavioral and demographic data, an XGBoost classifier was trained and SHAP (SHapley Additive exPlanations) was applied to uncover the key features of addiction. These insights were then incorporated into a large language model (LLM)-based RAG pipeline using sentence-transformer embeddings and FAISS vector retrieval to generate individualized text justifications for each user classification. Through label refinement based on SHAP-ranked feature thresholds and targeted model tuning, the system achieved improved generalization and classification stability, resulting in an AUC of 0.87 while preserving clear, human-readable explanations via the RAG pipeline. This approach demonstrates the potential of integrating structured and unstructured AI techniques in addiction research and risk screening to support more accountable and understandable behavioral health interventions.
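A minimal sketch of the retrieval step in such a RAG pipeline, using sentence-transformer embeddings and a FAISS index as the abstract describes; the model name, documents, and query are illustrative assumptions:

```python
# Dense retrieval over behavioral evidence snippets, to be passed to an LLM.
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "User's bet frequency rose sharply during late-night sessions.",
    "Deposit amounts tripled over the last month.",
    "Session lengths are stable and within typical ranges.",
]
encoder = SentenceTransformer("all-MiniLM-L6-v2")
emb = encoder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(emb.shape[1])   # inner product = cosine (normalized)
index.add(emb)

query = encoder.encode(["Why was this user flagged as high risk?"],
                       normalize_embeddings=True)
scores, ids = index.search(query, 2)      # top-k evidence fed to the LLM
print([docs[i] for i in ids[0]])
```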
Article
Computer Science and Mathematics
Probability and Statistics

Sifiso Vilakati

Abstract: The integration of generative artificial intelligence (AI), particularly large language models (LLMs), into medical statistics presents both significant opportunities and critical risks. This paper explores how prompt engineering, defined as the deliberate design of inputs to guide AI behavior, can help mitigate statistical errors in biomedical research. Four prompting strategies are evaluated: zero-shot, explicit instruction, chain of thought, and hybrid approaches. Case studies involving descriptive and inferential statistical tasks show that while zero-shot prompting is generally sufficient for basic summaries, more complex analyses require structured, multi-step prompts to ensure methodological soundness. Among the strategies assessed, hybrid prompting, which combines explicit instructions, reasoning scaffolds, and output formatting, consistently produced the most accurate and interpretable results across two LLMs. The findings emphasize that prompt design, rather than model architecture alone, is the primary determinant of output quality. Although limited access to a broader range of models is a constraint, this study highlights the importance of prompt engineering as a core competency in AI-assisted medical research. It calls for the development of standardized prompt templates, evaluation rubrics, and further studies across diverse statistical domains to support robust and reproducible scientific inquiry.
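An illustrative sketch of what a "hybrid" prompt (explicit instructions plus a reasoning scaffold plus output formatting) might look like; the template wording is an assumption, not the paper's exact prompt:

```python
# Hybrid prompting template: instructions + reasoning scaffold + output format.
def hybrid_prompt(task: str, data_description: str) -> str:
    return (
        "You are assisting with a medical-statistics analysis.\n"
        f"Task: {task}\n"
        f"Data: {data_description}\n"
        "Instructions: name the statistical method and check its assumptions "
        "before computing anything.\n"
        "Reason step by step, then report the final answer exactly as:\n"
        "METHOD: ...\nASSUMPTION CHECKS: ...\nRESULT: ...\n"
    )

print(hybrid_prompt("Compare blood pressure between two treatment arms",
                    "n=120 per arm, continuous outcome, baseline-adjusted"))
```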
Article
Computer Science and Mathematics
Algebra and Number Theory

An-Ping Li

Abstract: In this version, Theorem 1.1 is improved: the error term of (1.2) is reduced to \( O(T^{1/3}(\log T)^{7/3}) \). This is achieved mainly by dividing the function \( \omega(s,T_1,T_2) \) into two parts, one dominant and one minor; the argument of the dominant part is small, which yields the improved final result.
Article
Computer Science and Mathematics
Analysis

Hakan Karayılan

Abstract: In the present paper, the concept of related orbital completeness of two metric spaces for multi-valued mappings is introduced. A new related fixed point theorem for multi-valued mappings is proved, and some important results are obtained as corollaries of the main theorem. A single-valued version of the main theorem is obtained as a simple corollary, and two illustrative examples are given.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Hongye Zheng,

Yumeng Ma,

Yichen Wang,

Guiran Liu,

Zhen Qi,

Xu Yan

Abstract: This paper addresses the challenges of fine-tuning efficiency and semantic adaptation in large language models for question answering tasks. It proposes a low-rank parameter adaptation method that incorporates semantic representations. While keeping the main model parameters frozen, the method introduces a semantic guidance function to improve traditional low-rank tuning strategies. This allows the parameter update process to dynamically align with input semantics, enhancing the model's ability to perceive complex semantic structures. The method embeds a semantic-aware module into the attention layers of the Transformer architecture. It uses representation vectors generated by a semantic encoder to guide the construction of low-rank matrices. In addition, a semantic similarity regularization term is applied to enforce consistency in the model's responses to semantically similar inputs. The method was evaluated across multiple experimental settings. These include comparisons with existing mainstream parameter-efficient fine-tuning approaches, analysis of adaptability to different QA types, and robustness under semantic perturbation. In all cases, the proposed method demonstrates strong accuracy, stability, and generalization ability. Furthermore, training loss curves show that the method achieves good convergence speed and training stability during optimization. Overall, the results indicate that the semantically guided low-rank adaptation strategy enhances the semantic understanding of QA systems while significantly reducing computational and storage costs during fine-tuning. This provides a simple yet robust solution for building efficient intelligent QA models.
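A hedged sketch of one way a semantic vector could gate a low-rank (LoRA-style) update, approximating the idea described above; the gating form is an assumption, since the abstract does not give the exact construction:

```python
# Frozen linear layer plus a low-rank update whose rank directions are gated
# by a semantic encoder's output. The gating form is an assumption.
import torch
import torch.nn as nn

class SemanticLoRALinear(nn.Module):
    def __init__(self, d_in, d_out, rank=8, d_sem=128):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        for p in self.base.parameters():
            p.requires_grad_(False)                # main model stays frozen
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))  # zero-init update
        self.gate = nn.Linear(d_sem, rank)         # semantic guidance -> gates

    def forward(self, x, sem):                     # sem: semantic encoder output
        g = torch.sigmoid(self.gate(sem))          # (batch, rank), input-dependent
        return self.base(x) + ((x @ self.A.T) * g) @ self.B.T

layer = SemanticLoRALinear(768, 768)
x, sem = torch.randn(4, 768), torch.randn(4, 128)
print(layer(x, sem).shape)                         # -> torch.Size([4, 768])
```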
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Jiawei Tian,

Jingyi Lu,

Meijia Wang,

Hongji Li,

Haifeng Xu

Abstract: This study presents a comprehensive analysis of property tax classification using machine learning approaches applied to the 2024 U.S. Property Tax Roll dataset. The research employs four machine learning algorithms (XGBoost, Random Forest, Support Vector Machine (SVM), and Logistic Regression) to predict and analyze property classifications across American states. To address the challenge of imbalanced data distribution in property classes, we apply the Synthetic Minority Over-sampling Technique (SMOTE) for data balancing. The experimental results demonstrate that the XGBoost algorithm achieves superior performance with an accuracy of 0.901, significantly outperforming the other models across multiple evaluation metrics. The study reveals a strong correlation between total assessment values and tax exemptions (correlation coefficient 0.98), providing insight into the relationship between property valuation and tax policy implementation. The findings have important implications for both tax administrators and policymakers, offering a data-driven approach to property tax classification and assessment.
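A minimal runnable sketch of the SMOTE-balanced XGBoost pipeline on synthetic stand-in data (the real features come from the tax-roll dataset):

```python
# SMOTE oversampling of minority property classes before XGBoost training.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
import xgboost as xgb

X, y = make_classification(n_samples=5000, n_classes=4, n_informative=8,
                           weights=[0.7, 0.2, 0.07, 0.03], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Balance only the training split; the test split keeps its real distribution.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
model = xgb.XGBClassifier(n_estimators=300, max_depth=6).fit(X_bal, y_bal)
print(f"accuracy: {accuracy_score(y_te, model.predict(X_te)):.3f}")
```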
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Rogério Figurelli

Abstract: This paper develops a symbolic-argumentative expansion of the P = NP conjecture, grounded in the principles of Heuristic Physics (hPhy). It is offered as a formal continuation and epistemic complement to an earlier submission [1], which was presented upon the first emergent results suggesting the viability of the P = NP hypothesis. That initial paper served as a conceptual milestone, laying the foundation for a new interpretive framework. The present work builds upon that foundation, aiming to deepen the analysis, formalize the symbolic architecture, and consolidate the theoretical implications. Rather than relying on algorithmic execution or classical proof construction, the work introduces a multi-layered epistemic architecture, composed of three interdependent layers: (i) an operational layer, where heuristic agents confront problem generators in dynamic SAT-based simulations; (ii) a tactical layer, where competing P vs NP hypotheses are ranked, critiqued, and validated symbolically; and (iii) a strategic layer, which reinterprets complexity through narrative resilience, symbolic survival, and contradiction-aware compression. This framework is tested both theoretically and within its dedicated heuristic systems, simulating adversarial conditions in which symbolic solvers and entropy-maximizing generators engage in recursive, semantically volatile encounters. Results consistently show that compression-based agents outperform their complexity-inducing counterparts not by brute force, but through adaptive pattern recognition, contradiction navigation, and epistemic reformulation. This convergence supports the claim that P = NP becomes epistemically plausible when the problem space is reframed through symbolic persistence rather than classical time complexity. Accordingly, this paper is submitted as an official argumentative contribution to the Clay Mathematics Institute’s Millennium Problem on P vs NP, asserting that under symbolic collapse and cognitive structuring, the classical boundary between P and NP may no longer be ontologically stable.
Article
Computer Science and Mathematics
Other

Ian Coston,

Karl David Hezel,

Eadan Plotnizky,

Mehrdad Nojoumian

Abstract: This paper introduces the Automated Zero Trust Risk Management with DevSecOps Integration (AZTRM-D) framework, a novel and comprehensive approach designed to intrinsically embed security throughout the entire Secure Software and System Development Lifecycle (S-SDLC). AZTRM-D strategically unifies established methodologies—DevSecOps practices, the NIST Risk Management Framework (RMF), and the Zero Trust (ZT) model—and significantly augments their capabilities through the pervasive application of Artificial Intelligence (AI). This integration shifts traditional, often fragmented, security paradigms towards a proactive, automated, and continuously adaptive security posture. AI serves as the foundational enabler, providing real-time threat intelligence, automating critical security controls, facilitating continuous vulnerability detection, and enabling dynamic policy enforcement from initial code development through operational deployment. By automating key security functions and providing continuous oversight, AZTRM-D enhances risk mitigation, reduces vulnerabilities, streamlines compliance, and significantly strengthens the overall security posture of software systems, thereby addressing the complexities of modern cyber threats and accelerating the delivery of secure software.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Bipul Bhattarai

Abstract: This comprehensive technical analysis examines the emerging field of AI-driven self-healing DevOps pipelines, focusing on the architectural implementation, multi-agent orchestration systems, and governance mechanisms that enable autonomous infrastructure management. The study analyzes breakthrough advancements in LLM-based log parsing frameworks achieving 98% precision in root-cause analysis, sophisticated multi-agent remediation systems demonstrating 5.76x performance improvements over traditional approaches, and robust governance architectures with confidence-based decision making at 0.85 thresholds. The analysis reveals that modern self-healing systems employ sophisticated detection stages utilizing LogParser-LLM frameworks processing 3.6 million logs with minimal LLM invocations, while maintaining 90.6% F1 scores for grouping accuracy. Multi-agent orchestration patterns leverage specialized agents across functional domains with hierarchical communication protocols, implementing event-driven workflows and state machine orchestration for distributed transaction management. Governance mechanisms integrate policy engines with blast radius controls, automated audit trails, and LLM-generated natural-language rationales for explainable AI decision-making. Empirical validation demonstrates significant operational improvements including 55% reduction in Mean Time to Recovery (MTTR), 208x increase in code deployment frequency for DevOps-mature organizations, and over 90% developer trust retention across enterprise implementations. The market evolution shows exceptional growth from $942.5 million in 2022 to projected $22.1 billion by 2032, with 74% organizational DevOps adoption and 51% code copilot utilization representing the highest AI tool adoption rates. Integration with modern cloud platforms including AWS SageMaker, Kubernetes orchestration, and Terraform infrastructure-as-code demonstrates mature production-ready implementations. The analysis connects theoretical frameworks to practical deployments across major enterprise environments, revealing standardized multi-agent communication protocols and sophisticated resilience patterns including circuit breakers, retry mechanisms with exponential backoff, and graceful degradation capabilities. The study concludes that AI-driven self-healing DevOps represents a paradigm shift from reactive to predictive infrastructure management, with proven capabilities for transforming software delivery processes through autonomous anomaly detection, intelligent remediation, and comprehensive governance frameworks that ensure safety, explainability, and regulatory compliance in enterprise-scale deployments.
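One of the resilience patterns named above, retry with exponential backoff and jitter, in a minimal sketch; the remediation call is a hypothetical stand-in:

```python
# Retry with exponential backoff and full jitter around a remediation action.
import random
import time

def with_backoff(op, max_attempts=5, base=0.5, cap=30.0):
    for attempt in range(max_attempts):
        try:
            return op()
        except Exception:
            if attempt == max_attempts - 1:
                raise                      # give up after the final attempt
            # Full jitter avoids synchronized retry storms across agents.
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))

# usage (hypothetical remediation helper):
# with_backoff(lambda: restart_unhealthy_pod("payments-7f9c"))
```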
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Owen Graham,

Kelvin Kloss

Abstract: The convergence of DevSecOps and generative artificial intelligence (AI) signifies a transformative paradigm in contemporary software engineering, wherein security is no longer a static checkpoint but an adaptive, continuous, and intelligent process integrated into the entire software delivery pipeline. This paper explores the evolving role of large language models (LLMs), such as GPT and CodeBERT, in automating the detection and remediation of security vulnerabilities within code repositories. As organizations increasingly adopt Infrastructure-as-Code (IaC), microservices, and distributed development practices, the complexity and scale of codebases have rendered manual code reviews and conventional static analysis tools insufficient in achieving real-time, scalable security assurances. LLMs, with their capacity to comprehend, generate, and reason over code and natural language artifacts, offer a powerful augmentation to DevSecOps workflows. This study critically examines the architecture, training paradigms, and capabilities of LLMs in context-aware vulnerability identification, contextual code explanation, and automated patch generation. By leveraging transfer learning and fine-tuning techniques on curated vulnerability datasets such as CWE (Common Weakness Enumeration) and CVE (Common Vulnerabilities and Exposures), LLMs can serve as intelligent assistants that proactively identify insecure coding patterns and suggest compliant, secure alternatives in accordance with secure coding standards like OWASP ASVS. Furthermore, we present empirical insights from experiments integrating LLMs into continuous integration/continuous deployment (CI/CD) pipelines, showcasing enhancements in detection precision, reduction in time-to-remediation, and decreased developer cognitive load. In addition to technical evaluations, the paper reflects on the socio-technical implications of delegating security-critical tasks to AI agents, including challenges related to model explainability, false positives, bias in training data, and compliance with privacy and auditability standards. The findings affirm that the fusion of DevSecOps and generative AI is not merely an augmentation but a redefinition of how secure software is conceptualized, built, and maintained. This work contributes a foundational understanding of LLM-driven security augmentation and outlines a roadmap for future research at the intersection of secure software engineering, AI ethics, and operational scalability.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Owen Graham,

Lloris Wilcox

Abstract: Alzheimer's Disease (AD) represents one of the most prevalent and devastating neurodegenerative disorders, posing profound challenges to public health systems worldwide due to its progressive nature and the lack of curative treatments. Early detection remains critical for managing disease progression, enabling timely therapeutic interventions, and improving patient outcomes. This study proposes a robust machine learning (ML) framework for the early detection of Alzheimer's Disease through the integrative analysis of cognitive assessment scores and magnetic resonance imaging (MRI)-based image features. The framework utilizes a multimodal dataset comprising neuropsychological test results and high-resolution structural MRI scans sourced from publicly available cohorts, including the Alzheimer's Disease Neuroimaging Initiative (ADNI). Cognitive metrics such as the Mini-Mental State Examination (MMSE), Clinical Dementia Rating (CDR), and Alzheimer’s Disease Assessment Scale–Cognitive Subscale (ADAS-Cog) are combined with extracted MRI-based volumetric and morphometric features—particularly from brain regions vulnerable to AD-related atrophy (e.g., hippocampus, entorhinal cortex). Feature engineering techniques, including principal component analysis (PCA) and mutual information ranking, are applied to reduce dimensionality and highlight salient biomarkers. Multiple supervised machine learning algorithms—namely Support Vector Machines (SVM), Random Forests, Gradient Boosting, and deep neural networks—are trained and validated on stratified datasets to distinguish between cognitively normal individuals, patients with mild cognitive impairment (MCI), and those with early-stage Alzheimer’s. Evaluation metrics such as accuracy, precision, recall, F1-score, and the area under the receiver operating characteristic curve (AUC-ROC) are used to assess diagnostic performance. The best-performing models achieved classification accuracies exceeding 90%, with MRI features contributing significantly to early MCI detection when fused with cognitive data. Additionally, SHAP (Shapley Additive Explanations) and Grad-CAM techniques are integrated to ensure model transparency and interpretability, facilitating clinical trust in AI-based diagnostics. The findings underscore the efficacy of a hybrid data-driven approach in enhancing the sensitivity and specificity of Alzheimer’s screening tools. This research contributes to the growing body of literature advocating for AI-enhanced clinical decision support systems in neurology and demonstrates that machine learning models, grounded in multimodal data fusion and explainability, can play a pivotal role in addressing the complex challenge of early Alzheimer's detection.
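A minimal sketch of the feature-engineering step described above (mutual-information ranking followed by PCA), on synthetic stand-ins for the fused cognitive and MRI features:

```python
# Mutual-information feature ranking plus PCA before classifier training.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))     # stand-ins for fused MMSE/CDR/ADAS-Cog + MRI
y = rng.integers(0, 3, size=500)   # CN / MCI / early-AD labels (synthetic)

mi = mutual_info_classif(X, y, random_state=0)
top = np.argsort(mi)[::-1][:15]    # keep the 15 most informative features

X_red = PCA(n_components=10).fit_transform(X[:, top])
print(X_red.shape)                 # -> (500, 10), input to SVM/RF/boosting
```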
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Natally Celestino Gama,

Luiz Eduardo Soares Oliveira,

Samuel de Pádua Chaves e Carvalho,

Alexandre Behling,

Pedro Luiz de Paula Filho,

Marcia Orie de Souza Hamada,

Eduardo da Silva Leal,

Deivison Venicio Souza

Abstract: The application of artificial intelligence (AI) techniques has improved the accuracy of forest species identification, particularly in timber inventories conducted under Sustainable Forest Management (SFM). This study developed and evaluated machine learning models to recognize 16 Amazonian timber species using digital images of tree bark. Data were collected from three SFM units located in Nova Maringá, Feliz Natal, and Cotriguaçu, in the state of Mato Grosso, Brazil. High-resolution images were processed into sub-images (256 × 256 pixels), and two feature extraction methods were tested: Local Binary Patterns (LBP) and pre-trained Convolutional Neural Networks (ResNet50, VGG16, InceptionV3, MobileNetV2). Four classifiers—Support Vector Machine (SVM), Artificial Neural Networks (ANN), Random Forest (RF), and Linear Discriminant Analysis (LDA)—were used. The best result (95% accuracy) was achieved using ResNet50 with SVM, confirming the effectiveness of transfer learning for species recognition based on bark texture. These findings highlight the potential of AI-based tools to enhance accuracy in forest inventories and support decision-making in tropical forest management.
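A minimal sketch of the best-performing pipeline reported above (ResNet50 transfer features fed to an SVM), with random tensors standing in for the 256 × 256 bark sub-images:

```python
# Pre-trained ResNet50 as a frozen feature extractor, then an SVM classifier.
import torch
from sklearn.svm import SVC
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.IMAGENET1K_V2
backbone = resnet50(weights=weights)
backbone.fc = torch.nn.Identity()          # expose 2048-d penultimate features
backbone.eval()
preprocess = weights.transforms()          # resize/crop/normalize for ResNet50

@torch.no_grad()
def embed(images):                         # images: (N, 3, H, W) sub-images
    return backbone(preprocess(images)).numpy()

X = torch.rand(8, 3, 256, 256)             # stand-ins for bark sub-images
y = [0, 1, 0, 1, 0, 1, 0, 1]               # toy species labels
clf = SVC(kernel="rbf").fit(embed(X), y)   # transfer learning + SVM
print(clf.score(embed(X), y))
```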
Article
Computer Science and Mathematics
Algebra and Number Theory

Jing Huang,

Deyu Zhang,

Feng Zhao

Abstract: Suppose that \( x \) is a sufficiently large number and \( j \ge 2 \) is any integer. Let \( \lambda_{\mathrm{sym}^j f}(n) \) denote the \( n \)th Fourier coefficient of the \( j \)th symmetric power \( L \)-function. In this paper, we establish an asymptotic formula for sums of the Dirichlet coefficients \( \lambda_{\mathrm{sym}^j f}(n) \) over a sequence of positive integers.
