Computer Science and Mathematics

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Rogério Figurelli

Abstract: This paper develops a symbolic-argumentative expansion of the P = NP conjecture, grounded in the principles of Heuristic Physics (hPhy). It is offered as a formal continuation and epistemic complement to an earlier submission [1], which was presented upon the first emergent results suggesting the viability of the P = NP hypothesis. That initial paper served as a conceptual milestone, laying the foundation for a new interpretive framework. The present work builds upon that foundation, aiming to deepen the analysis, formalize the symbolic architecture, and consolidate the theoretical implications. Rather than relying on algorithmic execution or classical proof construction, the work introduces a multi-layered epistemic architecture, composed of three interdependent layers: (i) an operational layer, where heuristic agents confront problem generators in dynamic SAT-based simulations; (ii) a tactical layer, where competing P vs NP hypotheses are ranked, critiqued, and validated symbolically; and (iii) a strategic layer, which reinterprets complexity through narrative resilience, symbolic survival, and contradiction-aware compression. This framework is tested both theoretically and within its dedicated heuristic systems, simulating adversarial conditions in which symbolic solvers and entropy-maximizing generators engage in recursive, semantically volatile encounters. Results consistently show that compression-based agents outperform their complexity-inducing counterparts not by brute force, but through adaptive pattern recognition, contradiction navigation, and epistemic reformulation. This convergence supports the claim that P = NP becomes epistemically plausible when the problem space is reframed through symbolic persistence rather than classical time complexity. Accordingly, this paper is submitted as an official argumentative contribution to the Clay Mathematics Institute’s Millennium Problem on P vs NP, asserting that under symbolic collapse and cognitive structuring, the classical boundary between P and NP may no longer be ontologically stable.
Article
Computer Science and Mathematics
Other

Ian Coston,

Karl David Hezel,

Eadan Plotnizky,

Mehrdad Nojoumian

Abstract: This paper introduces the Automated Zero Trust Risk Management with DevSecOps Integration (AZTRM-D) framework, a novel and comprehensive approach designed to intrinsically embed security throughout the entire Secure Software and System Development Lifecycle (S-SDLC). AZTRM-D strategically unifies established methodologies—DevSecOps practices, the NIST Risk Management Framework (RMF), and the Zero Trust (ZT) model—and significantly augments their capabilities through the pervasive application of Artificial Intelligence (AI). This integration shifts traditional, often fragmented, security paradigms towards a proactive, automated, and continuously adaptive security posture. AI serves as the foundational enabler, providing real-time threat intelligence, automating critical security controls, facilitating continuous vulnerability detection, and enabling dynamic policy enforcement from initial code development through operational deployment. By automating key security functions and providing continuous oversight, AZTRM-D enhances risk mitigation, reduces vulnerabilities, streamlines compliance, and significantly strengthens the overall security posture of software systems, thereby addressing the complexities of modern cyber threats and accelerating the delivery of secure software.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Bipul Bhattarai

Abstract: This comprehensive technical analysis examines the emerging field of AI-driven self-healing DevOps pipelines, focusing on the architectural implementation, multi-agent orchestration systems, and governance mechanisms that enable autonomous infrastructure management. The study analyzes breakthrough advancements in LLM-based log parsing frameworks achieving 98% precision in root-cause analysis, sophisticated multi-agent remediation systems demonstrating 5.76x performance improvements over traditional approaches, and robust governance architectures with confidence-based decision making at 0.85 thresholds. The analysis reveals that modern self-healing systems employ sophisticated detection stages utilizing LogParser-LLM frameworks processing 3.6 million logs with minimal LLM invocations, while maintaining 90.6% F1 scores for grouping accuracy. Multi-agent orchestration patterns leverage specialized agents across functional domains with hierarchical communication protocols, implementing event-driven workflows and state machine orchestration for distributed transaction management. Governance mechanisms integrate policy engines with blast radius controls, automated audit trails, and LLM-generated natural-language rationales for explainable AI decision-making. Empirical validation demonstrates significant operational improvements including 55% reduction in Mean Time to Recovery (MTTR), 208x increase in code deployment frequency for DevOps-mature organizations, and over 90% developer trust retention across enterprise implementations. The market evolution shows exceptional growth from $942.5 million in 2022 to projected $22.1 billion by 2032, with 74% organizational DevOps adoption and 51% code copilot utilization representing the highest AI tool adoption rates. Integration with modern cloud platforms including AWS SageMaker, Kubernetes orchestration, and Terraform infrastructure-as-code demonstrates mature production-ready implementations. The analysis connects theoretical frameworks to practical deployments across major enterprise environments, revealing standardized multi-agent communication protocols and sophisticated resilience patterns including circuit breakers, retry mechanisms with exponential backoff, and graceful degradation capabilities. The study concludes that AI-driven self-healing DevOps represents a paradigm shift from reactive to predictive infrastructure management, with proven capabilities for transforming software delivery processes through autonomous anomaly detection, intelligent remediation, and comprehensive governance frameworks that ensure safety, explainability, and regulatory compliance in enterprise-scale deployments.
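Of the resilience patterns listed, retry with exponential backoff is the easiest to make concrete. The sketch below is a generic Python illustration with full jitter and a delay cap; it is not code from the systems analyzed, and the parameter defaults are illustrative.

```python
import random
import time

def retry_with_backoff(op, max_attempts=5, base_delay=0.1, max_delay=10.0):
    """Retry `op` with exponential backoff and full jitter; re-raise on exhaustion."""
    for attempt in range(max_attempts):
        try:
            return op()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Exponential backoff: base * 2^attempt, capped, with full jitter.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
```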
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Owen Graham,

Kelvin Kloss

Abstract: The convergence of DevSecOps and generative artificial intelligence (AI) signifies a transformative paradigm in contemporary software engineering, wherein security is no longer a static checkpoint but an adaptive, continuous, and intelligent process integrated into the entire software delivery pipeline. This paper explores the evolving role of large language models (LLMs), such as GPT and CodeBERT, in automating the detection and remediation of security vulnerabilities within code repositories. As organizations increasingly adopt Infrastructure-as-Code (IaC), microservices, and distributed development practices, the complexity and scale of codebases have rendered manual code reviews and conventional static analysis tools insufficient in achieving real-time, scalable security assurances. LLMs, with their capacity to comprehend, generate, and reason over code and natural language artifacts, offer a powerful augmentation to DevSecOps workflows. This study critically examines the architecture, training paradigms, and capabilities of LLMs in context-aware vulnerability identification, contextual code explanation, and automated patch generation. By leveraging transfer learning and fine-tuning techniques on curated vulnerability datasets such as CWE (Common Weakness Enumeration) and CVE (Common Vulnerabilities and Exposures), LLMs can serve as intelligent assistants that proactively identify insecure coding patterns and suggest compliant, secure alternatives in accordance with secure coding standards like OWASP ASVS. Furthermore, we present empirical insights from experiments integrating LLMs into continuous integration/continuous deployment (CI/CD) pipelines, showcasing enhancements in detection precision, reduction in time-to-remediation, and decreased developer cognitive load. In addition to technical evaluations, the paper reflects on the socio-technical implications of delegating security-critical tasks to AI agents, including challenges related to model explainability, false positives, bias in training data, and compliance with privacy and auditability standards. The findings affirm that the fusion of DevSecOps and generative AI is not merely an augmentation but a redefinition of how secure software is conceptualized, built, and maintained. This work contributes a foundational understanding of LLM-driven security augmentation and outlines a roadmap for future research at the intersection of secure software engineering, AI ethics, and operational scalability.
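As a hedged sketch of the fine-tuning setup described (CodeBERT adapted to vulnerability classification), the snippet below wires microsoft/codebert-base into a two-class classifier; the two toy snippets and labels stand in for a curated CWE/CVE-derived corpus, which the paper assumes but this sketch does not reproduce.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Minimal sketch: fine-tune CodeBERT as a binary vulnerable/benign classifier.
tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/codebert-base", num_labels=2)

snippets = ["strcpy(buf, user_input);",
            "strncpy(buf, user_input, sizeof(buf) - 1);"]
labels = torch.tensor([1, 0])  # 1 = insecure pattern, 0 = safer variant (toy data)

batch = tok(snippets, padding=True, truncation=True, return_tensors="pt")
out = model(**batch, labels=labels)
out.loss.backward()  # one illustrative gradient step; real training loops over a dataset
```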
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Owen Graham,

Lloris Wilcox

Abstract: Alzheimer's Disease (AD) represents one of the most prevalent and devastating neurodegenerative disorders, posing profound challenges to public health systems worldwide due to its progressive nature and the lack of curative treatments. Early detection remains critical for managing disease progression, enabling timely therapeutic interventions, and improving patient outcomes. This study proposes a robust machine learning (ML) framework for the early detection of Alzheimer's Disease through the integrative analysis of cognitive assessment scores and magnetic resonance imaging (MRI)-based image features. The framework utilizes a multimodal dataset comprising neuropsychological test results and high-resolution structural MRI scans sourced from publicly available cohorts, including the Alzheimer's Disease Neuroimaging Initiative (ADNI). Cognitive metrics such as the Mini-Mental State Examination (MMSE), Clinical Dementia Rating (CDR), and Alzheimer’s Disease Assessment Scale–Cognitive Subscale (ADAS-Cog) are combined with extracted MRI-based volumetric and morphometric features—particularly from brain regions vulnerable to AD-related atrophy (e.g., hippocampus, entorhinal cortex). Feature engineering techniques, including principal component analysis (PCA) and mutual information ranking, are applied to reduce dimensionality and highlight salient biomarkers. Multiple supervised machine learning algorithms—namely Support Vector Machines (SVM), Random Forests, Gradient Boosting, and deep neural networks—are trained and validated on stratified datasets to distinguish between cognitively normal individuals, patients with mild cognitive impairment (MCI), and those with early-stage Alzheimer’s. Evaluation metrics such as accuracy, precision, recall, F1-score, and the area under the receiver operating characteristic curve (AUC-ROC) are used to assess diagnostic performance. The best-performing models achieved classification accuracies exceeding 90%, with MRI features contributing significantly to early MCI detection when fused with cognitive data. Additionally, SHAP (Shapley Additive Explanations) and Grad-CAM techniques are integrated to ensure model transparency and interpretability, facilitating clinical trust in AI-based diagnostics. The findings underscore the efficacy of a hybrid data-driven approach in enhancing the sensitivity and specificity of Alzheimer’s screening tools. This research contributes to the growing body of literature advocating for AI-enhanced clinical decision support systems in neurology and demonstrates that machine learning models, grounded in multimodal data fusion and explainability, can play a pivotal role in addressing the complex challenge of early Alzheimer's detection.

CHAPTER ONE: INTRODUCTION

1.1 Background of the Study

Alzheimer’s Disease (AD) is a progressive and irreversible neurodegenerative disorder that primarily affects the elderly population and leads to cognitive impairment, memory loss, language deterioration, and eventually complete functional dependence. As the global population ages, the prevalence of AD is expected to rise dramatically, posing significant social, economic, and healthcare challenges. According to the World Health Organization, over 55 million people currently live with dementia globally, with Alzheimer’s Disease accounting for 60–70% of these cases.
The societal cost of managing Alzheimer’s and related dementias is projected to exceed one trillion dollars globally, thereby intensifying the need for early detection and intervention. Early diagnosis of Alzheimer’s Disease is crucial, as it enables patients to receive appropriate therapeutic interventions, participate in clinical trials, and plan for the future. However, traditional diagnostic methods, including clinical interviews, neuropsychological tests, and radiological assessments, are often subjective, time-consuming, and sometimes inconclusive, especially in the prodromal stages. Structural imaging techniques, such as magnetic resonance imaging (MRI), have provided valuable insights into neuroanatomical changes, particularly in the medial temporal lobe regions. Concurrently, cognitive scores derived from instruments like the Mini-Mental State Examination (MMSE), the Alzheimer’s Disease Assessment Scale-Cognitive Subscale (ADAS-Cog), and the Clinical Dementia Rating (CDR) offer quantitative measures of cognitive decline. The increasing availability of large-scale, multimodal medical datasets has enabled the application of machine learning (ML) algorithms to automate and enhance Alzheimer’s detection processes. ML techniques have demonstrated promise in extracting patterns from complex datasets, combining imaging biomarkers and cognitive features, and classifying disease stages with high accuracy. However, developing an interpretable and clinically reliable ML framework that leverages both cognitive assessments and MRI-based image features remains an ongoing research challenge.
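A minimal, self-contained sketch of the fusion-plus-classification stage described above, using synthetic stand-ins for the ADNI-derived cognitive and MRI feature matrices; the dimensions and the 95%-variance PCA cut are illustrative choices, not the study's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature matrices: cognitive scores (MMSE, CDR, ADAS-Cog) and
# MRI volumetric/morphometric features per subject; labels in {CN, MCI, AD}.
rng = np.random.default_rng(0)
X_cog = rng.normal(size=(300, 3))
X_mri = rng.normal(size=(300, 120))
y = rng.integers(0, 3, size=300)

X = np.hstack([X_cog, X_mri])  # early (feature-level) fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(), PCA(n_components=0.95), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```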
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Natally Celestino Gama,

Luiz Eduardo Soares Oliveira,

Samuel de Pádua Chaves e Carvalho,

Alexandre Behling,

Pedro Luiz de Paula Filho,

Marcia Orie de Souza Hamada,

Eduardo da Silva Leal,

Deivison Venicio Souza

Abstract: The application of artificial intelligence (AI) techniques has improved the accuracy of forest species identification, particularly in timber inventories conducted under Sustainable Forest Management (SFM). This study developed and evaluated machine learning models to recognize 16 Amazonian timber species using digital images of tree bark. Data were collected from three SFM units located in Nova Maringá, Feliz Natal, and Cotriguaçu, in the state of Mato Grosso, Brazil. High-resolution images were processed into sub-images (256 × 256 pixels), and two feature extraction methods were tested: Local Binary Patterns (LBP) and pre-trained Convolutional Neural Networks (ResNet50, VGG16, InceptionV3, MobileNetV2). Four classifiers—Support Vector Machine (SVM), Artificial Neural Networks (ANN), Random Forest (RF), and Linear Discriminant Analysis (LDA)—were used. The best result (95% accuracy) was achieved using ResNet50 with SVM, confirming the effectiveness of transfer learning for species recognition based on bark texture. These findings highlight the potential of AI-based tools to enhance accuracy in forest inventories and support decision-making in tropical forest management.
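A short sketch of the winning configuration (ImageNet-pretrained ResNet50 features fed to an SVM); the bark sub-images and labels are assumed, and the specific weight and preprocessing choices are illustrative rather than the study's exact setup.

```python
import torch
from sklearn.svm import SVC
from torchvision.models import resnet50, ResNet50_Weights

# Frozen ResNet50 feature extractor over 256x256 bark sub-images, followed by
# an SVM classifier over the pooled 2048-d embeddings (transfer learning).
weights = ResNet50_Weights.IMAGENET1K_V2
backbone = resnet50(weights=weights)
backbone.fc = torch.nn.Identity()   # expose the pooled 2048-d features
backbone.eval()

preprocess = weights.transforms()

@torch.no_grad()
def embed(images):                  # images: list of PIL.Image bark patches
    batch = torch.stack([preprocess(im) for im in images])
    return backbone(batch).numpy()

# svm = SVC(kernel="rbf").fit(embed(train_images), train_labels)  # assumed data
```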
Article
Computer Science and Mathematics
Algebra and Number Theory

Jing Huang,

Deyu Zhang,

Feng Zhao

Abstract: Suppose that x is a sufficiently large number and j ≥ 2 is any integer. Let λ_{sym^j f}(n) denote the n-th Fourier coefficient of the j-th symmetric power L-function. In this paper, we establish an asymptotic formula for sums of the Dirichlet coefficients λ_{sym^j f}(n) over a sequence of positive integers.
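For orientation, the LaTeX sketch below records the standard definition of these coefficients and the generic shape such second-moment asymptotics take via Rankin-Selberg theory; it is illustrative background, not the paper's theorem or error term.

```latex
% Standard setting (assumed background, not quoted from the paper): for a
% holomorphic Hecke eigenform f, the j-th symmetric power L-function is
\[
  L(\mathrm{sym}^j f, s) \;=\; \sum_{n=1}^{\infty}
    \frac{\lambda_{\mathrm{sym}^j f}(n)}{n^{s}}, \qquad \Re(s) > 1,
\]
% and asymptotics of this flavour typically take the shape
\[
  \sum_{n \le x} \lambda_{\mathrm{sym}^j f}(n)^2
    \;=\; c_j\, x \;+\; O\!\left(x^{\theta_j + \varepsilon}\right),
  \qquad 0 < \theta_j < 1,
\]
% with the main term coming from the Rankin--Selberg convolution.
```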
Article
Computer Science and Mathematics
Computer Science

Owen Graham,

Lloris Wilcox

Abstract: The exponential growth of large-scale medical datasets—driven by the adoption of electronic health records (EHRs), wearable health technologies, and AI-based clinical systems—has significantly enhanced opportunities for medical research and personalized healthcare delivery. However, this expansion also introduces complex privacy challenges, particularly concerning the risk of re-identification, unauthorized data inference, and linkage attacks. Existing privacy protection mechanisms often fall short in providing scalable, context-sensitive, and quantitative assessments of these risks. This study presents a comprehensive examination of privacy risk assessment frameworks that utilize computational metrics to evaluate the vulnerability of large-scale medical datasets. It critically reviews current approaches, including differential privacy, k-anonymity, l-diversity, and adversarial risk modeling, and identifies their limitations in handling the dynamic and high-dimensional nature of medical data. Building on these insights, we propose a novel, metric-based privacy risk assessment framework that integrates probabilistic modeling, sensitivity analysis, and contextual data flow mapping to offer real-time, fine-grained risk evaluations. Empirical validation is conducted using diverse medical datasets, assessing the framework's performance across multiple dimensions: accuracy in risk estimation, adaptability to evolving data-sharing scenarios, and compliance with legal and ethical standards such as GDPR and HIPAA. Furthermore, the study explores the incorporation of privacy-enhancing technologies (PETs), including federated learning, homomorphic encryption, and synthetic data generation, to mitigate identified risks without compromising data utility. The results demonstrate the framework’s capacity to support data custodians and healthcare institutions in making informed, accountable decisions about data sharing and use. By grounding privacy risk assessment in computational rigor and practical applicability, this work advances the development of scalable, trustworthy infrastructures for secure medical data management in the era of data-driven healthcare.
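Among the reviewed mechanisms, k-anonymity has the most compact concrete form. The sketch below computes the effective k of a table over chosen quasi-identifier columns; the records are synthetic illustrations, not drawn from the study's datasets.

```python
import pandas as pd

def k_anonymity(df: pd.DataFrame, quasi_identifiers: list[str]) -> int:
    """Smallest equivalence-class size over the quasi-identifier columns.
    A table is k-anonymous iff every record shares its quasi-identifier
    combination with at least k-1 others, i.e. the returned value >= k."""
    return int(df.groupby(quasi_identifiers).size().min())

# Illustrative synthetic records:
records = pd.DataFrame({
    "zip": ["30301", "30301", "30302", "30302"],
    "age_band": ["60-69", "60-69", "60-69", "60-69"],
    "diagnosis": ["AD", "MCI", "CN", "AD"],
})
print(k_anonymity(records, ["zip", "age_band"]))  # -> 2
```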
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Jingyi Huang,

Yujuan Qiu

Abstract: To improve the accuracy and real-time performance of detecting abnormal smart-meter behavior, we construct a time series prediction model based on Long Short-Term Memory (LSTM), combining a sliding-window mechanism with a residual dynamic thresholding strategy to flag abnormal behavior. The study covers data preprocessing, model structure design, system deployment, and visualization feedback, and optimizes training performance by introducing early stopping, dropout, and learning-rate adjustment. Comparison experiments on real residential electricity consumption data show that the LSTM model outperforms traditional methods such as ARIMA, SVR, and GRU in prediction error and recognition accuracy, exhibiting strong sequence-modeling capability and stable anomaly recognition.
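A minimal PyTorch sketch of this pipeline: an LSTM with dropout predicts the next reading from a sliding window, and residuals are compared against a trailing mean-plus-k-sigma threshold. The window length, hidden size, and k are illustrative choices, not the paper's settings.

```python
import numpy as np
import torch
import torch.nn as nn

WINDOW = 24  # illustrative: one day of hourly readings

class LSTMForecaster(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.drop = nn.Dropout(0.2)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                         # x: (batch, WINDOW, 1)
        out, _ = self.lstm(x)
        return self.head(self.drop(out[:, -1]))   # next-step prediction

def flag_anomalies(y_true, y_pred, window=96, k=3.0):
    """Residual dynamic threshold: a point is anomalous when its absolute
    residual exceeds mean + k*std of residuals over a trailing window."""
    resid = np.abs(y_true - y_pred)
    flags = np.zeros_like(resid, dtype=bool)
    for t in range(window, len(resid)):
        mu, sigma = resid[t - window:t].mean(), resid[t - window:t].std()
        flags[t] = resid[t] > mu + k * sigma
    return flags
```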
Article
Computer Science and Mathematics
Computer Science

James Henderson,

Mark Pearson

Abstract: The increasing adoption of Natural Language Processing (NLP) in healthcare has the potential to transform clinical practices by enabling the efficient extraction of insights from unstructured clinical notes. However, the sensitive nature of patient information contained within these notes raises significant privacy concerns, necessitating robust privacy-preserving methods. This paper explores the integration of privacy-preserving techniques in NLP applications designed for clinical notes, addressing the dual objectives of maintaining patient confidentiality and leveraging the rich data for clinical decision-making. We begin by reviewing existing privacy regulations and the ethical implications of handling sensitive healthcare data. The study then examines various privacy-preserving methodologies, including differential privacy, federated learning, and homomorphic encryption, highlighting their applicability in the context of NLP. Empirical evaluations demonstrate the effectiveness of these techniques in safeguarding patient information while preserving the utility of NLP models. The findings underscore the importance of developing privacy-aware NLP frameworks that balance the need for data-driven insights with stringent privacy requirements. By proposing a comprehensive approach to privacy-preserving NLP in clinical settings, this research contributes to the ongoing discourse on ethical AI deployment in healthcare, ultimately fostering greater trust and security in the use of advanced analytics for patient care.
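Of the techniques examined, differential privacy is the simplest to state concretely. The sketch below shows the standard Laplace mechanism for releasing a count derived from clinical notes; the values are illustrative.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float,
                      rng=np.random.default_rng()) -> float:
    """Epsilon-differentially-private release of a numeric query result.
    Noise scale is sensitivity/epsilon (the standard Laplace mechanism)."""
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# E.g., releasing a count of notes mentioning a diagnosis: a counting query
# has sensitivity 1, since one patient changes the count by at most 1.
private_count = laplace_mechanism(true_value=128, sensitivity=1.0, epsilon=0.5)
```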
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Changmin Lee,

Hyunwoo Lee,

Mincheol Whang

Abstract: Remote photoplethysmography (rPPG) enables non-contact physiological measurement for emotion recognition, yet the temporally sparse nature of emotional cardiovascular responses, intrinsic measurement noise, weak session-level labels, and subtle correlates of valence pose critical challenges. To address these issues, we propose a physiologically inspired deep learning framework comprising a Multi-scale Temporal Dynamics Encoder (MTDE) to capture autonomic nervous system dynamics across multiple timescales, an adaptive sparse α-entmax attention mechanism to identify salient emotional segments amidst noisy signals, Gated Temporal Pooling for robust aggregation of emotional features, and a structured three-phase curriculum learning strategy to systematically handle temporal sparsity, weak labels, and noise. Evaluated on the MAHNOB-HCI dataset (27 subjects, 527 sessions, subject-independent split), our temporal-only model achieved competitive performance in arousal recognition (66.04% accuracy, 61.97% weighted F1), surpassing prior CNN-LSTM baselines. However, lower performance in valence (62.26% accuracy) revealed inherent physiological limitations of unimodal temporal cardiovascular analysis. These findings establish clear benchmarks for temporal-only rPPG emotion recognition and underscore the necessity of incorporating spatial or multimodal information to effectively capture nuanced emotional dimensions such as valence, guiding future research directions in affective computing.
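The abstract does not specify the internals of the Gated Temporal Pooling module; the PyTorch sketch below is one plausible reading, in which a learned sigmoid gate weights each timestep before a normalized average, so that sparse emotional segments can dominate the session-level embedding.

```python
import torch
import torch.nn as nn

class GatedTemporalPooling(nn.Module):
    """One plausible reading of gated temporal pooling (an assumption, not the
    paper's published module): gate each frame's features, then average."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, h):              # h: (batch, time, dim) rPPG features
        g = self.gate(h)               # (batch, time, 1), values in (0, 1)
        return (g * h).sum(dim=1) / g.sum(dim=1).clamp_min(1e-6)
```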
Article
Computer Science and Mathematics
Computer Science

Owen Graham,

Lloris Wilcox

Abstract: The rapid evolution of deep learning has fundamentally transformed medical image analysis, enabling unprecedented advancements in diagnostic accuracy, disease detection, and clinical decision support. This study presents a comprehensive investigation into the development, optimization, and evaluation of deep learning models specifically tailored for the classification of medical imaging data across various modalities, including radiographs (X-rays), computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound. The research addresses critical challenges associated with data heterogeneity, limited labeled samples, class imbalance, and the need for explainability in clinical contexts. A hybrid methodology was adopted, combining convolutional neural networks (CNNs), transfer learning techniques, and attention mechanisms to develop robust classifiers capable of distinguishing between pathological and non-pathological images with high precision. Publicly available datasets such as ChestX-ray14, BraTS, and NIH's DeepLesion were used to ensure diversity and generalizability. The models were evaluated using rigorous metrics including accuracy, area under the receiver operating characteristic curve (AUC-ROC), sensitivity, specificity, and F1-score. The best-performing architecture—based on an ensemble of ResNet50 and EfficientNet-B4—achieved an average classification accuracy of 94.3% and an AUC-ROC of 0.97 across multiple tasks, outperforming traditional machine learning baselines. Furthermore, the study explores the integration of Grad-CAM and SHAP interpretability frameworks to visualize and validate the model’s decision-making process, enhancing clinical trust and adoption potential. A critical component of the research also includes a comparative analysis of training paradigms under supervised, semi-supervised, and self-supervised learning conditions, demonstrating that hybrid semi-supervised approaches significantly reduce dependency on large annotated datasets without compromising model performance. This work contributes to the growing body of knowledge in AI-driven healthcare by offering a scalable and generalizable framework for automated image classification, addressing both technical performance and ethical transparency. The findings have far-reaching implications for radiology, oncology, and pathology, potentially enabling faster diagnosis, reduced diagnostic error, and improved healthcare accessibility, particularly in low-resource settings. Future research directions include integrating multimodal imaging data, leveraging federated learning for privacy-preserving training, and extending the framework to real-time clinical deployment.
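A rough sketch of the soft-voting ensemble the abstract credits as best-performing. The two-class setup, input size, and probability-averaging rule are assumptions for illustration, since the exact fusion scheme is not stated here, and both backbones are assumed fine-tuned for the same task beforehand.

```python
import torch
from torchvision.models import efficientnet_b4, resnet50

def ensemble_predict(x, models):             # x: (batch, 3, H, W)
    """Soft voting: average class probabilities over member models."""
    probs = [torch.softmax(m(x), dim=1) for m in models]
    return torch.stack(probs).mean(dim=0)

m1, m2 = resnet50(num_classes=2), efficientnet_b4(num_classes=2)
m1.eval(); m2.eval()
with torch.no_grad():
    preds = ensemble_predict(torch.randn(4, 3, 380, 380), [m1, m2]).argmax(dim=1)
```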
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Fishon Amos,

Solomon Emmanuel,

James Chukwuemeka

Abstract: As Large Language Models (LLMs) increasingly integrate with external tools and APIs, the risk of hallucinated or unsafe tool invocations poses significant challenges for production deployments. We present HGuard (HallucinationGuard), a middleware system designed to detect, prevent, and mitigate dangerous tool use in LLM-powered applications. Our system employs a multi-stage validation pipeline incorporating schema validation, fuzzy matching, and configurable policy enforcement to intercept potentially harmful tool calls before execution. Through comprehensive evaluation on 100 diverse test scenarios, we demonstrate that HGuard achieves 98% accuracy in detecting unsafe tool calls with minimal latency overhead (<10ms median). The system successfully prevents unauthorized API calls, parameter hallucinations, and phantom tool invocations while maintaining high throughput (>5,000 requests/second). These results establish HallucinationGuard as a practical safety layer for production AI systems requiring reliable tool use capabilities.
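As a rough illustration of the multi-stage idea (schema validation plus fuzzy matching over a tool registry), the sketch below checks a proposed tool call before execution; the tool names, schemas, and cutoff are invented for the example and are not HGuard's actual implementation.

```python
import difflib

REGISTERED_TOOLS = {  # hypothetical registry, not HGuard's
    "get_weather": {"required": {"city"}, "allowed": {"city", "units"}},
    "send_email":  {"required": {"to", "body"}, "allowed": {"to", "cc", "body"}},
}

def validate_tool_call(name: str, args: dict):
    """Reject phantom tools, suggest near-miss names via fuzzy matching,
    then schema-validate the arguments against the registry."""
    if name not in REGISTERED_TOOLS:                      # phantom tool?
        close = difflib.get_close_matches(name, list(REGISTERED_TOOLS),
                                          n=1, cutoff=0.8)
        hint = f"; did you mean {close[0]!r}?" if close else ""
        return False, f"unknown tool {name!r}" + hint
    schema = REGISTERED_TOOLS[name]
    missing = schema["required"] - args.keys()
    extra = args.keys() - schema["allowed"]               # hallucinated params?
    if missing or extra:
        return False, f"missing={sorted(missing)} extra={sorted(extra)}"
    return True, "ok"

print(validate_tool_call("get_weathr", {"city": "Lagos"}))
```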
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Owen Graham,

Arabella Holmes

Abstract: Parkinson’s Disease (PD) is a progressive neurodegenerative disorder characterized by motor dysfunction and cognitive decline, necessitating robust predictive models for early diagnosis and disease monitoring. In recent years, the integration of machine learning (ML) techniques into clinical neurology has demonstrated promising potential for enhancing the prediction of disease progression. This study explores a multimodal machine learning framework that leverages gait dynamics and neuroimaging features to forecast the progression trajectory of Parkinson’s Disease. Gait data, extracted through wearable inertial sensors and pressure-sensitive walkways, provide real-time motor function assessments, while structural and functional magnetic resonance imaging (MRI/fMRI) deliver rich neuroanatomical and connectivity markers correlated with disease severity. The proposed model employs feature selection strategies such as recursive feature elimination and principal component analysis to reduce dimensionality and enhance model interpretability. Multiple supervised learning algorithms—including support vector machines (SVM), random forest (RF), and deep neural networks (DNN)—are trained and evaluated on a clinically validated dataset comprising longitudinal gait and imaging data. Performance metrics including accuracy, area under the receiver operating characteristic curve (AUC-ROC), and mean absolute error (MAE) are employed to compare model effectiveness across different stages of PD, as defined by the Hoehn and Yahr scale and Unified Parkinson's Disease Rating Scale (UPDRS). Findings suggest that multimodal models outperform unimodal counterparts, with the combination of gait variability indices and cortical thinning patterns showing the strongest predictive capability. The study underscores the value of integrating heterogeneous data sources in machine learning pipelines for clinical prognosis. Furthermore, this research contributes to the development of precision medicine approaches, enabling personalized therapeutic interventions and optimized disease management strategies for Parkinson’s patients.
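A compact sketch of the recursive feature elimination step over fused gait and imaging features, using synthetic stand-ins for the clinical dataset; the feature counts and elimination step size are illustrative.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))     # stand-in for gait indices + MRI/fMRI markers
y = rng.integers(0, 2, size=200)   # e.g., slower vs faster progression

# Linear SVM supplies per-feature weights for the elimination ranking.
selector = RFE(SVC(kernel="linear"), n_features_to_select=10, step=5)
selector.fit(X, y)
print(np.flatnonzero(selector.support_))  # indices of retained features
```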
Article
Computer Science and Mathematics
Information Systems

Sittichai Choosumrong,

Kampanart Piyathamrongchai,

Rhutairat Hataitara,

Urin Soteyome,

Nirut Konkong,

Rapikorn Chalongsuppunyoo,

Venkatesh Raghavan,

Tatsuya Nemoto

Abstract: Disaster risk reduction requires efficient flood control in lowland and flood-prone areas, especially in agricultural regions such as the Bang Rakam model area in Phitsanulok province, Thailand. To improve flood prediction and response, this study proposes a low-cost, real-time water level monitoring system integrated with spatial data analysis using Geographic Information System (GIS) technology. Ten ultrasonic sensor-equipped monitoring stations were strategically installed across sub-catchment areas to provide highly accurate water level readings. The sensors gather flood level data from each station, which is then processed against a 1-meter Digital Elevation Model (DEM) with Python-based geospatial analysis to delineate inundation zones and create flood depth maps. An automated method uses the real-time water level data to create dynamic flood maps reporting flood extent, depth, and water volume within each sub-catchment. These results demonstrate the promise of low-cost IoT-based flood monitoring devices as an affordable and scalable solution for at-risk communities. By combining sensor technology and spatial data analysis, this method improves understanding of flood dynamics in the Bang Rakam model area and serves as a benchmark for flood management strategies in other lowland areas. The study emphasizes how crucial real-time, data-driven flood monitoring is to enhancing early warning systems, disaster preparedness, and water resource management.
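A minimal sketch of the depth-mapping step, under the assumption that each gauge reading is first converted to a water-surface elevation in the DEM's vertical datum; the grid values are synthetic, and the 1 m cell size follows the abstract's 1-meter DEM.

```python
import numpy as np

def flood_depth_map(dem: np.ndarray, water_surface_m: float) -> np.ndarray:
    """Depth grid for one sub-catchment: water-surface elevation minus ground
    elevation, clipped at zero so dry cells report no depth."""
    return np.clip(water_surface_m - dem, 0.0, None)

# Synthetic 1 m DEM tile (metres above datum) and one gauge-derived reading:
dem = np.array([[41.2, 41.0, 40.8],
                [41.1, 40.7, 40.5],
                [40.9, 40.6, 40.4]])
depth = flood_depth_map(dem, water_surface_m=41.0)
volume_m3 = depth.sum() * 1.0 * 1.0  # depth x cell area (1 m x 1 m cells)
```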
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Md. Sabilar Rahman,

Md. Rashed,

Mysha Islam Kakon

Abstract: Rice cultivation is vital to keeping the world's food supply secure, but rice fields are highly vulnerable to disease, which can be caused by many agents, including bacteria, fungi, viruses, and environmental stresses. These diseases are detected mainly by analyzing the leaves. We collected 711 images covering five disease classes; after augmentation, the dataset grew to 4,449 images of the same five classes: bacterial leaf blight, brown spot, hispa, leaf smut, and tungro. To detect rice leaf diseases more accurately, researchers are turning to new technologies, especially Convolutional Neural Networks (CNNs). For this study we evaluated four deep learning models: EfficientNetB7, VGG-19, ResNet-101, and InceptionResNetV2. Each model was trained for a fixed number of epochs, and several performance measures were examined, including training accuracy, validation accuracy, validation loss, and overall model accuracy. The test results show that EfficientNetB7 and ResNet-101 outperform the other models, achieving strong accuracy, recall, and F1-scores across the disease categories. Both reach an accuracy of 99%, with validation accuracies of 99.44% and 99.58%, and the lowest validation losses of 0.0151 for EfficientNetB7 and 0.0279 for ResNet-101.
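The abstract does not list the augmentation operations used to grow the dataset from 711 to 4,449 images; the torchvision sketch below shows a typical pipeline of the kind involved, with illustrative parameters rather than the paper's actual transforms.

```python
from torchvision import transforms

# Illustrative leaf-image augmentation pipeline (assumed, not the paper's):
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(20),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])
```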
Article
Computer Science and Mathematics
Computational Mathematics

Charbel Mamlankou

Abstract: This paper presents a new Bayesian inverse framework, enhanced by convolutional adaptation, for the numerical approximation of weakly singular Volterra integral equations. The proposed methodology addresses the computational difficulties associated with singular kernels by implementing a convolution-based regularization technique that mollifies the kernel. A Gaussian process prior with a radial basis function kernel is used, together with a graded mesh transformation that concentrates the quadrature points near the singularities, significantly improving numerical accuracy without compromising efficiency. Numerical experiments confirm the robustness and efficiency of the method across a range of scenarios, demonstrating strong convergence and stability under various hyperparameter configurations. Our approach outperforms existing collocation and Legendre wavelet methods in comparative benchmarks.
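For orientation, the LaTeX sketch below records a standard weakly singular Volterra equation of the second kind and a typical graded mesh; the notation is assumed for illustration, since the abstract does not fix it.

```latex
% Illustrative setting (notation assumed, not quoted from the paper): a weakly
% singular Volterra equation of the second kind,
\[
  u(t) \;=\; f(t) + \int_0^t \frac{K(t,s)}{(t-s)^{\alpha}}\, u(s)\, ds,
  \qquad 0 < \alpha < 1,
\]
% and a graded mesh of grading exponent r >= 1 on [0, T],
\[
  t_i \;=\; T \left(\frac{i}{N}\right)^{r}, \qquad i = 0, 1, \dots, N,
\]
% which clusters quadrature nodes near t = 0, where the solution of such
% equations typically loses regularity.
```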
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Sheldon Lee Gosline

Abstract: Modern wireless networks are increasingly vulnerable to disruption, desynchronization, and overload, especially in dynamic environments where memory, frequency stability, or contextual regularity is compromised. This paper proposes a novel signal processing architecture inspired by long-wave civilizational memory patterns and recursive symbolic logic. We define two core operators: the Harmonic Attunement Function ℋ(t), which models continuity and stabilization over fluctuating input, and the Recursive Memory Operator Δⁿrem(t), which simulates layered temporal recall across signal epochs. Together, these operators form the basis of a symbolic signal-processing method designed to stabilize transmissions under epistemic or infrastructural discontinuities. We demonstrate how these theoretical constructs—rooted in cultural memory science and recursive encoding—can be translated into practical models for harmonically stabilized wireless communication, enabling phase-sensitive re-synchronization, improved redundancy handling, and symbolic correction in disrupted transmissions. A simulation scenario is proposed to test ℋ(t)-based stabilization in a hypothetically fragmented multi-node wireless network.
Review
Computer Science and Mathematics
Computer Science

Tahir Naquash,

Zeeshan Yalakpalli,

Shania Margaret Saini,

Shivshankar -,

Ayesha Siddiqua

Abstract: Cybersecurity remains a critical challenge, demanding efficient and reliable methods for vulnerability detection. Traditional approaches often struggle with speed, scalability, and the evolving nature of threats. This survey reviews recent literature (2022-2024) focusing on automated vulnerability detection, particularly leveraging Artificial Intelligence (AI) and Machine Learning (ML). We analyze the key advancements presented in these studies, such as improved accuracy in identifying specific flaws (e.g., SQL Injection, Cross-Site Scripting) using hybrid methods and deep learning, and the increased efficiency offered by automation scripts and AI-driven penetration testing tools. However, we also critically examine the persistent limitations highlighted, including dependencies on large, high-quality datasets, the ’black-box’ nature of some AI models, challenges in detecting zero-day threats, the resource-intensive nature of advanced models, and the continued need for human validation. These identified gaps and drawbacks collectively underscore the necessity for exploring more integrated, autonomous, and transparent security solutions, motivating research into systems that combine AI’s detection capabilities with technologies ensuring verifiable and trustworthy reporting.
Article
Computer Science and Mathematics
Security Systems

Pengyu Li,

Feifei Chen,

Lei Pan,

Thuong Hoang,

Ye Zhu,

Leon Yang

Abstract: As network infrastructure and IoT technologies continue to evolve, immersive systems such as virtual reality (VR) are becoming increasingly integrated into interconnected environments. These advancements enable the transmission and processing of vast amounts of multimodal data in real time, enhancing user experiences through rich visual and 3D interactions. However, ensuring continuous user authentication in VR environments remains a significant challenge. Therefore, an effective user monitoring system is needed to track VR users in real time and trigger re-authentication when necessary. Based on this premise, we propose a multimodal authentication framework, named MobileNetV3pro, that combines eye-tracking data and biometric information. Using the MobileNetV3 model, we extract and classify eye-region features, while a CNN-based model processes sequential behavioral data. Authentication performance is measured through Equal Error Rate (EER), accuracy, recall, F1-score, model size, and inference time. Experimental results show that eye-based authentication using MobileNetV3pro achieves a lower EER (0.03) than baseline models, demonstrating its effectiveness in VR environments.
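Since EER is the headline metric, the sketch below shows one standard way to compute it from verification scores via the ROC curve; the labels and scores are toy values, not the paper's data.

```python
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels, scores):
    """EER: the operating point where false-accept (FPR) and false-reject
    (FNR) rates meet; labels are 1 = genuine user, 0 = impostor."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    idx = np.nanargmin(np.abs(fpr - fnr))
    return (fpr[idx] + fnr[idx]) / 2

# Toy verification scores from an eye-feature authenticator:
labels = np.array([1, 1, 1, 0, 0, 0, 1, 0])
scores = np.array([0.9, 0.8, 0.75, 0.3, 0.4, 0.2, 0.6, 0.55])
print(equal_error_rate(labels, scores))
```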
