Computer Science and Mathematics

Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Chandrakant Patel,

Falguni Suthar,

Shailesh Patel

Abstract: The integration of Artificial Intelligence (AI) into educational systems is believed to hold great potential to transform conventional learning paradigms. Within the learning environment, intelligent agents are among the more potent applications of AI: programs that perceive their environment and act to maximize the probability of attaining their goals. In the educational landscape, AI agents take the form of intelligent tutoring systems, pedagogical agents, conversational facilitators, and adaptive assessment tools. This paper examines the transformative potential of AI agents in education by reviewing current applications, asserted benefits, and associated challenges, synthesizing findings from the existing literature on personalized learning, automated feedback, enhanced engagement, and administrative support. The paper also proposes a research methodology and a hypothetical experiment to empirically validate the impact of AI agents on student learning and engagement in a particular educational context, and the discussion elaborates on expected outcomes based on current trends and their implications for course design, teacher roles, and ethics. The conclusion outlines future directions for research on AI agents in shaping the future of education, arguing that only well-designed, ethically deployed, and continually evaluated AI agents can realize their full potential.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Md. Shahin Reja,

Md. Rashed,

Mysha Islam Kakon

Abstract: Pests and diseases that affect mango leaves harm both food safety and commercial yield. Powdery mildew, golmachi, bacterial canker, and anthracnose are the diseases most commonly found on mango leaves in Bangladesh. Catching these diseases early can improve productivity, but manual inspection cannot do so quickly or accurately. This study proposes a method that uses convolutional neural networks (CNNs) and transfer learning to detect infections before they can spread. The dataset was collected from a mango farm and several fields. Our goal is to identify mango diseases quickly and accurately, supporting the growing industry and saving farmers considerable time and labor. We used ResNet50, MobileNetV2, DenseNet201, MobileNetV3, and VGG16 as transfer learning models, with different activation functions (ELU and ReLU) in the hidden layers and sigmoid and softmax functions in the dense layer for classification. Based on F1 score, accuracy, recall, and precision, the proposed approach outperforms existing models. Among these models, ResNet50 provided the best results, with 96.49% accuracy.
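To make the transfer-learning recipe concrete, here is a minimal Keras sketch of a frozen ResNet50 backbone with a dense softmax head; the image size, head width, and five-class output are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical sketch of the transfer-learning setup; layer sizes,
# image resolution, and the 5-class output are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # e.g., four diseases + healthy leaves (assumed)

base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained convolutional features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),   # dense layer; the paper also tries ELU
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # softmax for multi-class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```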
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Rogério Figurelli

Abstract: Heuristic layering is introduced as a conceptual and architectural alternative to end-to-end modeling in artificial intelligence. Instead of treating intelligent systems as monolithic pipelines optimized solely through data-driven processes, this paradigm organizes AI into explicit, modular layers — each governed by transparent heuristics tailored to distinct cognitive or computational functions. The approach offers a pathway to increased interpretability, flexibility, robustness, and adaptability, addressing core limitations of end-to-end systems such as global brittleness and lack of targeted auditability. By mapping clear interfaces between layers and enabling local updates without retraining the entire system, heuristic layering supports both incremental innovation and systematic error isolation. This article frames the theoretical foundations of heuristic layering, draws parallels with modular design in software engineering and cognitive science, and discusses scenarios in which layered architectures may surpass traditional end-to-end models in transparency, maintainability, and domain transferability. The conclusion outlines future research avenues for the taxonomy, formal patterns, and practical deployment of heuristic-layered AI systems.
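One way to picture heuristic layering (our illustration, not the author's formalism) is a pipeline in which each stage is a named, transparent rule behind a fixed interface, so any layer can be audited or swapped locally without retraining the whole system:

```python
# Minimal sketch of heuristic layering: each layer is a named, transparent
# rule with a fixed interface. Illustrative only.
from typing import Any, Callable

Layer = Callable[[Any], Any]

def make_pipeline(layers: list[tuple[str, Layer]]):
    def run(x, trace=False):
        for name, fn in layers:
            x = fn(x)
            if trace:
                print(f"after {name}: {x!r}")  # per-layer auditability
        return x
    return run

# Example: a toy text-normalization stack with swappable heuristics.
pipeline = make_pipeline([
    ("lowercase", str.lower),
    ("strip",     str.strip),
    ("tokenize",  str.split),
])
print(pipeline("  Heuristic Layering IS Modular  ", trace=True))
```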
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Uraquitan Lima Filho,

Tiago Alexandre Pais,

Ricardo Jorge Pais

Abstract: Model selection in machine learning applications for biomedical predictions is often constrained by reliance on conventional global performance metrics such as area under the ROC curve (AUC), sensitivity, and specificity. When these metrics are closely clustered across multiple candidate models, distinguishing the most suitable model for real-world application becomes challenging. We propose a novel composite evaluation framework that combines a visual analysis of prediction score distributions with a new performance metric, the MSDscore, tailored for classifiers producing continuous prediction scores. This approach enables detailed inspection of true positive, false positive, true negative, and false negative distributions across the scoring space, capturing local performance patterns often overlooked by standard metrics. Implemented within the Digital Phenomics platform as the MSDanalyser tool, our methodology was applied to 27 predictive models developed for breast, lung, and renal cancer prognosis. Although conventional metrics showed minimal variation between models, the MSDA methodology revealed critical differences in score-region behaviour, allowing the identification of models with greater real-world suitability. While our study focuses on oncology, the methodology is generalisable to other domains involving threshold-based classification. We conclude that integrating this composite framework alongside traditional performance metrics offers a complementary, more nuanced approach to model selection in clinical and biomedical settings.
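The abstract does not reproduce the MSDscore formula, so the sketch below only illustrates the kind of score-space inspection described: plotting the TP, FP, TN, and FN distributions of a continuous-score classifier across the scoring range.

```python
# Sketch of the score-distribution inspection the MSDanalyser performs;
# the MSDscore itself is defined in the paper and is not reproduced here.
import numpy as np
import matplotlib.pyplot as plt

def score_distributions(y_true, scores, threshold=0.5, bins=20):
    y_true = np.asarray(y_true, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pred = scores >= threshold
    groups = {
        "TP": scores[y_true & pred],
        "FP": scores[~y_true & pred],
        "TN": scores[~y_true & ~pred],
        "FN": scores[y_true & ~pred],
    }
    for label, s in groups.items():
        plt.hist(s, bins=bins, range=(0, 1), histtype="step", label=label)
    plt.xlabel("prediction score"); plt.ylabel("count"); plt.legend()
    plt.show()
    return groups
```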
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Jude Dontoh,

Anthony Dontoh,

Andrews Danyo

Abstract: The deployment of Convolutional Neural Networks (CNNs) for plant species classification in agricultural and biodiversity monitoring applications requires robust interpretability to ensure reliable real-world performance. This study presents the first systematic analysis of visual bias in plant classification models using Gradient-weighted Class Activation Mapping (Grad-CAM) across five CNN architectures: Baseline CNN, Improved CNN, VGG16, ResNet50, and DenseNet121. We evaluated these models on plant species datasets to investigate potential biases in feature attribution patterns. Our analysis reveals a consistent and previously unreported bias toward light-colored plant features across all tested architectures, with models systematically focusing on bright leaves, flowers, and background elements while underutilizing darker plant components for classification decisions. This light-color dependency presents significant implications for deployment in diverse environmental conditions where lighting variations are common. Statistical analysis confirms the bias is architecture-independent, suggesting a fundamental limitation in current CNN training approaches for botanical applications. We provide methodological guidelines for bias detection in specialized computer vision domains and discuss implications for responsible AI deployment in agricultural systems. These findings highlight critical considerations for explainable AI in plant classification and establish a framework for identifying similar biases in domain-specific applications.
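Grad-CAM itself is a standard published method (Selvaraju et al.); a compact Keras implementation might look as follows, where the model handle and layer name are placeholders rather than the study's code.

```python
# Standard Grad-CAM sketch; `model` and `conv_layer_name` are placeholders.
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index=None):
    # Model mapping input -> (last conv feature maps, predictions)
    grad_model = tf.keras.models.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)          # d(score)/d(feature map)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))       # global-average-pool grads
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)   # weighted feature maps
    cam = tf.nn.relu(cam)                                 # keep positive evidence
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()    # normalize to [0, 1]
```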
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Priyam Deepak Choksi

Abstract: Background: Current medical image generation models typically require substantial computational resources, creating practical barriers for many research institutions. Recent diffusion models achieve notable results but demand multiple high-end GPUs and large datasets, limiting accessibility and reproducibility in medical AI research. Methods: We present a resource-efficient latent diffusion model for text-conditional chest X-ray generation, trained on a single NVIDIA RTX 4060 GPU using the Indiana University Chest X-ray dataset (3,301 frontal images). Our architecture combines a Variational Autoencoder (VAE) with 3.25M parameters and 8 latent channels, a U-Net denoising network with 39.66M parameters incorporating cross-attention mechanisms, and a BioBERT text encoder fine-tuned with parameter-efficient methods (593K trainable from 108.9M total parameters). We employ optimization strategies including gradient checkpointing, mixed precision training, and gradient accumulation to enable training within 8GB VRAM constraints. Results: The model achieves a validation loss of 0.0221 after 387 epochs of diffusion training, with the VAE converging at epoch 67. Inference time averages 663ms per 256×256 image on the RTX 4060, enabling real-time generation. Total training time was approximately 96 hours compared to 552+ hours reported for comparable multi-GPU models. The system successfully generates anatomically plausible chest X-rays conditioned on clinical text descriptions including various pathological findings. Conclusions: Our work demonstrates that effective medical image generation does not require massive computational resources. By achieving functional results with a single consumer GPU and limited data, we provide a practical pathway for medical AI research in resource-constrained settings. All code, model weights, and training configurations are publicly available at https://github.com/priyam-choksi/cxr-diffusion to facilitate reproducibility and further research.
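A rough sketch of the three memory-saving techniques named in the abstract, combined in one PyTorch training step; `unet`, `noise_scheduler`, `loader`, and the accumulation factor are placeholders rather than the released configuration (see the linked repository for the actual code).

```python
# Sketch of the 8 GB VRAM recipe: mixed precision, gradient accumulation,
# and gradient checkpointing. `unet`, `noise_scheduler`, and `loader` are
# placeholders for the model, scheduler, and data loader.
import torch
from torch.utils.checkpoint import checkpoint

scaler = torch.cuda.amp.GradScaler()
optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-4)
ACCUM_STEPS = 8  # effective batch = loader batch x 8 (assumed value)

for step, (latents, text_emb) in enumerate(loader):
    with torch.cuda.amp.autocast():                 # fp16 forward pass
        noise = torch.randn_like(latents)
        t = torch.randint(0, 1000, (latents.size(0),), device=latents.device)
        noisy = noise_scheduler.add_noise(latents, noise, t)
        # checkpoint() recomputes activations in backward to save VRAM
        pred = checkpoint(unet, noisy, t, text_emb, use_reentrant=False)
        loss = torch.nn.functional.mse_loss(pred, noise) / ACCUM_STEPS
    scaler.scale(loss).backward()                   # scaled fp16 gradients
    if (step + 1) % ACCUM_STEPS == 0:               # accumulate before stepping
        scaler.step(optimizer)
        scaler.update()
        optimizer.zero_grad(set_to_none=True)
```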
Review
Computer Science and Mathematics
Signal Processing

Anggunmeka Prasasti,

Achmad Rizal,

Bayu Erfianto,

Said Ziani

Abstract: This study investigated the transformative potential of Compressive Sensing (CS) for optimizing multimodal biomedical signal fusion in Wireless Body Sensor Networks (WBSN), specifically targeting challenges in data storage, power consumption, and transmission bandwidth. Through a Systematic Mapping Study (SMS) and Systematic Literature Review (SLR) following the PRISMA protocol, significant advancements in adaptive CS algorithms and multimodal fusion have been achieved. However, this research also identified crucial gaps in computational efficiency, hardware scalability, and noise robustness for one-dimensional biomedical signals (e.g., ECG, EEG, PPG, and SCG). The findings strongly emphasize the potential of integrating CS with deep reinforcement learning and edge computing to develop energy-efficient, real-time healthcare monitoring systems, paving the way for future innovations in Internet of Medical Things (IoMT) applications.
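For readers new to CS, a self-contained toy reconstruction shows the core idea the review surveys: a k-sparse signal recovered from far fewer random measurements than samples. This is a generic illustration, not any specific WBSN pipeline.

```python
# Toy compressive-sensing reconstruction via orthogonal matching pursuit.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8            # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # k-sparse signal

Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
y = Phi @ x                                      # compressed measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(Phi, y)
x_hat = omp.coef_
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```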
Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Nikolay Karabutov

Abstract: Many publications have been devoted to the problem of parametric identifiability (PI), with the major focus on a priori identifiability. The parametric identifiability problem based on experimental data (the so-called practical identifiability (PID)) is less studied; it is a parametric identification task. PI has not previously been studied using current data (in adaptive systems). We propose an approach to estimating PI based on the application of Lyapunov functions, and the role of excitation constancy is shown. Conditions of local parametric identifiability (LPI) for a class of linear dynamical systems are obtained from current experimental data. The case is considered where both the state vector and the input-output set are measured, and estimates are obtained for the parametric residual. The case of limiting LPI on the set of current data is studied, and the influence of initial conditions on PI is analysed. The case of m-parametric identifiability is also studied. An approach to estimating the PI of linear dynamical systems and systems with periodic coefficients, based on the application of Lyapunov exponents, is proposed. The LPI of decentralised systems is analysed, and examples are given.
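As a reference point, a textbook Lyapunov-based identification argument (not the paper's specific LPI conditions) runs as follows:

```latex
% Textbook sketch, not the paper's conditions. For the error model
%   \dot{e} = -\lambda e + \tilde{\theta}^{\top}\varphi(t), \quad \lambda > 0,
% with parameter error \tilde{\theta} = \hat{\theta} - \theta and
% regressor \varphi, take the Lyapunov function and adaptive law
\begin{align}
  V(e,\tilde{\theta}) &= \tfrac{1}{2}e^{2}
      + \tfrac{1}{2}\tilde{\theta}^{\top}\Gamma^{-1}\tilde{\theta},
      \qquad \Gamma = \Gamma^{\top} > 0, \\
  \dot{\hat{\theta}} &= -\Gamma\,\varphi(t)\,e
      \;\;\Longrightarrow\;\; \dot{V} = -\lambda e^{2} \le 0 .
\end{align}
% Convergence \tilde{\theta} \to 0 (parametric identifiability) then
% requires the persistent-excitation condition
%   \int_{t}^{t+T} \varphi(s)\varphi^{\top}(s)\,ds \ \ge\ \alpha I,
%   \qquad \alpha, T > 0,
% which is the "excitation constancy" role highlighted in the abstract.
```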
Article
Computer Science and Mathematics
Geometry and Topology

Shuli Mei

Abstract: The primary distinction between technical design and engineering design lies in the role of analysis and optimization. From its inception, descriptive geometry has supported military and engineering applications, and its graphical rules inherently reflect principles of optimization—similar to the core ideas of sparse representation and compressed sensing. This paper explores the geometric and mathematical significance of the center line in symmetrical objects and the axis of rotation in solids of revolution, framing these elements within the theory of sparse representation. It further establishes rigorous correspondences between geometric primitives—points, lines, planes, and symmetric solids—and their sparse representations in descriptive geometry. By re-examining traditional engineering drawing techniques from the perspective of optimization analysis, this study reveals the hidden mathematical logic embedded in geometric constructions. The findings not only support deeper integration of mathematical reasoning in engineering education but also provide an intuitive framework for teaching abstract concepts such as sparsity and signal reconstruction. This work contributes to interdisciplinary understanding between descriptive geometry, mathematical modeling, and engineering pedagogy.
Article
Computer Science and Mathematics
Computer Science

Marian Ileana,

Pavel Petrov,

Vassil Milev

Abstract: In the context of modern healthcare, the integration of sensor networks into electronic health record (EHR) systems introduces new opportunities and challenges related to data privacy, security, and interoperability. This paper proposes a secure, distributed web system architecture that integrates real-time sensor data with a custom Customer Relationship Management (CRM) module to optimize patient monitoring and clinical decision-making. The architecture leverages IoT-enabled medical sensors to capture physiological signals, which are transmitted through secure communication channels and stored in a modular EHR system. Security mechanisms such as data encryption, role-based access control, and distributed authentication are embedded to address threats related to unauthorized access and data breaches. The CRM system enables personalized healthcare management while respecting strict privacy constraints defined by current healthcare standards. Experimental simulations validate the scalability, latency, and data protection performance of the proposed system. The results confirm the potential of combining CRM, sensor data, and distributed technologies to enhance healthcare delivery while ensuring privacy and security compliance.
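Of the security mechanisms listed, role-based access control is easy to sketch generically; the roles and permissions below are invented examples, not the paper's actual policy.

```python
# Minimal role-based access control sketch; roles/permissions are invented.
from functools import wraps

PERMISSIONS = {
    "physician": {"read_record", "write_record"},
    "nurse":     {"read_record"},
    "admin":     {"manage_users"},
}

class AccessDenied(Exception):
    pass

def requires(permission):
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            if permission not in PERMISSIONS.get(user["role"], set()):
                raise AccessDenied(f"{user['role']} lacks {permission}")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@requires("read_record")
def read_record(user, patient_id):
    return f"record {patient_id} read by {user['name']}"

print(read_record({"name": "Ana", "role": "nurse"}, patient_id=42))
```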
Article
Computer Science and Mathematics
Algebra and Number Theory

Frank Vega

Abstract: For over two millennia, the question of whether odd perfect numbers---positive integers whose divisors sum to twice the number itself---exist has intrigued mathematicians, from Euclid's construction of even perfect numbers using Mersenne primes to Euler's exploration of potential odd counterparts. This paper resolves this enduring conjecture by proving, through a rigorous proof by contradiction, that odd perfect numbers do not exist. We utilize the abundancy index, defined as $I(n) = \frac{\sigma(n)}{n}$, where $\sigma(n)$ is the sum of the divisors of $n$, and the Euler totient function $\varphi(n)$. Assuming an odd perfect number $N$ exists with $I(N) = 2$, we employ the inequality $\frac{\sigma(N) \cdot \varphi(N)}{N^2} > \frac{8}{\pi^2}$ for odd $N$ and establish that $\frac{N}{\varphi(N)} \geq 3$ for odd perfect numbers with at least 10 distinct prime factors. This leads to a contradiction, since the assumption forces $\frac{N}{\varphi(N)} < \frac{\pi^2}{4} \approx 2.4674$ while it must also be at least 3. Rooted in elementary number theory, this proof combines classical techniques with precise analytical bounds to confirm that all perfect numbers are even, resolving a historic problem in number theory.
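The two quantities driving the argument are easy to compute; a small sympy check (ours, for illustration) evaluates the abundancy index and the ratio $N/\varphi(N)$ on the first odd abundant numbers.

```python
# Numerical companion: the abundancy index I(n) = sigma(n)/n and the
# ratio n/phi(n), checked on the three smallest odd abundant numbers.
# A perfect number would satisfy I(n) == 2 exactly.
from sympy import divisor_sigma, totient

def abundancy(n):
    return divisor_sigma(n) / n

for n in [945, 1575, 2205]:
    print(n, float(abundancy(n)), float(n / totient(n)))
# No odd n with I(n) == 2 is known; the paper argues none exists.
```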
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Zhenyu Gao

Abstract: Feedback alignment (FA) has emerged as an alternative to backpropagation for training deep networks by using fixed random feedback weights. While FA shows promise in supervised tasks, its extension to preference-based fine-tuning (PFT) of large language models—which relies on human or learned preference signals—remains underexplored. In this work, we analyze theoretical limitations of FA applied to PFT objectives. We derive error propagation bounds, characterize convergence conditions for paired-FA updates, and quantify the impact of preference noise and feedback mismatch on fine-tuning stability. By integrating recent advances in meta-reinforcement learning and prompt compression, we highlight trade-offs between feedback complexity and fine-tuning efficiency, offering practical guidelines for hybrid FA–backprop architectures in large-scale preference optimization.
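The FA mechanism under analysis is simple to state in code: the backward pass routes the error through a fixed random matrix instead of the transposed forward weights. The toy regression below (ours, not the paper's PFT setting) shows the core update.

```python
# Minimal feedback-alignment sketch (after Lillicrap et al.): the backward
# pass uses a fixed random matrix B instead of W2.T. Toy regression only.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((32, 10)) * 0.1
W2 = rng.standard_normal((1, 32)) * 0.1
B  = rng.standard_normal((32, 1)) * 0.1    # fixed random feedback weights

X = rng.standard_normal((200, 10))
y = X @ rng.standard_normal((10, 1))       # linear teacher

lr = 0.05
for _ in range(500):
    h = np.tanh(X @ W1.T)                  # forward pass
    out = h @ W2.T
    err = out - y                          # dL/d(out) for squared loss
    dh = (err @ B.T) * (1 - h**2)          # FA: error through B, not W2.T
    W2 -= lr * (err.T @ h) / len(X)
    W1 -= lr * (dh.T @ X) / len(X)
print("final MSE:", float((err**2).mean()))
```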
Article
Computer Science and Mathematics
Mathematics

Kazuhito Owada

Abstract: The Collatz conjecture, despite its deceptively simple formulation, remains one of the most enduring unsolved problems in mathematics. It posits that repeatedly applying the operation — divide by 2 if even, or multiply by 3 and add 1 if odd — to any positive integer will eventually reach the number 1. While the conjecture has been numerically validated for vast ranges, a general proof has eluded mathematicians. This paper introduces a novel structural and visual framework for understanding the Collatz problem by constructing a "Collatz Tree" — a directed rooted tree that systematically organizes all natural numbers. Each branch originates from an odd number and extends through its powers of two, forming infinite geometric sequences. We rigorously prove that every natural number is uniquely contained within this tree structure. Furthermore, we demonstrate that constructing a tree via the reverse Collatz operation (starting from 1 and applying valid inverses of the Collatz function) reproduces the exact same structure as the Collatz Tree. This equivalence implies that any number, when followed downward through the Collatz process, ultimately converges to the root node 1. By reframing the conjecture through this structural lens, we reveal a new avenue for understanding the convergence behavior of Collatz sequences, providing clarity to the flow of natural numbers through a deterministic tree topology, and reinforcing the conjecture’s validity through structural completeness and absence of cycles.
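The reverse construction described in the abstract can be sketched directly; the code below (an illustration, not the paper's proof) grows a tree from the root 1 using the two valid Collatz inverses.

```python
# Sketch of the "reverse Collatz" construction: grow a tree from 1 by
# applying the inverses n -> 2n and n -> (n - 1)/3 (when that yields an
# odd integer > 1). Illustrative code, not the paper's formal argument.
from collections import deque

def reverse_collatz_tree(limit):
    parent = {1: None}
    queue = deque([1])
    while queue:
        n = queue.popleft()
        children = [2 * n]                       # inverse of halving
        if n % 6 == 4 and (n - 1) // 3 > 1:      # inverse of 3k+1 gives odd k
            children.append((n - 1) // 3)
        for c in children:
            if c <= limit and c not in parent:
                parent[c] = n
                queue.append(c)
    return parent

tree = reverse_collatz_tree(100)
print(sorted(tree))   # every n <= 100 reached from 1 appears exactly once
```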
Article
Computer Science and Mathematics
Computational Mathematics

Rogério Figurelli

Abstract: Collapse Mathematics (cMth) is introduced as a new symbolic and epistemic discipline aimed at modeling the survival patterns of mathematical structures under escalating regimes of interpretive collapse. Unlike traditional mathematics, which emphasizes deductive stability or probabilistic approximation, cMth explores how formal structures degrade, mutate, or stabilize when subjected to symbolic entropy and compressive filtration. Derived as a complementary lineage to Heuristic Physics (hPhy), this framework does not seek physical simulation, but rather a structural diagnosis of symbolic survivability. Through the articulation of collapse pressure, spectral curvature, and interpretive fragility, cMth opens a new field for investigating why certain mathematical entities — such as the Riemann critical line — persist across interpretive collapse layers, while others fragment or dissolve. The approach invites both rigorous mathematical formalism and structural epistemology, proposing a symbolic theory of resilience that may underlie unsolved problems in number theory, complexity, and foundational logic.
Review
Computer Science and Mathematics
Geometry and Topology

Anant Chebiam

Abstract: An expository literature review presented at a Euler Circle math talk. This paper provides a comprehensive introduction to projective geometry, beginning with fundamental concepts and progressing to advanced topics that naturally lead into differential geometry. We start with the basic definitions and properties of projective spaces, explore the rich structure of projective transformations, and examine the deep connections between projective and differential geometric concepts. Each theorem is accompanied by rigorous proofs, making this exposition suitable for readers ranging from advanced undergraduates to graduate students in mathematics.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Dohyoung Rim

Abstract: This study investigated the utility of the cyclic dual latent discovery (CDLD) model in predicting cognitive training performance among individuals with mild cognitive impairment (MCI), using data from the SUPERBRAIN-MEET randomized controlled trial. CDLD integrates dual deep neural networks to model the latent traits of both users and training contents, enabling the prediction of task accuracy prior to engagement. The model was trained on 9,607 observations collected from 130 participants across 166 cognitive training tasks. CDLD demonstrated superior predictive accuracy compared to conventional models including random forest, gradient boosting, and matrix factorization, achieving a root mean squared error of 0.132 on the test set. Ablation analysis underscored the critical contribution of latent traits to prediction performance. Moreover, user latent traits showed significant associations with baseline cognitive measures, particularly in visuospatial function and immediate memory. These findings suggest that CDLD predicted training performance by effectively capturing individuals’ cognitive characteristics. By tailoring content with predicted user performance, CDLD may optimize training efficacy and engagement in individuals with MCI.
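A schematic of the dual-network idea: one network embeds users, another embeds training contents, and their interaction predicts task accuracy. The dimensions and layers below are assumptions for illustration, not the actual CDLD architecture.

```python
# Dual latent-trait sketch in the spirit of CDLD; sizes are assumed.
import torch
import torch.nn as nn

class DualLatentModel(nn.Module):
    def __init__(self, n_users, n_tasks, dim=16):
        super().__init__()
        self.user_net = nn.Sequential(nn.Embedding(n_users, dim),
                                      nn.Linear(dim, dim), nn.ReLU())
        self.task_net = nn.Sequential(nn.Embedding(n_tasks, dim),
                                      nn.Linear(dim, dim), nn.ReLU())
        self.head = nn.Linear(dim, 1)

    def forward(self, user_ids, task_ids):
        u = self.user_net(user_ids)            # user latent traits
        t = self.task_net(task_ids)            # content latent traits
        return torch.sigmoid(self.head(u * t)).squeeze(-1)  # accuracy in [0, 1]

model = DualLatentModel(n_users=130, n_tasks=166)
pred = model(torch.tensor([0, 5]), torch.tensor([10, 42]))
print(pred)  # predicted task accuracies before engagement
```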
Article
Computer Science and Mathematics
Computer Vision and Graphics

Yiwei Zhou,

Mingfeng Li,

Ding Tan,

Bo Yang

Abstract: Robust point cloud registration under low-overlap conditions remains a significant challenge in 3D computer vision and perception. To address this issue, we propose a novel registration framework that integrates edge-guided feature extraction, FPFH-based correspondence estimation, and quaternion averaging. The proposed method begins by detecting edge features through a normal-extrema-based strategy, which identifies geometrically salient points to enhance structural consistency in sparse overlapping regions. Next, FPFH descriptors are employed to establish point correspondences, followed by quaternion averaging to obtain a globally consistent initial alignment. Finally, a point-to-plane ICP refinement step is applied to improve the registration precision. Comprehensive experiments are conducted on three benchmark datasets—Stanford Bunny, Dragon, and Happy Buddha—to evaluate the performance of the proposed method. Compared with classical ICP and RANSAC-ICP algorithms, our method achieves significantly improved registration accuracy under low-overlap conditions, with the highest improvement reaching 75.7%. The results demonstrate the effectiveness and robustness of the proposed framework in challenging partial overlap scenarios.
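The FPFH-correspondence and point-to-plane ICP stages have stock counterparts in Open3D; a sketch using that pipeline follows. The paper's edge-guided extraction and quaternion averaging are its own contributions and are not reproduced here; the voxel sizes are assumed values.

```python
# Coarse-to-fine registration sketch with Open3D's stock pipeline.
import open3d as o3d

def register(source, target, voxel=0.005):
    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
        return down, fpfh

    src, src_fpfh = preprocess(source)
    tgt, tgt_fpfh = preprocess(target)
    # Coarse alignment from FPFH correspondences (RANSAC)
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, src_fpfh, tgt_fpfh, True, voxel * 1.5,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    # Point-to-plane ICP refinement, as in the paper's final step
    fine = o3d.pipelines.registration.registration_icp(
        src, tgt, voxel * 0.4, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation
```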
Article
Computer Science and Mathematics
Mathematics

Masoud Ataei

Abstract: We construct a spatio-temporal model of information dynamics in which fundamental mathematical constants arise from analytically defined observational mechanisms. The model describes a transient field propagating radially with exponential intensity and geometric attenuation, yielding a point of maximal observability at radius $\Omega$, the Omega constant. Temporal detection is modeled via a harmonic acquisition process, with cumulative inefficiency asymptotically approaching the Euler-Mascheroni constant $\gamma$. These two scales are shown to satisfy the approximate balance law $\gamma+\Omega \approx$ $\log \pi$, interpreted as a structural resonance between spatial embedding and temporal accumulation. A refined expression, $\frac{e^\gamma}{\Omega}+\frac{\alpha}{2 \pi} \approx \pi,$ incorporating the fine-structure constant $\alpha$, achieves greater numerical precision. The formulation highlights an interaction between growth, attenuation, and sampling efficiency, and suggests a deeper geometric-informational correspondence among different mathematical constants.
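Both stated relations are directly checkable numerically; here is a short script (ours) that computes $\Omega$ via the Lambert W function.

```python
# Numerical check of the two relations in the abstract, using the Omega
# constant (Omega * e^Omega = 1), the Euler-Mascheroni constant, and the
# fine-structure constant.
import numpy as np
from scipy.special import lambertw

Omega = lambertw(1).real            # ~0.567143
gamma = np.euler_gamma              # ~0.577216
alpha = 7.2973525693e-3             # fine-structure constant (CODATA)

print(gamma + Omega, np.log(np.pi))                        # ~1.14436 vs ~1.14473
print(np.exp(gamma) / Omega + alpha / (2 * np.pi), np.pi)  # ~3.141589 vs ~3.141593
```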
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Dave Paulson,

Lucas Hernandez

Abstract: The complexity of medical information presents a significant barrier to effective health communication, particularly for individuals with low health literacy. As global health systems increasingly rely on digital communication and patient-facing resources, the ability to simplify medical texts without compromising accuracy has become a critical public health objective. This study explores the potential of large language models (LLMs), particularly transformer-based architectures, to automate the simplification of health literacy materials. We focus on evaluating their performance in reducing linguistic complexity while preserving core medical semantics and factual consistency. The research begins with the development of a benchmark dataset comprising public-domain health documents sourced from organizations such as the Centers for Disease Control and Prevention (CDC), the World Health Organization (WHO), and MedlinePlus. Original materials are paired with human-written simplifications at a readability level suitable for the general public. This curated corpus serves as both training and evaluation ground for assessing the capabilities of general-purpose models such as GPT-3.5, GPT-4, and domain-adapted variants like BioBERT and PubMedGPT in sentence- and paragraph-level simplification tasks. We implement a multi-faceted evaluation pipeline combining automated metrics (e.g., Flesch-Kincaid Grade Level, SARI, BLEU, and BERTScore) with human evaluations conducted by public health communication experts and linguistic annotators. These evaluations focus on four key dimensions: linguistic simplicity, medical accuracy, grammatical fluency, and reader comprehension. Our findings reveal that while general-purpose LLMs excel at reducing sentence complexity and improving fluency, they occasionally introduce semantic shifts or omit critical health information. Domain-adapted models, though more semantically faithful, tend to produce less readable outputs due to retained technical jargon. To address this trade-off, we further explore ensemble and prompt-engineering strategies, including few-shot examples that guide models toward producing simplified outputs with greater semantic fidelity. In addition, we examine the potential of reinforcement learning with human feedback (RLHF) to iteratively fine-tune model outputs toward user-specific readability targets. The results suggest that hybrid approaches combining domain knowledge with large-scale language generation offer the most promising path forward. This research contributes to the growing field of health informatics and natural language processing by providing a comprehensive assessment of LLM capabilities in the context of health literacy. It also delivers a reproducible framework and benchmark dataset for future investigations. Importantly, the study maintains strict ethical compliance by using only publicly available documents and refraining from engaging with patient data, ensuring both methodological transparency and societal relevance. The findings have significant implications for developers of digital health tools, public health educators, and healthcare institutions aiming to democratize access to critical medical information.
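Two of the automated metrics named above can be computed with off-the-shelf tooling; the library choices below (textstat and Hugging Face `evaluate`) are our assumptions, not necessarily the authors' stack.

```python
# Sketch of the automated-metrics side of the evaluation pipeline.
import textstat
import evaluate

source = ("Hypertension, or elevated arterial blood pressure, increases "
          "the risk of myocardial infarction and cerebrovascular events.")
simplified = "High blood pressure raises your risk of heart attack and stroke."
reference = ["High blood pressure can lead to heart attacks and strokes."]

# Flesch-Kincaid grade level: lower means easier to read.
print("FKGL source:    ", textstat.flesch_kincaid_grade(source))
print("FKGL simplified:", textstat.flesch_kincaid_grade(simplified))

# SARI scores the system output against both the source and references.
sari = evaluate.load("sari")
print(sari.compute(sources=[source],
                   predictions=[simplified],
                   references=[reference]))
```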
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Shanelle Tennekoon,

Nushara Wedasingha,

Anuradhi Welhenge,

Nimsiri Abhayasinghe,

Iain Murray

Abstract: Navigating through narrow spaces and doorways can be a daily struggle for wheelchair users, often resulting in frustration, collisions, or reliance on external assistance. These challenges highlight a pressing need for intelligent, user-centered mobility solutions that go beyond traditional object detection. In this study, we propose a lightweight segmentation model that integrates context-attention and geometric reasoning to support real-time doorway alignment. The model incorporates a Convolutional Block Attention Module (CBAM) for refined feature emphasis, a content-guided convolutional attention fusion module (CGCAFusion) for multi-scale semantic integration, an unsupervised depth estimation module, and an alignment estimation module that provides intuitive navigational guidance. Trained on the DeepDoorsv2 dataset, our model demonstrates a mean average precision (mAP50) of 95.8% and an F1 score of 93% while maintaining hardware efficiency with 2.96 M parameters, outperforming baseline models. By eliminating the need for depth sensors and enabling contextual decision-making, this study offers a robust solution for improving indoor mobility and delivering actionable feedback to support safe and independent navigation for wheelchair users.
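CBAM is a published module (Woo et al., 2018); a minimal generic PyTorch rendering, not the authors' implementation, looks like this:

```python
# Minimal CBAM: channel attention followed by spatial attention.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(                      # shared channel MLP
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: MLP over avg- and max-pooled descriptors
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx  = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: conv over channel-wise avg and max maps
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

attn = CBAM(channels=64)
print(attn(torch.randn(2, 64, 32, 32)).shape)  # torch.Size([2, 64, 32, 32])
```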
