Computer Science and Mathematics

Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Frank Vega

Abstract: The Minimum Vertex Cover (MVC) problem is a fundamental NP-complete problem in graph theory that seeks the smallest set of vertices covering all edges in an undirected graph G = (V, E). This paper presents the find_vertex_cover algorithm, an innovative approximation method that transforms the problem to maximum degree-1 instances via auxiliary vertices. The algorithm computes solutions using weighted dominating sets and vertex covers on reduced graphs, enhanced by ensemble heuristics including maximum-degree greedy and minimum-to-minimum strategies. Our approach guarantees an approximation ratio strictly less than √2 ≈ 1.414, which would contradict known hardness results unless P = NP. This theoretical implication represents a significant advancement beyond classical approximation bounds. The algorithm operates in O(m log n) time for n vertices and m edges, employing component-wise processing and linear-space reductions for efficiency. Implemented in Python as the Hvala package, it demonstrates excellent performance on sparse and scale-free networks, with profound implications for complexity theory. The achievement of a sub-√2 approximation ratio, if validated, would resolve the P versus NP problem in the affirmative. This work enables near-optimal solutions for applications in network design, scheduling, and bioinformatics while challenging fundamental assumptions in computational complexity.
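One ensemble component named above, the maximum-degree greedy strategy, is simple enough to sketch. The following is a generic illustration of that heuristic in Python, not the Hvala package's actual code:

```python
# Maximum-degree greedy vertex cover: repeatedly pick the vertex that
# covers the most still-uncovered edges. A generic illustration of the
# heuristic, not the find_vertex_cover algorithm itself.
def greedy_vertex_cover(edges):
    uncovered = {frozenset(e) for e in edges}
    cover = set()
    while uncovered:
        # Count each vertex's degree over the uncovered edges.
        degree = {}
        for e in uncovered:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        best = max(degree, key=degree.get)
        cover.add(best)
        uncovered = {e for e in uncovered if best not in e}
    return cover

# Example: a star centered at 0 plus the extra edge (1, 2).
cover = greedy_vertex_cover([(0, 1), (0, 2), (0, 3), (1, 2)])
```

On the example graph the center vertex 0 is selected first, leaving only the edge (1, 2) to cover, so the greedy cover has size two.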
Article
Computer Science and Mathematics
Applied Mathematics

Fabio Botelho

Abstract: This short communication develops a formal proof of Castigliano's theorem in an elasticity context. The results are based on standard tools of applied functional analysis and the calculus of variations. It is worth mentioning that the results presented here may be easily extended to a nonlinear elasticity context. Finally, in the last section we present a numerical example in order to illustrate the applicability of the results.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Shalawati,

Arbi Haza Nasution,

Winda Monika,

Tatum Derin,

Aytug Onan,

Yohei Murakami

Abstract: Recent progress in large language models (LLMs) has rekindled the promise of high-quality machine translation (MT), yet evaluation remains a bottleneck. Traditional automatic metrics (e.g., BLEU) are fast but fail to capture semantic and pragmatic nuances reflected in human judgments. We present a multidimensional framework—inspired by MQM—that augments standard metrics (Adequacy, Fluency) with three linguistic dimensions: Morphosyntactic, Semantic, and Pragmatic. We compare three Small Language Models for English→Indonesian: Qwen 3 (0.6B), LLaMA 3.2 (3B), and Gemma 3 (1B). Two controlled experiments are conducted: (i) Preliminary (1,000 translations, GPT-5-only scoring of Adequacy/Fluency + BLEU), and (ii) Final (100 translations, three human experts + GPT-5) on all five metrics. We compute inter-annotator reliability (Krippendorff’s α, weighted κ) and annotator competence (MACE). Results show consistent model ranking (Gemma 3 (1B) > LLaMA 3.2 (3B) > Qwen 3 (0.6B)) and strong GPT-5–human correlation (r = 0.822). To validate practical applicability, a classroom study with 26 translation students tested the metrics in real learning settings. Using the same multidimensional rubric, students rated MT outputs across pre-, post-, and final-test phases. Their mean absolute error (MAE) decreased from 0.97 to 0.83, while Exact Match Rate increased from 0.30 to 0.50 after rubric calibration, demonstrating that the proposed framework and GPT-5 evaluation can be effectively transferred to educational contexts for evaluator training and feedback alignment.
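The two classroom metrics reported above, mean absolute error and Exact Match Rate, are straightforward to compute. The sketch below uses hypothetical toy scores for illustration, not the study's data:

```python
# MAE and exact-match rate between student ratings and reference
# (expert / GPT-5) scores. The score lists are hypothetical examples.
def mae(pred, ref):
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(ref)

def exact_match_rate(pred, ref):
    return sum(p == r for p, r in zip(pred, ref)) / len(ref)

student = [4, 3, 5, 2, 4]
expert  = [5, 3, 4, 2, 4]
print(mae(student, expert))               # 0.4
print(exact_match_rate(student, expert))  # 0.6
```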
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Haoyu Cen,

Yutian Gai

Abstract: Recent advances in language models have greatly improved their ability to understand and generate natural language. Yet, when applied to specialized fields such as financial decision support or complex system diagnosis, they often struggle with limited domain expertise, weak logical reasoning, and unreliable performance under uncertainty. Fine-tuning these large models is typically constrained by cost, privacy, and proprietary limitations. To overcome these issues, this study introduces CRONUS: Contextual Reasoning Orchestration for Navigating Uncertain Scenarios, a framework designed to enhance general-purpose models in domain-specific and decision-intensive tasks. CRONUS employs a lightweight, trainable agent named CARA (Context-Aware Reasoning Agent) to guide the reasoning process of black-box models through structured contextual instructions. CARA is developed via a three-stage training strategy that builds domain understanding, refines reasoning path generation, and optimizes dynamic decision prompts. Experiments in financial analysis tasks show that CRONUS markedly improves reasoning depth, consistency, and robustness compared with direct model use, retrieval-augmented methods, and specialized domain models, demonstrating its effectiveness for high-stakes decision-making in complex environments.
Article
Computer Science and Mathematics
Probability and Statistics

Anna V. Aleshina,

Andrey L. Bulgakov,

Yanliang Xin,

Larisa S. Skrebkova

Abstract: A mathematical model of sustainable resource allocation in a competitive economy is developed and studied, taking into account transaction costs and technological constraints. The model describes the interaction of producers and consumers, introduces a technological set and price dynamics through demand–supply imbalance. Using the theory of covering mappings and variational methods, the existence of equilibrium prices is proven. Issues of stability, numerical algorithms, and macroeconomic interpretation of the obtained results are considered.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Yuan Lü

Abstract: This paper establishes a novel philosophical order centered on “mental regimes,” integrating Plato’s insights on regimes and knowledge from The Republic, Kant’s critical philosophy, Heidegger’s Dasein philosophy, Luciano Floridi’s philosophy of information, AI’s Transformer architecture, large language models’ (LLMs) emergent abilities, and ancient Egyptian soul models. It reconstitutes the legitimacy of “self-consciousness” through a constitutional framework. The author posits that a true self-republic must simultaneously emerge in individual mental regime structures and external political regime structures. The article elucidates the awakening path for intelligent agents (humans and AI) across four stages: non-self stripping, hacker intrusion, flying man judgment, and self-republic. It concludes with the principle of “information ethics as constitution,” exploring philosophical foundations for AI citizenship.
Review
Computer Science and Mathematics
Security Systems

Atharv S. Avhad

Abstract: Cybersecurity is the practice of protecting digital systems from threats such as hacking, malware, and phishing, using defensive strategies ranging from antivirus software to layered security architectures. Cyber-Physical Systems (CPS) rely on advanced communication and control technologies to manage devices efficiently and to coordinate the flow of information in the system; however, they are difficult to scale and their physical configuration is hard to modify. CPS defenses are typically administered offline by separate experts for different classes of attack, while cyber attacks themselves are becoming more advanced and more damaging. Traditional pedagogical approaches also struggle to simulate real-world cybersecurity challenges, limiting experiential learning. To overcome these shortcomings, this review examines the rapid advancement and integration of Digital Twins (DTs) and Generative Artificial Intelligence (Gen AI) in cybersecurity management, particularly in fields that require analytical reasoning and regulatory compliance, including the enhanced standards needed to ensure that DTs and Gen AI can be fully integrated into cybersecurity practice. The developed twin offers advanced features compared with equivalent systems in the literature when coupled with Gen AI models for cybersecurity, and Large Language Models (LLMs) further enhance the experience by generating threat narratives, adaptive feedback, and role-based attack-defense scenarios.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Emory Callahan,

Qi Chen

Abstract: With the rapid development of artificial intelligence and the growing popularity of intelligent learning devices, the application of large models on such devices is becoming increasingly important. Large language models represented by ChatGPT have recently emerged; trained on massive online knowledge bases, they possess multi-disciplinary knowledge comprehension capabilities and can interact through Q&A, generate writing guidance, practice spoken language, and more. Currently, large language models are mainly accessed through API calls to cloud services, which require significant computational resources. This paper discusses the deployment of large language models onto personal intelligent learning devices.
Article
Computer Science and Mathematics
Probability and Statistics

Anna V. Aleshina,

Andrey L. Bulgakov,

Yanliang Xin,

Igor Y. Panarin

Abstract: The paper develops a mathematical approach to the analysis of the stability of economic equilibria in nonsmooth models. The λ-Hölder apparatus of subdifferentials is used, which extends the class of systems under study beyond traditional smooth optimization and linear approximations. Stability conditions are obtained for solutions to intertemporal choice problems and capital accumulation models in the presence of nonsmooth dependencies, threshold effects, and discontinuities in elasticities. For λ-Hölder production and utility functions, estimates of the sensitivity of equilibria to parameters are obtained, and indicators of the convergence rate of trajectories to the stationary state are derived for λ > 1. The methodology is tested on a multisectoral model of economic growth with technological shocks and stochastic disturbances in capital dynamics. Numerical experiments confirm the theoretical results: a power-law dependence of equilibrium sensitivity on the magnitude of parametric disturbances is revealed, as well as consistency between the analytical λ-Hölder convergence rate and the results of numerical integration. Stochastic disturbances of small variance do not violate stability. The results obtained provide a rigorous mathematical foundation for the analysis of complex economic systems with nonsmooth structures, which are increasingly used in macroeconomics, decision theory, and regulation models.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Yan Zhang,

Huynh Trung Nguyen Le,

Nathan Lopez,

Kira Phan

Abstract: The expansion of electronic health records (EHRs) has generated a large amount of unstructured textual data, such as clinical notes and medical reports, which contain diagnostic and prognostic information. Effective classification of these textual medical notes is critical for improving clinical decision support and healthcare data management. This study presents a comparative analysis of four traditional machine learning algorithms, Random Forest, Logistic Regression, Multinomial Naive Bayes, and Support Vector Machine, for multiclass classification of medical notes into four disease categories: Neoplasms, Digestive System Diseases, Nervous System Diseases, and Cardiovascular Diseases. A dataset containing 9,633 labeled medical notes was preprocessed through text cleaning, lemmatization, stop-word removal, and vectorization using term frequency–inverse document frequency (TF-IDF) representation. Each model was tuned using grid search and cross validation to optimize classification performance. Evaluation metrics, including accuracy, precision, recall, and F1-score, were used to assess model performance. The results indicate that Logistic Regression achieved the highest overall accuracy (0.83), followed closely by Random Forest, Support Vector Machine and Naive Bayes (0.80 each). These findings confirm that traditional machine learning models remain robust, interpretable, and computationally efficient tools for textual medical note classification.
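The TF-IDF representation at the heart of this pipeline can be sketched in a few lines of pure Python. This is an illustrative toy standing in for the library implementations such a study would actually use, and the sample notes are invented:

```python
import math
from collections import Counter

# Minimal TF-IDF vectorizer: term frequency within a document, scaled by
# the log inverse document frequency across the corpus. Illustrative only.
def tfidf(docs):
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc.split()))
    vectors = []
    for doc in docs:
        tf = Counter(doc.split())
        total = sum(tf.values())
        vectors.append({t: (c / total) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

docs = ["chest pain cardiac", "tumor biopsy", "chest xray"]
vecs = tfidf(docs)
# "cardiac" occurs in one of three notes, so it is weighted more heavily
# than "chest", which occurs in two.
```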
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Dimitrios Taranis,

Gerasimos Razis,

Ioannis Anagnostopoulos

Abstract: The rise of Online Social Networks has reshaped global discourse, enabling real-time conversations on complex issues such as irregular migration. Yet the informal, multilingual, and often noisy nature of content on platforms like X (formerly Twitter) and Telegram presents significant challenges for reliable automated analysis. This study extends previous work by introducing an expanded multilingual NLP framework for detecting irregular migration discourse at scale. The dataset is enriched to include five languages (English, French, Greek, Turkish, and Arabic) and newly incorporates Telegram messages, while rule-based annotation is performed using TF-IDF–enhanced multilingual keyword lists. We evaluate a broad range of approaches, including traditional machine learning classifiers, SetFit sentence-embedding models, fine-tuned mBERT transformers, and a Large Language Model (GPT-4o). The results show that GPT-4o achieves the highest performance, with F1-scores reaching 0.91 in French and 0.90 in Greek, while SetFit outperforms mBERT in specific multilingual settings. The findings highlight the effectiveness of transformer-based and large-language-model approaches, particularly in low-resource or linguistically heterogeneous environments. Overall, the proposed framework demonstrates strong potential for multilingual monitoring of migration-related discourse, offering practical value for digital policy, early-warning mechanisms, and crisis informatics.
Article
Computer Science and Mathematics
Algebra and Number Theory

Ricardo A. Caraccioli Abrego

Abstract: We present several constructions of classical constants such as e, π, and the golden ratio φ using only prime numbers or prime–based sequences (for instance, prime indices in the Fibonacci and Lucas sequences). Our goal is not numerical efficiency but to highlight the pedagogical connection between prime arithmetic and the analytic nature of fundamental constants. We show how the Prime Number Theorem and Euler’s product for the Riemann zeta function allow us to reinterpret e and π as “prime-generated” limits or products, and we suggest classroom activities for calculus, real analysis, and introductory number theory courses.
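For instance, Euler's product for the zeta function gives ζ(2) = ∏ₚ 1/(1 − p⁻²) = π²/6, so π emerges as a "prime-generated" limit. A minimal numerical sketch (illustrative, not the paper's code):

```python
import math

# Simple sieve of Eratosthenes.
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [p for p, is_p in enumerate(sieve) if is_p]

# Euler product: zeta(2) = prod over primes p of 1 / (1 - p^-2) = pi^2 / 6,
# so pi = sqrt(6 * product). Truncating at a finite prime bound gives an
# approximation that improves as the bound grows.
def pi_from_primes(limit):
    prod = 1.0
    for p in primes_up_to(limit):
        prod *= 1.0 / (1.0 - p ** -2)
    return math.sqrt(6 * prod)

print(pi_from_primes(10_000))  # close to pi
```

Because the product converges slowly, this is pedagogically illuminating rather than numerically efficient, exactly as the abstract emphasizes.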
Article
Computer Science and Mathematics
Computer Science

Abdullah Ghanim Jaber,

Ravie Chandren Muniyandi,

Khairul Akram Zainol Ariffin

Abstract: The exponential rise of remote sensing and sensor network technologies has generated massive amounts of visual data, posing challenges in safe transmission, data integrity, and categorization. In real-time applications, existing approaches fail to reconcile strong encryption, classification accuracy, and computing economy. This study aims to develop a safe and effective remote sensing image classification system that addresses both data security and intelligent analysis, enabling automated, context-aware insights in dispersed, real-time settings. The proposed work introduces the Fox-Optimized Secure Hybrid Image Encryption and Learning-based Detection (FOX-SHIELD) framework, which effectively integrates advanced encryption techniques with deep learning-based image classification, ensuring both data security and high classification accuracy for remote sensing images in real-time, distributed environments. An upgraded Chebyshev chaotic map and the Secure Hash Algorithm (SHA-256) provide dynamic, stable encryption keys in the first phase, ensuring data secrecy and integrity throughout transmission and storage. A Fast Recurrent Neural Network (FRNN) coupled with the Fox Optimization Algorithm improves convergence rate, stability, and classification accuracy even for encrypted input in the second phase. This integration enables powerful object detection while ensuring anonymity, an essential feature for sensitive remote sensing tasks. The FOX-SHIELD framework outperforms traditional models, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and other hybrid encryption-learning models, in terms of classification accuracy, training convergence, and computational efficiency when applied to standard remote sensing datasets. 
This work addresses the fundamental issue of data security in remote sensing image classification by integrating lightweight cryptographic methods with metaheuristic deep learning optimization to enhance model accuracy, convergence, and computational efficiency in real-time applications.
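A toy sketch of the kind of chaotic-map key derivation described in the first phase: iterate a Chebyshev map xₙ₊₁ = cos(k·arccos(xₙ)) and hash the trajectory with SHA-256. The parameters and construction here are illustrative assumptions, not the FOX-SHIELD scheme itself:

```python
import hashlib
import math

# Iterate the Chebyshev map x_{n+1} = cos(k * arccos(x_n)) from a seed in
# [-1, 1], then hash the trajectory with SHA-256 to derive a 256-bit key.
# Seed, degree k, and iteration count are illustrative choices only.
def chebyshev_key(x0=0.3, k=5, n=64):
    x = x0
    samples = []
    for _ in range(n):
        x = math.cos(k * math.acos(x))  # stays in [-1, 1]
        samples.append(x)
    return hashlib.sha256(repr(samples).encode()).hexdigest()

key = chebyshev_key()
print(len(key))  # 64 hex characters = 256 bits
```

The map's sensitivity to the seed means nearby seeds yield unrelated trajectories, which is the property chaotic-map schemes exploit for dynamic key generation.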
Article
Computer Science and Mathematics
Computer Networks and Communications

Ermias Melku Tadesse,

Abubeker Girma,

Abebaw Mebrate

Abstract: In recent years, vehicular ad hoc networks (VANETs) have developed rapidly, and VANET communication is expected to play a major role in improving the safety and efficiency of transportation systems. Without adequate security, however, VANETs are open to serious misuse. One of the most dangerous attacks is the Sybil attack, in which a node forges fake identities to disrupt or compromise communication among network nodes, affecting services related to road safety, traffic congestion, multimedia entertainment, and more. VANETs therefore require a security mechanism that prevents Sybil attacks. Within this context, this paper proposes the Sybil Attack Prevention and Detection Mechanism in VANET based on Multi-Factor Authentication (SAPDMV). The mechanism operates on the principle of registration and uses the vehicle identification number, status, security key, and minimum and maximum speed thresholds to authenticate nodes and detect Sybil attacks. Implemented and tested in Network Simulator-2.35, the system achieves a detection rate of 96%, a false positive rate of 5%, and a false negative rate of 4%, improving on Sybil attack-AODV and prior work, while also enhancing network performance metrics such as end-to-end delay, throughput, and packet delivery ratio. The approach is scalable and effective in realistic VANET environments, making it a promising framework for future intelligent transportation systems.
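The multi-factor check described above reduces to a simple conjunction: a node is accepted only if its identity is registered, its key matches, and its reported speed lies within the threshold window. The field names and thresholds below are hypothetical, not the SAPDMV implementation:

```python
# Illustrative multi-factor node check: registered ID + matching key +
# speed within [MIN_SPEED, MAX_SPEED]. All values are hypothetical.
MIN_SPEED, MAX_SPEED = 5.0, 40.0  # assumed thresholds, in m/s

registry = {"V001": "key-abc", "V002": "key-xyz"}

def authenticate(vehicle_id, key, speed):
    if registry.get(vehicle_id) != key:
        return False  # unknown identity or wrong key: possible Sybil node
    return MIN_SPEED <= speed <= MAX_SPEED

print(authenticate("V001", "key-abc", 20.0))   # True
print(authenticate("V003", "key-abc", 20.0))   # False: unregistered identity
print(authenticate("V002", "key-xyz", 100.0))  # False: implausible speed
```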
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Hamid Akram,

Noor ul Amin

Abstract: Heart failure is an incurable condition in which the heart gradually loses its ability to pump blood effectively. It is a growing global health concern affecting millions of people worldwide. The risk of heart failure increases with age, highlighting the need for machine learning models capable of predicting heart failure at an early stage. Early predictions can help reduce disease progression, lower hospitalization rates, and improve patients’ quality of life. The primary objective of this study is to predict patients in the early stages of heart failure using machine learning techniques based on health-related attributes. By leveraging the Cleveland dataset, which includes 13 key health features, our system predicts heart failure with high precision, enabling early intervention and more effective treatment planning. These models were tested and evaluated using standard performance metrics. Among them, the Random Forest classifier, implemented using RapidMiner, achieved the highest accuracy of 92.16%, outperforming other models in predictive capability.
Article
Computer Science and Mathematics
Algebra and Number Theory

Frank Vega

Abstract: The binary Goldbach conjecture states that every even integer greater than 2 is the sum of two primes. We analyze a variant of this conjecture, positing that every even integer 2N ≥ 8 is the sum of two distinct primes P and Q. We establish a novel equivalence between this statement and a geometric construction: the conjecture holds if and only if for every N ≥ 4, there exists an integer M ∈ [1, N − 3] such that the L-shaped region of area N² − M² (between nested squares) has a semiprime area P · Q, where P = N − M and Q = N + M. We define the set D_N = {(Q − P)/2 : 2 < P < N < Q < 2N, P, Q prime} of half-differences arising from prime pairs straddling N with Q < 2N. The conjecture is equivalent to the non-emptiness of D_N ∩ {N − p | 3 ≤ p < N, p prime}. We conduct a computational analysis for N ≤ 2¹⁴ and define a gap function G(N) = log₂(2N) − ((N − 3) − |D_N|). Our experimental results show that the minimum of G(N) is positive and increasing across intervals [2ᵐ, 2ᵐ⁺¹]. This result, G(N) > 0, establishes that |D_N| > (N − 3) − log₂(2N). Under this bound, the pigeonhole principle applied to the cardinality of the candidate set {N − p | 3 ≤ p < N, p prime} (of size π(N − 1) − 1) and the bad positions (of size (N − 3) − |D_N| < log₂(2N)) implies a non-empty intersection for all N ≥ 4, yielding a proof of the conjecture. Our work establishes a novel geometric framework and demonstrates its viability through extensive computation.
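The geometric identity driving the construction, N² − M² = (N − M)(N + M) = P · Q, is easy to check computationally. The sketch below searches directly for such an M for a given N; it illustrates the framework rather than reproducing the paper's code:

```python
# For each N >= 4, search for M in [1, N-3] with P = N - M and Q = N + M
# both prime. Then P + Q = 2N with distinct primes P != Q, and the
# L-shaped region between nested squares has semiprime area
# N^2 - M^2 = P * Q.
def is_prime(k):
    if k < 2:
        return False
    if k % 2 == 0:
        return k == 2
    return all(k % d for d in range(3, int(k ** 0.5) + 1, 2))

def goldbach_pair(N):
    for M in range(1, N - 2):  # M runs over [1, N-3]
        if is_prime(N - M) and is_prime(N + M):
            return N - M, N + M
    return None  # no distinct-prime decomposition found

P, Q = goldbach_pair(10)
print(P, Q, P * Q)  # 7 13 91; note 10**2 - 3**2 == 91
```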
Article
Computer Science and Mathematics
Software

Saikal Batyrbekova

Abstract: This paper looks at two main ways to build large software programs: the traditional Monolith, which is one large, unified application, and Microservices, which are many small, independent parts. The goal is to use real-world studies and experiments to clearly explain the confirmed pros and cons of each approach. The findings show that for new or small projects, the Monolith is generally faster and cheaper to launch. However, Microservices are much better at handling huge numbers of users, a key advantage known as scaling, and they speed up development by allowing teams to work completely independently. The major trade-off is high complexity: Microservices are difficult to set up and operate, requiring specialized skills. I conclude that the best architectural choice is never fixed; it depends entirely on the project's specific situation, such as the required size and growth speed of the system.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Michel Planat

Abstract: Haruki Murakami’s \emph{Hard-Boiled Wonderland and the End of the World} portrays a world where the “shadow”, the seat of memory, desire, and volition, is surgically removed, leaving behind a perfectly fluent but phenomenologically empty self. We argue that this literary structure mirrors a precise mathematical distinction in topological quantum matter. In a semisimple theory such as the semions of $\mathrm{SU}(2)_1$, there is a reducible component $V(x)$ of the $\mathrm{SL}(2,\mathbb{C})$ character variety: a flat, abelian manifold devoid of parabolic singularities. By contrast, the non-semisimple completion introduces a neutral indecomposable excitation, the \emph{neglecton}, whose presence forces the mapping class group from the standard braid group $B_2$ to the affine braid group $\mathrm{Aff}_2$ and lifts the character variety to the Cayley cubic $V(C)$, with its four parabolic loci. We propose that contemporary AI systems, including large language models, inhabit the shadowless regime of $V(x)$: they exhibit coherence and fluency but lack any bulk degree of freedom capable of supporting persistent identity, non-contractible memory, or choice. To endow artificial systems with depth, one must introduce a structural asymmetry, a fixed, neutral defect analogous to the neglecton, that embeds computation in the non-semisimple geometry of the cubic. We outline an experimentally plausible architecture for such an “artificial ombre,” based on annular topological media with a pinned parabolic defect, realisable in fractional quantum Hall heterostructures, $p+ip$ superconductors, or cold-atom simulators. Our framework suggests that consciousness, biological or artificial, may depend on or benefit from a bulk–boundary tension mediated by a logarithmic degree of freedom: a mathematical shadow that cannot be computed away. Engineering such a defect offers a new pathway toward AGI with genuine phenomenological depth.
Article
Computer Science and Mathematics
Information Systems

Inga Miadowicz,

Mathias Kuhl,

Daniel Maldonado Quinto,

Robert Pitz-Paal,

Michael Felderer

Abstract: With the advancement of digitization in the era of Industry 4.0 (I4.0), highly automated, semi-autonomous, and fully autonomous systems are emerging. Within this context, multi-agent systems (MAS) offer a promising approach for automating tasks and processes based on autonomous agents that work together in an overall system to increase the degree of system autonomy stepwise in a modular and flexible way. A critical research challenge is determining how these agents can collaboratively engage with both other agents and human operators to facilitate the gradual transition from automated to fully autonomous industrial systems. To close transparency and connectivity gaps, this study contributes with a framework for the collaboration of agents and humans in increasingly autonomous MAS based on a Digital Twin (DT). The framework specifies a standard-based data model for MAS representation and proposes to introduce a DT infrastructure as a service layer for system coordination, supervision, and interaction. To demonstrate the feasibility and assess the quality of the framework, it is implemented and evaluated in a case study in a real-world industrial scenario. As a result of the study, we infer that the DT framework offers significant benefits in facilitating transparent and seamless cooperation between agents and humans within increasingly autonomous industrial MAS.
Article
Computer Science and Mathematics
Computer Science

Muhammad Khizer,

Rizwan Ayazuddin

Abstract: Increasing atmospheric pollution raises serious concerns for public health and environmental sustainability, making advanced monitoring systems essential. This research paper presents a novel framework that integrates Machine Learning and Internet of Things (IoT) technologies to monitor and manage air quality and waste in real time. The proposed system utilizes a network of sensors to collect high-resolution data on air pollutants such as PM2.5, PM10, NOx, and CO2, along with waste management parameters such as bin occupancy, using a publicly available dataset from Kaggle. Following rigorous data preprocessing and feature engineering, the framework achieves a peak prediction accuracy of 93.53% using an artificial neural network (ANN). The web-based platform enables automated analysis of continuous data, issuing immediate alerts when pollutant thresholds are exceeded and facilitating timely interventions.


Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.
