Computer Science and Mathematics


Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Michel Planat

Abstract: Recent work on the evaluation of large language models emphasizes that the relevant unit of intelligence is not the artificial system alone but the human–AI hybrid. In parallel, topological and dynamical models of cognition based on Painlevé equations and non-semisimple topology propose that consciousness, intelligence, and creativity emerge from constrained long-horizon dynamics near criticality. This perspective article argues that these two research directions are deeply compatible. We show that the empirical framework for human–AI collaboration can be interpreted as a fusion process between complementary cognitive sectors: exploration (AI) and selection (human cognition). The dynamical mechanism underlying this fusion is identified with noisy phase locking between cognitive oscillators. Two independent routes to a universal 1/f spectral signature are developed: a geometric route through the WKB/Stokes analysis of Painlevé V confluence, and an arithmetic route through the Mangoldt function and harmonic interactions in phase-locked loops. We connect these results to the Bost–Connes quantum statistical model, whose phase transition at the pole of the Riemann zeta function provides an exact mathematical framework for the lock-in phase hypothesis of identity consolidation in AI systems. This synthesis suggests a unified research program for hybrid intelligence grounded in topology, dynamical systems, number theory, and real-world AI evaluation.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Alexandru Bunica-Mihai

,

Dan Popescu

,

Loretta Ichim

Abstract: The optimization of herbicide application is one of the most important topics in Precision Agriculture, driven by both economic efficiency and ecological sustainability. Excessive herbicide use can lead to soil degradation, water contamination, and negative impacts on biodiversity, while also contributing to human health risks and climate-related concerns. Developing accurate, automated approaches for distinguishing crops from weeds is therefore essential to support sustainable agricultural practices. In this paper, a novel architecture for crop and weed segmentation in tobacco plantations is proposed: a U-Net variant that incorporates several specific design elements, including deep supervision, a Vegetation Global Context block, and a dual-headed output that separately predicts vegetation and crop masks. Weed regions are derived as the difference between vegetation and crop predictions, allowing the model to enforce logical consistency directly within a single framework, in contrast to other two-step approaches. The proposed architecture was evaluated using multiple modern encoder backbones. Experimental results demonstrate that this architecture not only improves segmentation accuracy compared to prior approaches, with best scores of 94.24% Dice for crop segmentation and 93.72% for weeds, but also significantly reduces inference time by avoiding multi-stage pipelines, making it much better suited for real-time deployment in field conditions.
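The dual-headed consistency rule described above — weed regions derived as the difference between vegetation and crop predictions — can be sketched in a few lines. This is an illustration only; the threshold value and array shapes are assumptions, not taken from the paper:

```python
import numpy as np

def derive_weed_mask(veg_prob: np.ndarray, crop_prob: np.ndarray,
                     thr: float = 0.5) -> np.ndarray:
    """Derive the weed mask as the set difference between the predicted
    vegetation and crop regions: a pixel is weed only if it is vegetation
    but not crop, enforcing logical consistency between the two heads."""
    veg = veg_prob >= thr    # binary vegetation mask
    crop = crop_prob >= thr  # binary crop mask
    return veg & ~crop       # weed = vegetation minus crop
```

Because the subtraction happens inside a single forward pass, no second-stage model is needed to separate weeds from crops.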

Article
Computer Science and Mathematics
Computer Networks and Communications

Aymen I. Zreikat

,

Julien El Amine

Abstract: Wireless communications face both opportunities and challenges due to the coexistence of 5G New Radio (NR) high-band, mid-band, and low-band technologies, which operate in separate frequency bands across both licensed and unlicensed spectrum. For example, 5G NR uses the high-band above 24 GHz, the mid-band of 2–6 GHz, or the low-band below 2 GHz, and can additionally access the unlicensed 5 GHz band via Licensed-Assisted Access (LAA). With the use of sophisticated coexistence mechanisms and optimization techniques, this 5G coexistence scenario in shared spectrum can be effectively managed. These strategies are essential for boosting network capacity, reducing latency, and ensuring fair spectrum use across different wireless technologies. This work provides a comprehensive system-level evaluation of multi-band coexistence and offloading strategies under realistic deployment assumptions. The simulation results confirm the effectiveness of the proposed model, showing that spectrum sharing and coexistence among these technologies deliver scalable and robust performance in heterogeneous service environments. This approach enables efficient load balancing across the entire network and highlights the need for additional features to achieve further performance gains.

Article
Computer Science and Mathematics
Computer Vision and Graphics

Javier Nieves

,

Javier Selva

,

Guillermo Elejoste-Rementeria

,

Jorge Angulo-Pines

,

Jon Leiñena

,

Xuban Barberena

,

Fátima A. Saiz

Abstract: Industrial pouring processes operate under highly dynamic conditions where small deviations can lead to defects, scrap, and production losses. Although modern foundries are equipped with multiple sensors and visual inspection systems, most monitoring approaches remain fragmented, unimodal, and difficult to interpret. Furthermore, annotated anomalous samples in industrial settings are scarce, hindering the development of traditional methods. As a result, many critical pouring anomalies are detected too late or lack sufficient contextual information for effective decision making. In this work, we propose a multimodal framework for industrial scene characterization that unifies visual information and process signals through a Mixture of Experts (MoE) strategy. First, we deploy an ensemble of specialized modules that collaborate to identify regions of interest, assess pouring quality, and contextualize events within the production process, generating an interpretable description of pouring events. Second, we introduce a novel anomaly detection method for video multimodal data, combining a self-supervised transformer with an outlier-aware clustering algorithm. Our approach effectively identifies rare anomalies without requiring extensive manual labeling. The resulting information is structured into a digital-twin-ready representation, enabling seamless synchronization between the physical system and its virtual counterpart. This solution provides a scalable, deployable pathway to transform heterogeneous industrial data into actionable knowledge, supporting advanced monitoring, anomaly detection, and quality control in real foundry environments.

Article
Computer Science and Mathematics
Mathematics

Javad Shokri

Abstract: In this paper, we investigate the algebraic structures of homomorphisms and derivations, which play a significant role in physics and the engineering sciences. The main focus of this study is the perturbation of homomorphism and derivation structures associated with a certain bidimensional additive–quadratic functional equation in normed triple Lie (3-Lie) systems, within the framework of Ulam stability. By employing the fixed point theorem, we establish several required theorems, followed by specific and relevant corollaries that highlight the theoretical significance of our findings.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Masato Takami

,

Tomohiro Fukuda

Abstract: In industrial environments, robust Temporal Action Localization (TAL) is essential; however, frequent occlusions often compromise the reliability of skeletal data, leading to negative transfer in multimodal fusion. To address this challenge, we propose a Gated Skeleton Refinement Module (Gated SRM) that explicitly incorporates OpenPose confidence scores into the network architecture. By applying these scores as a logarithmic bias within a self-attention mechanism, our method achieves soft suppression, dynamically attenuating the attention weights assigned to unreliable joints, before adaptively fusing the refined skeletal features with RGB representations through a learnable gating network. Extensive experiments on the heavily occluded IKEA ASM dataset demonstrate that our approach effectively prevents the catastrophic accuracy degradation typical of naive fusion strategies, improving the mean Average Precision (mAP) to 21.77% and outperforming the RGB-only baseline. Furthermore, the system maintains practical real-time inference speeds of approximately 16 frames per second (FPS). By prioritizing confidence-based data selection over data restoration, this sensor-metadata-driven architecture offers a highly robust and principled solution for real-world action recognition under occlusion.
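The log-bias mechanism described above — adding joint confidence scores as a logarithmic bias to self-attention scores — can be sketched as follows. This is a minimal single-head, numpy-only illustration; the shapes and the epsilon floor are our assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def confidence_biased_attention(q, k, v, conf, eps=1e-6):
    """Scaled dot-product attention with a log-confidence bias.
    q, k, v: (joints, dim) arrays; conf: (joints,) scores in [0, 1].
    Adding log(conf) to the scores multiplies each key's attention
    weight by its confidence after the softmax: soft suppression of
    unreliable joints rather than hard masking."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)          # standard attention scores
    scores = scores + np.log(conf + eps)   # log-bias per key joint
    attn = softmax(scores, axis=-1)
    return attn @ v, attn
```

A joint with confidence 0.01 thus receives roughly one-hundredth the attention of an otherwise identical joint with confidence 1.0, without ever being removed from the computation.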

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Reem Almaziad

,

Heba Kurdi

Abstract: Vehicular Ad Hoc Networks (VANETs) are essential to intelligent transportation systems (ITS), enabling secure, real-time communication among vehicles and infrastructure. However, their decentralized and dynamic nature makes them vulnerable to threats such as Sybil attacks, message forgery, replay attacks, and Denial-of-Service (DoS). This paper presents VANETGuard, a lightweight, scalable trust management system that enhances security and scalability in 5G-enabled smart vehicular networks. The proposed system integrates entropy-based anomaly detection, Bayesian inference for adaptive trust scoring, and a lightweight distributed ledger for decentralized, tamper-resistant trust storage. Large-scale simulations under realistic traffic and attack conditions demonstrate that VANETGuard achieves 99.97% detection accuracy, significantly reduces false positives, and maintains low latency and computational overhead while supporting over 300 vehicles. These results highlight VANETGuard’s potential to enable secure, efficient, and scalable trust mechanisms in next-generation ITS and urban mobility systems.
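The abstract names Bayesian inference for adaptive trust scoring; a common minimal instance of that idea is a Beta–Bernoulli update, sketched below. This is illustrative only — VANETGuard's actual model is not specified in the abstract, and the class name is ours:

```python
class BetaTrust:
    """Beta-Bernoulli trust score for a vehicle: alpha counts
    consistent reports, beta counts anomalous ones; the trust
    estimate is the posterior mean of the Beta distribution."""

    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        # alpha = beta = 1 is a uniform (uninformative) prior
        self.alpha, self.beta = alpha, beta

    def update(self, consistent: bool) -> None:
        if consistent:
            self.alpha += 1.0   # evidence of honest behavior
        else:
            self.beta += 1.0    # evidence of anomalous behavior

    @property
    def trust(self) -> float:
        return self.alpha / (self.alpha + self.beta)
```

Each verified-consistent message raises the posterior mean toward 1, each detected anomaly pulls it toward 0, and older evidence is never discarded, which makes sudden behavioral changes (e.g., a compromised node) show up gradually and robustly.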

Article
Computer Science and Mathematics
Information Systems

Rahid Zahid Alekberli

,

Robert Haussmann

Abstract: Cloud-enabled big data programs in hybrid environments often underdeliver, and the conventional assumption that tooling or analytics maturity is the primary constraint is beginning to show cracks. Instead, the limiting factor is the frequent misalignment between governance arrangements and corporate strategy, compounded by ambiguous risk ownership and weak operational integration. This study develops a Strategic Data Alignment Framework (SDAF) for small and medium Caspian Basin seaports to translate this alignment problem into an executable implementation logic. Using an interpretive qualitative design, 14 semi-structured interviews were conducted with port decision-makers, strategy and IT leaders, and relevant regulators, and the transcripts were analyzed to surface recurring mechanisms of failure. The data reveal consistent breakdowns: unclear accountability for data decisions, immature stewardship, siloed collaboration, capability shortfalls, and under-instrumented feedback loops that fail to connect governance controls to the business outcomes. The SDAF addresses these mechanisms through a six-step pathway: (1) diagnose strategy–data misalignment, (2) establish minimum viable governance foundations, (3) specify decision-critical use cases and the role of big data in strategy, (4) embed governance into planning and performance cycles, (5) execute and monitor via measurable KPIs and auditable decision metrics, and (6) iteratively review and improve through a continuous alignment loop, calibrated to regulatory context, platform maturity, and risk appetite.

Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Mohammed Ayalew Belay

,

Amirshayan Haghipour

,

Adil Rasheed

,

Pierluigi Salvo Rossi

Abstract: Anomaly detection is crucial for maintaining the safety, reliability, and optimal performance of complex systems across diverse domains such as industrial manufacturing, cybersecurity, and autonomous systems. Conventional methods typically handle single data modalities, limiting their effectiveness in multimodal and dynamic real-world environments. The integration of multimodal data sources, including visual, audio, and sensor data, has emerged as a key advancement, improving detection robustness and accuracy. Simultaneously, the rise of agentic artificial intelligence (AI), characterized by autonomous, goal-oriented agents capable of reasoning and utilizing tools, presents significant opportunities for enhancing anomaly detection systems. This paper provides a comprehensive review of recent advancements at the intersection of agentic AI and multimodal anomaly detection. We propose a novel taxonomy categorizing existing methods by agent architecture, reasoning capabilities, tool integration, and modality scope. We survey foundation model-based detectors, cross-modal fusion techniques, and LLM-driven agents that facilitate dynamic and interpretable anomaly reasoning. Furthermore, we present recent benchmark datasets, critical challenges, mitigations, and future research directions.

Article
Computer Science and Mathematics
Information Systems

Chialuka Ilechukwu

,

Sung-Chul Hong

,

Barin Nag

Abstract: Document authentication remains a pressing challenge in various domains, including financial services, academic credentialing, healthcare, and supply chain management. Existing centralized verification systems are vulnerable to manipulation, inefficiency, and limited transparency. Blockchain technology, with its immutability and tamper-resistant capabilities, offers a strong decentralized alternative; however, many current implementations lack structured, issuer-bound relationships for documents. This paper proposes a blockchain-based model that leverages a hierarchical token structure to authenticate and trace the provenance of high-value digital documents, with a focus on financial records. The model introduces the concept of an issuer-bound parent token and document-linked child tokens, enforcing a structured trust relationship between a legitimate institution and the documents it issues. By combining on-chain cryptographic hashing with off-chain file references, the approach is designed to balance verifiability with scalability. We implement a proof-of-concept using Ethereum-compatible smart contracts on a permissioned blockchain and evaluate it in a consortium-style financial setting. Our functional analyses demonstrate the model’s ability to ensure document integrity, provenance, and resistance to document fraud. This work offers a practical and extensible foundation for secure digital document authentication and verification in financial and other trust-sensitive settings.
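The parent/child token relationship can be illustrated off-chain in a few lines — a hashing sketch only, not the paper's smart-contract implementation. All names here are ours, and in a real deployment the `children` mapping would live as on-chain contract state while the document bytes stay off-chain:

```python
import hashlib

def h(b: bytes) -> str:
    return hashlib.sha256(b).hexdigest()

class IssuerRegistry:
    """Issuer-bound parent token with document-linked child tokens.
    The parent token binds to the issuer's identity; each child token
    binds a document hash to that parent, so verifying a document also
    verifies which institution issued it."""

    def __init__(self, issuer_id: str):
        self.parent = h(issuer_id.encode())   # issuer-bound parent token
        self.children = {}                    # child token -> parent link

    def issue(self, doc: bytes) -> str:
        child = h(self.parent.encode() + h(doc).encode())
        self.children[child] = self.parent
        return child

    def verify(self, doc: bytes, child: str) -> bool:
        expected = h(self.parent.encode() + h(doc).encode())
        return child == expected and self.children.get(child) == self.parent
```

Any tampering with the document bytes changes the inner hash and breaks the chain to the parent token, so forged documents fail verification even if the forger knows the child token.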

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

David Chunhu Li

Abstract: Early detection of battery degradation is essential for ensuring the safety and reliability of electric vehicle (EV) systems under real-world operating variability. This paper proposes a physics-guided multi-sensor learning framework, termed SensorFusion-Former (SFF), for early warning of short-term EV battery performance degradation. The proposed approach integrates a physics-based baseline model for operational normalization, a multi-sensor fusion attention mechanism to model cross-modality interactions, and a lightweight transformer architecture for efficient temporal representation learning. Weak supervision is derived from physics-consistent residual analysis with temporal smoothing, enabling scalable training without dense manual annotations. To support reliable deployment, evidential uncertainty modeling and conformal calibration are incorporated to obtain statistically controlled decision thresholds. Experiments conducted on a real driving cycle dataset from IEEE DataPort demonstrate that SFF consistently outperforms classical machine learning methods, deep neural networks, and standard transformer models in terms of early-warning lead time, false alarm rate, and inference efficiency, while maintaining competitive discriminative performance. Cross-scenario evaluations under diverse thermal conditions further confirm the robustness and generalization capability of the proposed framework.
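The weak-supervision step described above — physics-consistent residual analysis with temporal smoothing — admits a simple sketch. This is an illustrative reconstruction: the moving-average window, the MAD-based threshold, and the constant k are our assumptions, not values from the paper:

```python
import numpy as np

def weak_labels(measured: np.ndarray, baseline: np.ndarray,
                window: int = 5, k: float = 3.0) -> np.ndarray:
    """Weak degradation labels from physics-consistent residuals:
    subtract the physics-based baseline, smooth the residual with a
    moving average, then flag samples whose smoothed residual deviates
    from the median by more than k robust standard deviations (MAD)."""
    residual = measured - baseline
    kernel = np.ones(window) / window
    smooth = np.convolve(residual, kernel, mode="same")  # temporal smoothing
    center = np.median(smooth)
    mad = np.median(np.abs(smooth - center)) + 1e-9      # robust scale
    return np.abs(smooth - center) > k * 1.4826 * mad
```

The smoothing suppresses one-sample spikes, so only sustained departures from the physics baseline become positive labels — which is what makes the labels usable without manual annotation.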

Article
Computer Science and Mathematics
Computer Networks and Communications

Mona Alghamdi

,

Atm S. Alam

,

Asma Cherif

Abstract: Mobile edge computing (MEC) enables resource-constrained mobile devices to execute delay-sensitive and compute-intensive applications by offloading tasks to nearby edge servers. However, task orchestration in MEC is challenged by highly dynamic system conditions, unreliable networks, and distributed edge environments. Moreover, as the number of users, tasks, and resources increases, the offloading decision-making problem becomes increasingly complex due to the exponential growth of the search space. To address these challenges, this paper proposes a Multi-Criteria Hierarchical Clustering-based Task Orchestrator (MCHC-TO), a novel framework that integrates multi-criteria decision making with divisive hierarchical clustering for preference-aware and adaptive workload orchestration. Edge servers are first evaluated using multiple decision criteria, and the resulting preference rankings are exploited to form hierarchical preference-based clusters. Incoming tasks are then assigned to the most suitable cluster based on task requirements, enabling efficient resource utilization and dynamic decision making. Extensive simulations conducted using an edge computing simulator demonstrate that the proposed MCHC-TO framework consistently outperforms benchmark approaches, achieving reductions in average service delay and task failure rate of up to 48% and 92%, respectively. These results highlight the effectiveness of combining multi-criteria evaluation with hierarchical clustering for robust and dynamic task orchestration in MEC environments.
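The two stages above — multi-criteria scoring of edge servers, then divisive splitting into preference-ranked clusters — can be sketched like this. It is a deliberate simplification: weighted-sum scoring and midpoint splitting stand in for the paper's full MCDM and clustering procedure:

```python
import numpy as np

def rank_servers(criteria: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Weighted-sum multi-criteria score per edge server.
    criteria: (servers, criteria) matrix normalized to [0, 1],
    higher = better; weights: importance of each criterion."""
    return criteria @ weights

def divisive_clusters(scores: np.ndarray, min_size: int = 2):
    """Divisive (top-down) clustering over the preference ranking:
    recursively split the score-ordered server list in half until
    clusters are small, yielding a best-to-worst cluster hierarchy."""
    order = np.argsort(-scores)  # best server first

    def split(idx):
        if len(idx) <= min_size:
            return [idx.tolist()]
        mid = len(idx) // 2
        return split(idx[:mid]) + split(idx[mid:])

    return split(order)
```

An incoming task would then be matched against clusters rather than individual servers, shrinking the decision space from the number of servers to the number of clusters.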

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Mikhail Zgonnikov

,

Maxim Mozgovoy

Abstract: This work presents a method for implementing dynamic difficulty adjustment in the arcade game of Air Hockey using reinforcement learning. The resulting AI-controlled opponent is capable of adapting its skill level to the player's performance, with the goal of maintaining engagement and providing a balanced gameplay experience throughout a match. The approach relies on generating several AI agents through progressively longer training durations, producing distinct and smoothly transitioning difficulty levels that can be switched dynamically. In addition, the system is extended with manually selected parameters that influence physical aspects of the agent's behavior, such as movement speed, reaction latency, and control precision, complementing the differences arising from decision-making quality. The combined method is potentially applicable to a wide range of video games, and the experimental results demonstrate its effectiveness in producing adaptive and varied opponent behavior.
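The dynamic switching between progressively trained agents can be sketched as a simple score-margin rule. This is our illustration; the paper's actual switching criterion, margin, and level count are not given in the abstract:

```python
def adjust_difficulty(level: int, player_score: int, ai_score: int,
                      margin: int = 2, n_levels: int = 5) -> int:
    """Select among n_levels pre-trained agents of increasing skill:
    step the difficulty up when the player is comfortably ahead,
    step it down when the player is falling behind, else keep it."""
    if player_score - ai_score >= margin:
        return min(level + 1, n_levels - 1)   # player dominating: harder AI
    if ai_score - player_score >= margin:
        return max(level - 1, 0)              # player struggling: easier AI
    return level                              # match is balanced: no change
```

Because adjacent levels come from successive checkpoints of the same training run, switching between them produces the smooth difficulty transitions the abstract describes, rather than abrupt behavioral jumps.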

Article
Computer Science and Mathematics
Security Systems

Seema Sirpal

,

Pardeep Singh

,

Om Pal

Abstract: Digital signatures serve as a crucial cryptographic primitive in an e-governance system for the authentication of citizen-government interactions. Traditional methods (DSA, ECDSA) pose computational overheads at resource-limited endpoints and centralized verification servers. While complex-number cryptography provides theoretical efficiency through the Complex Discrete Logarithm Problem (CDLP), prior works often fail to meet the requirements for real-world applications. This paper advances the knowledge in lightweight cryptography by introducing LDSEGoV, a lightweight digital signature scheme for e-governance infrastructure. The proposed method overcomes the shortcomings of previous methods by incorporating sound modular arithmetic for consistent verification, using NIST-approved hash functions. Furthermore, we provide a comprehensive security analysis with formal proofs of existential unforgeability (EUF-CMA) for the proposed scheme in the Random Oracle Model. Additionally, the experimental results show a 6.5× improvement in signing performance and a 24.76× improvement in verification performance over ECDSA, with a 61% reduction in signature size. These results demonstrate that LDSEGoV is suitable for large-scale digital governance systems in authentication scenarios.

Article
Computer Science and Mathematics
Computer Vision and Graphics

Qinsheng Du

,

Ningbo Zhang

,

Wenqing Bi

,

Ruidi Zhu

,

Yuhan Liu

,

Chao Shen

,

Shiyan Zhang

,

Jian Zhao

Abstract: As autonomous driving technology progresses, efficient and accurate object detectors are able to detect pedestrians, vehicles, road signs, and obstacles in real time, thereby enhancing driving safety as a component of autonomous driving. However, the performance of existing object detectors falls short of the requirements of a modern autonomous driving system. To address this issue, we develop an object detection network for autonomous driving scenarios, SST-YOLO, which is based on YOLOv8. Specifically, we propose a Sobel convolution & convolution (SCC) block to enhance the backbone network of YOLOv8 and extract features more thoroughly. In addition, we replace the original path aggregation feature pyramid network (PAFPN) with a small object augmentation pyramid network (SOAPN) to address the small object detection problem. For regression accuracy and classification robustness, and thereby better performance in complex driving scenarios, we use a Task-Adaptive Decomposition & Alignment Head (TADAHead) to replace the original YOLOv8 detection head. Experiments on the public autonomous driving dataset KITTI show that our proposed method outperforms the baseline YOLOv8 model, with detection accuracy improving from 65.1% to 68.2%, which indicates that the proposed SST-YOLO network is well suited to object detection for autonomous vehicles.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Michal Podstawski

Abstract: Graph classification is dominated by permutation-invariant graph neural networks. We revisit this problem from a different perspective: can small language models (SLMs) act as graph classifiers when graphs are serialized as text? Unlike GNNs, sequence-based transformers do not encode permutation invariance by construction, raising a fundamental question about structural stability under node relabeling. We provide the first systematic study of permutation robustness in small graph-as-text models. We introduce an evaluation protocol based on Flip Rate and KL-to-Mean divergence to quantify prediction instability across random node permutations. To enforce structural consistency, we propose Permutation-Invariant Training (PIT), a multi-view regularization scheme that aligns predictions across relabeled graph views, and examine its interaction with degree-aware token embeddings as a minimal inductive bias. Across benchmark datasets using parameter-efficient fine-tuning, we show that SLMs achieve competitive classification accuracy, yet standard fine-tuning exhibits non-trivial permutation sensitivity. PIT consistently reduces instability and in most evaluated settings improves accuracy, demonstrating that structural invariance in sequence-based graph models can emerge through explicit regularization.
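The two instability measures named above can be written down directly; here is a minimal numpy sketch of both. The per-view logits would come from running the SLM on randomly relabeled serializations of the same graph:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kl_to_mean(view_logits) -> float:
    """Mean KL divergence between each permuted view's predictive
    distribution and the average distribution over all views:
    zero iff every relabeling yields the same prediction."""
    p = softmax(np.asarray(view_logits, dtype=float))
    m = p.mean(axis=0, keepdims=True)
    return float(np.mean(np.sum(p * np.log((p + 1e-12) / (m + 1e-12)), axis=-1)))

def flip_rate(view_logits) -> float:
    """Fraction of permuted views whose argmax class disagrees
    with the majority prediction across views."""
    preds = np.asarray(view_logits).argmax(axis=-1)
    majority = np.bincount(preds).argmax()
    return float(np.mean(preds != majority))
```

The same KL-to-Mean quantity, made differentiable, is the natural multi-view consistency regularizer: minimizing it pulls the predictions for all relabeled views of one graph toward their mean.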

Article
Computer Science and Mathematics
Probability and Statistics

Zineb Arab

,

Amel Redjil

,

Hanane Ben-Gherbal

Abstract: This paper deals with a system of G-stochastic differential equations with jumps, driven by G-Brownian motion and a G-Lévy process. By using the Burkholder–Davis–Gundy inequalities, we prove a moment estimate and the temporal Hölder regularity of the solution, under linear growth and global Lipschitz conditions on the coefficients with respect to the state variable, uniformly in the time variable. Moreover, several stability properties are proved. Examples such as a Black–Scholes market driven by G-Brownian motion are given to support our theoretical results.

Article
Computer Science and Mathematics
Probability and Statistics

Elif Kozan

Abstract: Detecting small location shifts in stochastic processes is a fundamental problem in sequential statistical monitoring. Classical procedures such as Shewhart-type schemes, exponentially weighted moving average (EWMA), and cumulative sum (CUSUM) methods are known to perform well under normality or near-symmetry assumptions; however, their effectiveness may deteriorate substantially in the presence of right-skewed distributions. In such settings, mean-based monitoring statistics are highly sensitive to tail behavior, which may result in delayed detection of small shifts or increased false alarm rates. This paper introduces a novel monitoring scheme, referred to as the Golden Ratio (GR) control chart, designed for detecting small location shifts in right-skewed distributions. The proposed method is constructed using a median-centered statistic combined with a geometrically decaying weighting mechanism derived from the golden ratio. Unlike classical time-based weighting schemes, the GR chart assigns weights according to the rank-based distance from the sample median, thereby attenuating the influence of isolated extreme observations while enhancing sensitivity to persistent distributional shifts. Theoretical properties of the proposed monitoring statistic are investigated, and its run-length behavior is analyzed under non-normal distributions. The performance of the GR chart is evaluated through extensive Monte Carlo simulations and is compared with classical EWMA and CUSUM procedures under Gamma models. The results indicate that the proposed method provides a robust and stable alternative for monitoring skewed processes while maintaining competitive sensitivity to small location shifts. Overall, the GR control chart offers a distribution-aware and theoretically grounded framework for sequential monitoring in asymmetric stochastic environments.
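The abstract does not give the statistic's exact form, but the ingredients it names — median centering and golden-ratio geometric decay over rank distance from the median — admit a direct sketch. This construction is purely illustrative; the true GR statistic may differ:

```python
import numpy as np

PHI = (1 + 5 ** 0.5) / 2  # the golden ratio

def gr_statistic(sample: np.ndarray) -> float:
    """Median-centered location statistic with golden-ratio weights:
    deviations from the sample median are weighted by PHI**(-r), where
    r is the rank of each observation's absolute distance from the
    median (r = 0 for the closest point). Extreme observations thus get
    the smallest weights, damping the influence of heavy right tails."""
    med = np.median(sample)
    dev = sample - med
    ranks = np.argsort(np.argsort(np.abs(dev)))  # rank distance from median
    w = PHI ** (-ranks.astype(float))            # geometric golden-ratio decay
    return float(np.sum(w * dev) / np.sum(w))
```

On a right-skewed sample, a single large observation carries the lowest weight, so the statistic shifts far less than the ordinary mean deviation while persistent shifts (which move many ranks at once) still register.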

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Stefan Trauth

Abstract: We demonstrate deterministic localization of cryptographic hash preimages within specific layers of deep neural networks trained on information-geometric principles. Using a modified Spin-Glass architecture, MD5 and SHA-256 password preimages are consistently identified in layers ES15-ES20 with >90% accuracy for passwords and >85% for hash values. Analysis reveals linear scaling where longer passwords occupy proportionally expanded layer space, with systematic replication in higher-dimensional layers showing exact topological correspondence. Critically, independent network runs with fresh initialization maintain 41.8% information persistence across 11 trials using unique hash strings and binary representations. Layer-to-layer correlations exhibit non-linear temporal coupling, violating fundamental assumptions of both relativistic causality and quantum mechanical information constraints. Pearson correlations between corresponding layers across independent runs approach ±1.0, indicating information preservation through mechanisms inconsistent with substrate-dependent encoding. These findings suggest the cryptographic "one-way property" represents a geometric barrier in information space rather than mathematical irreversibility. Hash function security may be perspectivally accessible through dimensional navigation within neural manifolds that preserve topological invariants across initialization states. Results challenge conventional cryptographic assumptions and necessitate reconceptualization of information persistence independent of physical substrates.

Article
Computer Science and Mathematics
Computer Science

André Luiz Marques Serrano

,

Gabriel Rodrigues

,

Guilherme Dantas Bispo

,

Vinícius Pereira Gonçalves

,

Geraldo Pereira Rocha Filho

,

Maria Gabriela Mendonça Peixoto

,

Rodrigo Bonacin

,

Rodolfo Ipolito Meneguette

Abstract: The rapid growth of medical imaging data has intensified the need for advanced computational tools to support clinical decision-making. However, centralized approaches to artificial intelligence development raise significant challenges related to privacy, regulation, and generalizability. This paper introduces FedIHRAS (Federated Intelligent Humanized Radiology Analysis System), a privacy-preserving federated learning framework that enables multi-institutional collaboration for chest X-ray analysis. FedIHRAS integrates pathology classification, visual explainability, anatomical segmentation, and automated clinical report generation into a unified system that incorporates adaptive aggregation strategies to handle client heterogeneity and non-IID data distributions. The framework employs multi-layered differential privacy mechanisms and a secure communication infrastructure to ensure compliance with strict healthcare data protection standards. Experimental validation across four large-scale chest radiograph datasets (approximately 874k images) demonstrates that FedIHRAS retains 98.8% of the diagnostic accuracy of a centralized model (mean AUC-ROC = 0.911 vs. 0.922) and achieves superior generalization to unseen institutions (94.2% retention). Explainability and interpretability were preserved at near-centralized levels, with expert radiologists rating 94.6% of attention maps as clinically reliable. Moreover, privacy robustness tests confirm strong resistance against inference and reconstruction attacks. FedIHRAS reduces barriers to collaborative research and mitigates algorithmic bias, ultimately offering a scalable and equitable solution for radiological analysis in real-world healthcare systems.
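As a reference point for the aggregation step, the baseline from which adaptive strategies typically depart is size-weighted federated averaging (FedAvg). The sketch below is that standard baseline, not FedIHRAS's adaptive rule, which the abstract does not specify:

```python
import numpy as np

def fedavg(client_weights, client_sizes) -> np.ndarray:
    """Size-weighted federated averaging: each client's model
    parameters (given here as a flat numpy array) are averaged
    with coefficients proportional to the client's dataset size,
    so no raw data ever leaves an institution."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()           # per-client mixing weight
    stacked = np.stack(client_weights)     # (clients, params)
    return (coeffs[:, None] * stacked).sum(axis=0)
```

Adaptive schemes replace the fixed size-based coefficients with weights that react to, e.g., client data quality or distribution drift, which is where handling non-IID hospitals differs from the plain average shown here.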



Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2026 MDPI (Basel, Switzerland) unless otherwise stated