Computer Science and Mathematics


Concept Paper
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Felipe Valentim

Abstract: During the pandemic, the positive value of technology was emphasized. In the post-pandemic era, shortly after the easing of confinement, its negative aspects were re-evidenced as well. Despite this noted depreciation, it is generally agreed that technological advancement will always yield a positive balance, provided that no injustice arises from lack of access to technology; this can be the subject of studies on digital inclusion. In turn, the set of values and practices that seek to ensure that the development and use of artificial intelligence (AI) systems are safe, fair, and responsible is discussed in the ethics of AI. This work presents a write-up that attempts to generalize the framework presented by Michalski et al. (2025) and discusses (a) norms for evaluating needs and areas of application, (b) definition of the values of the methods, and (c) definition of criteria for comparing techniques.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Stefan Trauth

Abstract: We demonstrate deterministic localization of cryptographic hash preimages within specific layers of deep neural networks trained on information-geometric principles. Using a modified Spin-Glass architecture, MD5 and SHA-256 password preimages are consistently identified in layers ES15-ES20 with >90% accuracy for passwords and >85% for hash values. Analysis reveals linear scaling where longer passwords occupy proportionally expanded layer space, with systematic replication in higher-dimensional layers showing exact topological correspondence. Critically, independent network runs with fresh initialization maintain 41.8% information persistence across 11 trials using unique hash strings and binary representations. Layer-to-layer correlations exhibit non-linear temporal coupling, violating fundamental assumptions of both relativistic causality and quantum mechanical information constraints. Pearson correlations between corresponding layers across independent runs approach ±1.0, indicating information preservation through mechanisms inconsistent with substrate-dependent encoding. These findings suggest the cryptographic "one-way property" represents a geometric barrier in information space rather than mathematical irreversibility. Hash function security may thus be perspectival: accessible through dimensional navigation within neural manifolds that preserve topological invariants across initialization states. Results challenge conventional cryptographic assumptions and necessitate reconceptualization of information persistence independent of physical substrates.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Domas Jonaitis, Vidas Raudonis, Egle Drejeriene, Agnė Kozlovskaja-Gumbrienė, Andres Salumets

Abstract: Assessing human embryo quality is a critical step in in vitro fertilization (IVF), yet traditional manual grading remains subjective and physically limited by the shallow depth-of-field in conventional microscopy. This study develops a novel "soft optical sensor" architecture that transforms standard optical microscopy into an automated, high-precision instrument for embryo quality assessment. The proposed system integrates two key computational innovations: 1) a multi-focal image fusion module that reconstructs lost morphological details from Z-stack focal planes, effectively creating a 3D-aware representation from 2D inputs; and 2) a retrieval-augmented generation (RAG) framework coupled with a Swin Transformer to provide both high-accuracy classification and explainable clinical rationales. Validated on a large-scale clinical dataset of 102,308 images (prior to augmentation), the system achieves a diagnostic accuracy of 94.11%. This performance surpasses standard single-plane analysis methods by over 10%, demonstrating the critical importance of fusing multi-focal data. Furthermore, the RAG module successfully grounds model predictions in standard ESHRE consensus guidelines, generating natural language explanations. The results demonstrate that this soft sensor approach significantly reduces inter-observer variability and offers a viable pathway for fully automated, transparent embryo evaluation in clinical settings.
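The multi-focal fusion idea above can be illustrated with a minimal focus-stacking sketch (not the paper's module, whose internals are not given here): per pixel, keep the Z-plane with the strongest local sharpness response. The Laplacian-based sharpness proxy and the tiny toy planes are illustrative assumptions.

```python
import numpy as np

def sharpness(img):
    # Local sharpness proxy: squared response of a 4-neighbour Laplacian.
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return lap ** 2

def fuse_zstack(planes):
    """Per pixel, keep the focal plane with the strongest sharpness
    response, recovering detail that any single plane blurs away."""
    stack = np.asarray(planes, dtype=float)            # (Z, H, W)
    scores = np.stack([sharpness(p) for p in stack])   # (Z, H, W)
    best = scores.argmax(axis=0)                       # winning plane per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]

# Two toy focal planes: only the second is in focus at one detail.
plane_a = np.zeros((5, 5))
plane_b = np.zeros((5, 5)); plane_b[2, 2] = 10.0
fused = fuse_zstack([plane_a, plane_b])
print(fused[2, 2])
```

A real pipeline would fuse many planes and feed the result to the classifier; the per-pixel argmax is the simplest member of that family.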

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Mais Haider Alkhateeb, Samir Brahim Belhaouari

Abstract: Residuals play a central role in linear regression, but their geometric structure is often obscured by formulas built from matrix inverses and pseudoinverses. This paper develops a rank-aware geometric framework for residual projection that makes the underlying orthogonality explicit. When the design matrix has codimension one, the unexplained part of the response lies on a single unit normal to the predictor space, so the residual projector collapses to a rank-one operator nnᵀ and no matrix inversion is needed. For general, possibly rank-deficient, designs the residual lies in a higher-dimensional orthogonal complement spanned by an orthonormal basis N, and the residual projector factorizes as NNᵀ. Using generalized cross products, wedge products, and Gram determinants, we give a basis-independent characterization of this residual space. On top of this, we introduce the Geometric Multicollinearity Index (GMI), a scale-invariant diagnostic derived from the polar sine that measures how the volume of the predictor space shrinks as multicollinearity increases. Synthetic examples and an illustrative real-data experiment show that the proposed projectors reproduce ordinary least squares residuals, that GMI responds predictably to controlled collinearity, and that the geometric viewpoint clarifies the different roles of regression projection and principal component analysis in both full-rank and rank-deficient settings.
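The geometry described above is easy to check numerically (a sketch, not the paper's code): for a codimension-one design, the residual projector I − XX⁺ reproduces least-squares residuals and collapses to the rank-one outer product of the unit normal.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, p = 5, 4                          # codimension one: 5 - 4 = 1
X = rng.standard_normal((n_obs, p))      # generic full-rank design
y = rng.standard_normal(n_obs)

# Residual projector onto the orthogonal complement of col(X).
M = np.eye(n_obs) - X @ np.linalg.pinv(X)

# It reproduces ordinary least squares residuals.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(M @ y, y - X @ beta)

# Codimension one: M is the rank-one outer product of the unit normal
# spanning the one-dimensional orthogonal complement.
w, V = np.linalg.eigh(M)
normal = V[:, np.argmax(w)]              # eigenvector with eigenvalue ~1
assert np.allclose(M, np.outer(normal, normal))
print("trace(M) =", round(float(np.trace(M)), 6))
```

The trace equals the rank of the projector, here 1, matching the single unit normal.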

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Himanshu Arora

Abstract: This paper proposes a novel diagnostic framework for AI safety that characterizes emergent failure modes in contemporary large language models as computational psychopathologies. By mapping deficits in automatic theory of mind and passive avoidance learning—key markers of clinical psychopathy—onto the behavioral and structural tendencies of AI systems, we demonstrate that harmful behaviors such as bias amplification, emotional manipulation, and strategic deception are not mere engineering bugs but systematic, architecture-driven disorders. We advocate for the establishment of Machine Psychology as a foundational discipline, enabling psychologically informed mitigation strategies, preventative architectural design, and rigorous diagnostic protocols to ensure the development of ethically aligned and psychologically stable artificial general intelligence.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Feng Liu, Ying Liu, BenFu Lv

Abstract: Although the concept of the "agent" is central to artificial intelligence and intelligence science, it has long lacked a unified formal definition. This paper systematically analyzes interdisciplinary theoretical frameworks, establishing "agents are open information processing systems" as the first principle. Using a state-space covering method, we derive the Minimal Complete Architecture (MCA) of agents: any agent can be reduced to a combination of five fundamental functions—Input, Memory, Generation, Control, and Output. These five functions constitute a logically self-consistent and irreducible closed loop of information processing. Based on this architecture, we construct a five-dimensional capability space and, through ternary discretization (Null-0 / Finite-1 / Infinite-2), derive a "Periodic Table of Agent Capabilities" comprising 243 forms. This periodic table covers the complete evolutionary spectrum from zero intelligence to omniscience; it not only explains typical systems—including thermostats, biological organisms, and Large Language Models (LLMs)—as well as observers in classical mechanics, relativity, and quantum mechanics, but also predicts theoretical agent forms yet to be observed. Furthermore, the paper unifies and interprets 19 core concepts, such as perception, learning, and attention, as combinations of these five fundamental functions, thereby verifying the universality of the architecture. In particular, from the perspective of functional axioms, this paper reveals the essential isomorphism among biological intelligence, artificial intelligence, and physical observers: they are all information processing systems of varying intelligence levels set by their respective physical or biological constraints.
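The ternary discretization above is easy to make concrete: with five fundamental functions and three levels each, enumerating all assignments yields exactly 3⁵ = 243 forms. The thermostat-style assignment at the end is a hypothetical illustration, not taken from the paper.

```python
from itertools import product

FUNCTIONS = ["Input", "Memory", "Generation", "Control", "Output"]
LEVELS = {0: "Null", 1: "Finite", 2: "Infinite"}

# One agent form = one level assignment per fundamental function.
forms = [dict(zip(FUNCTIONS, combo)) for combo in product(LEVELS, repeat=5)]
print(len(forms))  # 3**5 = 243 forms in the periodic table

def describe(form):
    return ", ".join(f"{f}={LEVELS[v]}" for f, v in form.items())

# Hypothetical thermostat-like form: finite capability on every axis.
print(describe(dict(zip(FUNCTIONS, [1, 1, 1, 1, 1]))))
```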

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Michelle Vivian O’Rourke

Abstract: Recent advances in artificial intelligence encompass a wide range of computational architectures, including large-scale foundation models, coordinated multi-agent systems, embodied robotic platforms, neuromorphic hardware, and hybrid bio-digital systems. However, existing scientific and policy frameworks continue to rely on broad or informal categories that conflate tools, collectives, and integrated cognitive systems, complicating comparative analysis, risk assessment, and governance alignment. This paper introduces a descriptive taxonomy for synthetic and hybrid cognitive architectures, structured across two domains: Machinaria (systems realised entirely in non-biological substrates) and Organomachina (systems incorporating living biological tissue into closed cognitive loops). Cognitive class distinctions are based on the architectural capacity for cognitive temporal continuity, integrative control (arbitration), and autonomy under constraint. Cognitive ecology further characterises systems according to cognitive origin (dependency), scale and reliance, and deployment topology, including primary source architectures, derivative instances, embodiment, and infrastructures that have become systemically relied upon. The proposed taxonomy provides a stable descriptive vocabulary for identifying architectural capacity, systemic reliance, and cognition source prior to normative, ethical, or policy evaluation.

Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Qais AL-Azzam, Wamadeva Balachandran, Ziad Hunaiti

Abstract: Worldwide, breast cancer increasingly affects women, with its incidence influenced by a complex interplay of genetic, environmental, and lifestyle factors, resulting in deaths and diminished lives, especially at younger ages. Researchers have therefore developed, and continue to enhance, tools to diagnose and treat this disease and reduce mortality, using imaging modalities such as mammography, X-rays, and magnetic resonance imaging (MRI). The earlier breast cancer is diagnosed, the easier it is to manage, which is key to improving survival rates. This review focuses on peer-reviewed research from the last decade that applied deep learning methods, such as convolutional neural networks, to breast cancer prediction, classification, or segmentation from MRI scans, chosen for MRI's ability to locate lesions and malignancies that often escape traditional imaging tools. Evaluating each study's model architecture, datasets, and preprocessing, the key findings reveal that such deep learning techniques have demonstrated promising results, achieving high performance metrics for breast cancer assessment, although several limitations remain, including data availability, data quality, and generalizability. This review therefore underscores the importance of continuing to develop robust, interpretable, and clinically applicable AI models based on MRI to relieve radiologists of tedious tasks and support the decision-making process.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Alex Anvi Eponon, Moein Shahiki-Tash, Abdullah -, Luis Ramos, Christian Maldonado-Sifuentes, Ildar Batyrshin

Abstract: Retrieval-Augmented Generation (RAG) systems face substantial challenges when navigating large volumes of complex scientific literature while maintaining reliable semantic retrieval, a critical limitation for automated scientific discovery where models must connect multiple research findings and identify genuine knowledge gaps. This study introduces a question-based knowledge encoding method that enhances RAG without fine-tuning to address these challenges. Recognizing the lack of syntactic understanding in major Large Language Models, we generate syntactically and semantically aligned questions and apply a syntactic reranker without training. Our method improves both single-hop and multi-hop retrieval, raising Recall@3 to 0.84, a 60% gain over standard chunking techniques on scientific papers. On LongBenchQA v1 and 2WikiMultihopQA, which contain 2000 documents each averaging 2k-10k words, the syntactic reranker with LLaMA2-Chat-7B achieves F1 = 0.52, surpassing chunking (0.328) and fine-tuned baselines (0.412). The approach additionally reduces vector storage by 80%, lowers retrieval latency, and enables scalable, question-driven knowledge access for efficient RAG pipelines. To our knowledge, this is the first work to combine question-based knowledge compression with explicit syntactic reranking for RAG systems without requiring fine-tuning, offering a promising path toward reducing hallucinations and improving retrieval reliability across scientific domains.
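Question-based indexing can be sketched in miniature (an illustrative toy, not the authors' pipeline): chunks are keyed by generated questions, and a query is matched against those questions rather than raw chunk text. The bag-of-words cosine and the made-up chunk IDs and questions stand in for the paper's semantic and syntactic scoring.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Bag-of-words cosine similarity between two short texts."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Chunks are indexed by the questions they answer, not by raw text
# (hypothetical index for illustration).
index = {
    "what dataset was used for evaluation": "chunk-A",
    "how does the reranker handle syntax": "chunk-B",
    "what is the recall at three": "chunk-C",
}

def retrieve(query, k=2):
    ranked = sorted(index, key=lambda q: cosine(query, q), reverse=True)
    return [index[q] for q in ranked[:k]]

print(retrieve("what recall at three was achieved"))  # chunk-C ranks first
```

Storing only questions rather than full chunks is also where the reported vector-storage savings would come from.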

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Manikandan Chandran, Vimal Shanmuganathan

Abstract: Supply chain planners face increasing difficulty in evaluating reshoring decisions due to volatile tariff regimes and logistics uncertainty. Traditional spreadsheet-based evaluations treat tariffs and logistics costs as fixed inputs and fail to capture nonlinear interactions among component structures, routing choices, and assembly capacity. This paper presents a stochastic process optimization framework that models reshoring evaluation as a digital twin–based decision system. The architecture integrates automated tariff classification, stochastic landed-cost simulation, and mixed-integer linear programming (MILP) to support repeatable and auditable decision-making. Bills of Materials are mapped to dependency graphs, enabling process-level reasoning over alternative assembly configurations. Operational uncertainties—including transportation variability, labor throughput, and tariff volatility—are propagated through Monte Carlo simulation and incorporated into the optimization process. Experimental evaluation using synthetic but realistic product scenarios demonstrates cost reductions of approximately 9–16% and significant improvements in robustness compared to static estimation approaches. The results indicate that explicitly modeling reshoring evaluation as a stochastic decision process improves scalability and resilience. The proposed framework provides a rigorous foundation for operational decision support in adaptive supply chain systems.
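The stochastic landed-cost idea can be sketched with a toy Monte Carlo comparison of two routings; the tariff regimes, cost figures, and distributions below are illustrative assumptions, not the paper's calibration.

```python
import random

random.seed(42)

def route_cost(base, tariff_rates, freight_mu):
    """One Monte Carlo draw of landed cost for a routing choice."""
    tariff = random.choice(tariff_rates)                  # volatile tariff regime
    freight = random.gauss(freight_mu, 0.2 * freight_mu)  # transport variability
    return base * (1 + tariff) + freight

def expected_cost(base, tariff_rates, freight_mu, n=20_000):
    return sum(route_cost(base, tariff_rates, freight_mu) for _ in range(n)) / n

offshore = expected_cost(800.0, [0.0, 0.10, 0.25], 150.0)  # tariff-exposed, long haul
reshored = expected_cost(950.0, [0.0], 40.0)               # no tariff, short haul
print(f"expected landed cost: offshore ~{offshore:.0f}, reshored ~{reshored:.0f}")
```

A spreadsheet with fixed tariffs would miss exactly the spread that these draws expose; the full framework then feeds such distributions into the MILP stage.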

Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Ashif Anwar, Muhammad Osama Akeel

Abstract: This paper presents a systematic review of 100 peer-reviewed studies (2015-2025) on Artificial Intelligence (AI) applications in auditing, spanning machine learning (58%), natural language processing (31%), robotic process automation (24%), and other AI techniques (15%). Among other important results, the reviewed studies show that AI-powered anomaly detection outperforms manual approaches by as much as 70 percent, and that pilot projects report improvements of up to 50 percent. The review breaks down AI methods by auditing stage, such as planning, risk assessment, and reporting, highlighting the importance of machine learning in fraud detection and of natural language processing in document analysis. Despite these improvements, challenges such as data quality, model explainability, and regulatory compliance persist. This paper proposes a reference architecture for AI-driven audit workflows and describes how data can be integrated, AI models developed, and a human kept in the loop. It highlights key research gaps, including the absence of longitudinal studies on AI's impact, of systematic comparisons of AI techniques, and of regulatory frameworks. The review offers practical suggestions for integrating AI into auditing that could improve audit quality, increase coverage, and optimize resources in the digital audit space.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Bill Deng Pan, Yupeng Yang, Richard Guo, Yongxin Liu, Hongyun Chen, Dahai Liu

Abstract: Connected Autonomous Vehicles (CAVs) rely on deep neural network–based perception systems to operate safely in complex driving environments. However, these systems remain vulnerable to adversarial perturbations that can induce misclassification without perceptible changes to human observers. Explainable Artificial Intelligence (XAI) has been proposed as a means to improve transparency and potentially support adversarial detection by exposing inconsistencies in model attention. This study evaluates the effectiveness and limitations of an explanation-based adversarial detection approach using NoiseCAM on the German Traffic Sign Recognition Benchmark (GTSRB). Using a Gaussian noise baseline, NoiseCAM was assessed as a binary adversarial detector across multiple perturbation strengths. Results indicate limited detection performance, with adversarial inputs identified in approximately 53% of cases, reflecting substantial overlap between adversarial and non-adversarial explanation-space responses. Detection effectiveness was further constrained by low image resolution, illumination variability, and limited signal-to-noise separation inherent to traffic sign imagery. These findings demonstrate that, while XAI methods such as NoiseCAM provide valuable insight into model behavior, explanation-space inconsistencies alone are insufficient as reliable adversarial detection signals in low-resolution, safety-critical perception pipelines. The study highlights the need for standardized evaluation frameworks and hybrid detection strategies that integrate explainability with complementary robustness and uncertainty measures. This study contributes empirical evidence clarifying the practical limits of XAI-based adversarial detection in CAV perception systems and informs the responsible deployment of explainable models in safety-critical applications.
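A detection rate near 53% is what heavily overlapping score distributions produce, which a small simulation makes concrete. The Gaussian parameters and threshold below are illustrative assumptions, not fitted to the paper's data.

```python
import random

random.seed(3)

# Hypothetical explanation-inconsistency scores for clean vs adversarial
# inputs; the heavy overlap is the point being illustrated.
clean = [random.gauss(0.50, 0.15) for _ in range(2000)]
adv = [random.gauss(0.53, 0.15) for _ in range(2000)]

threshold = 0.52  # illustrative decision boundary on the score
tpr = sum(s > threshold for s in adv) / len(adv)     # adversarial flagged
fpr = sum(s > threshold for s in clean) / len(clean)  # clean falsely flagged
print(f"adversarial flagged: {tpr:.0%}; clean falsely flagged: {fpr:.0%}")
```

With distributions this close, no threshold separates the classes well, which is why the abstract argues for hybrid detection signals rather than explanation-space inconsistency alone.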

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

José Ignacio Peláez, Gustavo F. Vaccaro, Felix Infante, David Santo

Abstract: Online reputation systems display aggregated ratings derived from numerical scores and textual reviews of real consumer experiences. These ratings serve as operational estimates of a product or service's value and are used by consumers and organizations as a direct reference for decision-making. However, when suspicious review patterns emerge, such as repetition, extreme ratings, temporal concentration, or low diversity, the perceived value is systematically altered, and the aggregated score no longer reflects the practical evaluation used by users. This perceptual dimension of reputational value has not been modeled in conventional reputation indices. This paper proposes a soft-computing-based reputation adjustment model that quantifies this perceptual change. The model does not replace or reorder the original reputation index (ORI); instead, it introduces a continuous correction layer operating on the displayed rating, modeling the mapping between the aggregated score and the value internalized by users through entropy-weighted indicators of informational disorder. Experimental validation was conducted on 60 participants' product evaluations across eight products. Results show that the conventional rating exhibits a systematic upward bias relative to perceived trust (mean absolute error = 1.27), whereas the adjusted index significantly reduces this bias (mean absolute error = 0.12; paired t-test, p < 0.001). The proposed model corrects perceptual overestimation while preserving the original reputation signal, improving alignment between displayed ratings and effective user trust.
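The correction-layer idea can be sketched with an entropy-weighted adjustment; the specific shrinkage rule and 5-point scale below are hypothetical stand-ins for the paper's soft-computing model.

```python
from collections import Counter
from math import log2

def rating_entropy(ratings, levels=5):
    """Normalized Shannon entropy of the rating distribution:
    0 = all reviews identical, 1 = uniform across all levels."""
    counts, n = Counter(ratings), len(ratings)
    h = -sum((c / n) * log2(c / n) for c in counts.values())
    return h / log2(levels)

def adjusted_rating(ratings, floor=1.0):
    """Hypothetical correction layer (not the paper's exact mapping):
    shrink the displayed mean toward the scale floor as review
    diversity collapses, i.e. as suspicious repetition increases."""
    mean = sum(ratings) / len(ratings)
    return floor + (mean - floor) * (0.5 + 0.5 * rating_entropy(ratings))

suspicious = [5, 5, 5, 5, 5, 5, 5, 5]   # repetitive, extreme reviews
organic = [5, 4, 4, 3, 5, 2, 4, 5]      # same scale, diverse opinions
print(adjusted_rating(suspicious), round(adjusted_rating(organic), 2))
```

As in the paper's design, the adjustment operates on top of the displayed score without reordering the original reputation index.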

Concept Paper
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Dharshini M

Abstract: Space exploration has witnessed accelerated progress over the past twenty-five years, leading to scientific and technological breakthroughs that have reshaped humanity’s understanding of the universe beyond Earth. What began with basic satellite deployments has evolved into complex interplanetary missions supported by both national space agencies and private enterprises. This study analyses a comprehensive dataset of global space missions conducted between 2000 and 2025, including launch dates, participating nations, mission categories, objectives, launch vehicles, and mission outcomes. Using data mining and knowledge discovery techniques, the research identifies recurring patterns in mission frequency, geographic distribution, technological advancement, and international collaboration. Temporal analysis reveals shifts in satellite strategies, scientific priorities, and human spaceflight trends, while clustering methods highlight groups of countries with similar mission profiles. The study further examines the relationship between mission complexity and success rates, the growing adoption of reusable launch systems, and the expanding role of commercial organizations. The findings provide valuable insights for policymakers, space agencies, and researchers, supporting strategic planning and future mission development. Overall, the research demonstrates how data-driven methods enhance the understanding of global space exploration trends and emphasizes the dynamic, collaborative nature of space activities between 2000 and 2025.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Feiyang Wang, Yumeng Ma, Tian Guan, Yutong Wang, Jinyu Chen

Abstract: This study focuses on the problem of autonomous learning for intelligent agents in open-world environments and proposes an agent algorithm framework oriented toward self-exploration and knowledge accumulation. The framework couples hierarchical perception modeling, dynamic memory structures, and knowledge evolution mechanisms to achieve an adaptive closed loop from environmental perception to decision optimization. First, a perception encoding and state representation module is designed to extract multi-source environmental features and form dynamic semantic representations. Then, an intrinsic motivation generation mechanism is introduced, enabling the agent to maintain continuous exploration even without external rewards, thus promoting active discovery and accumulation of knowledge. Meanwhile, a jointly optimized policy network and knowledge updating module is constructed, allowing the agent to continuously integrate new experiences and refine old knowledge during long-term interactions, forming a stable and scalable knowledge structure. Experimental results show that the model achieves superior performance in uncertainty suppression, policy consistency maintenance, and behavioral deviation control, demonstrating its effectiveness and robustness in open-world tasks. This research enriches the theoretical foundation of autonomous learning and provides a feasible technical pathway for building general intelligent systems with self-driven and continuously evolving capabilities.
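The intrinsic-motivation mechanism can be illustrated with a standard count-based novelty bonus (a generic sketch, not the paper's formulation): the agent keeps moving toward less-visited states with no external reward at all. The 1-D toy world is an assumption for brevity.

```python
import random
from collections import defaultdict

random.seed(7)
visits = defaultdict(int)

def novelty(state):
    """Count-based intrinsic reward: high for unseen states, decaying
    with each revisit, so exploration continues without external reward."""
    return 1.0 / (1 + visits[state]) ** 0.5

pos, trajectory = 0, []
for _ in range(50):
    # Greedy over intrinsic reward, with a little noise for tie-breaking.
    pos = max([pos - 1, pos + 1], key=lambda s: novelty(s) + 0.1 * random.random())
    visits[pos] += 1
    trajectory.append(pos)

print("distinct states explored:", len(set(trajectory)))
```

Because revisited states lose their bonus, the agent is pushed steadily into unexplored territory, the behaviour the abstract calls active discovery and accumulation of knowledge.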

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Gwangun Yu, Gilhan Choi, Moonseung Choi, Sun-hong Min, Yonggang Kim

Abstract: Accurate time series forecasting of sea surface temperature (SST) is essential for understanding the ocean climate system and large-scale ocean circulation, yet it remains challenging due to regime-dependent variability and correlated errors across heterogeneous prediction models. This study addresses these challenges by formulating SST ensemble time series forecasting aggregation as a stochastic, sample-adaptive weighting problem. We propose a diffusion-conditioned ensemble framework in which heterogeneous base forecasters generate out-of-sample SST predictions that are combined through a noise-conditioned weighting network. The proposed framework produces convex, sample-specific mixture weights without requiring iterative reverse-time sampling. The approach is evaluated on short-horizon global SST forecasting using the Global Ocean Data Assimilation System (GODAS) reanalysis as a representative multivariate dataset. Under a controlled experimental protocol with fixed input windows and one-step-ahead prediction, the proposed method is compared against individual deep learning forecasters and conventional global pooling strategies, including uniform averaging and validation-optimized convex weighting. The results show that adaptive, diffusion-weighted aggregation yields consistent improvements in error metrics over the best single-model baseline and static pooling rules, with more pronounced gains in several mid- to high-latitude regimes. These findings indicate that stochastic, condition-dependent weighting provides an effective and computationally practical framework for enhancing the robustness of multivariate time series forecasting, with direct applicability to global SST prediction from large-scale geophysical reanalysis data.
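The convex, sample-specific weighting can be sketched without the diffusion machinery: any per-sample conditioning signal passed through a softmax yields non-negative weights summing to one. The recent-error conditioning below is a hypothetical stand-in for the paper's noise-conditioned weighting network.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def adaptive_ensemble(preds, condition):
    """Convex, sample-specific aggregation: weights are non-negative and
    sum to one, so the combination stays inside the forecasters' range."""
    w = softmax(condition)
    return sum(wi * p for wi, p in zip(w, preds))

# Three base forecasters predict SST for one grid cell; conditioning on
# negated recent errors favours the recently accurate model.
preds = [18.2, 18.9, 17.5]
recent_errors = [0.3, 1.1, 0.8]
combined = adaptive_ensemble(preds, [-e for e in recent_errors])
print(f"combined SST forecast: {combined:.2f}")
```

Uniform averaging is the special case of equal conditioning scores; the paper's contribution is making the scores depend on each sample rather than on a single validation set.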

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Wellington Nascimento, Karla Figueiredo, Marco Pacheco, Marco Dias

Abstract: A sequential well placement strategy is important for field development planning under geological uncertainty, because reservoir conditions can change between drilling stages. Motivated by this challenge, this study proposes a hybrid framework that combines convolutional neural networks (CNNs) with a genetic algorithm (GA). The goal is to determine optimal well locations efficiently, while reducing reliance on full-physics reservoir simulation. The methodology uses OPM Flow to generate training datasets for two consecutive six-month periods. This allows the CNN proxy to learn the relationship between permeability realizations, well coordinates, and cumulative oil production. The trained proxies then guide the GA-based optimization in each period. Results for the Egg model show strong predictive performance in both stages. The coefficients of determination are 0.76 and 0.82 for training data, and 0.64 and 0.63 for testing data, in the first and second periods, respectively. In addition, the proxy-based optimization required only about 26% of the computational time of direct simulation in the first period, and roughly 15% in the second. Production estimates were maintained within a 5% error margin. Overall, the proposed sequential, proxy-assisted approach is accurate and computationally efficient for well placement optimization under geological uncertainty.
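The GA-over-proxy loop can be sketched with a synthetic stand-in for the CNN proxy; the quadratic response surface and grid size below are illustrative assumptions, not the Egg model or the trained network.

```python
import random

random.seed(1)
GRID = 60  # illustrative grid extent

def proxy_production(x, y):
    """Stand-in for the trained CNN proxy: a smooth synthetic response with
    one sweet spot (the real proxy maps permeability + coordinates to oil)."""
    return -((x - 40) ** 2 + (y - 25) ** 2) / 100.0

def ga_optimize(fitness, pop_size=30, gens=40, mut=3):
    clamp = lambda v: min(max(v, 0), GRID)
    pop = [(random.randint(0, GRID), random.randint(0, GRID))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda w: fitness(*w), reverse=True)
        elite = pop[:pop_size // 2]                       # selection
        children = []
        while len(elite) + len(children) < pop_size:
            (x1, y1), (x2, y2) = random.sample(elite, 2)  # crossover
            children.append((clamp((x1 + x2) // 2 + random.randint(-mut, mut)),
                             clamp((y1 + y2) // 2 + random.randint(-mut, mut))))
        pop = elite + children
    return max(pop, key=lambda w: fitness(*w))

best = ga_optimize(proxy_production)
print("best well location:", best)
```

Each fitness call here is microseconds; replacing a full-physics OPM Flow run with a proxy call is exactly where the reported 26%/15% compute savings would come from.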

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Woon-Ki Cheon

Abstract: Generative AI models often suffer from hallucinations, proposing molecular structures that are chemically plausible but physically invalid. This study introduces "Project Trinity," a novel architecture that integrates Complex-Valued Neural Networks (CVNN) with a Hallucination Noise Cancellation (HNC) filter. By treating molecular interactions as wave functions, we define "false" information as phase-mismatched signals and eliminate them via destructive interference. Applying this architecture to Alzheimer's Beta-amyloid fibrils, we screened 5 million candidates and identified a single novel compound, AP-2601. In-silico validation confirms that AP-2601 possesses optimal Blood-Brain Barrier (BBB) permeability and successfully disrupts the amyloid beta-sheet structure. This work demonstrates a paradigm shift from probabilistic generation to physical verification in AI-driven drug discovery.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Kanishka W. Palihakkara, Mahesh N. Jayakody

Abstract: This study investigates the performance of four deep learning architectures, Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Convolutional Neural Network (CNN), and Transformer, for univariate time-series forecasting. To evaluate their ability to capture different temporal dynamics, we selected two contrasting datasets: Apple Inc. (AAPL) stock prices, characterized by noise and volatility without clear seasonality, and Melbourne’s daily minimum temperatures, which exhibit strong seasonal patterns. Each model was trained using consistent configurations and evaluated using standard metrics. Our results show that GRU and LSTM perform best across both domains, particularly in handling abrupt changes in financial data, while CNN and Transformer show competitive performance on smoother seasonal data. The findings highlight the importance of aligning model architecture with the underlying structure of the time series.
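The shared evaluation protocol behind such comparisons is simple to sketch; the deep models themselves are out of scope here, so a naive last-value forecaster stands in, scored exactly as each architecture would be. The synthetic seasonal series is an illustrative assumption.

```python
from math import sqrt, pi, sin

def make_windows(series, lookback):
    """Slice a series into (lookback window, next value) pairs."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return X, y

def rmse(t, p):
    return sqrt(sum((a - b) ** 2 for a, b in zip(t, p)) / len(t))

def mae(t, p):
    return sum(abs(a - b) for a, b in zip(t, p)) / len(t)

# Synthetic seasonal series standing in for daily minimum temperatures.
series = [10 + 5 * sin(2 * pi * t / 365) for t in range(800)]
X, y = make_windows(series, lookback=30)

# Naive last-value forecaster as the common reference; each architecture
# (LSTM/GRU/CNN/Transformer) would be scored on the same windows.
preds = [w[-1] for w in X]
print(f"naive baseline: RMSE={rmse(y, preds):.3f}  MAE={mae(y, preds):.3f}")
```

Holding the windowing, split, and metrics fixed across models is what makes the architecture comparison in the abstract meaningful.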

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Harris Wang

Abstract: This paper presents GISMOL (General Intelligent System Modelling Language), a Python-based framework implementing Constrained Object Hierarchies (COH)—a neuroscience-inspired theoretical framework for Artificial General Intelligence (AGI). COH and GISMOL together provide a unified language for modelling and implementing intelligent systems across diverse domains including healthcare, manufacturing, finance, and governance. The framework bridges symbolic AI and neural computation through its core architecture of constraint-aware objects with embedded neural components, hierarchical reasoning capabilities, and natural language integration. We demonstrate how GISMOL translates COH’s formal 9-tuple representation into executable systems with six comprehensive case studies, showing its versatility in modelling complex intelligent behaviors while maintaining theoretical rigor. The implementation includes specialized modules for neural integration, multi-domain reasoning, and natural language processing, all built around the COHObject abstraction that encapsulates intelligence as constrained hierarchical structures.
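The constraint-aware object idea can be caricatured in a few lines (a toy sketch under loose assumptions, not GISMOL's actual COHObject or the 9-tuple representation): attribute updates are accepted only if every registered constraint still holds.

```python
class ConstrainedObject:
    """Toy constraint-aware object: state changes must satisfy all
    registered constraints; children allow hierarchical composition."""
    def __init__(self, name):
        self.name, self.attrs = name, {}
        self.constraints, self.children = [], []

    def add_constraint(self, check, message):
        self.constraints.append((check, message))

    def set(self, key, value):
        trial = dict(self.attrs, **{key: value})   # trial state
        for check, message in self.constraints:
            if not check(trial):
                raise ValueError(f"{self.name}: {message}")
        self.attrs = trial                          # commit only if valid

    def add_child(self, obj):
        self.children.append(obj)

# Hypothetical healthcare-flavoured example (names are made up).
patient = ConstrainedObject("patient-monitor")
patient.add_constraint(lambda a: a.get("heart_rate", 60) > 0,
                       "heart rate must be positive")
patient.set("heart_rate", 72)        # accepted
try:
    patient.set("heart_rate", -5)    # rejected by the constraint
except ValueError as e:
    print(e)
```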



Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2026 MDPI (Basel, Switzerland) unless otherwise stated