Computer Science and Mathematics


Article
Computer Science and Mathematics
Computational Mathematics

Basem Ajarmah, Saber Syouri

Abstract: Managing inventory for perishable goods remains a persistent operational challenge, largely because conventional exponential decay models struggle to capture the irregular deterioration patterns observed in practice. This paper develops the Reliable Fractional Derivative (RFD) framework, which incorporates memory effects into the modeling of product decay through a time-shifted kernel. Unlike standard approaches that assume constant deterioration, this formulation accommodates both accelerating and decelerating patterns depending on product characteristics and storage conditions. We derive closed-form expressions for optimal ordering quantities under both deterministic and stochastic demand, then test the framework's performance through numerical experiments spanning two thousand parameter combinations. The analysis reveals that RFD models deliver the greatest improvements when deterioration rates are steep, holding costs are substantial, or storage horizons are extended—conditions under which switching from conventional methods yields average cost reductions approaching nineteen percent, with substantially larger gains in certain cases. A pharmaceutical application confirms savings between 3.6 and 9.1 percent relative to misspecified traditional models. These findings connect with recent industry movements toward more sophisticated safety-stock practices, offering managers a principled basis for selecting inventory policies aligned with actual product behavior rather than assuming decay conforms to simpler theoretical forms.
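The contrast between a constant-rate decay model and a memory-kernel alternative can be illustrated with a short sketch. This is illustrative only: the function names, the truncated Mittag-Leffler series, and the parameter values are assumptions, not the paper's RFD formulation.

```python
import math

def exponential_decay(q0, theta, t):
    """Classical perishable-inventory decay with constant rate theta."""
    return q0 * math.exp(-theta * t)

def memory_decay(q0, theta, alpha, t):
    """Illustrative memory-kernel decay using a truncated Mittag-Leffler
    series E_alpha(-theta * t**alpha). alpha = 1 recovers the exponential
    model; alpha < 1 decelerates decay at long times, alpha > 1
    accelerates it early on."""
    x = -theta * t**alpha
    return q0 * sum(x**k / math.gamma(alpha * k + 1) for k in range(40))
```

With alpha = 1 the two curves coincide; with alpha = 0.8 the memory model retains more stock at long horizons, the kind of decelerating deterioration the abstract says standard exponential models miss.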

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Zhou Yang, Siming Han, Ming Wu

Abstract: Few-shot remote sensing scene classification (FSRSSC) entails identifying image scene classes from limited labeled samples, facing the challenges of labeled-data scarcity as well as the intricacy and variety of remote sensing images, with high intraclass variance and interclass similarity. To address these challenges, we propose a novel framework named MMFF-Net, which consists of four key components: diffusion augmentation (DA), multiscale feature fusion (MSFF), a dual attention fusion module (DAFM), and information interaction mutual attention (IIMA). The DA is utilized to augment the support set with high-quality samples. The MSFF focuses on obtaining local spatial details, and the DAFM fuses the local and global features. Moreover, the IIMA module is employed to exchange information between the query set and the support set. Additionally, we use word2vec to obtain semantic features, reducing the disparity between semantic and visual features via an LSE loss. Comparative experimental results against multiple models on three benchmark remote sensing scene (RSS) datasets validate the effectiveness of the proposed MMFF-Net, showcasing the superiority and feasibility of our approach in most FSRSSC cases.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Mohsen Mostafa

Abstract: This paper introduces Bayesian R-LayerNorm, a novel normalization layer that extends the previously proposed R-LayerNorm with formal mathematical foundations and uncertainty quantification. Building upon the empirical success of R-LayerNorm, we present a complete mathematical formalism using statistical field theory, renormalization group methods, and information geometry. Our approach provides provable stability guarantees through three theorems: numerical stability, gradient stability, and training convergence. The Bayesian extension incorporates uncertainty estimation through a stable ψ-function, enabling adaptive noise suppression based on local entropy estimates. A key contribution is the integration of uncertainty quantification directly into the normalization operation, providing confidence estimates for each normalized activation without additional cost. The method is adaptive to local noise, varying its normalization strength spatially based on estimated noise levels. Despite its theoretical depth, the implementation is simple and serves as a drop-in replacement for existing normalization layers, adding only two learnable parameters per layer. Experimental validation on the full CIFAR-10-C dataset demonstrates consistent improvements: Bayesian R-LayerNorm achieves average accuracy gains of +0.49% over standard LayerNorm across four common corruptions, with the largest improvement of +0.74% on shot noise. The method requires minimal computational overhead (∼ 10%) and we provide complete open-source implementation. We further show that the learned λ parameters offer interpretability, revealing which layers adapt most strongly to different corruptions. While the accuracy gains are modest, the framework opens new directions for trustworthy and interpretable normalization in safety-critical applications where uncertainty matters as much as accuracy.
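The abstract's ψ-function is not specified here, but the general idea of uncertainty-gated normalization can be sketched in numpy. Everything below is hypothetical: the noise proxy, the single `lam` gate, and the function name are assumptions standing in for the paper's actual construction.

```python
import numpy as np

def uncertainty_layernorm(x, gamma=1.0, beta=0.0, lam=0.5, eps=1e-5):
    """Hypothetical sketch of uncertainty-gated layer normalization
    (not the paper's exact psi-function). lam plays the role of one
    of the extra learnable parameters the abstract mentions."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    z = (x - mu) / np.sqrt(var + eps)
    # crude local-noise proxy: inverse signal-to-noise of each feature vector
    uncertainty = 1.0 / (1.0 + np.abs(mu) / np.sqrt(var + eps))
    # suppress activations more strongly where estimated noise is high
    z = z / (1.0 + lam * uncertainty)
    return gamma * z + beta
```

Because the gate is a per-row scalar, the output stays zero-mean per feature vector, so the layer remains a drop-in replacement for standard LayerNorm.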

Concept Paper
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Abdulmohsen H. Alrohaimi

Abstract: Genomic regulation is typically interpreted through observable molecular states such as gene expression, chromatin accessibility and epigenetic modifications. However, biological systems also contain large reservoirs of genomic information that remain transcriptionally inactive for extended periods while retaining the capacity to influence future regulatory behaviour. This phenomenon, referred to here as gene latency, suggests that genomes may preserve forms of biological memory beyond currently expressed molecular states. Recent advances in artificial intelligence—particularly transformer-based architectures—demonstrate how complex systems can encode structured information within latent representational spaces that influence outputs without continuous activation (Vaswani et al., 2017; Brown et al., 2020). In this study, we propose a conceptual framework that interprets gene latency as a form of genomic memory using principles derived from latent representation learning in artificial intelligence. By aligning concepts from systems biology, epigenetics and machine learning, we outline theoretical and computational perspectives for identifying latent regulatory potential within genomic systems. This framework suggests that genomes may contain distributed reservoirs of regulatory capacity shaped by developmental history and environmental exposure. Integrating artificial intelligence with genomic theory may therefore enable new approaches for modelling latent regulatory states and predicting transitions from genomic latency to activation.

Article
Computer Science and Mathematics
Security Systems

Zhen Li, Kexin Qiang, Yiming Yang, Zongyue Wang, An Wang

Abstract: In side-channel analysis, simple power analysis (SPA) is a widely used technique for recovering secret information by exploiting differences between operations in traces. However, in realistic measurement environments, SPA is often hindered by noise, temporal misalignment, and weak or transient leakage, which obscure secret-dependent features in single or very few power traces. In this paper, we provide a systematic analysis of moving-skewness-based trace preprocessing for enhancing asymmetric leakage characteristics relevant to SPA. The method computes local skewness within a moving window along the trace, transforming the original signal into a skewness trace that emphasizes distributional asymmetry while suppressing noise. Unlike conventional smoothing-based preprocessing techniques, the proposed approach preserves and can even amplify subtle leakage patterns and spike-like transient events that are often attenuated by low-pass filtering or moving-average methods. To further improve applicability under different leakage conditions, we introduce feature-driven window-selection strategies that align preprocessing parameters with various leakage characteristics. Both simulated datasets and real measurement traces collected from multiple cryptographic platforms are used to evaluate the effectiveness of the approach. Experimental results indicate that moving-skewness preprocessing improves leakage visibility and achieves higher SPA success rates compared to commonly used preprocessing methods.
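The core preprocessing step, local skewness over a moving window, can be written down directly. The window length and the small stabilizing constant are illustrative choices; the paper's feature-driven window selection would set them from leakage characteristics.

```python
import numpy as np

def moving_skewness(trace, window):
    """Transform a power trace into a skewness trace: the third
    standardized moment computed over a sliding window. Distributional
    asymmetry (e.g. spike-like transient leakage) is emphasized, while
    symmetric noise averages toward zero skewness."""
    out = np.zeros(len(trace) - window + 1)
    for i in range(len(out)):
        w = trace[i:i + window]
        mu, sigma = w.mean(), w.std()
        out[i] = ((w - mu) ** 3).mean() / (sigma ** 3 + 1e-12)
    return out
```

Unlike a moving average, a solitary spike is not attenuated: any window containing it produces a strongly positive skewness value, which is the asymmetry-preserving behavior the abstract contrasts with low-pass filtering.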

Article
Computer Science and Mathematics
Algebra and Number Theory

Xian Wang, Luoyi Fu

Abstract: This study aims to prove the Riemann Hypothesis and the Generalized Riemann Hypothesis by extending the Riemann zeta function and Dirichlet L-functions to the elliptic complex domain, based on a newly constructed system of elliptic complex numbers Cλ (λ < 0). The core challenge addressed is the inherent difficulty in resolving these conjectures within the traditional "circular complex domain" framework (λ = −1); the author posits that a complete proof is unattainable strictly within this conventional setting. The primary innovation of this work lies in the formulation of the theory of elliptic complex numbers, specifically identifying the limiting case λ → 0− as the key to the proof. Through rigorous deduction, a bijective correspondence between zeros across different complex planes is established. By employing proof by contradiction and leveraging the correspondence between Cλ (as λ → 0) and the circular complex plane C, the Riemann Hypothesis and the Generalized Riemann Hypothesis are ultimately proven. This paper is organized into three parts: (1) Construction and Geometric Properties: the first part details the construction of elliptic complex numbers and their fundamental geometric properties, laying the necessary foundation for subsequent analysis and the proof of the conjectures. (2) Analytic Extension: the second part introduces elliptic complex numbers into mathematical analysis, deriving numerous results analogous to those in classical complex variable function theory. (3) Proof of Conjectures: the final part presents the formal proofs of the Riemann Hypothesis and the Generalized Riemann Hypothesis.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Sudhakavya Bodapati Venkata

Abstract: Hybrid use of Terraform for infrastructure and Ansible for configuration is common on Azure, but the two tools are often joined only by ad hoc scripts and fragile handoffs in CI pipelines. Runbook Mesh proposes a small MCP-based control plane that treats Terraform and Ansible as one coordinated change unit rather than two independent stages. Azure DevOps triggers an MCP server that drives a deployment state machine: it receives Terraform plans and apply results, derives a dynamic Ansible inventory from Terraform outputs, and orchestrates configuration playbooks with drain, cordon, and health checks for VM scale sets, AKS nodes, and virtual machines. The MCP enforces simple invariants on ordering, handoff safety, and rollback reachability, and packages each deployment into a witness bundle containing plan digests, state and inventory hashes, play outcomes, and Azure Resource Graph snapshots. The result is an Azure-native pattern where infrastructure and configuration share a single timeline, a defined rollback path, and a tamper-evident change ledger suited to regulated environments.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Ting Liu

Abstract: This study develops a leakage-safe PCA–APT framework that constructs an idiosyncratic market-stress index from cross-sectional residual dispersion and evaluates its usefulness for anticipating equity drawdowns. Using daily adjusted prices for SPY and 11 U.S. sector ETFs from 2020–2025, we compute sector excess returns (sector minus SPY), estimate a low-dimensional common component via principal component analysis (PCA), and define residual stress as the cross-sectional root-mean-square magnitude of PCA reconstruction residuals. To prevent look-ahead bias, the PCA mapping is estimated using information available only through t−1, stress is computed out-of-sample at t, and stress regimes are identified using a rolling train-only quantile threshold that is shifted forward by one trading day. Drawdown-warning performance is assessed using drawdown-onset events and early-warning classification metrics (ROC-AUC, PR-AUC, and horizon-H precision/recall). Empirically, residual-stress spikes cluster around drawdown onsets and provide predictive information, although a volatility-based benchmark remains stronger on average across discrimination metrics. Importantly, residual stress exhibits state-dependent complementarity with volatility: conditional on low volatility, high residual stress is associated with a materially higher probability of a drawdown onset within the next H=21 trading days (approximately 17% vs. 8%), and the joint high-stress/high-volatility regime identifies the highest-risk states (approximately 36% onset probability). Event-level overlap diagnostics further indicate that residual stress can flag a subset of drawdown onsets not captured by a volatility-threshold rule, while some onsets are not preceded by either signal. Economic relevance is examined under transaction costs through (i) a residual-ranked sector long–short portfolio and (ii) stress-managed SPY overlays that reduce exposure during detected regimes.
In the baseline sample, a volatility-managed overlay improves drawdown control relative to buy-and-hold, whereas the residual-stress overlay does not reduce maximum drawdown and the residual-ranked long–short strategy is not robustly profitable after costs. Overall, the paper contributes a reproducible, leakage-safe evaluation pipeline linking cross-sectional residual dispersion to drawdown risk and clarifies when residual stress serves as a complementary market-structure risk indicator alongside standard volatility-based signals.
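The leakage-safe construction, fitting the PCA mapping on data through t−1 and scoring day t out-of-sample, can be sketched as follows. The lookback length, component count, and function name are illustrative, not the paper's exact settings.

```python
import numpy as np

def residual_stress(excess_returns, lookback=60, k=3):
    """Leakage-safe residual stress: at each day t, fit PCA on rows
    [t-lookback, t) only, then measure the cross-sectional RMS of day t's
    reconstruction residual. Returns NaN during the warm-up period."""
    T, N = excess_returns.shape
    stress = np.full(T, np.nan)
    for t in range(lookback, T):
        train = excess_returns[t - lookback:t]
        mu = train.mean(axis=0)
        # principal axes estimated from the training window only
        # (information available through t-1, per the abstract)
        _, _, Vt = np.linalg.svd(train - mu, full_matrices=False)
        V = Vt[:k].T                        # N x k loading matrix
        r = excess_returns[t] - mu
        resid = r - V @ (V.T @ r)           # out-of-sample reconstruction residual
        stress[t] = np.sqrt((resid ** 2).mean())
    return stress
```

The rolling train-only quantile threshold the abstract describes would then be applied to this series, again shifted forward one day before regime labels are used.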

Concept Paper
Computer Science and Mathematics
Computer Networks and Communications

Edet Ekpenyong, Ubio Obu, Godspower Emmanuel Achi, Clement Umoh, Duke Peter, Udoma Obu

Abstract: In blockchain ecosystems, maintaining both transparency and privacy has become an ethical dilemma. While certain specific user information is shared to ensure transparency of transactions across networks, such information could be detrimental to the user, as there is a possibility of it being tampered with. For instance, in the Catalyst voting process in Cardano, users can see the amount of ADA tokens held by other users, which can influence their voting choices, especially when large ADA holders vote in support of certain ideas or proposals. To discourage challenges such as voter manipulation and vote buying, this study proposes the implementation of zero-knowledge proofs (ZKPs) in blockchain ecosystems to enhance the transparency of the Catalyst voting process and to improve the efficiency and speed of result release. Using a survey questionnaire and a multivocal literature review, this study shows that ZKPs can not only be applied in the Catalyst voting process to enhance its transparency, but can also address potential challenges to their application such as scalability, encourage trust and fairness in the voting system, and improve voter participation owing to their user-friendliness. Mathematical models emphasize scaled voting as optimal for balancing inclusion and plutocratic control.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Rao Mikkilineni, W. Patrick Kelly

Abstract: Contemporary enterprise IT operations are implemented largely atop Shannon–Turing computing: programs execute read–compute–write cycles over data structures, while governance (fault handling, configuration control, auditability, and continuity) is applied externally through infrastructure platforms, observability stacks, and human processes. This separation scales analytic throughput but accumulates coherence debt: locally expedient commitments whose provenance and revisability degrade until exposed by shocks (failures, security incidents, regulatory demands, or architectural transitions). We synthesize a model evolution that integrates computation with regulation at two distinct levels: (i) Distributed Intelligent Managed Elements (DIME), which modifies the Turing cycle to read–check-with-oracle–compute–write by infusing a signaling overlay and FCAPS (Fault, Configuration, Accountability, Performance, Security) supervision into computation in progress; and (ii) AMOS, which fully decouples the process executor from governance by treating any Turing-equivalent engine as a replaceable execution substrate while elevating knowledge structures—encoded as local and global Digital Genomes—to first-class operational state in a governed Knowledge Network. We further present implementation evidence via a microservice transaction testbed that operationalizes dynamic topology as data, a capability-oriented control plane, decoupled application-layer FCAPS from IaaS/PaaS FCAPS, and policy-selectable consistency/availability semantics. We argue that the principal benefit of AMOS is not “circumventing” impossibility results such as CAP, but governing their trade-offs as explicit commitments with auditable lineage and controlled convergence back to coherent state.

Article
Computer Science and Mathematics
Probability and Statistics

Gonçalo Melo de Magalhães

Abstract: Machine learning's dominant paradigm—whether model-centric or data-centric—treats intelligence as the extraction of statistical patterns from behavioral records. This approach has delivered remarkable engineering feats. Yet something foundational is missing. Data is not reality: it is a finite record of trajectories through reality. A photograph of a river is not the river's law. This paper argues that the data paradigm conflates measurement with mechanism, capturing where systems have been rather than why they go there. We propose an alternative grounded in the Architecture of Freedom Intelligence (AFI), which identifies navigability—the structural availability of paths—as the primary organizing principle of all complex systems. The Law of Freedom, F = P/D, states that navigational capacity equals differentiation capacity (Perception, P) divided by structural resistance (Distortion, D). Under this framework, intelligence is not pattern memorization but distortion navigation: all systems move according to dx/dt = −P(x)·∇D(x), following gradients of resistance scaled by perceptual capacity. We demonstrate that this gradient law is structurally identical to Fick's diffusion, Berg–Brown chemotaxis, Ohm's law, and gradient descent—revealing a deep structural unity that the data paradigm treats as coincidental analogy. Nature does not train on labeled datasets: ants, neurons, immune cells, and ecological populations navigate through calibrated heuristics on Perception and Distortion fields, not through backpropagation over historical trajectories. This observation motivates a fundamental reconceptualization of what training should accomplish. We propose Freedom Intelligence Training (FIT): a learning paradigm oriented toward learning P and D fields directly, rather than fitting statistical correlations over behavioral snapshots. 
FIT rests on five predictions: (i) models trained on P–D fields require exponentially less data than pattern-extraction models; (ii) generalization improves because P–D fields encode causal structure; (iii) out-of-distribution performance improves because navigability laws transfer across domains; (iv) interpretability is natural since every prediction decomposes into ΔP and ΔD contributions; (v) the exploration–exploitation transition is quantifiable as the coefficient of variation of the Freedom field crossing 1.0. We provide ten falsification criteria and position FIT within the emerging landscape of world models, physics-informed learning, and causal inference. This is a theoretical proposal; a complete experimental roadmap is provided.
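The gradient law dx/dt = −P(x)·∇D(x) stated above can be integrated numerically; with uniform perception and a quadratic distortion field it reduces to plain gradient descent, matching one of the structural equivalences the abstract claims. A minimal sketch, with illustrative names and step sizes:

```python
import numpy as np

def navigate(x0, P, grad_D, dt=0.01, steps=1000):
    """Euler integration of the gradient law dx/dt = -P(x) * grad D(x):
    the state descends the Distortion field, scaled by Perception."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - dt * P(x) * grad_D(x)
    return x

# With D(x) = |x|^2 (so grad D = 2x) and uniform perception P = 1,
# the trajectory contracts toward the distortion minimum at the origin,
# exactly as ordinary gradient descent would.
```

Swapping in a spatially varying P(x) models a navigator whose step size depends on local differentiation capacity, the P/D structure the framework emphasizes.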

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Arda Yunianta

Abstract: Achieving better performance in automated pneumonia diagnosis remains a challenge. Convolutional neural networks (CNNs) have successfully automated pneumonia diagnosis through the analysis of chest X-ray images, and they can be combined with other methods to improve prediction and classification accuracy. The aim of this research is to propose an innovative framework for pediatric pneumonia diagnosis that unites three fine-tuned pre-trained CNN models, EfficientNetB0, ResNet50, and MobileNetV2, through feature fusion to achieve better performance. The mixed-model architecture provides an ideal solution for time-sensitive clinical applications operating in resource-constrained environments. The proposed framework maintains excellent sensitivity and specificity, which clinical use requires in order to minimize false-negative and false-positive results. Furthermore, the proposed framework outperformed the individual models and compared favorably to previous studies on pneumonia classification, achieving an accuracy of 96.14%, a precision of 94.10%, a recall of 96.92%, and an F1-score of 94.97%.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Ling Yue, Ching-Yun Ko, Pin-Yu Chen, Shimin Di, Shaowu Pan

Abstract: Large language models (LLMs) are evolving from chatbots with limited tool-using capabilities to agentic AI systems that can perform deep research, assist in proposing hypotheses, help design experiments, automate data analysis, and draft scientific reports. However, there are currently two bottlenecks limiting LLMs' real-world impact on the broader scientific research community beyond academic demonstrations: lack of interoperability (repetitive manual tool-integration is required across scenarios) and the need for scalable coordination (unstructured communication and memory become brittle as the number of agents grows). In this Perspective, we argue that the next phase of agentic scientific discovery requires the development of an \emph{ecosystem} of protocol-native agents and tools organized through hierarchies inspired by human society, beyond the current paradigm of a single monolithic ``AI scientist''. We use Model Context Protocol (MCP) as a concrete example of an emerging interoperability layer for scientific tool and context exchange, and we propose three complementary pathways to increase the scaling capabilities of an MCP-native scientific ecosystem by addressing the composability issues: (1) MCP servers for high-value scientific tools maintained by domain experts, (2) automated transformation of existing code repositories into MCP services, and (3) autonomous invention and evolution of new agents and workflows. Finally, we provide a practical roadmap for scaling AI-driven scientific discovery by expanding tool supply and coordination in MCP-native scientific ecosystems.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Zi-Han Huang, Chen-Wei Liang, Mu-Jiang-Shan Wang

Abstract: RGB–infrared (IR) fusion is an effective way to improve object detection robustness for automotive perception under low-light and adverse-weather conditions. Yet, practical multi-modal detectors still face three issues: imperfect cross-modal alignment, inefficient long-range interaction, and unstable query initialization when modalities exhibit inconsistent evidence. This paper presents CMAFNet, a deployment-oriented cross-modal alignment and fusion network with three key designs. (1) A Dynamic Receptive Backbone (DRB) extracts multi-scale features with adaptive receptive fields for both modalities. (2) A Channel-Split Mamba Block (CSM-Block) models long-range cross-modal dependencies using selective state-space modeling with linear complexity in token length, enabling an efficient accuracy–latency trade-off. (3) A Global Multi-modal Interaction Network (GMIN) performs fine-grained alignment and adaptive fusion via dual-branch cross-attention guided by global average/max pooling. In addition, an uncertainty-minimal query selection strategy and a separable dynamic decoder further enhance detection stability and efficiency. Experiments on M3FD and FLIR-Aligned show that CMAFNet achieves 83.9% mAP50 and 84.2% mAP50, respectively, while maintaining competitive inference efficiency, supporting real-time automotive deployment on compute-constrained platforms.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Abdul Sittar, Mateja Smiljanic, Alenka Guček, Marko Grobelnik

Abstract: The proliferation of fake news across social media, headlines, and news articles poses major challenges for automated detection, particularly in multilingual and cross-media settings affected by data imbalance. We propose a fake news detection framework based on LLM-driven, feature-guided text augmentation. The method generates realistic synthetic samples across languages, media types, and text granularities while preserving factual structure and stylistic coherence. Experiments with classical and transformer-based models (Random Forest, Logistic Regression, BERT, XLM-R) across social media, headline, and multilingual news datasets show consistent improvements in performance. LLM-based augmentation improves overall accuracy by up to 1.6% over imbalanced baselines and increases minority-class F1-scores by up to 2.4% in low-resource languages such as Swahili. Hybrid fact- and style-based models achieve up to 93.8% accuracy with more balanced class-wise F1-scores and reduced language-related disparities, demonstrating improved robustness and cross-lingual generalization.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Jonathan P. Bowen

Abstract: Formal methods are software engineering approaches with a rigorous mathematical basis that can be used in helping to ensure the correctness of software systems, especially where safety or security is critical. Artificial Intelligence (AI) has developed very rapidly in the area of Generative AI (GenAI), where questions can be answered with increasingly impressive but potentially unreliable and variable results. This paper surveys research in integrating the two approaches in a synergistic manner. Traditionally, such explorations have required significant manual effort in searching for and evaluating existing research. However, most relevant publications are now accessible online, and AI tools are increasingly good at answering research questions with more and more reliability. This paper takes the approach of using GenAI to evaluate research questions on combining formal methods and AI-related techniques. The paper assesses the usefulness and validity of these results. With recent improvements in AI search tools, the approach is now a useful aid to researchers, significantly reducing the time needed to survey existing research, while always requiring human checking by an expert. In addition, the combination of formal methods and AI approaches looks to be an interesting and beneficial research area with potential industrial-scale applications in the future.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Lameck Mbangula Amugongo, Lena Schaller, Maarten van Dijk, Helene Wendt, Claudia Neumann, Andreas Freisinger, Jaroslaw Deska

Abstract: Background: Regulatory frameworks such as the Belmont Report, the Common Rule, and the Declaration of Helsinki require informed consent to ensure participants understand a study’s purpose and can make voluntary decisions about their involvement. Regulations including the General Data Protection Regulation (Regulation (EU) 2016/679) further emphasise that consent must be freely given and revocable without disadvantage. Although informed consent forms (ICFs) are intended to be clear and accessible, they have become increasingly lengthy and complex. Large language models (LLMs) offer potential to navigate and interpret this complexity and have shown promise in biomedical information extraction tasks. However, their susceptibility to hallucinations limits reliability in high-stakes settings. Retrieval augmented generation (RAG) can mitigate such errors. This study evaluates the integration of LLMs with RAG for reviewing data reuse language in ICFs and their ability to interpret complex textual structures. Methods: Firstly, we processed 438 ICFs from different trials, spanning multiple countries, languages, and versions of ICFs. Using expertly curated prompts, we extracted information about data reuse using GPT-4.1. Comparing the LLM-generated data reuse outputs with human expert ground truth, we evaluated accuracy and the time required to extract information for each ICF. To further validate the workflow, we evaluated an independent set of 488 ICFs spanning additional trials, languages, and regions. For this cohort, we assessed the correctness of LLM outputs along with the quality of supporting evidence provided by the model. Results: Across 438 ICFs, the system achieved 81.6% accuracy, which increased to 90% in a subsequent evaluation of an additional 488 ICFs after prompt optimisation. Using a RAG-based approach, the system accurately extracted data reuse information across multiple languages and identified nuanced international regulatory requirements.
Conclusion: This approach has the potential to significantly alleviate administrative burdens by automating labour-intensive processes, while also generating insights that could inform the standardisation of ICF creation. Ultimately, these advancements may contribute to reduce the complexity of ICFs, thereby improving their readability and comprehensibility for participants.

Article
Computer Science and Mathematics
Discrete Mathematics and Combinatorics

Ibar Federico Anderson

Abstract: For every prime p and every integer a, the backward finite difference δp(a) := a^p − (a − 1)^p equals the cyclotomic binary form Φp(a, a − 1), where Φp(X, Y) is the homogenisation of the p-th cyclotomic polynomial, and hence equals the norm NQ(ζp)/Q(a − ζp(a − 1)). For p = 3 this specialises to the identity δ3(a) = NZ[ω](a − ω(a − 1)), where ω = e^(2πi/3), connecting the individual cubic finite difference obtained by differencing the classical sum formula of Nicomachus of Gerasa (~100 CE) with the Eisenstein norm that appears in Euler's factorisation of a^3 + b^3. We develop this identity in three directions: (a) General cyclotomic framework. For each prime p, every prime divisor q of δp(a) satisfies q ≡ 1 (mod p), imposing an arithmetic sieve whose density ~1/(p−1) grows increasingly severe with p. (b) Arithmetic density. The values {δ3(a)}a≥1 form a thin subfamily of the Löschian numbers (norms in Z[ω]), with counting function ~√(N/3) versus the Landau-Ramanujan asymptotic CN/√log N for all Löschian numbers up to N. (c) Three-language equivalence. For the cubic case we prove a precise equivalence among: (i) divisibility of δ3(a), (ii) multiplicative order modulo q, and (iii) splitting of q in Z[ω]. We also give an elementary proof of the base case 1 + b^3 = c^3 (no positive-integer solutions), and derive 3-adic constraints on any hypothetical solution to a^3 + b^3 = c^3 via the Lifting-the-Exponent Lemma, without invoking unique factorisation in Z[ω].
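The congruence sieve in (a), that every prime divisor q of δp(a) satisfies q ≡ 1 (mod p), is easy to check numerically. A small verification script (trial-division factoring is used only for illustration):

```python
def delta(p, a):
    """Backward finite difference delta_p(a) = a**p - (a-1)**p."""
    return a**p - (a - 1)**p

def prime_factors(n):
    """Set of prime divisors of n > 1, by trial division."""
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

# Sieve property: every prime divisor q of delta_p(a) is = 1 (mod p).
# Note q = p itself can never divide, since delta_p(a) = a - (a-1) = 1 (mod p)
# by Fermat's little theorem.
```

For example, δ3(2) = 7, δ3(6) = 91 = 7 · 13, and 7 ≡ 13 ≡ 1 (mod 3), consistent with the density ~1/(p−1) sieve the abstract describes.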

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Arjun K. Sharma, Priyanka Dasgupta, Rajesh V. Iyer

Abstract: This study proposes SWEET-RL, a reinforcement learning framework for training LLM agents in multi-turn collaborative reasoning tasks involving human or agent partners. A step-wise critic is trained using intermediate evaluation signals derived from task progression rather than final answers. The method is evaluated on ColBench, consisting of 3,800 multi-turn collaboration sessions across software development and design tasks. SWEET-RL improves long-horizon task success rates by 24.3% and reduces dialogue-level error accumulation by 35.1%, demonstrating stronger robustness in extended collaborative interactions.

Article
Computer Science and Mathematics
Applied Mathematics

Alicia Cordero, Miguel A. Leonardo Sepúlveda, Juan R. Torregrosa, Antmel Rodríguez Cabral, María P. Vassileva

Abstract: A novel two–stage procedure for approximating solutions of nonlinear systems is introduced. The scheme employs two evaluations of the vector function F together with a single Jacobian computation, followed by the resolution of two linear subproblems that share an identical coefficient matrix. This structure reduces the computational burden and enhances the adaptability of the method with respect to existing alternatives. The design of the algorithm is motivated by criteria relating efficiency to the total number of functional evaluations, ensuring that the resulting strategy achieves the optimal convergence order permitted within this framework. A proof of the local convergence order is provided, and its accuracy is supported by a series of experiments on distinct nonlinear models, including problems arising from differential equations. The numerical evidence confirms that the developed technique reaches the theoretical convergence rate and performs favorably when compared with other methods of equal order. Moreover, we examine the dynamical features of the related parametric variant, offering additional understanding of its stability properties and iterative behavior.
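A classical frozen-Jacobian two-step iteration shares the structure described: two evaluations of F, a single Jacobian, and two linear subproblems with the same coefficient matrix. The sketch below uses that textbook variant purely for illustration; it is not necessarily the authors' exact scheme.

```python
import numpy as np

def two_stage_step(F, J, x):
    """One iteration of a frozen-Jacobian two-stage scheme: two
    evaluations of F, one Jacobian evaluation, and two linear solves
    sharing the same coefficient matrix A = J(x)."""
    A = J(x)                              # single Jacobian computation
    y = x - np.linalg.solve(A, F(x))      # first linear subproblem
    return y - np.linalg.solve(A, F(y))   # second subproblem, same matrix

# In practice the LU factorization of A would be computed once and
# reused for both solves, which is where the cost saving comes from.
```

On a simple diagonal test system F(x) = (x1² − 1, x2² − 4) the iteration converges rapidly to (1, 2), consistent with the high-order behavior such frozen-Jacobian schemes are designed to achieve.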



Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.
