Computer Science and Mathematics

Article
Computer Science and Mathematics
Analysis

Mohsen Soltanifar

Abstract: The standard $\varepsilon$--$\delta$ definition of continuity is inherently quantitative, yet the precise dependence of the admissible radius $\delta$ on the accuracy $\varepsilon$ and the base point $x_0$ is rarely treated as an independent mathematical object. In this paper, we introduce the \textit{radius of continuity} through two variants: the radius of pointwise continuity and the radius of uniform continuity, defined as explicit numerical invariants that capture the maximal symmetric neighborhood on which a real-valued function maintains a prescribed tolerance. We establish the fundamental structural properties of these radii, including their behavior under algebraic operations such as sums, products, and compositions, and demonstrate their inverse relationship to the classical modulus of continuity. Furthermore, we prove that the finiteness pattern of these radii characterizes constant versus non-constant functions. To illustrate the utility of this framework, we derive closed-form expressions for the pointwise radius of quadratic polynomials and the uniform radius of the normal probability density function. These examples highlight how the radius of continuity encodes geometric and probabilistic features, such as local curvature and global scale parameters. Ultimately, this perspective bridges the gap between real analysis and quantitative methods in metric geometry, offering a concrete measure of the stability of a function's continuity.
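The closed form the abstract mentions for quadratics can be sketched concretely. For f(x) = x² and the pointwise radius defined as the largest δ with sup over |x − x0| ≤ δ of |f(x) − f(x0)| bounded by ε, solving δ² + 2|x0|δ = ε gives δ = √(x0² + ε) − |x0|. The derivation and function names below are ours for illustration, not the paper's.

```python
import math

def pointwise_radius_quadratic(x0: float, eps: float) -> float:
    """Largest delta with sup_{|x - x0| <= delta} |x^2 - x0^2| <= eps.

    For f(x) = x^2 the supremum is attained at an interval endpoint and
    equals delta^2 + 2|x0|*delta, so delta solves that quadratic exactly.
    (Derived here for illustration; the paper's general formula may differ.)
    """
    return math.sqrt(x0 * x0 + eps) - abs(x0)

def empirical_radius(f, x0, eps, lo=1e-9, hi=1e6, tol=1e-9):
    """Bisection on delta, checking the tolerance at the interval endpoints
    (sufficient when the deviation peaks at an endpoint, as it does for x^2)."""
    def within(d):
        return max(abs(f(x0 + d) - f(x0)), abs(f(x0 - d) - f(x0))) <= eps
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if within(mid):
            lo = mid
        else:
            hi = mid
    return lo

delta = pointwise_radius_quadratic(2.0, 0.5)  # sqrt(4.5) - 2
```

Agreement between the closed form and the bisection check confirms the derivation for this example.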

Article
Computer Science and Mathematics
Computer Science

V. Thamilarasi

Abstract: The convergence of Neuro-Symbolic AI, Edge Computing, and Reinforcement Learning heralds a transformative era in autonomous engineering design, addressing longstanding challenges in optimization efficiency, real-time responsiveness, and interpretability. Traditional design workflows suffer from siloed neural pattern recognition lacking logical rigor, centralized cloud dependencies creating latency bottlenecks, and heuristic optimization struggling with multi-objective trade-offs in vast design spaces. This paper introduces an integrated framework that synergistically combines these paradigms to create self-sustaining, end-to-end autonomous pipelines for complex engineering applications from aerospace structures to precision manufacturing. Neuro-Symbolic AI fuses deep neural networks for perceptual feature extraction with symbolic reasoning engines enforcing hard constraints and generating auditable proofs, enabling systems that both discover novel configurations and validate them against domain physics. Edge Computing decentralizes inference across device-fog-cloud hierarchies, achieving sub-10ms decision cycles critical for real-time applications like robotic assembly or smart grid stability. Reinforcement Learning optimization engines navigate continuous state-action spaces representing design variables, iteratively refining solutions through shaped rewards aligned with Pareto-optimal engineering objectives such as minimizing mass while maximizing strength-to-weight ratios. The proposed architecture orchestrates these components via directed acyclic graphs of containerized microservices, with federated synchronization ensuring data consistency across distributed nodes and human-in-the-loop interfaces providing strategic oversight for safety-critical decisions. Mathematical formulations ground the system: hybrid loss functions balance learning objectives, edge partitioning optimizes inference placement across the device-fog-cloud hierarchy, and multi-agent RL decomposes collaborative design tasks. Deployed on resource-constrained edge platforms, this framework demonstrates 8-12× acceleration in design cycle times, 25-35% improvements in structural efficiency, and full traceability satisfying aerospace certification standards (DO-178C). By eliminating manual iteration bottlenecks while preserving human insight where needed, the system redefines engineering practice, enabling rapid innovation across domains requiring concurrent optimization of performance, manufacturability, sustainability, and cost.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Edrill F. Bilan

,

Emman T. Manduriaga

,

Hernando S. Salapare III

,

Ymir M. Garcia

,

Khatalyn E. Mata

,

Rose Anna R. Banal

,

Imelda C. Ang

,

Wei-Ta Chu

,

Dan Michael A. Cortez

Abstract: Background/Objectives: Lung cancer survival depends on early detection; however, in the Philippines, high radiologist workloads and the anatomical complexity of chest X-rays (CXRs) contribute to missed pulmonary nodules and false-negative diagnoses. This study aims to develop an enhanced deep learning model to improve nodule classification and localization sensitivity. Methods: We propose RNNet-MST, an extension of ResNet-50 that incorporates Multi-Scale Transformer blocks for global context modeling and a custom spatial attention mechanism for attention-based weak localization of disease-relevant regions. The model was trained and evaluated on the NODE21 chest X-ray dataset and compared with a baseline ResNet-50 using classification metrics, with attention maps used for weak localization analysis. Results: RNNet-MST demonstrated improved performance across evaluated metrics relative to the baseline model. Nodule Recall increased from 86.18% to 93.09% (+6.91%), reducing false negatives. Test Accuracy reached 95.16% (+0.51%), and the Nodule F1-Score improved to 91.40% (+1.50%), indicating better detection of small and subtle nodules. Conclusions: The integration of multi-scale transformer features improved classification sensitivity, while the attention mechanism provided weak localization cues that aligned more closely with annotated nodule regions than the baseline. RNNet-MST shows potential as a diagnostic support tool, warranting further validation on larger and more diverse clinical datasets to reduce perceptual errors and facilitate early lung cancer detection in resource-constrained settings.

Article
Computer Science and Mathematics
Computer Science

P. Selvaprasanth

Abstract: Distributed modern software platforms spanning microservices, serverless functions, and edge computing face unprecedented security threats from stealthy adversaries exploiting encrypted data flows and behavioural camouflage. Conventional defences require decryption for analysis, exposing sensitive information in untrusted cloud environments. This paper proposes an innovative framework integrating homomorphic encryption (HE) with automated threat hunting to enable privacy-preserving threat detection at scale. Using levelled BFV schemes from OpenFHE, we perform computations directly on ciphertexts for anomaly scoring and behavioural profiling, while our hunting engine employs graph neural networks and isolation forests to hypothesize and pursue attacker patterns across distributed logs without plaintext exposure. The architecture deploys as Kubernetes-native operators, processing 10,000 encrypted events per second with 92% detection accuracy on MITRE-emulated scenarios, outperforming traditional UEBA by 35% in F1 score and reducing analysis latency from hours to seconds. Evaluations on AWS EKS clusters demonstrate sub-200ms query times for homomorphic aggregations, with noise management via bootstrapping optimizations. Case studies in fintech pipelines reveal thwarted supply-chain compromises and insider data exfiltration. By revolutionizing secure computation in dynamic ecosystems, our solution bridges cryptography and AI-driven hunting, offering deployable resilience against evolving threats while complying with GDPR and zero-trust mandates. Future work extends to fully homomorphic deep learning for adaptive adversary modelling.

Review
Computer Science and Mathematics
Computer Science

Divyasree Bellary

Abstract: Decentralized applications (DApps) represent a paradigm shift in software architecture, leveraging blockchain technology and distributed consensus mechanisms to eliminate single points of failure and centralized control. As the adoption of DApps accelerates across sectors such as finance, supply chain, healthcare, and governance, ensuring their functional correctness and behavioral reliability has become a critical engineering challenge. Unlike traditional software, DApps operate in adversarial, permissionless environments where smart contracts execute autonomously and immutably on distributed nodes, making post-deployment correction extremely costly or impossible. This review systematically examines the landscape of functional testing methodologies tailored for decentralized applications, analyzing their suitability, limitations, and practical applicability in modern DApp development workflows. We survey research spanning smart contract verification, consensus protocol testing, oracle interaction validation, cross-chain interoperability testing, and user-layer functional testing of Web3 interfaces. The review identifies four dominant testing paradigms: (1) unit testing of smart contract functions, (2) integration testing of DApp components, (3) property-based testing using formal specifications, and (4) end-to-end simulation on testnets. Through comparative analysis across 13 seminal studies, we evaluate each approach along dimensions of automation feasibility, coverage depth, gas efficiency awareness, and scalability to complex DApp ecosystems. Our findings indicate that while static analysis and symbolic execution tools such as Mythril, Slither, and Manticore offer strong vulnerability detection, they address security properties more than functional correctness. 
Conversely, framework-based testing tools like Hardhat, Truffle, and Foundry provide adequate unit-level coverage but struggle with cross-contract orchestration and event-driven logic verification. A critical gap exists in testing oracle-dependent and DAO governance workflows. This review concludes with a synthesis of best practices, open research challenges, and a directional roadmap for developing holistic functional testing frameworks suited to the evolving complexity of decentralized systems.

Article
Computer Science and Mathematics
Computer Science

D. Sneha

Abstract: Blockchain networks now underpin mission-critical services in finance, healthcare, supply-chain logistics, and digital governance, yet production deployments continue to suffer severe resilience failures ranging from Byzantine consensus violations to cross-chain bridge exploits that have collectively caused losses exceeding $2 billion. The root cause is a critical tooling gap: existing frameworks such as BlockBench and Hyperledger Caliper evaluate only crash-fault performance and provide neither adversarial fault modelling nor automated remediation guidance, leaving operators without a rigorous means of holistic resilience assessment prior to deployment. This paper presents the Blockchain Resilience Analysis System (BRAS), a five-layer, platform-agnostic framework that unifies real-time network topology monitoring, multi-class adversarial fault injection, composite resilience scoring, closed-loop adaptive consensus reconfiguration, and structured reporting within a single repeatable pipeline. BRAS introduces the Resilience Index (RI), a mathematically grounded composite metric that aggregates four sub-dimensions (network connectivity, throughput stability, mean-time-to-recovery (MTTR), and Byzantine fault tolerance ratio) into a single interpretable score calibrated to operator-defined service-level objectives. An Adaptive Reconfiguration Module (ARM) monitors the RI stream and autonomously adjusts consensus timeout parameters and peer-connection policies when the RI drops below a configurable threshold, closing the feedback loop between fault detection and remediation without manual intervention. Experimental evaluation on a 20-node Hyperledger Fabric testnet and a 15-node Ethereum Proof-of-Authority network demonstrates that BRAS achieves a 34% reduction in MTTR under simulated eclipse attacks and reduces false-positive fault detections by 28% relative to threshold-only monitoring baselines. The RI metric exhibits strong correlation (r = 0.91, p < 0.001) with independently measured system availability across 50 fault campaigns, validating its predictive utility. BRAS is the first framework to simultaneously address network-layer, consensus-layer, and application-layer resilience threats under a unified, vendor-agnostic architecture, offering both a rigorous theoretical foundation and a deployable implementation blueprint for blockchain resilience engineering.
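The composite-score-plus-threshold loop described in this abstract lends itself to a compact illustration. The four sub-dimensions below follow the abstract, but the equal weights, the MTTR normalization against an operator target, and the 0.7 trigger threshold are assumptions made for the sketch, not BRAS's published formula.

```python
def resilience_index(connectivity, throughput_stability, mttr_s, bft_ratio,
                     mttr_target_s=60.0, weights=(0.25, 0.25, 0.25, 0.25)):
    """Illustrative composite RI in [0, 1].

    connectivity, throughput_stability, bft_ratio are already in [0, 1];
    MTTR is normalized against an operator-defined target so that faster
    recovery scores higher. Weights and normalization are assumptions.
    """
    mttr_score = min(1.0, mttr_target_s / max(mttr_s, 1e-9))
    parts = (connectivity, throughput_stability, mttr_score, bft_ratio)
    return sum(w * p for w, p in zip(weights, parts))

def arm_should_reconfigure(ri_stream, threshold=0.7):
    """ARM-style trigger: act when the latest RI drops below the threshold."""
    return bool(ri_stream) and ri_stream[-1] < threshold
```

In a deployment, `ri_stream` would be the rolling RI values computed per monitoring interval, and the trigger would adjust consensus timeouts and peer policies as the abstract describes.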

Dataset
Computer Science and Mathematics
Computer Vision and Graphics

Igor Garcia-Atutxa

,

Hodei Calvo-Soraluze

,

Francisca Villanueva-Flores

Abstract: Open, well-documented datasets are essential for the reproducible development of vision systems for urban utility management. This Data Descriptor presents a curated RGB object-detection benchmark of four classes associated with electrical distribution and street-level utility assets: Inspection Chamber, Overhead-to-Underground Transition, General Protection Box, and Transformer Substation. The public release contains 997 valid image-label pairs partitioned into 698 training, 150 validation, and 149 test images. Images were acquired during 2019 in multiple localities across Spain, predominantly with a mobile phone and, in occasional cases, using Google Maps as a complementary visual source, and were manually annotated with LabelImg before export to YOLO format. During curation, four invalid image-label pairs were removed because at least one YOLO bounding box exceeded the normalized image domain. The benchmark contains 1,939 object instances, with marked class imbalance: General Protection Box accounts for 50.2% of objects whereas Transformer Substation represents 4.7%. Images are heterogeneous in size and viewpoint, ranging from 90 × 170 to 4160 × 4032 pixels, with a median resolution of 619 × 544 pixels and a median of two annotated objects per image. The public GitHub release is organized into images/, labels/, and metadata/ directories; metadata stores split definitions, classes.txt, data.yaml, inventory information, annotation schema documentation, and diagnostic summary figures. Beyond detector benchmarking, the dataset can support scalable mapping of visible distribution-grid assets, with potential value for smart-city digital twins and data-informed EV charging deployment.
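The curation rule this descriptor mentions (dropping image-label pairs in which any YOLO bounding box exceeds the normalized image domain) can be sketched as a small validity check. The function names are ours, and the dataset's actual curation scripts may differ, for example in how exact boundary values are treated.

```python
def yolo_box_valid(line: str) -> bool:
    """A YOLO label line is 'class x_center y_center width height', with all
    coordinates normalized to [0, 1]. A box 'exceeds the image domain' when
    any of its edges leaves [0, 1]."""
    cls, xc, yc, w, h = line.split()
    xc, yc, w, h = map(float, (xc, yc, w, h))
    return (0.0 <= xc - w / 2 and xc + w / 2 <= 1.0
            and 0.0 <= yc - h / 2 and yc + h / 2 <= 1.0)

def pair_is_valid(label_lines) -> bool:
    """Keep an image-label pair only if every box lies inside the domain."""
    return all(yolo_box_valid(line) for line in label_lines)
```

Running such a check over `labels/` reproduces the kind of filtering that removed the four invalid pairs described above.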

Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Stephen M. Peña

,

Nikos A. Salingaros

Abstract: The concept of the technological singularity is applied here to architecture (of buildings, not software). This is the point at which non-human intelligence surpasses ordinary human cognitive limits. AI betters mainstream architectural culture in one crucial aspect: its capacity to evaluate design that adapts to human emotional health. Postwar building architecture as an institutional power system rewards abstraction and stylistic conformity through media prestige while not accounting for embodied human experience. By narrowing judgment criteria, architectural studio pedagogy tacitly trains for imitation rather than for using evidence that conflicts with dominant formal ideologies. Yet findings from environmental psychology, health-related design research, neuroscience, and recent AI-based studies show that built form measurably affects empathic response and user well-being. This paper examines whether dominant architectural culture imposes population-level cognitive costs by systematically producing informationally-impoverished, stressful environments. The conclusion is that built-environment design suffers from an epistemic closure because (i) architectural education does not foster curiosity in how design affects users (the core mechanism for intelligence development), and (ii) prolonged media exposure habituates populations to ignore distress signals from harmful geometries. A discipline entrusted with human welfare has insulated itself from relevant knowledge, whereas empathic AI can now be directed to apply that knowledge to improve the built environment. In this sense, the singularity is already here in architecture.

Article
Computer Science and Mathematics
Software

Manikanta Reddy P

Abstract: Immutable code and steep transaction fees make smart contract deployment uniquely unforgiving. While continuous integration (CI/CD) pipelines excel at catching standard software bugs, applying exhaustive security tests to Web3 applications severely bottlenecks development through massive computational overhead and gas consumption. This paper presents a testing architecture designed specifically to resolve this tension between security depth and execution speed. The system pipelines three core engines. First, an AI-driven pre-execution gate flags immediate vulnerabilities. Next, a structural reduction module applies the k + 1 symmetric pattern to strip out redundant test permutations. Finally, the system constrains the remaining test suite using the NSGA-II evolutionary algorithm. This multi-objective optimizer dynamically schedules execution to maximize fault detection against strict, predefined gas budgets. To evaluate the model empirically, I bridged a localized EVM sandbox with a Python optimization engine. Results confirm the framework collapses exponential test generation and throttles execution costs without sacrificing critical security coverage. Ultimately, it offers a highly scalable path forward for modern DevSecOps.
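NSGA-II itself is too long to reproduce here, but the Pareto-dominance test it builds on is compact. The sketch below filters candidate test suites scored by (fault detection, gas cost); it is a simplified stand-in for the optimizer the abstract describes, which adds non-dominated sorting ranks, crowding distance, and evolutionary search on top of this core.

```python
def dominates(a, b):
    """a and b are (fault_detection, gas_cost) pairs. a dominates b when it
    is no worse on both objectives (higher detection, lower gas) and
    strictly better on at least one."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def pareto_front(candidates):
    """Non-dominated subset: the first sorting step NSGA-II repeatedly
    applies when ranking a population."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Hypothetical suites: (fraction of seeded faults caught, gas units consumed).
suites = [(0.9, 500), (0.8, 200), (0.9, 400), (0.5, 600)]
front = pareto_front(suites)
```

Here (0.9, 500) and (0.5, 600) are dominated and drop out, leaving the trade-off frontier a gas-budgeted scheduler would choose from.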

Article
Computer Science and Mathematics
Computer Science

Sindhuja A

Abstract: Wildfire spread prediction demands hyper-local accuracy at scales unattainable by traditional physics-based models or coarse satellite observations. This paper introduces a novel Digital Twin AI framework leveraging 5G IoT mesh networks to deliver real-time, 10m×10m resolution fire propagation forecasts with 5-60-minute lead times. Deployed across 1,200 self-healing sensor nodes, the system fuses multi-modal environmental data (thermal anomalies, 3D wind profiles, dynamic fuel moisture) at 100Hz through graph attention networks, feeding physics-informed neural twins synchronized via unscented Kalman filtering. The edge-optimized prediction engine combines convolutional cellular automata with graph neural networks, achieving 42% IoU improvement over FARSITE baselines while executing 8.2ms inference cycles on Jetson Orin NPUs. Federated learning across mesh nodes enables continuous adaptation without compromising operational privacy, while INT4 quantization and RTOS scheduling guarantee sub-10ms end-to-end latency critical for first responder activation. The framework scales linearly to 10K nodes, reduces false alerts by 73%, and maintains 99.999% uptime through dynamic routing around fire-damaged sensors. This work establishes a new paradigm for autonomous wildfire intelligence, transforming reactive response into proactive hyper-local containment.

Article
Computer Science and Mathematics
Computer Networks and Communications

Youssef Ahmedm

,

Ruotong Luan

Abstract: Reinforcement learning (RL) in mobile edge computing (MEC) faces critical challenges of data heterogeneity, communication overhead, and limited generalization across diverse preferences and system configurations. We propose Adaptive Reinforcement Learning Offloading (ARLO), a unified framework integrating adaptive dissimilarity measures for federated learning with generalizable multi-objective optimization for computation offloading. The Adaptive Dissimilarity Measure module leverages parameter dissimilarity with Lagrangian multipliers to mitigate model drift under Non-IID data and loss dissimilarity to reduce communication overhead via adaptive aggregation. The Contextual Multi-Objective Decision module employs histogram-based state encoding and a Generalizable Neural Network Architecture with action masking, enabling a single policy to adapt to varying preferences, server counts, and CPU frequencies. Experiments show ARLO achieves 82.6% accuracy on CIFAR-10 with 44.3% fewer communication rounds than FedProx, and a 121.0% hypervolume improvement in offloading with only 1.7% generalization error across unseen configurations.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Daniel Andrade-Girón

,

William Marin-Rodriguez

,

Américo Peña

,

Elsa Oscuvilca-Tapia

,

Fredy Bermejo-Sanchez

Abstract: Obesity represents a significant public health concern, attributable to its high prevalence and its association with cardiometabolic comorbidities. This study compared a set of ensemble learning models—including canonical ensembles, meta-ensembles, and baselines for tabular data—in a multiclass obesity status prediction task using the “Obesity Dataset” (n = 1,610; 14 predictors; 4 classes). To ensure methodological rigor, a pipeline was implemented using ColumnTransformer, standardization, one-hot encoding, and rebalancing via SMOTENC applied exclusively to the training folds, thereby preventing data leakage. The performance of the system was evaluated using several evaluation metrics, including accuracy, F1-score, precision, recall, Cohen's kappa, and Matthews correlation coefficient. This evaluation was supplemented by a computational cost analysis. Inferential comparisons were executed using the Friedman test and the Nemenyi post-hoc test (α = 0.05). The findings indicated a high level of overall performance (≈89–90.5% precision), identifying a leading group of models that were statistically indistinguishable (Group A). This group included LightGBM (90.49% ± 1.38), Random Forest (90.16% ± 1.70), Stacking (90.21% ± 1.70), and Extra Trees (89.69% ± 1.55). Models such as XGBoost, Bagging, and CatBoost showed competitive performance with partial statistical overlap. Conversely, Gradient Boosting and AdaBoost exhibited significantly lower performance. In summary, a single dominant model was not identified; rather, a set of statistically equivalent solutions emerged. The selection of a model should be based on a balance between accuracy, computational cost, and interpretability. Random Forest and Extra Trees are efficient options, and Stacking is a valid alternative when maximizing predictive performance is prioritized.

Article
Computer Science and Mathematics
Mathematical and Computational Biology

Zhaoxu Meng

,

Yong Cui

Abstract: Exploration remains a central challenge in reinforcement learning, especially in sparse-reward settings where extrinsic feedback alone is often insufficient to guide effective behavior. In this work, we develop a curiosity-driven framework that combines a hybrid intrinsic reward with compact predictive representation learning. Specifically, curiosity is quantified by integrating prediction error with the rarity of state-action pairs in a learned latent space. To make novelty estimation more meaningful for high-dimensional observations such as raw pixels, we employ the Information Bottleneck principle to learn low-dimensional representations that suppress irrelevant variability while preserving predictive structure of the environment dynamics. We further investigate two practical ways to optimize predictive information: one based on entropy decomposition and the other based on matrix-based Renyi entropy. Experiments on Acrobot show that the proposed method substantially improves exploration efficiency over ICM, RND, and a $k$-NN novelty baseline. On MountainCar, however, the improvement is less evident, suggesting that the proposed framework is particularly beneficial in environments with high-dimensional observations or more structured dynamics.
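The hybrid intrinsic reward this abstract describes (prediction error combined with state-action rarity) can be illustrated with a standard-library sketch. The 1/√count bonus over discretized keys stands in for the paper's latent-space rarity estimate, and the mixing weight β is an assumption of ours, not a value from the paper.

```python
import math
from collections import defaultdict

class HybridCuriosity:
    """Intrinsic reward = forward-model prediction error + rarity bonus.

    Rarity here is 1/sqrt(visit count) over discretized state-action keys,
    a simplified illustration of novelty rather than the authors'
    Information-Bottleneck latent representation.
    """
    def __init__(self, beta=0.5):
        self.beta = beta                     # assumed mixing weight
        self.counts = defaultdict(int)

    def reward(self, pred_next, true_next, state_key, action):
        # Squared error of the forward model's next-state prediction.
        err = sum((p - t) ** 2 for p, t in zip(pred_next, true_next))
        self.counts[(state_key, action)] += 1
        rarity = 1.0 / math.sqrt(self.counts[(state_key, action)])
        return err + self.beta * rarity

cur = HybridCuriosity(beta=1.0)
r1 = cur.reward((0.0,), (0.0,), "s", 0)  # pure rarity bonus on first visit
r2 = cur.reward((0.0,), (0.0,), "s", 0)  # bonus decays as the pair repeats
```

The decaying bonus is what drives the agent toward rarely visited state-action pairs once the forward model's error has shrunk.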

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Elia Santoro

,

Luigi Laura

,

Marco Parrillo

,

Valerio Rughetti

Abstract: Facial expression recognition (FER) is a well-established task in computer vision, yet its application to non-photorealistic domains, such as anime and manga, remains largely underexplored. The stylized, exaggerated, and often non-proportional facial features of illustrated characters present unique challenges for deep learning models trained predominantly on realistic imagery. In this work, we construct a balanced dataset of 3,000 manga and anime face images spanning six emotion categories (Angry, Embarrassed, Happy, Psycho-Crazy, Sad, Scared) and conduct a systematic comparison of two major deep learning paradigms: Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs). Specifically, we evaluate ResNet-18, ResNet-50, ViT-B/16, and ViT-S/16 under four fine-tuning strategies: linear probing, partial fine-tuning, full fine-tuning, and progressive unfreezing, enabling a controlled comparison of both architectural families and transfer learning depth. Our results show that fine-tuning strategy significantly impacts performance: the best configuration (ViT-B/16 with progressive unfreezing) achieves 80.89% test accuracy, compared to 61.33% for the weakest linear probe baseline (ViT-S/16), a gap of 19.56 percentage points. Vision Transformers benefit disproportionately from fine-tuning, and the relative ranking of architectures changes across fine-tuning regimes. Confusion matrix analysis reveals persistent cross-class confusion between visually similar emotions (e.g., Happy vs. Embarrassed), while highly distinctive categories such as Psycho-Crazy are consistently well recognized across all architectures.

Article
Computer Science and Mathematics
Applied Mathematics

Bichitra Kumar Lenka

Abstract: This paper deals with some expressions of the fractional generalized Gronwall inequality when associated with both non-negative and non-positive singular kernels and establishes sharp Mittag-Leffler bounds containing different ingredients. The long-term behavior of non-autonomous fractional order systems is analyzed by means of modified fractional Lyapunov theorems. As an application, we give a few examples that use quadratic Lyapunov functions for typical fractional order systems to predict trajectories that ultimately converge to the zero vector as t → ∞.

Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Soham Mukherjee

,

John Le

,

Chau Nguyen

,

Thai Vu

Abstract: Large Language Models (LLMs) have emerged as transformative tools across various domains, showcasing exceptional capabilities in natural language processing and generation. However, their reliance on static pre-training data limits their ability to access up-to-date and domain-specific information. The existing research often treats augmentation strategies in isolation, and limited efforts have been made to systematically compare them through the lens of information integrity. This review focuses specifically on Retrieval-Augmented Generation (RAG) and Fine-tuning, identifying them as the two dominant paradigms for integrating external knowledge: RAG for retrieval-based context injection and Fine-tuning for parametric knowledge adaptation. While existing surveys predominantly focus on performance metrics like accuracy or latency, this paper addresses the critical gap of data fidelity: the preservation of truthfulness, integrity, and fairness during augmentation. We systematically synthesise empirical findings from diverse methodologies to determine how each approach mitigates hallucinations and bias. By comparing the trade-offs between retrieval-based context injection and parametric knowledge adaptation, this survey brings unique value to readers by providing a structured taxonomy, a unified evaluation framework, and actionable insights to guide future research and practical deployment of robust, high-fidelity LLMs.

Article
Computer Science and Mathematics
Security Systems

Lyudmila Kovalchuk

,

Mariia Rodinko

,

Roman Oliynykov

,

Volodymyr Artemchuk

Abstract: This paper studies the probability of a double-spend attack in an Ouroboros-like Proof-of-Stake (PoS) setting when confirmation decisions must be made for a finite number of blocks. Existing security analyses of Ouroboros-family protocols are mainly asymptotic and therefore do not directly provide the attack probability for a fixed confirmation depth. We consider an analytically tractable model that allows empty slots and multiple slot leaders, and assumes fixed stake distribution within an epoch, one-block growth of the public longest chain in any slot containing at least one honest leader, and next-slot block visibility. These assumptions hold when the time slot length is much greater than the network delay, and are applicable to practical deployment scenarios such as Cardano. Under these assumptions, we obtain, for the first time, an exact closed-form expression for the success probability of a double-spend attack in a realistic model with multiple leaders and empty time slots. Numerical examples illustrate how the required confirmation depth depends on the adversarial stake ratio and the active slot coefficient. The results apply to the stated analytical model and do not yet cover delayed fork resolution or the full protocol-level fork-choice and finality mechanisms of Ouroboros Praos.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Andrew Michael Brilliant

Abstract: We develop a diagnostic framework for evaluating when LLM self-evaluation can be trusted. The framework's central results are: (1) under a shared-blind-spot modeling assumption with joint conditional independence of evaluations given the shared failure structure, k rounds of self-critique provide information about correctness bounded by what the shared latent failure variable Z mediates, not by any independent channel, so that confidence accumulated through repeated self-evaluation reflects the shared failure structure rather than independently accumulated evidence; and (2) a selector satisfying two independently measurable sufficient conditions (bounded false-acceptance and true-acceptance exceeding that bound) provides a quantifiable lower bound on evidence about correctness. Both results are conditional on explicit modeling assumptions. We also prove an information-theoretic bound showing that self-evaluation is bounded in what it can add when a shared latent failure structure mediates both generation and evaluation errors; we foreground this as scaffolding rather than a primary contribution, since the latent variable requires independent operationalization to give the bound empirical bite. The diagnostic framework identifies what to measure to determine whether a deployed system is in the failure regime, and what properties an external selector must have to escape it. We describe design principles for an architecture motivated by this analysis; same-model context separation is an engineering heuristic, not a theoretical solution, and we present it as a practical starting point pending empirical validation.

Article
Computer Science and Mathematics
Computational Mathematics

Ricardo Adonis Caraccioli Abrego

Abstract: We describe a decimal–hexadecimal block encoding for primality over a finite stored range. Since every prime greater than 5 must lie in one of the residue classes 1, 3, 7, 9 (mod 10), each decimal block of size ten can be encoded by a 4-bit word indicating which of the candidates 10k + 1, 10k + 3, 10k + 7, and 10k + 9 are prime. This yields a nibble-based storage scheme supporting exact primality queries and exact recovery of the prime-counting function π(x) by cumulative popcount. We then establish a structural theorem arising from the congruence 10 ≡ 1 (mod 3): for k ≡ 0 (mod 3) the candidates 10k + 3 and 10k + 9 are always composite, and for k ≡ 2 (mod 3) the candidates 10k + 1 and 10k + 7 are always composite. This partitions the nibble alphabet into three classes of sizes 4, 16, and 4, reducing the Shannon entropy from 4 bits to 2.42 bits per nibble and yielding a lossless compression of 39.4% over the original encoding with O(1) decode complexity. We present data structures, Python routines, and experimental validation up to 300,000.
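The scheme described here is concrete enough to sketch end to end with the standard library. The routines below are illustrative, not the paper's, and use simple trial division rather than a sieve; they also let us check the stated mod-3 structure theorem directly on the nibble stream.

```python
def is_prime(n: int) -> bool:
    """Trial division; adequate for the small ranges used here."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

RESIDUES = (1, 3, 7, 9)  # the only classes mod 10 a prime > 5 can occupy

def encode_blocks(limit: int):
    """One 4-bit word per decimal block: bit i is set when 10k + RESIDUES[i]
    is prime. Covers every block whose candidates start at or below limit."""
    return [sum(1 << i for i, r in enumerate(RESIDUES) if is_prime(10 * k + r))
            for k in range(limit // 10 + 1)]

def pi_from_nibbles(nibbles, x: int) -> int:
    """Recover pi(x) by cumulative popcount over the nibbles, adding the
    primes 2 and 5 that fall outside the four residue classes."""
    count = sum(1 for p in (2, 5) if p <= x)
    for k, nib in enumerate(nibbles):
        for i, r in enumerate(RESIDUES):
            if 10 * k + r > x:
                return count
            count += (nib >> i) & 1
    return count

nibbles = encode_blocks(1000)
```

On this encoding the structure theorem reads: for k ≡ 0 (mod 3) with k > 0 the bits for +3 and +9 (mask 0b1010) are always clear, and for k ≡ 2 (mod 3) the bits for +1 and +7 (mask 0b0101) are always clear, since 10 ≡ 1 (mod 3) forces those candidates to be divisible by 3.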

Article
Computer Science and Mathematics
Applied Mathematics

Mudassir Shams

,

Bruno Carpentieri

Abstract: Nonlinear equations arise extensively in engineering and applied sciences. This study introduces a family of Caputo and Atangana–Baleanu–Caputo (ABC) fractional order iterative methods for solving nonlinear problems. The proposed schemes are designed to enhance convergence behavior and improve robustness compared to existing fractional Newton-type methods. Local convergence is analyzed using fractional Taylor expansions, establishing the order of convergence and associated error equations. In addition, a dynamical systems perspective is adopted to investigate global convergence properties through basin of attraction analysis, including fractal structures and the Wada measure. Numerical experiments on application-inspired nonlinear models demonstrate that the proposed methods achieve faster error reduction, lower residuals, and improved computational efficiency compared to existing schemes. These results indicate that the proposed framework provides an effective and flexible approach for solving nonlinear equations, combining accuracy, stability, and dynamical insight.

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.
