Computer Science and Mathematics

Article
Computer Science and Mathematics
Computer Science

Maurizio Giacobbe

,

Salvatore Distifano

Abstract: The transition from smart to intelligent cities allows the deployment and management of information and communication technologies in the urban context to be driven by holistic sustainability requirements rather than by purely technical concerns such as feasibility or by fragmented, siloed operational patterns. This work proposes a multi-dimensional decision-making framework to manage a smart-intelligent city as an urban Cyber-Physical System across the environmental, economic, and social sustainability pillars, their metrics, and their tradeoffs. A methodology based on Deep Reinforcement Learning and reward-shaping mechanisms is proposed to represent and assess sustainability pillar dependencies and their interplay. A case study on Low-Power Wide-Area Network planning, deployment, and management in a Sicilian municipality demonstrates the effectiveness of the proposed approach in dealing with the dynamics and the non-linear dependencies of the sustainability pillars. The results provide a blueprint for urban planners to develop sustainable, resilient, cost-effective, and environmentally friendly smart-intelligent city frameworks.

Article
Computer Science and Mathematics
Computer Science

Shuang Li

,

Ka-Cheng Choi

Abstract: Existing travel planning systems lack user participation in itinerary scoring and apply coarse, binary weather treatment that risks excluding high-quality outdoor attractions under mild precipitation. This paper presents a multi-objective genetic algorithm (GA)-based itinerary planning system that addresses both limitations. The system incorporates a weather-adaptive POI scoring framework mapping nine weather conditions to three strategies, applying intensity-proportional rain penalties and a geographic flexibility bonus scaled by local indoor alternative density. User preferences are encoded via three integer sliders whose normalised values directly set the GA fitness weights for POI quality, traveling efficiency, and preference satisfaction. The system is evaluated on 144 attractions in Macao. Results show that outdoor POI representation decreases proportionally with precipitation intensity across all nine weather conditions and is substantially suppressed under official extreme weather alerts, while itinerary quality is preserved through the flexibility bonus. Slider adjustment experiments confirm that amplifying each weight produces statistically consistent, direction-correct improvements in its target sub-objective without degrading the others. These findings validate the functional independence of the three-objective fitness formulation and demonstrate that graduated weather treatment and direct user weight control together yield a more responsive and robust itinerary planning system.
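The slider-to-weight mapping described above can be sketched as follows. This is an illustrative reading of the abstract, not the system's actual code; all function and variable names are assumptions.

```python
def itinerary_fitness(poi_quality, travel_efficiency, preference_satisfaction,
                      sliders):
    # Normalise the three integer slider values so they sum to 1; the
    # normalised values directly become the GA fitness weights, as the
    # abstract describes. All names here are illustrative.
    total = sum(sliders)
    weights = [s / total for s in sliders]
    return (weights[0] * poi_quality
            + weights[1] * travel_efficiency
            + weights[2] * preference_satisfaction)
```

Amplifying one slider raises the weight of its sub-objective proportionally while shrinking the other two, consistent with the slider-adjustment experiments the abstract reports.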

Article
Computer Science and Mathematics
Computer Science

Jie Li

,

Usman Adeel

,

Safwan Akram

Abstract: Healthcare analytics is often limited by the amount of available data and by strict privacy requirements, which make it difficult to share patient-level records across organisations and to build robust predictive models. Federated learning (FL) provides an alternative by keeping data local and exchanging model updates instead of raw records. However, many existing FL solutions remain difficult to deploy in healthcare settings, as they provide limited support for auditability, governance-oriented evidence, and system-level transparency. This paper presents MediVault, an auditable and secure federated learning-based system for privacy-preserving healthcare collaboration. MediVault combines round-based federated training, protected update exchange, audit-ready telemetry, and an interactive dashboard that exposes non-sensitive evidence of collaboration, model progress, and protocol execution. In addition, the system supports controlled reporting to improve stakeholder communication during pilot deployments. We evaluate MediVault on two public healthcare classification datasets, Breast Cancer Wisconsin (Diagnostic) and Heart Disease, under settings designed to reflect multi-site heterogeneity. Experiments are conducted using two interpretable linear models, logistic regression and linear SVM, under matched settings. Results show that federated training remains competitive with centralised training across both datasets. These findings suggest that an auditable and secure FL workflow can preserve predictive utility while also supporting the transparency, governance readiness, and practical system behaviour needed for privacy-preserving multi-organisation healthcare collaboration.
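The round-based training with model-update exchange can be illustrated with a minimal federated averaging (FedAvg) sketch. FedAvg is the standard aggregation rule, not necessarily MediVault's exact protocol, which additionally protects updates and records audit telemetry.

```python
def fedavg(client_weights, client_sizes):
    # Weighted average of client model parameters by local dataset size:
    # each client trains locally and sends only parameters, never raw
    # records. A generic sketch, not MediVault's implementation.
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(n_params)]
```

A server would call this once per round on the updates received from participating sites, then broadcast the aggregated model back.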

Article
Computer Science and Mathematics
Computer Science

Isaac Kofi Nti

Abstract: Behavioral ransomware detection often achieves high accuracy in standard evaluations; however, these results frequently fail to generalize under distribution shifts or when encountering previously unseen families. This study evaluates detection performance on the MLRan dataset (4,880 samples across 64 families) using four rigorous evaluation protocols: stratified, temporal, family-disjoint, and open-set. To ensure a strict separation of learned features, the family-disjoint and open-set splits were executed at the family level. We propose the Hierarchical Sparse Neural Network (HSNN), a taxonomy-aligned model with group-level and branch-level gating for structured interpretability. Unlike flat architectures, HSNN introduces a hierarchical gating mechanism aligned with a predefined behavioral taxonomy, enabling structured interpretability and modality-level analysis. The baseline FlatMLP had a slightly higher average macro-F1 score (0.9860 vs. HSNN's 0.9839), but the HSNN was better calibrated and more parameter efficient. The HSNN reduced calibration error by 34.1% (absolute reduction of 0.0056 in ECE) and model complexity by 42% in terms of parameter count. HSNN showed slightly lower variability than FlatMLP and broadly stable gate patterns across seeds. The proposed HSNN achieved one of the highest performances under the open-set family protocol (0.9930 vs. 0.9913) using a maximum-softmax novelty baseline. Our feature analysis shows that string-based artifacts remain strong predictors, but the HSNN's hierarchical structure encourages a more balanced weighting across behavioral modalities, reducing reliance on any single feature type. These results indicate that structured, sparse architectures present a competitive and well-calibrated alternative to conventional dense models under the evaluated settings.
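The maximum-softmax novelty baseline mentioned above is a standard open-set heuristic: a sample whose highest softmax probability is low is flagged as a potentially unseen family. A minimal sketch, with an assumed threshold value:

```python
import math

def max_softmax(logits):
    # Maximum softmax probability (MSP): the confidence of the most
    # likely class. Low MSP suggests the sample may belong to a family
    # not seen during training.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    return max(exps) / sum(exps)

def is_novel(logits, threshold=0.5):
    # The threshold here is illustrative; in practice it is calibrated
    # on held-out data.
    return max_softmax(logits) < threshold
```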

Article
Computer Science and Mathematics
Computer Science

Shuriya B

Abstract: 5G‑connected mobile respiratory units introduce a paradigm shift in acute crisis management for lung cancer patients by merging portable ventilatory support with ultra‑low‑latency, high‑bandwidth connectivity. These units integrate on‑board ventilators, non‑invasive and invasive respiratory interfaces, vital‑sign monitors, and 5G modems to enable real‑time transmission of flows, pressures, oxygen saturation, and high‑definition video to remote critical‑care hubs. During transport or in‑field deployment, intensivists and respiratory specialists can guide airway management, titrate ventilator settings, and override parameters via cloud‑based control interfaces, effectively extending ICU‑grade care into ambulances, rural clinics, and home‑based acute episodes. For lung cancer patients experiencing sudden respiratory failure due to tumour‑related airway obstruction, pleural effusion, or post‑procedure complications, such units reduce time‑to‑intervention and improve stabilization before hospital arrival. This article discusses the system architecture, 5G‑enabled tele‑ICU connectivity, safety protocols, and clinical workflows that position 5G‑connected mobile respiratory units as a scalable, technology‑driven solution for managing acute respiratory crises along the lung cancer care continuum.

Article
Computer Science and Mathematics
Computer Science

Emily Curl

,

Kofi Ampomah

,

Md Erfan

,

Sayanton Dibbo

Abstract: While deep learning systems are becoming increasingly prevalent in medical image analysis, their vulnerabilities to adversarial perturbations raise serious concerns for clinical deployment. These vulnerability evaluations largely rely on Attack Success Rate (ASR), a binary metric that indicates solely whether an attack is successful. However, the ASR metric does not account for other factors, such as perturbation strength, perceptual image quality, and cross-architecture attack transferability, and therefore, the interpretation is incomplete. This gap requires consideration, as complex, large-scale deep learning systems, including Vision Transformers (ViTs), are increasingly challenging the dominance of Convolutional Neural Networks (CNNs). These architectures learn differently, and it is unclear whether a single metric, e.g., ASR, can effectively capture adversarial behavior. To address this, we perform a systematic empirical study on four medical image datasets: PathMNIST, DermaMNIST, RetinaMNIST, and CheXpert. We evaluate seven models (VGG-16, ResNet-50, DenseNet-121, Inception-v3, DeiT, Swin Transformer, and ViT-B/16) against seven attack methods at five perturbation budgets, measuring ASR, Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and $L_2$ perturbation magnitude. Our findings show a consistent pattern: perceptual and distortion metrics are strongly associated with one another and exhibit minimal correlation with ASR. This applies to both CNNs and ViTs. The results demonstrate that ASR alone is an inadequate indicator of adversarial robustness and transferability. Consequently, we argue that a thorough assessment of adversarial risk in medical AI necessitates multi-metric frameworks that encompass not only the attack efficacy but also its methodology and associated overheads.
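As an illustration of one of the perceptual metrics measured above, PSNR between a clean and a perturbed image can be computed from the mean squared pixel error. This is the generic textbook definition, not code from the study:

```python
import math

def psnr(original, perturbed, max_val=255.0):
    # Peak Signal-to-Noise Ratio over flattened pixel intensities:
    # higher values mean the adversarial perturbation is less visible.
    n = len(original)
    mse = sum((a - b) ** 2 for a, b in zip(original, perturbed)) / n
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Small-budget attacks typically leave PSNR high while still flipping predictions, which is why the abstract argues ASR alone is an incomplete picture.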

Article
Computer Science and Mathematics
Computer Science

Karthick R

Abstract: Wearable smart inhalers represent a transformative approach to chronic respiratory management in advanced lung cancer, integrating sensor‑based monitoring, real‑time connectivity, and patient‑centric feedback loops. These devices track inhalation technique, dosing frequency, and timing, then transmit encrypted data to cloud‑based platforms for analysis by clinicians and AI‑driven algorithms. For advanced‑stage lung cancer patients, this enables continuous surveillance of bronchospasm, dyspnoea, and medication adherence outside the intensive care setting, thereby reducing uncontrolled exacerbations and unplanned hospitalizations. Smart inhalers also support tele‑follow‑ups and personalized adjustment of bronchodilator or palliative regimens based on individual patterns of use and symptom burden. When embedded within an end‑to‑end care pathway from intensive care discharge to home care these wearables facilitate seamless transitions, improve self‑management, and empower multidisciplinary teams with objective, longitudinal respiratory data. This article explores design principles, clinical integration, and emerging digital‑health frameworks that position wearable smart inhalers as a cornerstone of modern, technology‑driven chronic respiratory support in advanced lung cancer.

Article
Computer Science and Mathematics
Computer Science

Shampa Banik

Abstract: With the integration of the Internet of Things (IoT), the Advanced Metering Infrastructure (AMI) plays a key role in improving grid efficiency and consumer awareness, and has transformed the traditional grid into a new intelligent, efficient paradigm, the Smart Grid (SG). However, the increasing dependence on Information and Communication Technology (ICT) for machine-to-machine communications exposes the AMI system to a wide range of cyber-physical intrusions and threats, such as data tampering, denial-of-service attacks, and unauthorized access. Vulnerabilities in various AMI cyber-physical system (CPS) components might compromise the integrity and confidentiality of SG systems. In addition to other defensive mechanisms, the Intrusion Detection System (IDS) acts as a robust countermeasure to safeguard the AMI against cyber-attacks and threats. However, designing an effective IDS and deploying it in a highly distributed AMI network is greatly hindered by the growing number of heterogeneous, multi-sourced, and interconnected system components, as well as by the evolving nature of recent cyber-physical intrusions. This paper presents a comprehensive, structured survey of IDSs that addresses the key challenges of system scalability, heterogeneity, deployment constraints, and, most importantly, detection of evolving attack patterns tailored to AMI in the SG. Unlike existing surveys, this study introduces a unified taxonomy of IDSs applied to AMI across categories such as data sources, detection mechanisms, and deployment techniques and architectures. A comparative analysis of contemporary methods is provided to highlight their strengths, limitations, and applicability in real-world smart grid scenarios. By analyzing the methods used in current IDSs to tackle various cyber-physical security vulnerabilities, this paper identifies critical gaps applicable to distributed system dynamics in AMI within the SG.
To address key challenges of existing IDSs, a multimodal intrusion detection system (MIDS) is proposed, featuring data-driven, adaptive security solutions for the next-generation AMI system. The technical insights for developing the experimental framework presented in this study aim to guide future research and development of data-driven, robust, scalable, and intelligent IDS solutions for securing AMI infrastructure in the SG system.

Article
Computer Science and Mathematics
Computer Science

Xuanfei Zhou

,

Yinxuan Huang

,

Sining Han

,

Jiangyao Bai

,

Qianzhen Zhang

Abstract: Controllable symbolic music generation must preserve a reference melody while remaining responsive to style prompts. Existing hierarchical diffusion systems typically reuse a shared condition vector across harmony, rhythm, and timbre stages, which can entangle stylistic factors and weaken melody preservation. We present HCDMG++, a hierarchical diffusion framework that addresses these two limitations through Stage-Aware Style Routing and Differentiable Melody Regularization. The routing module uses a residual Multi-Layer Perceptron (MLP) to project text-derived style embeddings into stage-specific subspaces, whereas the regularization branch aligns soft pitch histograms and contour trajectories with the conditioning melody during training. We evaluate the integrated system on a 384-sample benchmark covering four melodies, eight styles, four random seeds, and three denoising budgets. HCDMG++ produces valid four-track outputs in all runs and reaches a peak pitch-histogram similarity of 0.508 under a 64-step budget. A matched legacy-compatible reference further shows substantially stronger pitch-histogram alignment than Legacy-HCDMG. These results indicate that stage-specific conditioning and differentiable structural guidance improve controllability in symbolic music diffusion.
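The pitch-histogram similarity reported above can be illustrated with a minimal version: build a normalized pitch-class histogram for each sequence and compare the two. Cosine similarity is one plausible comparison; the paper's exact metric and soft-histogram construction are not specified here.

```python
import math

def pitch_histogram(pitches, n_classes=12):
    # Normalised pitch-class histogram of a MIDI note sequence.
    h = [0.0] * n_classes
    for p in pitches:
        h[p % n_classes] += 1.0
    total = sum(h) or 1.0
    return [x / total for x in h]

def histogram_similarity(a, b):
    # Cosine similarity between two pitch-class histograms; 1.0 means
    # identical pitch-class distributions.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0
```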

Article
Computer Science and Mathematics
Computer Science

Yuxia Qian

,

Yiwen Liang

,

Lei Shang

,

Xinqi Dong

,

Yincheng Liang

Abstract: Network access control and identity verification establish a secure foundation for trusted communication between entities. However, successful identity authentication alone does not guarantee secure communication. In open-network environments, it remains essential to establish a secure session key via a robust key agreement mechanism, one that prevents explicit disclosure of identity information while ensuring post-quantum security. To address these requirements, we propose a lattice-based key agreement protocol. The protocol integrates identity binding, implicit authentication, and session key establishment into a single ciphertext exchange. Furthermore, it supports secure key evolution and revocation verification through a version-control mechanism and a blockchain-maintained revocation list, thus realizing a comprehensive, post-quantum-secure key agreement scheme under reasonable computational and communication overhead.

Article
Computer Science and Mathematics
Computer Science

Songtao Hu

,

Liang Chen

,

Qianyue Zhang

,

Wenchao Liu

Abstract: The Automatic Identification System (AIS) generates massive volumes of real-world ship trajectory data, providing a critical foundation for maritime ship type classification. However, existing methods often struggle to simultaneously capture long-range temporal dependencies, maintain computational efficiency, and ensure model interpretability, which makes accurate multi-class classification challenging in real-world maritime environments. To address these limitations, this study proposes a robust and efficient hybrid framework. The proposed architecture integrates a Feature Transformer module for deep temporal feature extraction with a LightGBM model for efficient ensemble classification. Specifically, the multi-head self-attention mechanism within the Feature Transformer captures long-range dependencies in preprocessed AIS sequences to generate compact trajectory fingerprints. These deep temporal representations are then concatenated with carefully designed statistical and kinematic tabular features and fed into the LightGBM classifier for final ship type identification. To validate the proposed framework, we construct a comprehensive real-world AIS dataset consisting of 2,196 trajectories collected between 2019 and 2023, encompassing diverse ship types that reflect authentic maritime scenarios. Experimental results show that the proposed method achieves 82.42% overall accuracy and 77.35% Macro-F1, significantly outperforming comparative baseline models, including LSTM (64.85% accuracy), GRU (64.85%), vanilla Transformer (61.21%), and standalone LightGBM (59.09%). Furthermore, the hybrid model offers ultra-fast inference (1.58 ms per batch) and enhanced interpretability through SHAP-based analysis, making it highly suitable for near real-time maritime traffic monitoring and decision-support applications.
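The statistical and kinematic tabular features concatenated with the Transformer fingerprint could take a form like the following. The specific feature set here is an assumption for illustration, not the paper's feature list:

```python
import statistics

def kinematic_features(speeds, courses):
    # Simple tabular features from one AIS trajectory: speed statistics
    # and mean course change between consecutive reports. In the hybrid
    # framework, such features are concatenated with the deep trajectory
    # fingerprint before the LightGBM classifier.
    return {
        "speed_mean": statistics.mean(speeds),
        "speed_std": statistics.pstdev(speeds),
        "speed_max": max(speeds),
        "course_change_mean": statistics.mean(
            abs(b - a) for a, b in zip(courses, courses[1:])),
    }
```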

Article
Computer Science and Mathematics
Computer Science

Yanyan Jia

,

Siyi Wang

Abstract: Traffic sign detection in autonomous driving faces challenges including multi-scale objects, complex backgrounds, and limited edge-computing power. To address insufficient multi-scale feature representation and high false negatives for small traffic signs in YOLOv8n, this study proposes an improved algorithm integrating the VoVGSCSP module with a Multi-scale Contextual Attention (MCA) mechanism. The original C2f module is replaced with VoVGSCSP, enhancing feature representation through parallel residual branches and cross-stage connections. A lightweight neck, SlimNeck, is designed and combined with MCA, employing multi-branch pooling and dynamic weight fusion to capture geometric features and color semantics. The PAN-FPN path is optimized with cross-level connections and learnable weights for adaptive multi-scale fusion. Experiments on the GTSRB dataset show that the improved model reduces parameters to 2.66 M (an 11.6% decrease) and computational complexity to 7.49 GFLOPs, while mAP@0.5 increases from 94.7% to 96.3% and FPS improves from 82.3 to 90.6. The proposed algorithm achieves comprehensive gains in lightweighting, accuracy, and speed, demonstrating its effectiveness and practical applicability.

Article
Computer Science and Mathematics
Computer Science

Chang Chia Wei

Abstract: In the current internet era, the number of information security vulnerabilities has increased dramatically. Image and text encryption have become critical preprocessing steps in secure information transmission. Sensitive information can be transmitted through encrypted images, facilitating the implementation of various secure communication systems. This paper proposes an image encryption scheme that employs Sudoku as a cryptographic key matrix, combined with a strong diffusion mechanism to enhance pixel confusion and diffusion effects. The proposed method achieves high-level pixel scrambling through multiple rounds of iterative threshold encryption, pixel padding with random shuffling, and Sudoku-based permutation. Additionally, rotation operations are applied to further increase the irreversibility of the encrypted image. The core keys include the iterative threshold sequence, row-column diffusion keys, and random permutation parameters, ensuring that the encryption is fully reproducible. Experimental results demonstrate that, while preserving reversibility, the proposed method achieves significant confusion and diffusion performance. For the Lena image, the method attains NPCR ≈ 99.22% and UACI ≈ 33.30%, indicating its effectiveness as a robust image encryption approach.
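The NPCR and UACI figures quoted above are standard differential-attack metrics with well-known definitions, sketched here over flattened 8-bit pixel lists:

```python
def npcr_uaci(img1, img2):
    # NPCR: percentage of pixel positions whose values differ between the
    # two cipher images. UACI: mean absolute intensity difference,
    # normalised by the 255 intensity range. Both are reported as
    # percentages, matching the abstract's ~99.22% / ~33.30% values.
    n = len(img1)
    diff = sum(1 for a, b in zip(img1, img2) if a != b)
    npcr = 100.0 * diff / n
    uaci = 100.0 * sum(abs(a - b) for a, b in zip(img1, img2)) / (255.0 * n)
    return npcr, uaci
```

For a strong cipher, a one-pixel change in the plaintext should push NPCR toward ~99.6% and UACI toward ~33.4%, the theoretical expectations for random 8-bit images.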

Article
Computer Science and Mathematics
Computer Science

Ganglong Duan

,

Haonan Sun

,

Sijia Zhong

,

Hongquan Xue

Abstract: In precision mold manufacturing, the machining of HRC52 hardened steel causes severe tool wear and high noise in multi-source sensor signals, making accurate remaining useful life (RUL) prediction challenging. To address this, we propose a hybrid model that integrates one-dimensional deep convolution (DCNN), low-resolution self-attention (LRSA) with 1D-2D spatiotemporal reconstruction, and a multi-channel bidirectional long short-term memory network (McBiLSTM). A Gaussian smoothing filter is first applied to denoise the 50 kHz signals, followed by physical-period sliding windows for feature extraction. A multi-strategy fusion pooling layer (mean, max, and last-quarter features) further improves prediction accuracy. Using the PHM 2010 milling cutter dataset under leave-one-out cross-validation, the proposed model achieves a mean absolute percentage error (MAPE) of 1.45% and a root mean square error (RMSE) of 2.76 mm, reducing prediction error by up to 75.6% compared to Transformer, LSTM, and GRU baselines. These results demonstrate that the model effectively extracts degradation features even during the accelerated wear stage, offering a reliable solution for tool health monitoring and predictive maintenance under complex cutting conditions.

Article
Computer Science and Mathematics
Computer Science

R. Senthilkumar

Abstract: Soft robotic grippers excel in unstructured manipulation but suffer catastrophic failure rates (72%) when grasping deformable organics, fabrics, and mixed debris due to hyperchaotic pneumatic dynamics. This paper introduces the first Lyapunov stability controller for soft robotics, deploying real-time maximal Lyapunov exponent estimation (λ_MLE) from fibre-optic strain sensor arrays running at 100 Hz on Intel Loihi 2 neuromorphic chips. The system reconstructs 12D phase space embeddings via Takens' theorem, detecting chaos onset 187 ms early during dual-material transitions (tomato → bolt), enabling pre-emptive damping that transforms strange attractors into stable limit cycles. Experimental validation across USDA organic datasets (tomatoes, grapes, leafy greens) and MRF waste streams demonstrates 94.2% grasp success, a 3.7× improvement over PID baselines, with 2.3× faster cycles (2.1 grips/second) and 67% energy savings. Neuromorphic acceleration achieves 187 μs latency for 12D divergence computation, 28× faster than GPU methods. Field deployments confirm robustness: agricultural harvesting sustains 3 clusters/minute, waste sorting handles mixed-material chaos, and medical tissue manipulation achieves sub-micron precision under arterial pulsatility. Theoretical contributions include event-triggered Lyapunov redesign guaranteeing exponential stability (λ_1 < -0.1) despite 24 dB vibration and 47% moisture variance. Phase space visualization reveals the Kaplan-Yorke dimension collapsing from 8.2D hyperchaos to 2.1D stable manifolds, providing online stability margins. This work establishes chaos quantification as a foundational primitive for next-generation soft robotics, transforming nonlinearity from failure mode to control parameter across agriculture, recycling, and minimally invasive surgery.
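The core quantity above, the maximal Lyapunov exponent, measures how fast two initially close trajectories diverge: d(t) ≈ d0·exp(λ·t). A Rosenstein-style sketch of its estimation from a divergence curve follows; this is a one-pair toy, not the paper's 12D neuromorphic pipeline:

```python
import math

def lyapunov_estimate(d0, distances, dt):
    # Estimate the maximal Lyapunov exponent from the separation of an
    # initially close trajectory pair, sampled every dt seconds.
    # lambda ~ log(d(t)/d0) / t, averaged over the observed samples.
    # Positive lambda indicates chaotic divergence (the controller's
    # trigger for pre-emptive damping); negative indicates stability.
    rates = [math.log(d / d0) / ((i + 1) * dt)
             for i, d in enumerate(distances) if d > 0]
    return sum(rates) / len(rates)
```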

Article
Computer Science and Mathematics
Computer Science

Taehyun Yang

,

Eunhye Kim

,

Zhongzheng Xu

,

Fumeng Yang

Abstract: Generative AI tools have lowered barriers to producing branded social media images and captions, yet small-business owners (SBOs) still struggle to create on-brand posts without access to professional designers or marketing consultants. Although these tools enable fast image generation from text prompts, aligning outputs with a brand’s intended look and feel remains a demanding, iterative creative task. In this position paper, we explore how SBOs navigate iterative content creation and how AI-assisted systems can support SBOs’ content creation workflow. We conducted a preliminary study with 12 SBOs who independently manage their businesses and social media presence, using a questionnaire to collect their branding practices, content workflows, and use of generative AI alongside conventional design tools. We identified three recurring challenges: (1) translating brand “feel” into effective prompts, (2) difficulty revisiting and comparing prior image generations, and (3) difficulty making sense of changes between iterations to steer refinement. Based on these findings, we present a prototype that scaffolds brand articulation, supports feedback-informed exploration, and maintains a traceboard of branching image iterations. Our work illustrates how traces of the iterative process can serve as workflow support that helps SBOs keep track of explorations, make sense of changes, and refine content. CCS Concepts: Human-centered computing → Human computer interaction (HCI).

Article
Computer Science and Mathematics
Computer Science

V. Thamilarasi

Abstract: The convergence of Neuro-Symbolic AI, Edge Computing, and Reinforcement Learning heralds a transformative era in autonomous engineering design, addressing longstanding challenges in optimization efficiency, real-time responsiveness, and interpretability. Traditional design workflows suffer from siloed neural pattern recognition lacking logical rigor, centralized cloud dependencies creating latency bottlenecks, and heuristic optimization struggling with multi-objective trade-offs in vast design spaces. This paper introduces an integrated framework that synergistically combines these paradigms to create self-sustaining, end-to-end autonomous pipelines for complex engineering applications, from aerospace structures to precision manufacturing. Neuro-Symbolic AI fuses deep neural networks for perceptual feature extraction with symbolic reasoning engines enforcing hard constraints and generating auditable proofs, enabling systems that both discover novel configurations and validate them against domain physics. Edge Computing decentralizes inference across device-fog-cloud hierarchies, achieving sub-10 ms decision cycles critical for real-time applications like robotic assembly or smart grid stability. Reinforcement Learning optimization engines navigate continuous state-action spaces representing design variables, iteratively refining solutions through shaped rewards aligned with Pareto-optimal engineering objectives such as minimizing mass while maximizing strength-to-weight ratios. The proposed architecture orchestrates these components via directed acyclic graphs of containerized microservices, with federated synchronization ensuring data consistency across distributed nodes and human-in-the-loop interfaces providing strategic oversight for safety-critical decisions.
Mathematical formulations ground the system: hybrid loss functions balance learning objectives, edge partitioning is optimized, and multi-agent RL decomposes collaborative design tasks. Deployed on resource-constrained edge platforms, this framework demonstrates 8-12× acceleration in design cycle times, 25-35% improvements in structural efficiency, and full traceability satisfying aerospace certification standards (DO-178C). By eliminating manual iteration bottlenecks while preserving human insight where needed, the system redefines engineering practice, enabling rapid innovation across domains requiring concurrent optimization of performance, manufacturability, sustainability, and cost.

Article
Computer Science and Mathematics
Computer Science

P. Selvaprasanth

Abstract: Distributed modern software platforms spanning microservices, serverless functions, and edge computing face unprecedented security threats from stealthy adversaries exploiting encrypted data flows and behavioural camouflage. Conventional defences require decryption for analysis, exposing sensitive information in untrusted cloud environments. This paper proposes an innovative framework integrating homomorphic encryption (HE) with automated threat hunting to enable privacy-preserving threat detection at scale. Using levelled BFV schemes from OpenFHE, we perform computations directly on ciphertexts for anomaly scoring and behavioural profiling, while our hunting engine employs graph neural networks and isolation forests to hypothesize and pursue attacker patterns across distributed logs without plaintext exposure. The architecture deploys as Kubernetes-native operators, processing 10,000 encrypted events per second with 92% detection accuracy on MITRE-emulated scenarios, outperforming traditional UEBA by 35% in F1 score and reducing analysis latency from hours to seconds. Evaluations on AWS EKS clusters demonstrate sub-200 ms query times for homomorphic aggregations, with noise management via bootstrapping optimizations. Case studies in fintech pipelines reveal thwarted supply-chain compromises and insider data exfiltration. By revolutionizing secure computation in dynamic ecosystems, our solution bridges cryptography and AI-driven hunting, offering deployable resilience against evolving threats while complying with GDPR and zero-trust mandates. Future work extends to fully homomorphic deep learning for adaptive adversary modelling.

Review
Computer Science and Mathematics
Computer Science

Divyasree Bellary

Abstract: Decentralized applications (DApps) represent a paradigm shift in software architecture, leveraging blockchain technology and distributed consensus mechanisms to eliminate single points of failure and centralized control. As the adoption of DApps accelerates across sectors such as finance, supply chain, healthcare, and governance, ensuring their functional correctness and behavioral reliability has become a critical engineering challenge. Unlike traditional software, DApps operate in adversarial, permissionless environments where smart contracts execute autonomously and immutably on distributed nodes, making post-deployment correction extremely costly or impossible. This review systematically examines the landscape of functional testing methodologies tailored for decentralized applications, analyzing their suitability, limitations, and practical applicability in modern DApp development workflows. We survey research spanning smart contract verification, consensus protocol testing, oracle interaction validation, cross-chain interoperability testing, and user-layer functional testing of Web3 interfaces. The review identifies four dominant testing paradigms: (1) unit testing of smart contract functions, (2) integration testing of DApp components, (3) property-based testing using formal specifications, and (4) end-to-end simulation on testnets. Through comparative analysis across 13 seminal studies, we evaluate each approach along dimensions of automation feasibility, coverage depth, gas efficiency awareness, and scalability to complex DApp ecosystems. Our findings indicate that while static analysis and symbolic execution tools such as Mythril, Slither, and Manticore offer strong vulnerability detection, they address security properties more than functional correctness. 
Conversely, framework-based testing tools like Hardhat, Truffle, and Foundry provide adequate unit-level coverage but struggle with cross-contract orchestration and event-driven logic verification. A critical gap exists in testing oracle-dependent and DAO governance workflows. This review concludes with a synthesis of best practices, open research challenges, and a directional roadmap for developing holistic functional testing frameworks suited to the evolving complexity of decentralized systems.

Article
Computer Science and Mathematics
Computer Science

D. Sneha

Abstract: Blockchain networks now underpin mission-critical services in finance, healthcare, supply-chain logistics, and digital governance, yet production deployments continue to suffer severe resilience failures, ranging from Byzantine consensus violations to cross-chain bridge exploits that have collectively caused losses exceeding $2 billion. The root cause is a critical tooling gap: existing frameworks such as BlockBench and Hyperledger Caliper evaluate only crash-fault performance and provide neither adversarial fault modelling nor automated remediation guidance, leaving operators without a rigorous means of holistic resilience assessment prior to deployment. This paper presents the Blockchain Resilience Analysis System (BRAS), a five-layer, platform-agnostic framework that unifies real-time network topology monitoring, multi-class adversarial fault injection, composite resilience scoring, closed-loop adaptive consensus reconfiguration, and structured reporting within a single repeatable pipeline. BRAS introduces the Resilience Index (RI), a mathematically grounded composite metric that aggregates four sub-dimensions (network connectivity, throughput stability, mean-time-to-recovery (MTTR), and Byzantine fault tolerance ratio) into a single interpretable score calibrated to operator-defined service-level objectives. An Adaptive Reconfiguration Module (ARM) monitors the RI stream and autonomously adjusts consensus timeout parameters and peer-connection policies when the RI drops below a configurable threshold, closing the feedback loop between fault detection and remediation without manual intervention. Experimental evaluation on a 20-node Hyperledger Fabric testnet and a 15-node Ethereum Proof-of-Authority network demonstrates that BRAS achieves a 34% reduction in MTTR under simulated eclipse attacks and reduces false-positive fault detections by 28% relative to threshold-only monitoring baselines.
The RI metric exhibits strong correlation (r = 0.91, p < 0.001) with independently measured system availability across 50 fault campaigns, validating its predictive utility. BRAS is the first framework to simultaneously address network-layer, consensus-layer, and application-layer resilience threats under a unified, vendor-agnostic architecture, offering both a rigorous theoretical foundation and a deployable implementation blueprint for blockchain resilience engineering.
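The composite RI aggregation can be sketched as a weighted sum of normalized sub-scores. The equal weights below are a placeholder assumption; the abstract states that the score is calibrated to operator-defined service-level objectives rather than fixed weights.

```python
def resilience_index(connectivity, throughput_stability, mttr_score, bft_ratio,
                     weights=(0.25, 0.25, 0.25, 0.25)):
    # Aggregate the four sub-dimensions (each normalized to [0, 1])
    # into a single interpretable RI score. The ARM described in the
    # abstract would trigger reconfiguration when this drops below a
    # configurable threshold.
    subs = (connectivity, throughput_stability, mttr_score, bft_ratio)
    return sum(w * s for w, s in zip(weights, subs))
```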

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.

© 2026 MDPI (Basel, Switzerland) unless otherwise stated