Computer Science and Mathematics

Article
Computer Science and Mathematics
Security Systems

Zhen Li, Kexin Qiang, Yiming Yang, Zongyue Wang, An Wang

Abstract: In side-channel analysis, simple power analysis (SPA) is a widely used technique for recovering secret information by exploiting differences between operations in traces. However, in realistic measurement environments, SPA is often hindered by noise, temporal misalignment, and weak or transient leakage, which obscure secret-dependent features in single or very few power traces. In this paper, we provide a systematic analysis of moving-skewness-based trace preprocessing for enhancing asymmetric leakage characteristics relevant to SPA. The method computes local skewness within a moving window along the trace, transforming the original signal into a skewness trace that emphasizes distributional asymmetry while suppressing noise. Unlike conventional smoothing-based preprocessing techniques, the proposed approach preserves and can even amplify subtle leakage patterns and spike-like transient events that are often attenuated by low-pass filtering or moving-average methods. To further improve applicability under different leakage conditions, we introduce feature-driven window-selection strategies that align preprocessing parameters with various leakage characteristics. Both simulated datasets and real measurement traces collected from multiple cryptographic platforms are used to evaluate the effectiveness of the approach. Experimental results indicate that moving-skewness preprocessing improves leakage visibility and achieves higher SPA success rates compared to commonly used preprocessing methods.
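As a rough illustration of the moving-skewness idea described in this abstract, the sketch below computes windowed sample skewness over a simulated trace. The window size, trace model, and normalization are assumptions for illustration, not the paper's actual parameters: a symmetric noise segment yields near-zero skewness, while a one-sided spike (a transient leakage event) stands out.

```python
import numpy as np

def moving_skewness(trace, window):
    """Slide a window along the trace and compute the sample skewness
    (third standardized moment) of each window."""
    trace = np.asarray(trace, dtype=float)
    out = np.empty(len(trace) - window + 1)
    for i in range(len(out)):
        w = trace[i:i + window]
        mu, sigma = w.mean(), w.std()
        out[i] = 0.0 if sigma == 0 else np.mean(((w - mu) / sigma) ** 3)
    return out

rng = np.random.default_rng(0)
# Pure Gaussian noise: skewness trace stays close to zero.
flat = moving_skewness(rng.normal(size=200), window=50)
# Same noise plus a one-sided spike: windows covering the spike
# produce a large positive skewness, so the event survives the transform.
spiky = rng.normal(size=200)
spiky[100] += 10.0
peaked = moving_skewness(spiky, window=50)
```

Note that a moving average over the same window would attenuate the spike by roughly the window length, whereas the skewness trace amplifies its asymmetry, which is the contrast the abstract draws with smoothing-based preprocessing.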

Article
Computer Science and Mathematics
Security Systems

Yeongseop Lee, Seungun Park, Yunsik Son

Abstract: In multi-turn conversational AI, individually innocuous personally identifiable information (PII) fragments disclosed across successive turns can accumulate into a re-identification risk that no single utterance reveals on its own. Existing PII detectors operate on isolated utterances and therefore cannot track this cross-turn evidence build-up. We propose a stateful middleware guardrail whose core design principle is speaker-attributed entity isolation: every extracted PII fragment is classified by its originating conversational participant (first-person USER vs. incidentally mentioned third parties), and evidence is accumulated in entity-isolated subgraphs that structurally prevent cross-entity contamination. A three-tier extraction pipeline (Tier-0 deterministic regex; Tier-1 Presidio/spaCy NER with zero-shot NER independent verification; Tier-2 independent zero-shot NER; plus rule-based post-processing) refines noisy NER candidates, and an evidence-gated Commit Gate writes only corroborated cues to entity state, firing a re-identification onset signal t_pred at the earliest turn where combination-based onset rules grounded in the re-identification uniqueness literature are satisfied. On a 184-record template-synthetic evaluation corpus, the system achieves OW@5 = 70.7% with MAE = 2.442 turns, reducing naïve accumulation MAE by 56% (BL2 MAE = 5.522). We confirm structural robustness on a 300-record mutation stress set and sanity-check RULE_B generalization on the ABCD external corpus (OW@0 = 97.1%, MAE = 0.011). The pipeline requires no modification to the underlying conversational model and serves as a drop-in runtime guardrail for existing dialogue systems.
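A minimal sketch of the entity-isolation and Commit Gate ideas this abstract describes. The corroboration rule (two independent tiers), the quasi-identifier combination, and all names below are illustrative stand-ins, not the paper's actual RULE_B or tier semantics:

```python
from collections import defaultdict

# Hypothetical onset rule: fire once a quasi-identifier combination
# is corroborated for one entity (combination chosen for illustration).
ONSET_COMBO = {"zip", "birthdate", "gender"}

class Guardrail:
    def __init__(self):
        # Entity-isolated state: one evidence map per participant, so
        # third-party mentions never contaminate the USER's subgraph.
        self.state = defaultdict(lambda: defaultdict(set))

    def observe(self, turn, entity, pii_type, source_tier):
        self.state[entity][pii_type].add(source_tier)
        # Commit Gate: only cues corroborated by >= 2 tiers count.
        committed = {t for t, tiers in self.state[entity].items()
                     if len(tiers) >= 2}
        if ONSET_COMBO <= committed:
            return turn          # t_pred: earliest onset turn
        return None

g = Guardrail()
g.observe(1, "USER", "zip", "regex"); g.observe(1, "USER", "zip", "ner")
# Third-party PII accumulates separately and cannot trigger USER onset.
g.observe(2, "THIRD_PARTY", "birthdate", "regex")
g.observe(2, "THIRD_PARTY", "birthdate", "ner")
g.observe(3, "USER", "birthdate", "regex"); g.observe(3, "USER", "birthdate", "ner")
g.observe(4, "USER", "gender", "regex")
t_pred = g.observe(4, "USER", "gender", "ner")   # onset fires at turn 4
```

The structural point is that isolation is enforced by the state layout itself (per-entity maps), not by a filtering step that could be bypassed.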

Article
Computer Science and Mathematics
Security Systems

Dina Ghanai Miandoab, Brit Riggs, Nicholas Navas, Bertrand Cambou

Abstract: In this paper we study the performance and feasibility of integrating a novel key encapsulation protocol into Quantum Key Distribution (QKD). The key encapsulation protocol includes a challenge-response pair (CRP). In our design, Alice and Bob derive identical cryptographic tables from shared challenges, allowing the ephemeral key to be encoded and recovered without disclosing helper data. Software simulations show error-free key recovery for quantum channel bit error rates up to 40% when using longer response lengths. Additionally, we designed the protocol to detect eavesdropping solely from the statistics of the received quantum stream, without sacrificing key bits for public comparison. We formalize the encoding and decoding model, analyze trade-offs between response length and latency, and report key recovery and error detection performance across different noise levels. The results indicate that this CRP-based multi-wavelength QKD protocol can reduce the reliance on classical reconciliation while preserving security in noisy settings.

Article
Computer Science and Mathematics
Security Systems

Javier Ruiz, Laura Fernández, María González

Abstract: In this study, we propose an RL-guided fuzzing scheduler that learns optimal mutation ordering and seed prioritization based on kernel coverage reward signals. The agent observes execution depth, subsystem transitions, and historical crash density to adapt exploration strategies. On Linux 5.10, the RL-fuzzer triggers 22% more unique crashes and 31% more deep paths compared with AFL-style schedulers. It identifies 7 previously unknown vulnerabilities, including mismanaged capability checks. Despite additional overhead from RL inference, throughput remains within 85% of baseline fuzzers. This study demonstrates the feasibility of applying RL-based policy learning to kernel fuzzing orchestration.

Article
Computer Science and Mathematics
Security Systems

Vyron Kampourakis, Michail Takaronis, Vasileios Gkioulos, Sokratis Katsikas

Abstract: Cyber Ranges (CRs) are complex socio-technical ecosystems, combining infrastructure resources, software services, learning mechanisms, and human-in-the-loop processes for cybersecurity training, education, and experimentation. However, their designs are conventionally captured by diverse architectural representations with little standardization, making them difficult to compare, integrate, and reason about in an automated manner. This paper proposes a novel framework that uniquely integrates the structural, functional, informational, and decisional aspects of CR platforms, formalizing them into a common semantic model. It models the architectural and learning characteristics of CRs, allowing the representation of design choices, operational processes, information resources, and capability development. The ontology is implemented in OWL 2 DL, which includes logical constraints and enables consistency checking and automated reasoning. Validation through instantiation and competency-question assessment shows that the model allows for structured querying, traceability across abstraction levels, and capability-level reasoning. The findings indicate that ontology-based modeling can serve as a basis for more formalized CR configuration analysis and capability-focused evaluation of diverse CR platforms.

Review
Computer Science and Mathematics
Security Systems

Yinggang Sun, Haining Yu, Wei Jiang, Xiangzhan Yu, Dongyang Zhan, Lixu Wang, Siyue Ren, Yue Sun, Tianqing Zhu

Abstract: The rapid evolution of Large Language Models (LLMs) from static text generators to autonomous agents has revolutionized their ability to perceive, reason, and act within complex environments. However, this transition from single-model inference to System Engineering Security introduces unique structural vulnerabilities—specifically instruction-data conflation, persistent cognitive states, and untrusted coordination—that extend beyond traditional adversarial robustness. To address the fragmented nature of the existing literature, this article presents a comprehensive and systematic survey of the security landscape for LLM-based agents. We propose a novel, structure-aware taxonomy that categorizes threats into three distinct paradigms: (1) External Interaction Attacks, which exploit vulnerabilities in perception interfaces and tool usage; (2) Internal Cognitive Attacks, which compromise the integrity of reasoning chains and memory mechanisms; and (3) Multi-Agent Collaboration Attacks, which manipulate communication protocols and collective decision-making. Adapting to this threat landscape, we systematize existing mitigation strategies into a unified defense framework that includes input sanitization, cognitive fortification, and collaborative consensus. In addition, we provide the first in-depth comparative analysis of agent-specific security evaluation benchmarks. The survey concludes by outlining critical open problems and future research directions, aiming to foster the development of next-generation agents that are not only autonomous but also provably secure and trustworthy.

Article
Computer Science and Mathematics
Security Systems

Seema Sirpal, Pardeep Singh, Om Pal

Abstract: Digital signatures serve as a crucial cryptographic primitive in e-governance systems for authenticating citizen-government interactions. Traditional methods (DSA, ECDSA) impose computational overheads on resource-limited endpoints and centralized verification servers. While complex-number cryptography promises theoretical efficiency through the Complex Discrete Logarithm Problem (CDLP), prior works often fail to meet the requirements of real-world applications. This paper advances lightweight cryptography by introducing LDSEGoV, a lightweight digital signature scheme for e-governance infrastructure. The proposed method overcomes the shortcomings of previous schemes by incorporating sound modular arithmetic for consistent verification and by using NIST-approved hash functions. Furthermore, we provide a comprehensive security analysis with formal proofs of existential unforgeability (EUF-CMA) for the proposed scheme in the Random Oracle Model. Experimental results show a 6.5× improvement in signing performance and a 24.76× improvement in verification performance over ECDSA, with a 61% reduction in signature size. These results demonstrate that LDSEGoV is suitable for authentication in large-scale digital governance systems.

Article
Computer Science and Mathematics
Security Systems

Stefan Ivanov Stoyanov, Maria Marinova, Nikolay Rumenov Kakanakov

Abstract: Preserving critical data, preventing unauthorized access, and securing communication are core aspects of information security. Implementing them in hardware is more reliable than in software, and various hardware solutions employ a separate computational unit capable of providing security enhancements. This article describes a heterogeneous security architecture with a security core tightly coupled to the CPU. A security interface that gives the security core direct control and monitoring of the CPU is proposed. The article analyzes how the interface interacts with the controlled and monitored CPU, explaining the benefits and why control over certain aspects is implemented for performance while others are implemented with minimal logic.

Article
Computer Science and Mathematics
Security Systems

David Cropley, Paul Whittington, Huseyin Dogan

Abstract: Every day, people log into websites and applications without much thought for the process, with an end goal or task in mind to be achieved through the service they are accessing. In many cases this is not an issue, but some people find this step hard, frustrating, or virtually impossible. For people who have a disability, complications can arise in this process, and we examine the nature of these problems, not only to create an empirical record but also with a view to diagnosing and remediating limiting factors. A series of interviews (n=15) is analyzed with Grounded Theory (GT) coding to produce a set of theorems derived directly from applying Constructivist principles to the data. As anticipated, the results illustrate that most disabled users find their capability to authenticate effectively reduced by various accessibility barriers. By way of inductive theorem building, this paper categorizes common traits that participants revealed during interviews. The main goal of this paper is to lead the way towards a framework that suggests ways to remedy the root causes of the accessibility complications that hinder the disabled community. It was noted during the study that most participants felt hindered when logging in due to their disability, which could imply a lack of accessibility for those using traditional authentication techniques. Maintaining security was also found to be important, so future work should find ways to ensure that disabled users are not left vulnerable when usability is improved for them.

Article
Computer Science and Mathematics
Security Systems

Emily C. Rogers, Daniel K. Foster, Sarah L. Chen, Michael J. Turner

Abstract: Memory objects in the kernel often remain accessible long after their safe lifetime, leading to use-after-free exploits. We present a lifetime-aware isolation model that assigns temporal protection windows to kernel objects. PKS permissions are revoked when objects exit valid lifetime states. Applied to six kernel allocators (SLUB, SLOB, SLAB), the technique eliminates 67% of UAF exploitability cases and shortens exposure windows by 72%. Benchmarking shows ≤4% overhead across memory-intensive workloads. This temporal model adds a new dimension to kernel compartmentalization by aligning memory protection with object lifecycles.
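The temporal-window idea can be sketched as a small state machine: each object carries a lifetime state, the permission check consults it on every access, and leaving the valid window revokes access so a dangling-pointer dereference faults instead of succeeding. This is a simplified model for illustration, not the paper's PKS mechanism or allocator integration:

```python
from enum import Enum, auto

class Lifetime(Enum):
    ALLOCATED = auto()
    IN_USE = auto()
    FREED = auto()

class KernelObject:
    # Accesses are permitted only inside the temporal protection window.
    VALID_FOR_ACCESS = {Lifetime.ALLOCATED, Lifetime.IN_USE}

    def __init__(self):
        self.state = Lifetime.ALLOCATED

    def access(self):
        if self.state not in self.VALID_FOR_ACCESS:
            raise PermissionError("access outside temporal protection window")
        self.state = Lifetime.IN_USE
        return True

    def free(self):
        # Exiting the valid lifetime revokes the permission window.
        self.state = Lifetime.FREED

obj = KernelObject()
ok = obj.access()            # in-window access succeeds
obj.free()
try:
    obj.access()             # use-after-free attempt
    uaf_blocked = False
except PermissionError:
    uaf_blocked = True       # revocation turns UAF into a fault
```

In the actual design the revocation would be enforced in hardware via protection-key permissions rather than a software check, which is what keeps the reported overhead low.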

Article
Computer Science and Mathematics
Security Systems

Ananya Kapoor, Vikram Subramanian, Rohan Mehta

Abstract: We present a formal verification framework for PKS-governed isolation rules in the kernel. A state-transition model is derived from kernel memory-access traces and checked against safety invariants using SMT solvers. On nine Linux subsystems, verification identifies 17 incorrect permission transitions in prototype isolation policies. After correction, the formally verified policy withstands all 28 injected attack attempts, demonstrating improved correctness. Overhead for model extraction and checking remains acceptable for offline validation workflows. This work shows that formal reasoning can significantly improve the reliability of kernel compartmentalization.

Article
Computer Science and Mathematics
Security Systems

Zhuldyz Tashenova, Askhatov Alim, Gabdullin Abzal, Abdikhaimov Yelnur, Raiskanov Rassul, Oryntay Al-Tarazi, Zhanat Abdugulova, Shirin Amanzholova

Abstract: Modern cybersecurity challenges span multiple layers, from human behavior and identity management to network communication and device security. This paper proposes a unified multi-layered security framework that integrates human-centric, identity-centric, and communication-centric defenses into a coherent architecture. Drawing on insights from diverse domains (industrial control systems, IoT, healthcare, blockchain, and quantum communications), we identify common defense-in-depth principles and interdependencies across layers. The study highlights the persistent gaps in current research, which often focuses on isolated layers or domain-specific models, and addresses these gaps by synthesizing a cross-domain framework. We develop a mixed-method methodology to compare and integrate multi-layer security mechanisms, and we implement a proof-of-concept risk assessment engine to evaluate the framework’s effectiveness. Preliminary results from this implementation demonstrate that combining layers yields significantly improved detection performance and resilience compared to single-layer baselines. The framework’s contributions include a comprehensive literature-driven model, an operational validation in a simulated environment, and guidelines for deploying multi-layer defenses in complex, interconnected infrastructures. Empirical findings confirm that an integrated multi-layer approach can adapt to varied threat scenarios and reduce vulnerabilities, underscoring the value of coordinated controls across technical and human factors. The proposed framework lays a foundation for future work on scalable, cross-layer cybersecurity architectures that better protect contemporary cyber-physical systems.

Article
Computer Science and Mathematics
Security Systems

José Israel Nadal Vidal

Abstract: Grover’s algorithm represents a cornerstone quantum attack against symmetric cryptography, offering a quadratic speedup for exhaustive key search. While its theoretical properties are well established, the practical robustness of Grover’s algorithm under non-ideal oracle execution remains insufficiently quantified, particularly in security-relevant contexts. This work presents a comprehensive quantitative analysis of Grover’s robustness against intermittent oracle failures, modeling realistic execution imperfections that may arise in practical quantum attack scenarios. Using a reduced two-dimensional representation of amplitude amplification, we define and measure a practical reliability threshold beyond which Grover’s success probability collapses abruptly. Our results demonstrate that this threshold decreases exponentially with problem size, scaling as \( p^{\ast}(n) \propto 2^{-n/2} \), implying that cryptographically relevant problem sizes require unrealistically high oracle reliability to preserve quantum advantage. For \( n=40 \), the oracle must fail less than once every \( \sim 800{,}000 \) iterations to preserve practical advantage, a requirement that becomes exponentially more stringent for larger key sizes. These findings do not contradict Grover’s theoretical correctness but highlight critical robustness limitations that must be explicitly considered when assessing quantum threats to symmetric cryptography. We provide actionable insights for post-quantum security assessment by translating abstract quantum assumptions into concrete reliability requirements, demonstrating that the practical feasibility of Grover-based attacks may be significantly lower than commonly assumed under realistic hardware constraints.
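The ~800,000 figure for n = 40 is consistent with requiring, on average, less than one oracle failure per full Grover run of roughly (π/4)·2^{n/2} iterations, which also reproduces the \( p^{\ast}(n) \propto 2^{-n/2} \) scaling. A small sanity check of that reading (the exact constant is the paper's; this is an assumed interpretation):

```python
import math

def grover_iterations(n):
    """Optimal number of Grover iterations for an n-bit key space,
    roughly (pi/4) * sqrt(2^n)."""
    return math.ceil((math.pi / 4) * 2 ** (n / 2))

def max_failure_rate(n):
    """Per-iteration oracle failure rate that keeps the expected number
    of failures per full run below one -- this decays as 2^(-n/2)."""
    return 1 / grover_iterations(n)

iters_40 = grover_iterations(40)   # about 8.2e5, matching ~800,000
# Each +2 bits of key size doubles the run length, so the tolerable
# failure rate halves: exponentially more stringent, as stated.
ratio = grover_iterations(42) / grover_iterations(40)
```

This makes the abstract's conclusion concrete: at n = 128, the tolerable per-iteration failure rate is smaller by a factor of about 2^{44} than at n = 40.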

Article
Computer Science and Mathematics
Security Systems

Vimal Teja Manne

Abstract: Card tokenization in payment gateways is usually anchored on hardware security modules (HSMs) that store keys and execute sensitive cryptographic operations under PCI controls. This model is mature but difficult to scale elastically and not well aligned with cloud native architectures that rely on horizontal growth on commodity servers. Trusted execution environments (TEEs), such as Intel SGX or confidential virtual machines, keep tokenization code and key material inside isolated enclaves while the surrounding gateway stack remains untrusted. This paper designs an enclave based token vault that sits in the real time card authorization path, constructs a matching HSM backed baseline that shares the same key hierarchy, token format, and storage layout, and compares them under trace driven workloads from a public anonymised credit card dataset. The core novelty is a head to head experimental comparison of HSM backed and enclave backed token vaults under a shared workflow and replayed transaction trace, which isolates the effect of the isolation boundary itself. We record end to end authorization latency, sustained throughput, and CPU utilisation for both designs, and study sensitivity to HSM round trip delay and operation mix. In our testbed configuration, the enclave backed vault delivers roughly 45% higher maximum stable throughput than the HSM backed baseline while keeping 99th percentile latency within online authorization budgets, at the cost of higher CPU utilisation on gateway hosts. The comparison highlights where enclave token vaults can approach or surpass HSM based deployments and where certified HSMs still offer stronger assurances or operational advantages, providing guidance for hybrid designs in PCI sensitive payment environments.

Article
Computer Science and Mathematics
Security Systems

Rongjie Zhou, Huaqun Guo, Francis E C Teo

Abstract: The advent of quantum computing poses significant challenges to traditional cryptographic systems, threatening the confidentiality, integrity and authenticity of digital communications. This paper investigates the integration of Post-Quantum Cryptography (PQC) algorithms into mobile communication systems to address these challenges. The study focuses on evaluating key PQC algorithms shortlisted by the National Institute of Standards and Technology (NIST), including CRYSTALS-Kyber, CRYSTALS-Dilithium, Falcon and SPHINCS+, within the context of 5G and future mobile network architectures. The research encompasses the design and implementation of an experimental framework involving mobile devices, servers, and cloud-based infrastructure to simulate real-world communication scenarios. Performance metrics such as key generation time, signature generation, encryption and decryption speed, and resource consumption were analyzed across various devices to identify algorithms suitable for mobile environments. The findings reveal that lattice-based algorithms, such as Kyber and Dilithium, offer a promising balance between security and efficiency, making them ideal for resource-constrained devices. In contrast, hash-based algorithms like SPHINCS+ exhibit higher computational demands, limiting their practicality in certain applications. This work highlights the importance of algorithm selection and hardware optimization in ensuring secure and efficient communications in the quantum era. By integrating theoretical advancements in PQC with practical applications, this research lays the foundation for quantum-resistant security in mobile networks, ensuring secure and future-ready digital communications.

Article
Computer Science and Mathematics
Security Systems

Vimal Teja Manne

Abstract: Adversaries can extend the communication distance of contactless systems with relays to make unauthorized transactions, leaving contactless payment systems increasingly vulnerable to relay attacks. We describe how attackers may use low-cost devices to conduct relay attacks and present a new application-layer software defense. Using Round Trip Time (RTT), our defense detects relay attacks with 100% success across more than 10,000 trials while keeping the false positive rate below 0.86%. Unlike many hardware-based defenses, it is easy to deploy and increases transaction time by no more than 0.22 seconds, so users will see little, if any, degradation in performance. Our results reveal serious vulnerabilities in contactless payment systems, and we provide a viable and practical way to prevent relay-based fraud.
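The intuition behind an RTT-based check is that relaying adds hops, and therefore latency, that a direct tap cannot hide. A minimal sketch in that spirit (the baseline calibration, 3-sigma threshold, and sample values are illustrative assumptions, not the paper's actual detector):

```python
import statistics

def is_relayed(rtt_samples_ms, baseline_ms, threshold_sigma=3.0):
    """Flag a transaction whose mean round-trip time sits far above
    the calibrated direct-contact baseline."""
    mu = statistics.mean(baseline_ms)
    sigma = statistics.stdev(baseline_ms)
    return statistics.mean(rtt_samples_ms) > mu + threshold_sigma * sigma

baseline = [4.9, 5.1, 5.0, 5.2, 4.8, 5.0]   # calibrated direct-tap RTTs (ms)
direct   = [5.0, 5.1, 4.9]                   # genuine transaction
relayed  = [42.0, 44.5, 43.2]                # relay hops add latency
```

Because the check runs at the application layer on timing data the terminal already has, it needs no new hardware, which is consistent with the abstract's claim of easy deployment and a sub-0.22-second overhead.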

Article
Computer Science and Mathematics
Security Systems

Vadim Raikhlin, Ruslan Gibadullin, Alexey Boyko

Abstract: Opportunities to improve the effectiveness of associative protection in scene analysis lie in changing the configurations of digital etalons (reference patterns) and in the transition from a decimal to a hexadecimal system when encoding object names and their coordinates. The relevance of this research stems from the need for a significant increase in the number of usable keys and from the desirability of further improving security strength. Based on a preliminary analysis, a rule for selecting digital reference configurations has been formulated from the condition of uniform distribution of bit inclusions in the pseudorandom-sequence (GAMMA) container when the decimal and hexadecimal systems are used for encoding. Algorithms for forming complete and limited test lists of permutations for experimental purposes have been developed. The results of the computational experiment confirmed the validity of the formulated rule. For the accepted configurations, estimates of the expected number of preserved etalon bits were obtained.

Article
Computer Science and Mathematics
Security Systems

Ioannis Dermentzis, Georgios Koukis, Vassilis Tsaoussidis

Abstract: As the threat landscape advances and pressure to reduce the energy footprint grows, it is crucial to understand how security mechanisms affect the power consumption of cloud-native platforms. Although several studies in this domain have investigated the performance impact of security practices or the energy characteristics of containerized applications, their combined effect remains largely underexplored. This study examines how common Kubernetes (K8s) safeguards influence cluster energy use across varying security configurations and workload conditions. By employing runtime and network monitoring, encryption, and vulnerability-scanning tools under diverse workloads (idle, stressed, realistic application), we compare the baseline system behavior against the energy consumption introduced by each security configuration. Our findings reveal that always-on security mechanisms impose a persistent baseline energy cost—occasionally making an idle protected cluster comparable to a heavily loaded unprotected one, while security under load results in substantial incremental overhead. In particular, service meshes and full-tunnel encryption show the largest sustained overhead, while eBPF telemetry, network security monitoring, and vulnerability scans add modest or short-lived costs. These findings provide useful security-energy insights and trade-offs for configuring K8s in resource-constrained settings, including IoT/smart city deployments.

Article
Computer Science and Mathematics
Security Systems

Kenan Sansal Nuray, Oren Upton, Nicole Lang Beebe

Abstract: This study presents a quantitative evaluation of the EMBA firmware security analysis tool applied to Internet of Things (IoT) and embedded device firmware in two deployment environments: a standalone personal computer and a Microsoft Azure cloud-based virtual machine. The study addresses a gap in existing research regarding how deployment choices affect performance, cost, and operational characteristics of firmware security analysis. Using identical EMBA configurations and analysis modules, firmware images of varying sizes were analyzed, while execution time, detected vulnerabilities, and resource utilization were systematically recorded. The results demonstrate that scan duration is influenced by both firmware size and deployment environment. Specifically, using EMBA v1.5.0, a 25.5 MB firmware image required approximately 14 hours on a standalone system and over 25 hours on Azure Cloud, whereas a 30.2 MB image completed in approximately 18 hours locally and 17 hours on Azure Cloud. Despite these differences in execution time, the type and number of identified vulnerabilities were largely consistent across both environments, indicating comparable analytical coverage. A cost assessment shows that cloud-based execution incurred approximately US $250 for a limited set of analyses, while standalone deployment required a higher initial investment but provided predictable long-term costs. Overall, this deployment-focused evaluation offers empirical insight into performance, cost, and operational trade-offs, supporting informed decision-making for IoT security practitioners selecting local or cloud-based firmware analysis environments.

Article
Computer Science and Mathematics
Security Systems

Mehrnoush Vaseghipanah, Sam Jabbehdari, Hamidreza Navidi

Abstract: Network operators increasingly rely on abstracted telemetry (e.g., flow records and time-aggregated statistics) to achieve scalable monitoring of high-speed networks, but this abstraction fundamentally constrains the forensic and security inferences that can be supported from network data. We present a design-time audit framework that evaluates which threat hypotheses become non-supportable as network evidence is transformed from packet-level traces to flow records and time-aggregated statistics. Our methodology examines three evidence layers (L0: packet headers, L1: IP Flow Information Export (IPFIX) flow records, L2: time-aggregated flows), computes a catalog of 13 network-forensic artifacts (e.g., destination fan-out, inter-arrival time burstiness, SYN-dominant connection patterns) at each layer, and maps artifact availability to tactic support using literature-grounded associations with MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK). Applied to backbone traffic from the MAWI Day-In-The-Life (DITL) archive, the audit reveals non-monotonic transformation: inference coverage decreases from 9 to 7 out of 9 evaluated ATT&CK tactics, while coverage of defensive countermeasures (MITRE D3FEND) increases at L1 (7→8 technique categories) then decreases at L2 (8→7), reflecting a shift from behavioral monitoring to flow-based controls. The framework provides network architects with a practical tool to configure telemetry systems (e.g., IPFIX exporters, P4 pipelines) to reason about and provision minimum forensic coverage.

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.

© 2026 MDPI (Basel, Switzerland) unless otherwise stated