Computer Science and Mathematics

Article
Computer Science and Mathematics
Computer Networks and Communications

Robert Campbell

Abstract: The impending threat of cryptographically relevant quantum computers (CRQCs) necessitates a comprehensive migration to post-quantum cryptography (PQC) across all computing domains. While commercial Cryptographic Asset Discovery and Inventory (CADI) tooling has emerged to support enterprise IT environments, embedded systems, which dominate defense platforms, tactical communications, and critical infrastructure, remain inadequately addressed. This paper presents a comprehensive framework for embedded systems-specific CADI, establishing a six-class taxonomy based on cryptographic characteristics and discovery feasibility. We show through feasibility analysis that fundamental constraints of embedded systems, including severe resource limitations, mission/operational continuity requirements (often including availability and safety imperatives), certification requirements, and hardware-bound cryptography, render IT-centric CADI approaches largely ineffective. Documentation-based discovery through vendor Cryptographic Bills of Materials (CBOMs) should typically serve as the primary methodology, with automated scanning relegated to supplemental verification. We analyze technical barriers to detection, including static linking, stripped binaries, cryptographic hardware offload, and proprietary implementations. The framework addresses lightweight cryptography considerations for constrained devices that are unable to accommodate standard PQC algorithm sizes, and examines lifecycle and certification constraints, including those related to DO-178C, IEC 62443, and Common Criteria. We establish planning-assumption discovery accuracy expectations (Table 6) ranging from 55–99% by embedded system class, and propose detection methodologies calibrated to each class. The paper concludes with integration pathways for Department of Defense Risk Management Framework processes and PQC migration planning.
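
The CBOM-first methodology is easier to picture with a concrete record. Below is a minimal Go sketch of the kind of cryptographic-asset entry a vendor CBOM might carry for an embedded component; the field names are simplified assumptions rather than the CycloneDX CBOM schema, and the values are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// CryptoAsset is an illustrative shape for one CBOM record describing a
// cryptographic asset in an embedded component. Field names are simplified
// assumptions, not the CycloneDX CBOM schema verbatim.
type CryptoAsset struct {
	Component         string `json:"component"`
	Primitive         string `json:"primitive"` // e.g. "signature", "kem"
	Algorithm         string `json:"algorithm"` // e.g. "ECDSA-P256"
	KeyBits           int    `json:"keyBits"`
	HardwareBound     bool   `json:"hardwareBound"`     // e.g. keys fused in a secure element
	QuantumVulnerable bool   `json:"quantumVulnerable"` // flags assets needing PQC migration
}

func main() {
	asset := CryptoAsset{
		Component: "bootloader", Primitive: "signature",
		Algorithm: "ECDSA-P256", KeyBits: 256,
		HardwareBound: true, QuantumVulnerable: true,
	}
	out, _ := json.MarshalIndent(asset, "", "  ")
	fmt.Println(string(out))
}
```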

Technical Note
Computer Science and Mathematics
Computer Networks and Communications

Daisuke Sugisawa

Abstract: Since approximately 2005, major processor manufacturers have shifted their architectural focus from instruction-level parallelism (ILP) toward multicore and manycore parallelism to achieve higher performance. Rather than relying on deeper pipelines and speculative execution, performance gains have increasingly been realized through thread-level parallelism (TLP). Consequently, the responsibility for efficiently utilizing processor resources has transitioned from hardware mechanisms to software implementations. This technical note examines design strategies for achieving deterministic, high-throughput packet processing on manycore architectures using the Data Plane Development Kit (DPDK). It presents a simplified Packet Gateway (PGW) pipeline implementation, analyzing cache-coherence effects, NUMA-local memory allocation, and multicore scheduling patterns critical to maintaining per-packet processing budgets under nanosecond-level constraints.
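
To make the per-packet budget concrete: the note's pipeline is built on DPDK in C, but the budget bookkeeping pattern can be sketched in a few lines. The Go sketch below is a conceptual illustration only; the 80 ns budget and the burst size are assumed values, not figures from the note.

```go
package main

import (
	"fmt"
	"time"
)

// Conceptual sketch of a run-to-completion burst loop with a per-packet
// budget check. The note's pipeline uses DPDK in C with NUMA-local
// mempools; this Go version only illustrates the budget bookkeeping.
const budget = 80 * time.Nanosecond // illustrative per-packet budget

func processBurst(pkts [][]byte) (overBudget int) {
	for _, p := range pkts {
		start := time.Now()
		_ = len(p) // placeholder for parse/lookup/forward stages
		if time.Since(start) > budget {
			overBudget++
		}
	}
	return
}

func main() {
	burst := make([][]byte, 32) // typical RX burst size (assumed)
	for i := range burst {
		burst[i] = make([]byte, 64)
	}
	fmt.Println("packets over budget:", processBurst(burst))
}
```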

Article
Computer Science and Mathematics
Computer Networks and Communications

Robert E. Campbell

Abstract: The Next-Generation Security Triad—integrating post-quantum cryptography (PQC), Zero Trust Architecture (ZTA), and AI security—provides comprehensive protection for autonomous sensing systems. However, existing frameworks assume enterprise connectivity that is unavailable in tactical environments operating under Disconnected, Intermittent, and Low-bandwidth (DIL) conditions. This paper presents the Tactical Edge Triad Architecture (TETA), adapting enterprise substrate components for disconnected operations through five modules: Edge Cryptographic Module (ECM), Tactical Identity Cache (TIC), Edge Analytics Engine (EAE), Mission Policy Store (MPS), and the Autonomous AI Governance Framework (AAGF). Three mechanisms address DIL-specific challenges: Authority Decay provides a DIL-specific operationalization of continuous verification through progressive privilege reduction with formal attack mitigations; Pre-Mission Consensus Packaging provides cryptographically signed governance envelopes satisfying human oversight requirements; and Triad Integration demonstrates cross-pillar security dependencies. The AAGF systematically adapts established governance mechanisms (behavioral envelopes, watchdog models, autonomy downgrade, and consensus-backed approval) for disconnected operations. Analytical evaluation across two tactical scenarios demonstrates feasibility: PQC overhead estimates derive from published pqm4 benchmarks; governance function estimates (policy evaluation, watchdog inference, audit logging) are engineering projections based on comparable embedded workloads. Combined governance latency is estimated at ~15 ms on Cortex-A53 class processors (±40%), with a 0.5% steady-state bandwidth increase for PQC. TETA enables Triad implementation at the tactical edge while preserving security properties and governance accountability.
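
The Authority Decay mechanism can be illustrated with a small sketch: privileges step down as time since the last successful verification grows, rather than being revoked outright. The Go sketch below is hypothetical; the tier names and thresholds are assumptions, not values from the paper.

```go
package main

import (
	"fmt"
	"time"
)

// Tier models a privilege tier; higher values mean fewer privileges.
type Tier int

const (
	TierFull Tier = iota
	TierDegraded
	TierMinimal
	TierQuarantined
)

// decayedTier maps elapsed time since the last successful verification
// to a privilege tier. Thresholds are illustrative, not from the paper.
func decayedTier(sinceVerify time.Duration) Tier {
	switch {
	case sinceVerify < 15*time.Minute:
		return TierFull
	case sinceVerify < 2*time.Hour:
		return TierDegraded
	case sinceVerify < 12*time.Hour:
		return TierMinimal
	default:
		return TierQuarantined
	}
}

func main() {
	for _, d := range []time.Duration{5 * time.Minute, 3 * time.Hour, 24 * time.Hour} {
		fmt.Printf("%v since verification -> tier %d\n", d, decayedTier(d))
	}
}
```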

Article
Computer Science and Mathematics
Computer Networks and Communications

Vimal Teja Manne

Abstract: To improve the efficiency of decentralized payment systems for microservices, this paper proposes the use of blockchain technology to allow parties to transact without mutual trust and to remove the need for central intermediaries. To this end, the paper proposes automated smart contracts and scalable off-chain technology to enable efficient transactions and reduced computational resource costs. Empirical testing indicates that the system outlined in this paper achieves significant reductions in costs and processing latencies compared to traditional centralized payment processing systems. As a result, this paper shows that the proposed system is a good alternative to traditional microservice-based payment systems for real-time market payments. This research supports increased scalability and security in the digital transaction environment.

Article
Computer Science and Mathematics
Computer Networks and Communications

Burke Geceyatmaz

,

Fatma Tansu Hocanın

Abstract: Vehicular Ad-hoc Networks (VANETs) face critical challenges regarding intermittent connectivity and latency due to high node mobility, often resulting in a performance trade-off between reactive and proactive routing paradigms. This study aims to resolve these inherent limitations and ensure reliable communication in volatile environments. We propose a novel context-aware framework, the Dynamic Hybrid Routing Protocol (DHRP), which integrates Ad hoc On-Demand Distance Vector (AODV) and Optimized Link State Routing (OLSR). Distinguished by a predictive multi-criteria switching logic and a hysteresis-based stability mechanism, the proposed method employs a synergistic cross-layer framework that adapts transmission power and routing strategy in real time. Validated through extensive simulations using NS-3 and SUMO, experimental results demonstrate that the protocol outperforms traditional baselines and contemporary benchmarks across all key metrics. Specifically, the system maintains a Packet Delivery Ratio (PDR) exceeding 90%, ensures end-to-end delays remain under the safety-critical 40 ms threshold, and achieves energy savings of up to 60%. In conclusion, DHRP successfully resolves the routing performance dichotomy, providing a scalable, energy-efficient foundation for next-generation Intelligent Transportation Systems (ITS) in which reliable safety messaging is paramount.
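
The predictive multi-criteria switching logic with hysteresis can be sketched compactly: compute a context score and switch protocols only when the score leaves a dead band around the threshold. The Go sketch below uses assumed weights and thresholds, not the paper's calibrated values.

```go
package main

import "fmt"

// Mode selects between reactive (AODV) and proactive (OLSR) routing.
type Mode int

const (
	AODV Mode = iota
	OLSR
)

// Context holds normalized inputs to the switching decision.
type Context struct {
	Density   float64 // neighbor density in [0,1]
	Mobility  float64 // relative node speed in [0,1]
	LinkScore float64 // link stability in [0,1]
}

// score favors proactive routing in dense, stable topologies.
// Weights are illustrative assumptions.
func score(c Context) float64 {
	return 0.4*c.Density + 0.4*c.LinkScore - 0.2*c.Mobility
}

// nextMode applies hysteresis: switch only when the score crosses a
// band around the threshold, which suppresses oscillation near it.
func nextMode(cur Mode, s float64) Mode {
	const up, down = 0.55, 0.45 // hysteresis band (illustrative)
	switch cur {
	case AODV:
		if s > up {
			return OLSR
		}
	case OLSR:
		if s < down {
			return AODV
		}
	}
	return cur
}

func main() {
	m := AODV
	for _, c := range []Context{{0.9, 0.2, 0.8}, {0.5, 0.5, 0.5}, {0.2, 0.9, 0.3}} {
		m = nextMode(m, score(c))
		fmt.Printf("score=%.2f mode=%d\n", score(c), m)
	}
}
```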

Technical Note
Computer Science and Mathematics
Computer Networks and Communications

Daisuke Sugisawa

Abstract: In real-time delivery of AV1 over MPEG2-TS using HTTP/3 over QUIC, two fundamental approaches can be identified for improving video quality. The first is bitrate adaptation, represented by Adaptive Bitrate (ABR), which estimates available bandwidth and selects the optimal resolution and frame rate within that range. The second approach focuses on low-latency control, reducing RTT and ACK delay to increase effective throughput. This study emphasizes the latter perspective and, through terminal-side QUIC observation, suggests that user-space buffer saturation occurring in high-RTT mobile environments under an nginx-quic HTTP/3 termination implementation is an implementation-originated problem that may degrade real-time video QoS. Furthermore, this study aims to evaluate, through empirical measurements, the feasibility of edge designs that enable packet-level processing (duplication, FEC, pacing) at QUIC termination points near the terminal. Through empirical experiments conducted across domestic and international cloud environments, we evaluate how buffer design at distribution servers affects video quality. This paper shares experimental methods, observations, and guidelines for designing real-time streaming systems using HTTP/3 over QUIC.
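
The packet-level edge processing the study contemplates (duplication, FEC, pacing) can be illustrated by its simplest member, duplication. The Go sketch below is a bare UDP relay that forwards each datagram twice; the addresses are placeholders, and a production QUIC-aware relay would need considerably more.

```go
package main

import (
	"log"
	"net"
)

// Sketch of the edge-relay idea: forward each inbound UDP datagram to
// the client twice (simple duplication to mask isolated losses; FEC and
// pacing omitted). Addresses are placeholders, not from the study.
func main() {
	in, err := net.ListenPacket("udp", ":4433")
	if err != nil {
		log.Fatal(err)
	}
	out, err := net.Dial("udp", "203.0.113.10:4433") // hypothetical client
	if err != nil {
		log.Fatal(err)
	}
	buf := make([]byte, 1500)
	for {
		n, _, err := in.ReadFrom(buf)
		if err != nil {
			continue
		}
		out.Write(buf[:n]) // first copy
		out.Write(buf[:n]) // duplicate
	}
}
```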

Article
Computer Science and Mathematics
Computer Networks and Communications

Krishna Bajpai

Abstract: The evolution of high-performance computing (HPC) interconnects has produced specialized fabrics such as InfiniBand [1], Intel Omni-Path, and NVIDIA NVLink [2], each optimized for distinct workloads. However, the increasing convergence of HPC, AI/ML, quantum, and neuromorphic computing requires a unified communication substrate capable of supporting diverse requirements, including ultra-low latency, high bandwidth, collective operations, and adaptive routing. We present HyperFabric Interconnect (HFI), a novel design that combines the strengths of existing interconnects while addressing their scalability and workload-fragmentation limitations. Our evaluation on simulated clusters demonstrates HFI’s ability to reduce job completion time (JCT) by up to 30%, improve tail-latency consistency by 45% under mixed loads, achieve 4× better jitter control in latency-sensitive applications, and sustain efficient scaling across heterogeneous workloads. Beyond simulation, we provide an analytical model and deployment roadmap that highlight HFI’s role as a converged interconnect for the exascale and post-exascale era.

Article
Computer Science and Mathematics
Computer Networks and Communications

Galia Novakova Nedeltcheva

,

Denis Chikurtev

,

Eugenia Kovatcheva

Abstract: While smart campuses continue to evolve alongside technological advancements, existing data models often fail to comprehensively integrate the diverse array of Internet of Things (IoT) devices and platforms. This study presents a unified data model tailored to the operational requirements of campus decision-makers, facilitating seamless interconnection across heterogeneous systems. By integrating IoT, cloud computing, big data analytics, and artificial intelligence, the proposed model seeks to advance campus operations, sustainability, and educational outcomes by fostering cross-system harmonization and interoperability. The analysis demonstrates that a layered architecture—comprising data acquisition, processing and storage, analytics and decision support, application presentation, and security and privacy—constitutes the foundation of robust smart campus data models. The system is structured to collect, refine, process, and archive raw data for future reference. Analytics and decision support mechanisms generate actionable insights; application presentation delivers results to end users, and security and privacy measures safeguard information. The study further contends that artificial intelligence techniques, including predictive analytics (which forecasts outcomes using historical data), personalized learning (which customizes content to individual needs), and edge intelligence (which processes data at its source), are essential for advancing these models. These enhancements yield measurable benefits, including a 15% increase in student retention through personalized learning and a 20% reduction in energy consumption through predictive energy management [1]. Emerging technologies such as 5G networks, edge and fog computing, blockchain, and three-dimensional geographic information systems (3D GIS) are instrumental in enhancing campus intelligence. For example, the adoption of 5G has led to a 30% increase in data transmission speeds, thereby enabling real-time analytics and reliable connectivity (5G and IoT: How 5G is Transforming the Internet of Things, 2024). Building upon these technological advancements, innovative data models are shown to facilitate predictive energy management, resource optimization, and performance analytics within smart campuses. Nevertheless, ongoing challenges persist, including those related to system interoperability, scalability, and data governance. This study provides actionable design guidelines and offers a balanced evaluation of the achievements and challenges of smart campus implementations.

Article
Computer Science and Mathematics
Computer Networks and Communications

Robert E. Campbell

Abstract: The rapid evolution of the global threat landscape has necessitated a fundamental shift in the architectural foundations of federal cybersecurity. The emergence of cryptographically relevant quantum computers (CRQCs), sophisticated adversarial machine learning techniques, and the failure of perimeter-based defense models have rendered traditional frameworks insufficient. This paper presents the Next-Generation Security Triad—an integrated operational framework unifying post-quantum cryptography (PQC), Zero Trust Architecture (ZTA), and AI security—as a modernization substrate for federal compliance. Unlike prior conceptual integration efforts, this work delivers standards-aligned, modular overlays with explicit control mappings, quantitative benchmark criteria for each pillar, and reproducible pilot-ready artifacts enabling immediate federal adoption. The framework addresses the synchronization problem facing agencies managing these initiatives as independent compliance silos with distinct funding streams, timelines, and specialized workforces. Through a substrate-based architecture comprising Cryptographic Services Infrastructure, Identity and Access Management Fabric, Telemetry and Analytics Pipeline, and Policy Orchestration Engine, the triad establishes interoperable services enabling coordinated progress across all three security domains while satisfying NIST and DoD compliance requirements.

Article
Computer Science and Mathematics
Computer Networks and Communications

Paul Scalise

,

Michael Hempel

,

Hamid Sharif

Abstract: 5G systems have delivered on their promise of seamless connectivity and efficiency improvements since their global rollout began in 2020. However, maintaining subscriber identity privacy on the network remains a critical challenge. The 3GPP specifications define numerous identifiers associated with the subscriber and their activity, all of which are critical to the operations of cellular networks. While the introduction of the Subscription Concealed Identifier (SUCI) protects users across the air interface, the 5G Core Network (CN) continues to operate largely on the basis of the Subscription Permanent Identifier (SUPI), the 5G equivalent of the IMSI from prior generations, for functions such as authentication, billing, session management, emergency services, and lawful interception. Furthermore, the SUPI relies solely on the transport layer's encryption for protection from malicious observation and tracking across activities. The crucial role of the largely unprotected SUPI and other closely related identifiers creates a high-value target for insider threats, malware campaigns, and data exfiltration, effectively rendering the Mobile Network Operator (MNO) a single point of failure for identity privacy. In this paper, we analyze the architectural vulnerabilities of identity persistence within the CN, challenging the legacy "honest-but-curious" trust model. To quantify the extent of subscriber identities in flight in the CN, we conducted a study of the occurrence of SUPI as a parameter throughout the collection of 5G VNF (Virtual Network Function) API (Application Programming Interface) schemas. Our extensive analysis of the 3GPP specifications for Release 18 revealed a total of 5,670 distinct parameter names being used across all API calls, with a total of 22,478 occurrences across the API schema. More importantly, it revealed a highly skewed distribution in which subscriber identity plays a pivotal role. Specifically, the SUPI parameter ranks as the second most frequent field. We found that SUPI occurs both as a direct parameter ("supi") and under 43 other parameter names that all relate to the use of SUPI. For these 44 different parameter names we could track a total of 1,531 occurrences. At over 6.8% of all parameter occurrences, this constitutes a disproportionately large share of total references. We also detail scenarios where subscriber privacy can be compromised by internal actors and review future privacy-preserving frameworks that aim to decouple subscriber identity from network operations. By suggesting a shift towards a zero-trust model for CN architecture and providing subscribers with greater control over their identity management, this work also offers a potential roadmap for mitigating insider threats in current deployments and influencing specific standardization and regulatory requirements for future 6G and Beyond-6G networks.
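
The schema survey lends itself to a simple reproduction sketch: walk a checkout of the 3GPP OpenAPI definitions and count parameter names containing "supi". The Go sketch below assumes a local directory of YAML schema files and a naive regex match; the paper's actual methodology may differ.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"regexp"
	"strings"
)

// Counts occurrences of SUPI-related parameter names across 3GPP OpenAPI
// schema files. The directory layout and the simple substring regex are
// assumptions for this sketch, not the paper's exact method.
func main() {
	re := regexp.MustCompile(`(?i)[A-Za-z]*supi[A-Za-z]*`)
	counts := map[string]int{}
	root := "./5G_APIs" // hypothetical checkout of the 3GPP OpenAPI specs
	filepath.Walk(root, func(p string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() || !strings.HasSuffix(p, ".yaml") {
			return nil
		}
		data, err := os.ReadFile(p)
		if err != nil {
			return nil
		}
		for _, m := range re.FindAllString(string(data), -1) {
			counts[m]++
		}
		return nil
	})
	for name, n := range counts {
		fmt.Printf("%-30s %d\n", name, n)
	}
}
```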

Article
Computer Science and Mathematics
Computer Networks and Communications

Nurul I. Sarkar

,

Sonia Gul

Abstract: This paper presents a performance evaluation of IEEE 802.11ax (Wi-Fi 6) networks using a combination of real-world testbed measurements and simulation-based analysis. The paper investigates the combined effect of received signal strength (RSSI), application bitrate, and network topology on video playback delays in 802.11ax. The effects of frequency band and client density on system performance are also investigated. Testbed measurements and field experiments were conducted in indoor environments using dual-band (2.4 GHz and 5 GHz) ad hoc and infrastructure network configurations. OMNeT++-based simulations are conducted to explore scalability by increasing the number of wireless clients. Results obtained show that infrastructure-based deployments consistently provide more stable video playback than ad hoc networks, particularly under varying RSSI conditions. While the 5 GHz band delivers higher throughput at short range, the 2.4 GHz band offers improved coverage at reduced system performance. Simulation results further demonstrate significant degradation in throughput and latency as client density increases. To contextualize the observed performance, a baseline comparison with 802.11ac is incorporated, highlighting the relative improvements and remaining limitations of 802.11ax under comparable signal and load conditions. The findings provide practical deployment insights for video-centric wireless networks and inform the optimization of next-generation Wi-Fi.

Article
Computer Science and Mathematics
Computer Networks and Communications

Robert Campbell

Abstract: The U.S. Department of Defense (DoD) faces three concurrent cybersecurity modernization mandates that together constitute what we term the Next-Generation Security Triad: post-quantum cryptography (PQC) migration by 2030–2035, Zero Trust Architecture (ZTA) implementation by FY2027, and AI system security assurance under CDAO governance. These Triad components operate under distinct timelines, funding streams, workforce competencies, and compliance frameworks, creating significant coordination challenges for CIOs, Commanding Officers, Program Management Offices, and Authorizing Officials. Current approaches treat these as separate migrations, resulting in duplicative investments, architectural misalignment, and uncoordinated risk exposure. This paper argues that the solution is not to merge the three Triad programs (each serves distinct operational purposes) but to establish a shared modernization substrate. We present a unified architectural framework comprising four substrate layers: (1) cryptographic services infrastructure, (2) identity and access management fabric, (3) telemetry and analytics pipeline, and (4) policy orchestration engine. This substrate-based approach enables each Triad component to proceed at its own pace while ensuring interoperability, reducing lifecycle technical debt, and providing measurable compliance pathways.

Article
Computer Science and Mathematics
Computer Networks and Communications

Rafe Alasem

Abstract: Wireless sensor networks (WSNs) have become a game-changing technology for healthcare patient monitoring, providing continuous and non-invasive monitoring in clinical and remote settings. However, a major obstacle still exists in the effective selection of sensor nodes in a WSN for patient monitoring while taking energy consumption into account. In response to this problem, we offer an Energy-Efficient Forward Greedy Algorithm (EEFGA) for sensor node selection. This algorithm aims to maximize patient coverage while minimizing energy use, carefully balancing resource management and healthcare quality. The procedure involves a step-by-step selection of the sensor nodes that deliver the most significant coverage increases while remaining below predefined energy restrictions. We investigate the mathematical foundations of the method, examine its practical applicability in healthcare contexts, and demonstrate through comprehensive simulations that EEFGA extends network lifetime by 32% compared to HEED, reduces end-to-end delay to 2.3 ± 0.3 ms, and achieves 96.2% patient coverage while maintaining energy efficiency of 0.045 J/packet in 150-node healthcare monitoring networks.
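
The forward greedy step is the heart of EEFGA: repeatedly add the node with the best marginal coverage per unit energy until the budget is exhausted. The Go sketch below shows that step under assumed data; it is not the authors' implementation and omits the routing and delay modeling.

```go
package main

import "fmt"

// Node is an illustrative sensor node with an energy cost and the set
// of patients it covers.
type Node struct {
	ID     int
	Energy float64
	Covers []int // patient IDs covered
}

// greedySelect repeatedly picks the node with the largest marginal
// coverage per unit energy, stopping at the energy budget.
func greedySelect(nodes []Node, budget float64) []int {
	covered := map[int]bool{}
	used := map[int]bool{}
	var picked []int
	spent := 0.0
	for {
		best, bestGain := -1, 0.0
		for i, n := range nodes {
			if used[i] || spent+n.Energy > budget {
				continue
			}
			gain := 0
			for _, p := range n.Covers {
				if !covered[p] {
					gain++
				}
			}
			if g := float64(gain) / n.Energy; gain > 0 && g > bestGain {
				best, bestGain = i, g
			}
		}
		if best < 0 {
			break // budget exhausted or no remaining coverage gain
		}
		used[best] = true
		spent += nodes[best].Energy
		picked = append(picked, nodes[best].ID)
		for _, p := range nodes[best].Covers {
			covered[p] = true
		}
	}
	return picked
}

func main() {
	nodes := []Node{
		{1, 1.0, []int{1, 2, 3}},
		{2, 0.5, []int{3, 4}},
		{3, 2.0, []int{1, 2, 3, 4, 5}},
	}
	fmt.Println(greedySelect(nodes, 2.0)) // illustrative budget
}
```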

Article
Computer Science and Mathematics
Computer Networks and Communications

Geetanjali Rathee

,

Hemraj Saini

,

Chaker Abdelaziz Kerrache

,

Ramzi Djemai

,

Mohamed Chahine Ghanem

Abstract: This paper proposes an efficient and automated smart healthcare communication framework that integrates a two-level filtering scheme with a multi-objective Genetic Algorithm (GA) to enhance the reliability, timeliness, and energy efficiency of Internet of Medical Things (IoMT) systems. In the first stage, physiological signals collected from heterogeneous sensors (e.g., blood pressure, glucose level, ECG, patient movement and ambient temperature) are pre-processed using an adaptive least-mean-square (LMS) filter to suppress noise and motion artefacts, thereby improving signal quality prior to analysis. In the second stage, a GA-based optimization engine selects optimal routing paths and transmission parameters by jointly considering end-to-end delay, signal-to-noise ratio (SNR), energy consumption and packet loss ratio (PLR). The two-level filtering strategy ensures that only denoised and high-priority records are forwarded for further processing, enabling real-time clinical decision support and prioritization of critical patients. The proposed mechanism is evaluated via extensive simulations involving 30–100 devices and multiple generations, and is benchmarked against two existing smart healthcare schemes. The results demonstrate that the integrated GA and filtering approach significantly reduces end-to-end delay, communication latency and energy consumption, while improving packet delivery ratio, throughput, SNR and overall Quality of Service (QoS). These findings indicate that the proposed framework provides a scalable and intelligent communication backbone for early disease detection, continuous monitoring and timely intervention in smart healthcare environments.
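
The first-stage adaptive LMS filter follows the standard update rule: predict the current sample from recent inputs, then adjust the weights in proportion to the error. Below is a minimal Go sketch with an assumed tap count and step size; it illustrates the update rule only, not the paper's full two-level pipeline.

```go
package main

import "fmt"

// lmsFilter runs a basic adaptive LMS filter: y[n] = w . x[n],
// e[n] = d[n] - y[n], w <- w + mu * e[n] * x[n]. Tap count and step
// size are illustrative assumptions.
func lmsFilter(input, desired []float64, taps int, mu float64) []float64 {
	w := make([]float64, taps)
	out := make([]float64, len(input))
	for n := taps; n < len(input); n++ {
		// Predict from the last `taps` samples.
		var y float64
		for k := 0; k < taps; k++ {
			y += w[k] * input[n-1-k]
		}
		e := desired[n] - y
		// LMS weight update, proportional to the error.
		for k := 0; k < taps; k++ {
			w[k] += mu * e * input[n-1-k]
		}
		out[n] = y
	}
	return out
}

func main() {
	in := []float64{1, 2, 3, 4, 5, 6, 7, 8}
	// Using the input itself as the desired signal trains a predictor.
	fmt.Println(lmsFilter(in, in, 2, 0.01))
}
```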

Article
Computer Science and Mathematics
Computer Networks and Communications

Khalid Kandali

,

Said Nouh

,

Lamyae Bennis

,

Hamid Bennis

Abstract: The convergence of Vehicular Ad-Hoc Networks (VANETs) and the Internet of Things (IoT) is giving rise to the Internet of Vehicles (IoV), a key enabler of next-generation intelligent transportation systems. This survey provides a comprehensive analysis of the architectural, communication, and computing foundations that support VANET–IoT integration. We examine the roles of cloud, edge, and in-vehicle computing, and compare major V2X and IoT communication technologies, including DSRC, C-V2X, MQTT, and CoAP. The survey highlights how sensing, communication, and distributed intelligence interact to support applications such as collision avoidance, cooperative perception, and smart traffic management. We identify four central challenges—security, scalability, interoperability, and energy constraints—and discuss how these issues shape system design across the network stack. In addition, we review emerging directions including 6G-enabled joint communication and sensing, reconfigurable surfaces, digital twins, and quantum-assisted optimization. The survey concludes by outlining open research questions and providing guidance for the development of reliable, efficient, and secure VANET–IoT systems capable of supporting future transportation networks.

Review
Computer Science and Mathematics
Computer Networks and Communications

Arshee Ahmed

,

Tan Kim Geok

,

Hajra Masood

Abstract: This study examines object detection techniques in autonomous vehicles in depth. Object detection algorithms for autonomous vehicles must meet two requirements: first, a high level of accuracy; second, real-time detection speed. Object detection is the core of an autonomous vehicle, enabling self-driving cars to precisely sense their environment and react appropriately to the objects they detect. In practical settings, however, developing a reliable and highly precise system still presents significant difficulties because of restrictions such as fluctuating ambient conditions, sensor limitations, and computational resource constraints. Sensor degradation in bad weather or low light, for instance, can significantly reduce detection accuracy. Considering this, we propose to incorporate predictive maintenance into object detection. We also highlight the performance metrics used in the proposed framework. Further, a detailed literature review is included regarding object detection in autonomous vehicles, especially in adverse weather conditions, followed by an analysis of the current work.

Article
Computer Science and Mathematics
Computer Networks and Communications

Michael Chen

,

Sofia Martínez

,

Daniel Hughes

Abstract: Fast fault handling is important for industrial IoT networks because short link breaks can interrupt sensors, robots, and shop-floor control. This work adds eBPF packet filters to a Go runtime so that routing can change on the device when a link fails. We tested several failure cases with mixed traffic. The setup switched routes in about 9.4 ms, while an SDN controller needed 22.8 ms. Packet loss during short breaks fell by 38%, and control traffic dropped by 27%. The design also lets users load simple rule updates to fit different field protocols without changing main code. These results show that placing fast reaction logic on edge devices can improve service stability in factory networks. The method fits small and mid-size systems, though wider field trials and long-term checks are still needed.
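
The on-device reaction the paper measures can be pictured as a local watchdog that swaps the active next hop after missed heartbeats, with no controller round trip. The Go sketch below shows only that control logic; the eBPF data-path filters the paper attaches are not shown, and all names and timeouts are illustrative.

```go
package main

import (
	"fmt"
	"time"
)

// Route is a minimal stand-in for a forwarding entry.
type Route struct{ NextHop string }

// Failover marks a link down after a heartbeat timeout and swaps the
// active next hop locally, without waiting for a controller.
type Failover struct {
	primary, backup Route
	active          *Route
	lastBeat        time.Time
	timeout         time.Duration
}

// Beat records a heartbeat from the primary link.
func (f *Failover) Beat() { f.lastBeat = time.Now() }

// Check performs the local switchover if the heartbeat is stale.
func (f *Failover) Check() {
	if time.Since(f.lastBeat) > f.timeout && f.active != &f.backup {
		f.active = &f.backup // local switch, no controller round trip
		fmt.Println("failover to", f.active.NextHop)
	}
}

func main() {
	f := &Failover{
		primary: Route{"10.0.0.1"}, backup: Route{"10.0.1.1"},
		timeout: 10 * time.Millisecond, // illustrative timeout
	}
	f.active = &f.primary
	f.Beat()
	time.Sleep(15 * time.Millisecond) // simulate a missed heartbeat
	f.Check()
}
```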

Article
Computer Science and Mathematics
Computer Networks and Communications

Emma L. Carter

,

Rui Zhang

,

Daniel P. Morris

Abstract: Factory systems often include many field units that speak different message formats such as OPC-UA and MQTT. Many software bridges can link these units, but they often add long wait times, extra link steps, or slow recovery after faults. We built a small Go-based gateway that joins both formats and uses small worker groups to handle tasks concurrently. Tests on 102 devices with mixed data show that round-trip delay fell by about 21%, bridging cost dropped by about 35%, and link recovery time improved from 290 ms to about 170 ms. The gateway kept a steady message flow during brief bursts and did not require changes to field units, which helps when old and new devices must work together. These results show that placing a compact tool near edge nodes can improve message timing and lower setup work in plant networks. Limits came from small boards, where heavy traffic raised CPU use. Future work will test larger setups and add timing support such as TSN or QUIC.
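
The worker-group pattern at the core of such a gateway is straightforward to sketch: a bounded pool of goroutines drains a shared queue and converts each message between formats. The sketch below is a minimal illustration with placeholder message types and a trivial convert step, not the authors' gateway.

```go
package main

import (
	"fmt"
	"sync"
)

// Msg is a placeholder for a bridged field message.
type Msg struct {
	Source  string // e.g. "opcua" or "mqtt"
	Payload []byte
}

// bridge drains the input queue with a bounded pool of workers,
// converting each message as it passes through.
func bridge(in <-chan Msg, workers int, convert func(Msg) Msg) {
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for m := range in {
				out := convert(m)
				fmt.Printf("forwarded %s -> %s\n", m.Source, out.Source)
			}
		}()
	}
	wg.Wait()
}

func main() {
	in := make(chan Msg, 8)
	go func() {
		in <- Msg{Source: "opcua", Payload: []byte("temp=21.5")}
		in <- Msg{Source: "mqtt", Payload: []byte("rpm=1200")}
		close(in)
	}()
	// Trivial placeholder conversion: flip the protocol label.
	bridge(in, 4, func(m Msg) Msg {
		if m.Source == "opcua" {
			return Msg{Source: "mqtt", Payload: m.Payload}
		}
		return Msg{Source: "opcua", Payload: m.Payload}
	})
}
```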

Article
Computer Science and Mathematics
Computer Networks and Communications

Robert Campbell

Abstract: The emergence of quantum computing threatens the security of classical cryptographic algorithms such as RSA and ECC. Post-quantum cryptography (PQC) offers mathematically secure alternatives, but migration is a complex, multi-year undertaking. Unlike past transitions (AES, SHA-2, TLS 1.3), PQC migration requires larger parameter sizes, hybrid cryptographic schemes, and unprecedented ecosystem coordination. This paper estimates migration timelines for small, medium, and large enterprises, considering infrastructure upgrades, personnel availability, budget constraints, planning quality, and inter-enterprise synchronization. We argue that realistic timelines extend well beyond initial optimistic estimates: 5–7 years for small enterprises, 8–12 years for medium enterprises, and 12–15+ years for large enterprises. PQC migration is not a siloed technical upgrade but a global synchronization exercise, deeply intertwined with Zero Trust Architecture and long-term crypto-agility. These timelines are contextualized against expected arrival windows for fault-tolerant quantum computers (FTQC), projected between 2028 and 2033 [1–3]. We further analyze the “Store Now, Decrypt Later” threat model, crypto-agility frameworks, and provide comprehensive risk mitigation strategies for enterprises navigating this unprecedented cryptographic transition.
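
The hybrid schemes the paper refers to typically derive a session key from both a classical and a post-quantum shared secret, so that security holds if either component survives. Below is a minimal Go sketch of that derivation step, using HKDF from golang.org/x/crypto and placeholder secrets; no real handshake is performed.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"

	"golang.org/x/crypto/hkdf"
)

// hybridKey derives a session key from the concatenation of a classical
// (e.g. ECDH) shared secret and a post-quantum (e.g. ML-KEM) shared
// secret, so the result stays safe if either component remains unbroken.
// The inputs here are placeholders, not outputs of a real key exchange.
func hybridKey(classical, postQuantum []byte) ([]byte, error) {
	secret := append(append([]byte{}, classical...), postQuantum...)
	r := hkdf.New(sha256.New, secret, nil, []byte("hybrid-session-key"))
	key := make([]byte, 32)
	if _, err := io.ReadFull(r, key); err != nil {
		return nil, err
	}
	return key, nil
}

func main() {
	k, _ := hybridKey([]byte("ecdh-shared-secret"), []byte("mlkem-shared-secret"))
	fmt.Printf("%x\n", k)
}
```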

Concept Paper
Computer Science and Mathematics
Computer Networks and Communications

Edet Ekpenyong

,

Ubio Obu

,

Godspower Emmanuel Achi

,

Clement Umoh

,

Duke Peter

,

Udoma Obu

Abstract: In blockchain ecosystems, maintaining both transparency and privacy has become an ethical dilemma. While certain user information is shared to ensure the transparency of transactions across networks, such information can be detrimental to the user, as it may be exploited or tampered with. For instance, in the Catalyst voting process in Cardano, users can see the amount of ADA tokens held by other users, which can influence their voting choices, especially when large ADA holders vote in support of certain ideas or proposals. To discourage challenges such as voter manipulation and vote buying, this study proposes the implementation of zero-knowledge proofs (ZKPs) in blockchain ecosystems to enhance the transparency of the Catalyst voting process and improve the efficiency and speed of result release. Using a survey questionnaire and a multivocal literature review, this study shows that ZKPs can not only be applied in the Catalyst voting process to enhance its transparency, but can also address potential challenges to their application, such as scalability, encourage trust in and fairness of the voting system, and improve voter participation through user-friendliness. Mathematical models emphasize scaled voting as optimal for balancing inclusion and plutocratic control.
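
The abstract does not state the scaled-voting model explicitly; a common formalization, offered here purely as an assumed illustration, weights voter $i$ by a sublinear power of stake $s_i$:

$$w_i = s_i^{\alpha}, \qquad 0 < \alpha \le 1,$$

where $\alpha = 1$ recovers pure stake weighting, $\alpha = 1/2$ gives quadratic-style scaling, and smaller $\alpha$ compresses the influence of large ADA holders toward one-person-one-vote, which is the inclusion-versus-plutocracy balance the study highlights.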
