Computer Science and Mathematics


Article
Computer Science and Mathematics
Other

Suraiya Akter Sathi, Nashrah Hasan, Abdullah Al Mamun, Md Salim Sadman Ifti, Mohammad Shafiul Alam Khan

Abstract: 5G is the fastest-growing generation of telecommunications and, once fully deployed, will serve a large share of the world's population. However, 5G still faces significant security and privacy issues, so it is critical to identify existing weaknesses and provide effective solutions to address them. Several such issues persist in the standard 5G AKA protocol and have been identified and addressed in recent literature. By exploiting these vulnerabilities, an adversary can mount attacks such as the SUCI replay attack, parallel session attack, and linkability attack; as a result, the standard protocol cannot ensure the location confidentiality of the subscriber. Researchers have proposed effective mitigations, yet a number of security issues in the standard 5G AKA protocol remain partially or entirely unsolved. This paper addresses those issues and proposes a security-enhanced 5G AKA protocol that mitigates the vulnerabilities of the AKA protocol, preventing such attacks and ensuring the location confidentiality of the subscriber.
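The replay attacks described above hinge on message freshness. A minimal, illustrative sketch of a nonce-plus-MAC freshness check (not the authors' proposed protocol; the key value, message shape, and function names are assumptions for illustration):

```python
import hmac, hashlib, os

def make_challenge(key: bytes):
    """Issue a fresh random nonce with a MAC binding it to the shared key."""
    nonce = os.urandom(16)
    tag = hmac.new(key, nonce, hashlib.sha256).digest()
    return nonce, tag

def verify_response(key: bytes, nonce: bytes, tag: bytes, seen: set) -> bool:
    """Reject if the MAC is invalid or the nonce was already used (replay)."""
    if not hmac.compare_digest(tag, hmac.new(key, nonce, hashlib.sha256).digest()):
        return False
    if nonce in seen:          # a replayed message reuses an old nonce
        return False
    seen.add(nonce)
    return True

key = b"shared-long-term-key"  # placeholder for the subscriber's long-term key
seen: set = set()
nonce, tag = make_challenge(key)
assert verify_response(key, nonce, tag, seen)       # first use: accepted
assert not verify_response(key, nonce, tag, seen)   # replay: rejected
```

The point of the sketch is only that statelessly re-verifiable messages are replayable; tracking freshness (nonces, sequence numbers) is what the hardened protocols add.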
Article
Computer Science and Mathematics
Other

Addy Arif Bin Mahathir, Sivamuganathan Mohana Dass, Jerry Wingsky, Kelvin Chang, Joshua Loh Tze Han, Sai Rama Mahalingam, Noor Ul Amin

Abstract: This paper presents an extensive exploration of quantum computing as an emerging paradigm that operates on the principles of quantum mechanics to process information in ways unattainable by classical systems. It traces the evolution of quantum theory from its early conceptual foundations to its present-day technological applications, highlighting key milestones such as the development of qubits, quantum gates, and essential algorithms including Shor’s and Grover’s. The study examines the fundamental mechanisms of quantum superposition and entanglement, alongside the hardware and software innovations driving scalability and performance. By analysing experimental progress, programming models, and comparative advantages over classical and cloud computing, this paper underscores how quantum computing can transform industries such as data security, medicine, and artificial intelligence. Furthermore, it outlines future prospects involving error correction, neuromorphic integration, and commercialization trends that are shaping the next generation of computational technology.
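The superposition and interference mechanisms the abstract refers to can be seen in a few lines of plain Python simulating a single qubit's state vector (a didactic sketch, not a quantum SDK):

```python
import math

# One qubit as an amplitude pair [amp_0, amp_1]; |0> = [1, 0].
def hadamard(state):
    """Apply the Hadamard gate: maps |0> to an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def probabilities(state):
    """Born rule: measurement probabilities are squared amplitude magnitudes."""
    return [abs(a) ** 2 for a in state]

ket0 = [1.0, 0.0]
superposed = hadamard(ket0)
p = probabilities(superposed)
assert all(abs(x - 0.5) < 1e-12 for x in p)     # equal 50/50 outcomes
# Applying H twice interferes the amplitudes back to |0>.
assert abs(hadamard(superposed)[0] - 1.0) < 1e-12
```

The second assertion is the interference effect classical probabilistic bits cannot reproduce: the |1> amplitude cancels rather than accumulates.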
Article
Computer Science and Mathematics
Other

Dimitrios Liarokapis

Abstract: This paper introduces the concept of hyper transfer—a qualitatively distinct form of knowledge transfer that operates through metaphorical, analogical, and allegorical mappings. While existing literature distinguishes between near transfer (application in similar contexts) and far transfer (application in dissimilar contexts), we argue that metaphorical knowledge transfer represents a third dimension that cannot be adequately captured by a linear continuum. We propose a triangular framework where hyper transfer occupies the apex, connecting to but remaining fundamentally distinct from both near and far transfer. This distinction is crucial for understanding how abstract conceptual structures are transferred across domains through cognitive mechanisms that prioritize relational correspondence over contextual similarity. We illustrate this framework with a pedagogical example from computer science education, demonstrating how technical concepts can be hyperbolically transferred to socio-political domains through metaphorical reasoning.
Review
Computer Science and Mathematics
Other

Suresh Neethirajan

Abstract: Artificial intelligence is transforming digital livestock farming, yet the same systems that improve welfare, efficiency, and emissions monitoring can impose large carbon costs from training, continuous inference, and hardware manufacture. This PRISMA-guided systematic review examines how Green AI can realign performance with environmental responsibility. We searched IEEE Xplore, Scopus, Web of Science, and the ACM Digital Library (January 2019–October 2025), screening 1,847 records and including 89 studies (61 with quantitative data). We address three questions: (RQ1) How do energy-efficient model designs reduce computational footprints while preserving accuracy? (RQ2) Which low-carbon machine-learning frameworks minimize training and inference emissions? (RQ3) How do sustainable infrastructures enable climate-positive deployments? Meta-analysis shows strong decoupling of performance from impact. Compression (pruning, quantization, distillation) achieves 70–95% parameter reductions with <5% accuracy loss. Lightweight architectures (e.g., MobileNet, EfficientNet) deliver 10–50× energy savings versus conventional CNNs, while neuromorphic systems achieve 200–1000× power reductions. Carbon-aware scheduling cuts emissions by ~70% via temporal and spatial workload placement; federated learning reduces communication energy by ~85% while preserving privacy; edge–fog–cloud hierarchies lower inference energy by ~87% by localizing computation. Six representative deployments report mean energy savings of 90.3% (85.9–99.96%) and cumulative CO₂ reductions of 2,175 kg with >91% accuracy retained. Key gaps remain: no ISO-aligned carbon metrics for agricultural AI; embodied emissions are rarely counted (17% of studies); accessibility for smallholders is limited; rebound effects are unquantified. 
We propose a roadmap prioritizing ISO-compliant accounting, low-cost solar or neuromorphic edge devices, rebound analysis, field validation, and multi-stakeholder Pareto optimization.
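One of the compression techniques surveyed, post-training quantization, can be sketched in a few lines (a symmetric int8 scheme; the weight values are made up for illustration):

```python
# Symmetric int8 quantization: store one float scale plus int8 values
# (4x smaller than float32), then dequantize and bound the error.
def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.42, -1.97, 0.003, 1.21, -0.55]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
assert max_err <= scale / 2 + 1e-9   # error bounded by half a quantization step
```

This is the simplest member of the pruning/quantization/distillation family the review credits with 70-95% parameter reductions at small accuracy cost; production schemes add per-channel scales and calibration.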
Article
Computer Science and Mathematics
Other

Evi M.C. Sijben, Vanessa Volz, Tanja Alderliesten, Peter A.N. Bosman, Berit M. Verbist, Erik F. Hensen, Jeroen C. Jansen

Abstract: Background: Paragangliomas of the head and neck are rare, benign and indolent to slow-growing tumors. Not all tumors require immediate active intervention, and surveillance is a viable management strategy in a large proportion of cases. Treatment decisions are based on several tumor- and patient-related factors, with the tumor progression rate being a predominant determinant. Accurate prediction of tumor progression has the potential to significantly improve treatment decisions, by helping to identify patients who are likely to require active treatment in the future. It furthermore enables better-informed timing for follow-up, allowing early intervention for those who will ultimately need it, and optimization of the use of resources (such as MRI scans). Crucial to this is having reliable estimates of the uncertainty associated with a future growth forecast, so that this can be taken into account in the decision-making process. Methods: For various tumor growth prediction models, two methods for uncertainty estimation were compared: a historical-based one and a Bayesian one. We also investigated how incorporating either tumor-specific or general estimates of auto-segmentation uncertainty impacts the results of growth prediction. The performance of uncertainty estimates was examined both from a technical and a practical perspective. Study design: Method comparison study. Results: Data of 208 patients were used, comprising 311 paragangliomas and 1501 volume measurements, resulting in 2547 tumor growth predictions (a median of 10 predictions per tumor). As expected, the uncertainty increased with the length of the prediction horizon and decreased with the inclusion of more tumor measurement data in the prediction model. The historical method resulted in estimated confidence intervals where the actual value fell within the estimated 95% confidence interval 94% of the time.
However, this method resulted in confidence intervals that were too wide to be clinically useful (often over 200% of the predicted volume), and showed poor ability to differentiate growing and stable tumors. The estimated confidence intervals of the Bayesian method were much narrower. However, the tumor volume fell only 78% of the time within its estimated 95% confidence interval. Despite this, the Bayesian method showed good results for distinguishing between growing and stable tumors, which has arguably the most practical value. When combining all growth models, the Bayesian method that uses tumor-specific auto-segmentation uncertainties resulted in an 86% correct classification of growing and non-growing tumors. Conclusions: Of the methods evaluated for predicting paraganglioma progression, the Bayesian method is the most useful in the considered context, because it shows the best discrimination between growing and non-growing tumors. To determine how these methods could be used and what their value is for patients, they should be further evaluated in a clinical setting.
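Checking whether a nominal 95% interval really covers the truth about 95% of the time, as was done for the historical method, amounts to counting hits in repeated trials. A toy sketch with an assumed well-calibrated Gaussian predictor (not the paper's growth models):

```python
import random

# Empirical coverage of a nominal 95% interval from a simulated predictor.
random.seed(42)
true_value = 10.0
hits = 0
n = 10_000
for _ in range(n):
    # Hypothetical unbiased predictor with known noise sd = 1.0.
    prediction = random.gauss(true_value, 1.0)
    lo, hi = prediction - 1.96, prediction + 1.96   # 95% normal interval
    if lo <= true_value <= hi:
        hits += 1
coverage = hits / n
assert 0.94 < coverage < 0.96   # well-calibrated: empirical coverage ~ nominal
```

The paper's finding is the interesting failure mode of this check: the Bayesian method's 78% empirical coverage against a 95% nominal level signals miscalibration, even though its narrower intervals discriminate growth better.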
Article
Computer Science and Mathematics
Other

Thuthukile Jita, Mamosa Thaanyane

Abstract: The rise of open distance e-learning (ODeL) has transformed the landscape of higher education, offering flexible learning opportunities to student-teachers to pursue academic programs without constraints. Literature shows an increasing adoption of ODeL models in higher education to expand access. However, despite the expansion of ODeL, deep-rooted inequalities in technological access remain a pressing concern, particularly for student-teachers in rural and historically disadvantaged communities. Therefore, the current study aimed to assess the impact of technology access and equity on student-teachers’ learning in ODeL to inform more inclusive and context-sensitive policy and practice. Grounded in the digital divide framework, the study examined how access to technologies influences opportunities for participation in the digital age. This study adopted a qualitative approach with semi-structured interviews with student-teachers to understand their views on these issues. Data were analyzed using thematic analysis, with results revealing that some student-teachers do not own personal digital devices and depend on shared campus access. On the other hand, unlike this group, student-teachers with a stable Internet connection and their own devices can access materials, attend virtual classes, and even meet academic deadlines. If these inequities are not addressed, implementing ODeL may inadvertently widen the educational divide.
Technical Note
Computer Science and Mathematics
Other

Daisuke Sugisawa

Abstract: This document is a Technical Note that reports on the design and proof-of-concept of a self-distributed Selective Forwarding Unit (SFU) architecture for real-time video streaming. Unlike a full research article, this note emphasizes the practical motivation, implementation details, and observed behaviors of the system. The central idea is to address the limitations of traditional prediction-based scaling by introducing a finite state machine (FSM)-based self-stabilizing method that enables autonomous scale-out and scale-in of distributed SFUs under dynamic traffic conditions.
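The FSM-based scale-out/scale-in idea can be sketched as a small state machine driven by observed per-SFU load (the states, thresholds, and function names are illustrative assumptions, not the note's actual design):

```python
from enum import Enum, auto

class State(Enum):
    STABLE = auto()
    SCALING_OUT = auto()
    SCALING_IN = auto()

# Hypothetical thresholds: scale out above 80% load per SFU, merge below 30%.
HIGH, LOW = 0.8, 0.3

def step(load_per_sfu, n_sfus):
    """One transition: choose the next state and SFU count from observed load."""
    if load_per_sfu > HIGH:
        return State.SCALING_OUT, n_sfus + 1
    if load_per_sfu < LOW and n_sfus > 1:
        return State.SCALING_IN, n_sfus - 1
    return State.STABLE, n_sfus

state, n = step(0.95, 1)   # overload: add an SFU
assert (state, n) == (State.SCALING_OUT, 2)
state, n = step(0.20, n)   # mostly idle: remove one
assert (state, n) == (State.SCALING_IN, 1)
state, n = step(0.50, n)   # within band: stay put
assert (state, n) == (State.STABLE, 1)
```

The gap between HIGH and LOW is the self-stabilizing part: a hysteresis band prevents the oscillation that a single threshold (or a mispredicting forecaster) would cause.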
Article
Computer Science and Mathematics
Other

Paulo Serra, Ângela Oliveira

Abstract: The integration of Artificial Intelligence (AI) into educational environments is redefining how digital resources support teaching and learning, highlighting the need to understand how prompting strategies can enhance engagement, autonomy, and personalisation. This study explores the pedagogical role of prompt engineering in transforming static digital materials into adaptive and interactive learning experiences aligned with the principles of Education 4.0. A systematic literature review, conducted under the PRISMA protocol, examined the use of educational prompts and identified key AI techniques applied in education, including machine learning, natural language processing, recommender systems, large language models, and reinforcement learning. The findings indicate consistent improvements in academic performance, motivation, and learner engagement, while also revealing persistent limitations related to technical integration, ethical risks, and weak pedagogical alignment. Building on these insights, the article proposes a structured prompt engineering methodology encompassing interdependent components such as role definition, audience targeting, feedback style, contextual framing, guided reasoning, operational rules, and output format. A practical illustration demonstrates how embedding prompts into digital learning resources, exemplified through PDF-based exercises, enables AI agents to facilitate personalised, adaptive study sessions. The study concludes that systematic prompt design can reposition educational resources as intelligent, transparent, and pedagogically rigorous systems for knowledge construction.
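The methodology's components (role, audience, feedback style, context, guided reasoning, rules, output format) can be assembled mechanically. A sketch with placeholder values, none taken from the article:

```python
# Assemble a structured educational prompt from the named components.
# All field values below are illustrative placeholders.
def build_prompt(role, audience, feedback_style, context,
                 guided_reasoning, rules, output_format):
    sections = [
        f"Role: {role}",
        f"Audience: {audience}",
        f"Feedback style: {feedback_style}",
        f"Context: {context}",
        f"Guided reasoning: {guided_reasoning}",
        "Rules:\n" + "\n".join(f"- {r}" for r in rules),
        f"Output format: {output_format}",
    ]
    return "\n\n".join(sections)

prompt = build_prompt(
    role="patient tutor for introductory statistics",
    audience="first-year undergraduates",
    feedback_style="guide with hints before revealing answers",
    context="exercise 3 of the attached PDF on sampling distributions",
    guided_reasoning="walk through the sampling step before any formula",
    rules=["never give the final answer first", "ask one question at a time"],
    output_format="short numbered steps",
)
assert prompt.startswith("Role:") and "Output format:" in prompt
```

Treating the prompt as structured data rather than free text is what makes the components interdependent yet independently revisable, which is the methodology's central design point.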
Short Note
Computer Science and Mathematics
Other

Christian R. Macedonia

Abstract: Kosmoplex Theory proposes that physical reality emerges from a finite computational substrate: an 8-dimensional octonionic space structured by the Fano plane, projecting into observable 4-dimensional spacetime through a discrete transformation mechanism. The framework is built on triadic closure over the alphabet {−1, 0, +1} and enforces reversibility through an affine geometric model of eternal cosmic transformation. From these axioms, the theory derives fundamental constants, including the fine structure constant α⁻¹ ≈ 137, the force hierarchy, and Planck-scale discreteness, as necessary consequences rather than free parameters. This work presents a systematic enumeration of falsifiable predictions, following the Popperian criterion that a scientific theory must specify conditions under which it could be experimentally refuted. Drawing on traditions from Cartesian methodological doubt to Popper’s demarcation principle, we demonstrate that theoretical strength derives not from unfalsifiable claims but from precise vulnerability to empirical test. Beginning with cosmological consistency checks (Olbers’ Paradox, dark matter/energy), we then detail decisive experimental protocols: altitude-dependent measurements of α at Δα/α ∼ 10⁻¹⁸ precision, tests of the 7n force coupling hierarchy, searches for Planck-scale discreteness via Lorentz violation, quantum information capacity bounds at 137 bits/cycle, and ultra-high-precision spectroscopic searches for granular structure in fundamental constants. Each prediction provides explicit falsification criteria; contradiction of any would necessitate substantial revision or abandonment of the framework.
Review
Computer Science and Mathematics
Other

Fabio Cumbo, Davide Chicco, Sercan Aygun, Daniel Blankenberg

Abstract: Vector-Symbolic Architectures (VSAs) provide a powerful, brain-inspired framework for representing and manipulating complex data across the biomedical sciences. By mapping heterogeneous information, from genomic sequences and molecular structures to clinical records and medical images, into a unified high-dimensional vector space, VSAs enable robust reasoning, classification, and data fusion. Despite their potential, the practical design and implementation of an effective VSA can be a significant hurdle, as optimal choices depend heavily on the specific scientific application. This article bridges the gap between theory and practice by presenting ten tips for designing VSAs tailored to key challenges in the biomedical sciences. We provide concrete, actionable guidance on topics such as encoding sequential data in genomics, creating holistic patient vectors from electronic health records, and integrating VSAs with deep learning models for richer image analysis. Following these tips will empower researchers to avoid common pitfalls, streamline their development process, and effectively harness the unique capabilities of VSAs to unlock new insights from their data.
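The core VSA operations the review builds on, binding, bundling, and similarity, fit in a few lines of plain Python for the bipolar flavor (the dimension and the biomedical variable names are illustrative):

```python
import random

# Minimal bipolar VSA: bind = elementwise multiply, bundle = majority sign,
# similarity = normalized dot product.
D = 10_000
random.seed(0)

def rand_hv():
    return [random.choice((-1, 1)) for _ in range(D)]

def bind(a, b):
    return [x * y for x, y in zip(a, b)]

def bundle(*vs):
    return [1 if sum(col) >= 0 else -1 for col in zip(*vs)]

def sim(a, b):
    return sum(x * y for x, y in zip(a, b)) / D

# Hypothetical patient record: a role-filler pair plus an identifier vector.
gene, expr, patient_id = rand_hv(), rand_hv(), rand_hv()
record = bundle(bind(gene, expr), patient_id)

# The bundled record stays similar to its parts but not to random vectors.
assert sim(record, patient_id) > 0.3
assert abs(sim(record, rand_hv())) < 0.1
```

The two assertions show why VSAs work: superposition preserves recoverable similarity to each component, while unrelated high-dimensional vectors are quasi-orthogonal.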
Article
Computer Science and Mathematics
Other

Manale Boughanja, Zineb Bakraouy, Tomader MAZRI, Ahmed SRHIR

Abstract: With the increasing complexity of autonomous vehicle (AV) networks, ensuring enhanced cybersecurity has become a critical challenge. Traditional security techniques often struggle to adapt dynamically to evolving threats. This study proposes a novel domain ontology to assess its coherence and effectiveness in structuring knowledge about AV security threats, intrusion characteristics, and corresponding mitigation techniques. Developed using Protégé 4.3 and the Web Ontology Language (OWL), the ontology formalizes cybersecurity concepts without directly integrating with an Intrusion Detection System (IDS). By providing a semantic representation of attacks and countermeasures, the ontology enhances threat classification and supports automated decision-making in security frameworks. Experimental evaluation demonstrated its effectiveness in improving knowledge organization and reducing inconsistencies in security threat analysis. Future work will focus on integrating the ontology with real-time security monitoring and IDS frameworks to enhance adaptive intrusion response strategies.
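A minimal Turtle sketch of what such an ontology's attack/countermeasure backbone might look like; all class and property names here are hypothetical, not taken from the authors' OWL file:

```turtle
@prefix :     <http://example.org/av-security#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

:Attack         a owl:Class .
:SpoofingAttack a owl:Class ; rdfs:subClassOf :Attack .
:Countermeasure a owl:Class .

:mitigatedBy a owl:ObjectProperty ;
    rdfs:domain :Attack ;
    rdfs:range  :Countermeasure .

:SignalAuthentication a :Countermeasure .
:GPSSpoofing a :SpoofingAttack ;
    :mitigatedBy :SignalAuthentication .
```

Formalizing the domain/range constraints is what lets a reasoner flag inconsistencies in threat classifications, the benefit the evaluation reports.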
Short Note
Computer Science and Mathematics
Other

Evi Togia

Abstract: This research sets out to explore how data analytics can be harnessed to assess and promote diversity within cultural institutions. By examining patterns of engagement and representation across demographic groups, the study aims to provide a comprehensive understanding of inclusivity in cultural participation. The approach combines quantitative and qualitative methodologies, leveraging statistical analysis, machine learning, and thematic coding to extract insights from diverse data sources. Collaboration with cultural institutions will ensure the relevance and applicability of the findings. The anticipated outcomes include actionable recommendations for enhancing diversity and inclusion, as well as broader contributions to cultural informatics and policy development. Ultimately, the research aspires to foster more equitable and representative cultural ecosystems that reflect the richness of contemporary society.
Article
Computer Science and Mathematics
Other

Robert Campbell

Abstract: The transition to post-quantum cryptography poses an unprecedented challenge for Bitcoin and Ethereum, as it involves implementing a defensive downgrade that imposes immediate, severe costs with no tangible benefits. While quantum computers capable of breaking secp256k1 require between 523–2,500 logical qubits, with the author deriving 523 logical qubits as an algorithmic lower bound (not inclusive of arithmetic and ancilla qubits) for a canonical Shor/phase-estimation circuit using the formula QL = 2⌈log2(n)⌉ + 2 + ⌈log2(2 + 1/(2ε))⌉ for ε = 0.001, and conservative estimates ranging up to 2,500 logical qubits based on comprehensive resource models—significantly less than the 2,100–2,400 logical qubits estimated for general elliptic curves—current systems achieve only ∼100 logical qubits. IBM’s quantum roadmap projects 500–1,000 logical qubits by 2029, placing the critical threshold within 4–10 years depending on which estimate proves accurate. This timeline collides with the reality that convincing decentralized communities to accept 50% capacity loss and 2–3× fee increases may itself take 10–15 years, based on historical governance patterns where even beneficial upgrades required 2–5+ years. Current testnet implementations on permissioned systems show measurable performance degradation. Critically, this data comes from fundamentally different architectures than permissionless networks, which will likely experience 30–50% additional performance degradation due to global verification requirements, heterogeneous hardware, and compounding propagation delays. This methodological limitation—extrapolating from permissioned to permissionless systems—represents a critical infrastructure failure that introduces massive uncertainty into migration planning. Compounding this challenge, secp256k1 is not officially approved by NIST under FIPS 186-5 or SP 800-186, creating additional regulatory vulnerabilities.
Beyond transient impacts, PQC creates permanent state bloat, with quantum-resistant accounts requiring 59 times more storage (1,952 bytes / 33 bytes = 59.2× for ML-DSA-65), thereby accelerating centralization. This paper presents a comprehensive framework acknowledging these harsh realities. While we propose specific BIP/EIP implementations and optimization strategies that might achieve 50–60% capacity retention, we recognize that the quantum threat timeline may now be shorter than even the minimum viable migration period. Unlike beneficial upgrades like SegWit (which took 20 months for activation and 5+ years for 50% adoption despite offering improvements), PQC migration is a purely defensive measure imposing only costs. The stark reality: blockchain communities must choose between immediate emergency action and quantum vulnerability by 2029.
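Both headline numbers above can be checked directly. The 523-qubit total is reproduced when the formula's first term contributes 2 × 256, i.e. with n read as the 256-bit secp256k1 group order so that ⌈log₂(n)⌉ = 256 (this reading is an inference from the stated result, not spelled out in the abstract):

```python
import math

# Shor/phase-estimation logical-qubit lower bound for a 256-bit curve order.
n_bits = 256                # ceil(log2(n)) for the secp256k1 group order n
eps = 0.001
phase_extra = math.ceil(math.log2(2 + 1 / (2 * eps)))   # precision qubits: 9
QL = 2 * n_bits + 2 + phase_extra
assert QL == 523

# ML-DSA-65 storage (1,952 bytes) vs. a 33-byte compressed secp256k1 pubkey.
ratio = 1952 / 33
assert round(ratio, 1) == 59.2
```

With ε = 0.001 the precision term is ⌈log₂(502)⌉ = 9, giving 512 + 2 + 9 = 523, matching the abstract's lower bound.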
Article
Computer Science and Mathematics
Other

Inma Borrella, Eva Ponce-Cueto

Abstract: Learning Analytics Dashboards (LADs) are increasingly deployed to support self-regulated learning in large-scale online courses. Yet many existing dashboards lack strong theoretical grounding, contextual alignment, or actionable feedback, and some designs have been shown to inadvertently discourage learners through excessive social comparison or high inference costs. In this study, we designed and evaluated a LAD grounded in the COPES model of self-regulated learning and tailored to a credit-bearing Massive Open Online Course (MOOC) using a data-driven approach. We conducted a randomized controlled trial with 8,745 learners, comparing a control group, a dashboard without feedback, and a dashboard with ARCS-framed actionable feedback. Results showed that the dashboard with feedback significantly increased learners’ likelihood of verification (i.e., paying for the certification track), with mixed effects on engagement and no measurable impact on final grades. These findings suggest that dashboards are not uniformly beneficial: while feedback-supported LADs can enhance motivation and persistence, dashboards that lack interpretive support may impose cognitive burdens without improving outcomes. The study contributes to the learning analytics literature by (1) articulating design principles for theoretically and contextually grounded LADs, and (2) providing large-scale experimental evidence on their impact in authentic MOOC settings.
Review
Computer Science and Mathematics
Other

Satyadhar Joshi

Abstract: The rapid evolution of artificial intelligence has given rise to agentic AI systems—autonomous entities capable of perceiving their environment, making decisions, and executing actions with minimal human intervention. This work provides a systematic analysis of agentic AI frameworks, governance models, and implementation strategies. Drawing on a comprehensive review of the literature, we examine the current state of agentic AI technologies, highlight key challenges in governance, security, and ethical oversight, and compare architectural frameworks for responsible deployment. Our results, illustrated through detailed framework comparisons and governance analyses, demonstrate that while agentic AI holds transformative potential across multiple sectors, notable gaps persist in standardization, regulatory compliance, and interoperability. To address these issues, we propose a layered architecture that embeds governance and security across all system layers. An analysis of the competitive landscape further identifies critical interoperability challenges that could undermine U.S. leadership. Based on these insights, we outline a strategic framework for U.S. competitiveness, emphasizing accelerated standards development, international collaboration, and investment in interoperability research. Finally, emerging trends and future directions are explored to provide a comprehensive roadmap for responsible deployment of agentic AI.
Review
Computer Science and Mathematics
Other

Miriam Guillen-Aguinaga, Enrique Aguinaga-Ontoso, Laura Guillen-Aguinaga, Francisco Guillen-Grima, Ines Aguinaga-Ontoso

Abstract: Background: Data quality is a cornerstone of scientific integrity, reproducibility, and decision-making. However, datasets often lack transparency in their collection and curation processes. Methods: We conducted a narrative review of the scientific and technical literature published between 1996 and 2025, complemented with standards (e.g., ISO/IEC 25012, ISO 8000) and reports addressing data quality frameworks. Sources were retrieved from PubMed, Scopus, Web of Science, and grey literature. The review identifies core dimensions, practical applications, and challenges in data quality management. Results: Across sectors, accuracy, completeness, consistency, timeliness, and accessibility emerged as universal dimensions of quality. Healthcare and business provide illustrative case studies where poor data quality leads to significant clinical and economic risks. Recent frameworks integrate data governance, FAIR (findability, accessibility, interoperability, and reusability) principles, and ethical considerations, including transparency and bias reduction in artificial intelligence. Conclusions: Data quality presents both technical and socio-organizational challenges. Embedding quality assurance into the full data lifecycle and aligning with FAIR and governance frameworks is essential for trustworthy, reusable datasets. This review provides a structured synthesis that can inform research, policy, and practice in managing high-quality data.
Review
Computer Science and Mathematics
Other

Zihao Cao

Abstract: Agent-based modeling (ABM) is a versatile and important tool for exploring the complexity of agricultural ecosystems. By representing heterogeneous agents such as pests, pollinators, plants, and farmers and their localized interactions, ABMs provide insights into emergent patterns that shape crop productivity and ecosystem services. This concise review highlights major applications of ABMs in agricultural ecosystems, including pest and disease spread, pollination dynamics, vegetation succession, nutrient cycling, and farmer decision-making. Together, these cases demonstrate how ABMs can link micro-level behaviors with system-level outcomes, offering both theoretical understanding and practical management guidance for agroecosystems.
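A toy version of the pest-spread case the review cites: field cells as agents on a grid, with a fixed per-step probability of infesting each neighbor (all parameters are illustrative, not drawn from any surveyed model):

```python
import random

# Agent-based pest spread on a field grid: local interactions only,
# yet an outbreak pattern emerges at the system level.
random.seed(1)
SIZE, P_SPREAD, STEPS = 10, 0.3, 15
grid = [[False] * SIZE for _ in range(SIZE)]
grid[5][5] = True  # single initial infestation

for _ in range(STEPS):
    new = [row[:] for row in grid]
    for i in range(SIZE):
        for j in range(SIZE):
            if grid[i][j]:
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < SIZE and 0 <= nj < SIZE and random.random() < P_SPREAD:
                        new[ni][nj] = True
    grid = new

infested = sum(cell for row in grid for cell in row)
assert 1 < infested <= SIZE * SIZE   # the outbreak grew from the single source
```

Nothing in the rules mentions an "outbreak front", yet one emerges from purely local interactions; that micro-to-macro link is the point the review makes about ABMs.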
Article
Computer Science and Mathematics
Other

Idowu Adewumi, Oluwaseyi Funmi Afe, Akintayo Ayoade, Uzoamaka Chizurum Elugbindin

Abstract: Malnutrition continues to be a significant global health issue, impacting approximately 148.1 million children under five years old in 2024 (WHO). Traditional screening techniques, like mid-upper arm circumference (MUAC) and body mass index (BMI), provide moderate coverage but face limitations in rural low-resource regions because they require trained personnel and equipment. To fill this gap, we suggest an ML-based framework that combines anthropometric image analysis with socio-economic and dietary intake information for the early detection of malnutrition. The research was carried out on a dataset involving 2,000 children from 3 rural centers in Nigeria, including 6,000 anthropometric images, 2,000 dietary assessments, and 2,000 socio-economic profiles. Convolutional neural networks (CNNs; ResNet50, MobileNetV3) that were trained on images reached an accuracy of 84.7%, with a precision of 0.81 and a recall of 0.83. Ensemble methods (Random Forest, XGBoost, LightGBM) applied to tabular data reached 87.2% accuracy and a 0.85 F1-score. A blended fusion layer integrating both modalities enhanced the outcomes to 92.5% accuracy, 0.89 precision, 0.91 recall, 0.90 F1-score, and 0.95 ROC-AUC. The efficiency of deployment was evaluated on affordable edge devices. On a GPU workstation (16-core CPU, 32 GB RAM, 8 GB VRAM), the inference duration per sample was 0.18 seconds. On a Raspberry Pi 4 (4 GB RAM), MobileNet took 1.24 seconds per sample, whereas ResNet took 2.37 seconds per sample. A comparative analysis involving human nutrition workers (n = 10) showed that the hybrid ML model exceeded manual screening by +14.2% in accuracy and decreased false negatives by 21.8%, indicating possibilities for scalable deployment in community settings. These results emphasize the practicality of ML-driven malnutrition assessment as an affordable, precise, and resource-efficient approach. 
By directly supporting SDG 2 (Zero Hunger) and SDG 3 (Good Health and Well-being), the suggested system provides a means for enhancing nutritional monitoring and actions in marginalized communities.
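The blended fusion layer can be sketched as a weighted combination of the two models' class probabilities (the weight, class labels, and probability values are assumptions for illustration; the paper's actual fusion weights may be learned):

```python
# Late fusion of an image model (CNN) and a tabular model (ensemble):
# weighted average of their probability vectors over the same classes.
def fuse(p_image, p_tabular, w=0.5):
    """Blend two probability vectors; w is the image model's weight."""
    assert abs(sum(p_image) - 1) < 1e-9 and abs(sum(p_tabular) - 1) < 1e-9
    return [w * a + (1 - w) * b for a, b in zip(p_image, p_tabular)]

# Classes: [normal, moderate, severe] -- illustrative model outputs.
p_cnn = [0.20, 0.50, 0.30]
p_ens = [0.10, 0.30, 0.60]
p = fuse(p_cnn, p_ens, w=0.4)
assert abs(sum(p) - 1.0) < 1e-9
assert p.index(max(p)) == 2   # fused prediction: "severe"
```

Even this simplest form shows the mechanism behind the reported gain: the modalities disagree (the CNN favors "moderate"), and combining them shifts the decision toward the jointly supported class.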
Article
Computer Science and Mathematics
Other

Yuan Lu, Jingying Chen

Abstract: This study proposes a novel SSA-EMS framework that integrates Singular Spectrum Analysis (SSA) with Effect-Matched Spatial Filtering (EMS), combining the noise-reduction capability of SSA with the dynamic feature extraction advantages of EMS to optimize cross-subject EEG-based emotion feature extraction. Experiments were conducted using the SEED dataset under two evaluation paradigms: "cross-subject sample combination" and "subject-independent" assessment. Random Forest (RF) and SVM classifiers were employed to perform pairwise classification of three emotional states—positive, neutral, and negative. Results demonstrate that the SSA-EMS framework achieves RF classification accuracies exceeding 98% across the full frequency band, significantly outperforming single frequency bands. Notably, in the subject-independent evaluation, model accuracy remains above 96%, confirming the algorithm’s strong cross-subject generalization capability. Experimental results validate that the SSA-EMS framework effectively captures dynamic neural differences associated with emotions. Nevertheless, limitations in binary classification and the potential for multimodal extension remain important directions for future research.
Article
Computer Science and Mathematics
Other

Leonid Shaikhet

Abstract: Two known theorems on the stabilization of a controlled inverted pendulum under stochastic perturbations, in the form of a combination of white noise and Poisson jumps, are presented to the reader's attention. As an unsolved problem, a generalization of these theorems is proposed for a mathematical model describing two coupled controlled inverted pendulums.
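A generic Itô form often used for such stabilization problems, stated here only as an illustrative sketch (the coefficients and the exact perturbation structure are assumptions, not the theorems' statement):

```latex
% Linearized controlled inverted pendulum with white-noise and
% Poisson-jump perturbations (illustrative generic form):
dx_1(t) = x_2(t)\,dt, \qquad
dx_2(t) = \bigl(a\,x_1(t) + u(t)\bigr)\,dt
        + \sigma\,x_1(t)\,dw(t)
        + \gamma\,x_1(t)\,dN(t), \qquad a > 0,
```

where $w(t)$ is a standard Wiener process, $N(t)$ a Poisson process, and $u(t)$ a feedback control; stabilization means choosing $u$ so that the zero solution is asymptotically mean-square stable despite both perturbation types.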


Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2025 MDPI (Basel, Switzerland) unless otherwise stated