
Article
Computer Science and Mathematics
Security Systems

Marko Corn, Primož Podržaj

Abstract: Human-centered cryptographic key management is constrained by a persistent tension between security and usability. While modern cryptographic primitives offer strong theoretical guarantees, practical failures often arise from the difficulty users face in generating, memorizing, and securely storing high-entropy secrets. Existing mnemonic approaches suffer from severe entropy collapse due to predictable human choice, while machine-generated mnemonics such as BIP–39 impose significant cognitive burden. This paper introduces GeoVault, a spatially anchored key derivation framework that leverages human spatial memory as a cryptographic input. GeoVault derives keys from user-selected geographic locations, encoded deterministically and hardened using memory-hard key derivation functions. We develop a formal entropy model that captures semantic and clustering biases in human location choice and distinguishes nominal from effective spatial entropy under attacker-prioritized dictionaries. Through information-theoretic analysis and CPU–GPU benchmarking, we show that spatially anchored secrets provide a substantially higher effective entropy floor than human-chosen passwords under realistic attacker models. When combined with Argon2id, spatial mnemonics benefit from a hardware-enforced asymmetry that strongly constrains attacker throughput as memory costs approach GPU VRAM limits. Our results indicate that modest multi-point spatial selection combined with memory-hard derivation can achieve attacker-adjusted work factors comparable to those of 12-word BIP–39 mnemonics, while single-point configurations provide meaningful offline resistance with reduced cognitive burden.
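The derivation pipeline the abstract describes (deterministic location encoding hardened by a memory-hard KDF) can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the grid resolution, byte encoding, and KDF parameters are assumptions, and the standard library's scrypt stands in for the Argon2id used in GeoVault.

```python
import hashlib
import struct

def encode_location(lat, lon, grid=1e-4):
    # Quantize to a fixed grid (~11 m at the equator) so nearby selections
    # map to the same cell; the grid size is an illustrative assumption.
    return struct.pack(">qq", round(lat / grid), round(lon / grid))

def derive_key(points, salt, n=2**14):
    # Concatenate the encoded points and run a memory-hard KDF over them.
    # scrypt (stdlib) stands in here for the Argon2id used in the paper.
    secret = b"".join(encode_location(lat, lon) for lat, lon in points)
    return hashlib.scrypt(secret, salt=salt, n=n, r=8, p=1, dklen=32)

key = derive_key([(46.0511, 14.5051)], salt=b"user-salt")
```

Multi-point selection simply concatenates more encoded cells before hashing, which is where the additional effective entropy of the multi-point configurations in the abstract would come from.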

Article
Biology and Life Sciences
Biochemistry and Molecular Biology

Abdulmohsen H. Alrohaimi

Abstract: Background: Advances in genomics over the past two decades have revealed that only a small fraction of the genome actively encodes proteins, while the majority of genomic sequences remain transcriptionally inactive across most biological contexts. Early interpretations described these regions as “junk DNA” or evolutionary remnants accumulated through neutral processes. However, accumulating evidence from functional genomics, epigenetics, and pseudogene research increasingly indicates that many apparently inactive genomic regions retain structural integrity and regulatory potential. These observations raise fundamental questions about why genomes preserve large quantities of apparently unused genetic information. Aim: This study introduces the concept of Biological Memory of the Genome as an extension of the Gene Latency framework proposed by Alrohaimi. The objective is to develop a conceptual theoretical model explaining how genomes preserve accumulated genetic information across evolutionary time and how latent genetic elements may represent stored biological potential. Methods: A conceptual research design was employed using integrative literature synthesis across genomics, evolutionary biology, pseudogene research, regulatory genomics, and systems biology. Through a multi-stage conceptual modeling process, patterns related to genomic preservation and latent functional potential were analyzed and integrated into a unified theoretical framework describing genomic biological memory. Results: The analysis suggests that genomes may function not only as systems executing active genetic programs but also as repositories of preserved genetic information accumulated across evolutionary history. Within this framework, pseudogenes, duplicated genes, regulatory sequences, and silent genetic pathways may represent components of a genomic biological memory system capable of storing latent functional potential. These preserved elements may remain inactive for extended evolutionary periods while retaining the capacity to participate in future regulatory or evolutionary processes. Conclusion: The concept of Biological Memory of the Genome extends the Gene Latency framework by proposing that genomic architecture includes a long-term evolutionary information storage system. This perspective offers a new theoretical interpretation of genomic inactivity and suggests that preserved genetic elements may contribute to evolutionary adaptability and regulatory innovation. Future genomic and computational research may help clarify the mechanisms through which biological memory is preserved and mobilized within genomic systems.

Review
Medicine and Pharmacology
Clinical Medicine

Angela Boahen, Adrian I. Abdo, Neil McMillan, Guilherme Pena, Katharina Richter

Abstract: Globally, approximately 18.6 million individuals develop diabetic foot ulcers each year, with an estimated 50–60% of these cases subsequently becoming infected. Diabetes-related foot infections (DFI) are a common and serious complication for patients with diabetes, often resulting in gangrene, lower extremity amputation and eventually death. Multidrug resistance among DFI pathogens aggravates treatment failure, driving up healthcare costs and morbidity. Addressing this multifaceted challenge necessitates the development of novel, synergistic, and innovative therapeutic strategies. Plasma-activated water (PAW) is an emerging solution, produced by treating water with cold plasma, an ionized gas that generates a complex mixture of reactive oxygen and nitrogen species. PAW elicits potent broad-spectrum antimicrobial and antibiofilm activity against a wide range of pathogens implicated in chronic wound infections, including DFI. Moreover, PAW has been shown to accelerate wound healing through modulating immune cell activity and promoting epithelial cell proliferation and migration into wounds. In this review, we summarize: (i) the prevalence and recurrence of DFIs, (ii) methods for PAW generation and its physicochemical properties, (iii) the antimicrobial and antibiofilm efficacy of PAW against clinically relevant DFI pathogens, (iv) its effects on cellular behavior, including immune modulation and promotion of epithelial regeneration, (v) PAW as a stand-alone or synergistic therapy, and (vi) current limitations in PAW application, including standardization, delivery, and regulatory hurdles. Together, this evidence highlights that PAW holds significant potential as a next-generation therapeutic approach for DFIs and other chronic wounds.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Yuliang Wang

Abstract: This study proposes a deep learning framework that combines graph neural networks (GNNs) and temporal modeling to enhance the accuracy and stability of supply chain risk prediction and optimization in pharmaceutical enterprises. By modeling the pharmaceutical supply chain as a complex graph structure, this research effectively captures the dependencies between nodes while using temporal networks to capture long-term dynamic changes within the supply chain. We design a model incorporating a multi-head attention mechanism, which provides accurate risk predictions under different demand fluctuation scenarios. The experimental results demonstrate that the proposed model outperforms existing traditional machine learning models and deep learning methods across multiple evaluation metrics, including Precision, Recall, F1-Score, and AUC-ROC. Particularly in complex environments, the model effectively identifies potential supply chain risk events, such as logistics delays, supply disruptions, and inventory fluctuations. Compared to traditional rule-based or statistical supply chain risk prediction methods, the proposed model shows greater robustness and accuracy by deeply exploring the structural and temporal features among supply chain nodes. Sensitivity analysis of model performance under varying demand fluctuation intensities and environmental changes further validates the model's feasibility and stability in real-world applications, providing effective technical support for the pharmaceutical industry in areas such as resource scheduling, inventory management, and risk early warning.
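The multi-head attention step the abstract relies on, scoring pairwise dependencies between supply-chain nodes, can be sketched with random projections standing in for learned weights. All shapes and parameters here are illustrative, not the paper's model.

```python
import numpy as np

def multi_head_attention(X, heads, rng):
    # Toy multi-head self-attention over node features X (n_nodes, d).
    # Random projections stand in for trained weight matrices.
    n, d = X.shape
    dh = d // heads
    outs = []
    for _ in range(heads):
        Wq, Wk, Wv = (rng.standard_normal((d, dh)) / np.sqrt(d) for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(dh)            # pairwise node affinities
        A = np.exp(scores - scores.max(axis=1, keepdims=True))
        A /= A.sum(axis=1, keepdims=True)         # softmax attention weights
        outs.append(A @ V)                        # aggregate neighbor features
    return np.concatenate(outs, axis=1)

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 8))   # 6 supply-chain nodes, 8 features each
out = multi_head_attention(X, heads=2, rng=rng)
```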

Article
Social Sciences
Behavior Sciences

Gonzalo Hoyos-Bucheli

Abstract: Various disciplines have delved into the complex relationship between social behavior and its individual and collective benefits. In social psychology, many scholars have explored human behavior driven by altruistic and selfish actions. From the perspective of evolutionary biology, numerous studies have examined the positive and negative effects of the actor and recipient on behaviors. However, when viewed through an interdisciplinary lens, these approaches only partially capture the intricate interplay between the actors’ behaviors and the societal impact of their actions. In a collaborative spirit, this article considers the pivotal work of Carlo Cipolla’s "Laws of Human Stupidity," which sought to classify people based on the benefits for themselves and others. By comparing definitions and interpretations from different disciplines, the article demonstrates the theoretical compatibility of Cipolla’s types of people, behavioral definitions from evolutionary biology and social psychology, and understanding of human intentions behind behavior based on the "Theory of Planned Behavior." Finally, this article compiles the results in a "Social Impact" classification with integrated definitions based on human behaviors and their underlying intentions.

Review
Biology and Life Sciences
Neuroscience and Neurology

Eleazar Ramírez Hernández, Citlalli Netzahualcoyotzi, Gabriela Hurtado-Alvarado, José Luis Sánchez, Ali Pereyra Morales, David Arredondo-Zamarripa, Luis Fernando Hernández-Zimbrón, Dulce Papy-Garcia, Jorge Guevara, Natalia Gutiérrez Ponce, +4 authors

Abstract: Epidemiological and clinical research on neurodegenerative diseases has indicated that metabolic dysregulation increases the risk of developing Alzheimer's disease (AD). Many metabolic alterations can be grouped within the metabolic syndrome (MetS), defined as the coexistence of three or more risk factors, such as insulin resistance, hyperglycemia, hypertension, central obesity, and dyslipidemia. These alterations induce systemic changes that play a critical role in driving neuroinflammation and neurodegeneration, essential causes of AD pathogenesis. All these factors compromise peripheral tissues and brain energy metabolism through reduced glucose utilization, which contributes to alterations in O-GlcNAcylation, glycosylation, mitochondrial dysfunction, oxidative stress, chronic inflammation, synaptic dysfunction, impaired autophagy, and blood-brain barrier (BBB) dysfunction. However, these factors are modifiable elements that depend on lifestyle. A relatively new perspective proposes that regular exercise plays an essential role in maintaining brain metabolism in ageing. Physical activity in MetS decreases the risk of developing Alzheimer's disease, is associated with better prognosis, and positively affects cognitive function in these patients. In this review, we discuss the mechanisms involved in MetS and their implication in AD and identify potential areas for preventive and therapeutic interventions.

Article
Computer Science and Mathematics
Information Systems

Nelson Herrera-Herrera, Estevan Ricardo Gómez-Torres

Abstract: The rapid proliferation of heterogeneous IoT sensor networks in urban public transportation systems generates large volumes of real-time data that are often fragmented across independent platforms, thereby limiting interoperability, scalability, and coordinated intelligence. Existing architectures typically treat sensing, edge processing, and artificial intelligence as loosely coupled components, lacking unified frameworks that support real-time adaptive decision-making in complex transportation environments. To address this gap, this study proposes a sensor-centric extension of the CAMS architecture that integrates semantic sensor interoperability, edge-enabled distributed processing, and embedded AI-driven coordination within a unified framework. The sensor-centric extended CAMS framework introduces a distributed sensor integration layer combined with a native intelligent coordination module that enables real-time multi-sensor fusion and predictive analytics. A functional prototype is evaluated using hybrid real-world and simulated datasets representing vehicle telemetry, infrastructure sensing, and passenger demand across diverse operational scenarios. Experimental results demonstrate significant improvements in interoperability efficiency, predictive accuracy, scalability, and end-to-end latency compared with conventional centralized architectures. The results indicate that tightly integrating distributed sensing with embedded intelligence enhances robustness and scalability in smart transportation ecosystems. The proposed architecture provides a practical and extensible foundation for next-generation intelligent urban mobility systems and advances the integration of IoT sensing and AI-driven decision-making in large-scale cyber–physical environments.

Article
Physical Sciences
Space Science

Changlong Wen

Abstract: This paper explores the fundamental nature of spacetime from the perspective of emergent causal structures, rooted in quantum mechanical principles. We propose a theoretical framework that treats spacetime not as a pre-existing background, but as a collective phenomenon arising from the interaction of quantum causal relations. By analyzing the constraints of causality and quantum coherence, we derive key implications for the emergence of classical spacetime geometry and the limits of local realism. Our results suggest a new way to bridge quantum mechanics and gravitational theory, providing a foundation for future studies in quantum gravity and spacetime physics.

Article
Business, Economics and Management
Economics

Xufeng Zhang, Han Li, Shenghui Bao

Abstract: In this paper we study competition between AI providers when users cannot fully verify model quality. Many AI services are not well described as standard search goods, and they are not pure experience goods either. Price, interface quality, and latency are usually observable, but reliability, hallucination risk, and the downstream cost of error are often only imperfectly observable even after use. Building on the emerging view that algorithmic advice can exhibit credence-good features, we embed that insight in a static industrial-organization model of vertical quality differentiation, costly certification, and limited user comprehension. Two firms choose whether to offer a low-quality or high-quality model. High-quality AI reduces error risk, but it is slower and costlier. A high-quality firm can purchase credible certification or disclosure, yet only a subset of users can interpret it. We characterize the pure-strategy equilibrium set, show how low-quality pooling can arise even when superior technology exists, and identify a quality-trap region in which the unique market equilibrium is low-quality pooling although an allocation with one high-quality provider is welfare superior. We then analyze policy. Standardized certification works through the demand side by increasing the fraction of users who can reward quality; minimum quality standards work directly but bluntly; liability shifts firms' cost incentives and weakly shrinks the region in which a low-quality industry outcome can be sustained. Contrary to a common rhetorical move in AI policy debates, these instruments are not interchangeable. The model also clarifies that our framework is a certification model rather than a full Grossman-Milgrom unraveling game: the key distortion comes from limited user comprehension of costly, truthful quality communication. 
The results offer a tractable industrial-organization foundation for current debates over hallucinations, model evaluation, AI documentation, and governance.
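The demand-side logic described above (certification helps only insofar as users can interpret it) can be illustrated with a deliberately simplified profit comparison. Every parameter and the decision rule below are hypothetical, not taken from the paper's model; the sketch only shows the comparative static that raising the informed fraction can flip the equilibrium.

```python
def equilibrium_quality(lam, price_premium, cert_cost, extra_cost):
    # Toy check: a firm invests in high quality iff the premium paid by the
    # fraction lam of users who understand certification covers the cost of
    # certification plus the extra production cost of high quality.
    gain = lam * price_premium
    return "high-quality" if gain >= cert_cost + extra_cost else "low-quality pooling"
```

With few sophisticated users the low-quality pooling outcome persists even though the high-quality technology exists; standardized certification "works through the demand side" precisely by increasing `lam`.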

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Mohsen Mostafa

Abstract: Physics-informed neural networks (PINNs) have emerged as powerful tools for solving partial differential equations, but their training remains challenging due to ill-conditioned loss landscapes. While adaptive methods like Adam dominate deep learning, they exhibit instability on stiff PDEs, and second-order methods are computationally prohibitive. We present EPANG-Gen, an optimizer that combines memory-efficient eigen-decomposition with lightweight Bayesian uncertainty quantification. EPANG-Gen introduces three elements: (1) a randomized eigenspace estimator that approximates Hessian curvature with O(dk) memory (k ≪ d), (2) Bayesian R-LayerNorm for per-activation uncertainty estimation, and (3) adaptive rank selection (PASA) that dynamically adjusts to problem difficulty. We evaluate EPANG-Gen on four benchmark PDEs—Poisson 1D, Burgers’ equation, Darcy flow, and Helmholtz 2D—and on the Taylor-Green vortex at Re = 100,000, a canonical 3D turbulence problem. All experiments were conducted under computational constraints (Kaggle, NVIDIA P100 GPU, limited epochs). Results show that EPANG-Gen achieves performance comparable to Adam on the toughest turbulent regime while eliminating the 25% catastrophic failure rate of ADOPT across 72 runs. Ablation studies confirm that eigen-preconditioning contributes to performance improvements of 11–35%. The built-in uncertainty estimates provide confidence metrics at negligible cost. This work represents an initial exploration of curvature-aware optimization for PINNs; further validation with larger compute resources is needed. Code is available at https://github.com/EPANG-Gen/EPANG-Gen.
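The randomized eigenspace estimator in element (1) belongs to the family of randomized range-finders. A generic sketch of that family, not the paper's exact estimator, needs only matrix-vector products with the Hessian and O(dk) memory:

```python
import numpy as np

def randomized_eigenspace(H_mv, d, k, rng, oversample=5):
    # Approximate the top-k eigenpairs of a symmetric d x d operator given
    # only matrix-vector products H_mv; a generic randomized range-finder.
    Omega = rng.standard_normal((d, k + oversample))
    Y = H_mv(Omega)                      # sample the range of H
    Q, _ = np.linalg.qr(Y)               # orthonormal basis, d x (k+p)
    T = Q.T @ H_mv(Q)                    # small projected matrix
    evals, U = np.linalg.eigh(T)
    idx = np.argsort(np.abs(evals))[::-1][:k]
    return evals[idx], Q @ U[:, idx]     # Ritz values and vectors

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
H = A @ A.T                              # SPD stand-in for a Hessian
evals, V = randomized_eigenspace(lambda X: H @ X, 50, k=5, rng=rng)
```

The returned Ritz pairs can then serve as a curvature preconditioner; the dominant cost is the two operator applications, which is what keeps the memory footprint at O(dk).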

Article
Engineering
Telecommunications

Basker Palaniswamy

Abstract: Radio signals carry information in three natural ways: by changing how strong the signal is (amplitude), how high or low its tone is (frequency), and how its timing shifts within the wave (phase). In most communication systems, engineers use only one of these features at a time. As a result, much of the signal’s potential to carry information remains unused. This paper explores a simple but powerful idea: using all three features of a radio wave simultaneously to transmit information on a single carrier signal. By combining amplitude, frequency, and phase modulation together, a single radio wave can carry far more information without requiring additional bandwidth. To explain and analyze this concept, the work introduces an intuitive geometric framework inspired by a four-dimensional shape called a “tesseract,” often described as a “four-dimensional cube.” In this framework, three directions represent the three information channels—amplitude, frequency, and phase—while the fourth represents time. This geometric picture provides a clear way to visualize how the three channels coexist without interfering with each other. As a simple demonstration, the phrase “I Love You” is encoded by assigning each word to a different feature of the signal: “I” is carried by amplitude changes, “Love” by frequency variations, and “You” by phase shifts. Colourful waveform plots, three-dimensional visualizations, and a novel “tesseract slicing” illustration help make the four-dimensional behaviour easier to understand. The proposed framework has potential applications in satellite communication, future 5G/6G networks, radar systems, and signal-processing education. By using all three dimensions of a signal at once, this approach reveals previously unused communication capacity and shows how a single radio wave could deliver substantially more information without consuming extra spectrum.
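The central idea, keying amplitude, frequency, and phase of one carrier at the same time, can be sketched numerically. The symbol rate, shift sizes, and bit-to-feature mapping below are illustrative choices, not the paper's parameters.

```python
import numpy as np

def am_fm_pm_wave(t, fc, a_bits, f_bits, p_bits, rate):
    # Toy carrier that keys amplitude, frequency, and phase simultaneously,
    # one bit stream per feature, switching once per symbol.
    sym = np.minimum((t * rate).astype(int), len(a_bits) - 1)
    amp = np.where(np.array(a_bits)[sym], 1.0, 0.5)       # amplitude keying
    df = np.where(np.array(f_bits)[sym], 200.0, 0.0)      # Hz frequency shift
    ph = np.where(np.array(p_bits)[sym], np.pi, 0.0)      # phase shift
    dt = t[1] - t[0]
    # integrate instantaneous frequency for a continuous FM phase
    phase = 2 * np.pi * np.cumsum(fc + df) * dt + ph
    return amp * np.cos(phase)

t = np.linspace(0, 0.01, 1000, endpoint=False)
s = am_fm_pm_wave(t, fc=1e3, a_bits=[1, 0], f_bits=[0, 1], p_bits=[1, 0], rate=200)
```

A receiver would recover the three streams independently with an envelope detector, a frequency discriminator, and a phase detector, which is the sense in which the channels coexist on one carrier.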

Article
Computer Science and Mathematics
Robotics

Israel Kolaïgué Bayaola, Jean Louis Ebongué Kedieng Fendji, Blaise Omer Yenke, Marcellin Atemkeng, Ibidun Christiana Obagbuwa

Abstract: The rapid proliferation of unmanned aerial vehicles (UAVs) in energy-intensive applications (such as autonomous logistics, continuous surveillance, and mobile edge computing) has driven a critical need for highly reliable energy consumption models. However, selecting an appropriate modeling strategy remains an ad-hoc process; researchers must frequently navigate complex, undocumented trade-offs among required predictive accuracy, empirical data availability, and access to aerodynamic testing infrastructure. This study proposes a systematic, two-stage decision-making framework designed to standardize UAV energy model selection. In the first stage, a qualitative decision tree is inductively derived from a comprehensive corpus of recent literature (an 80% training split), explicitly mapping infrastructural and informational constraints to five distinct modeling regimes, ranging from novel white-box derivations to deep-learning black-box applications. This structural logic is subsequently validated against an independent 20% literature holdout set, achieving a 100% predictive match. In the second stage, the Analytic Hierarchy Process (AHP) is applied to quantitatively rank the feasible alternatives based on context-specific criteria: accuracy, interpretability, development cost, and customization adaptability. Crucially, this quantitative scoring introduces "fallback flexibility," allowing researchers to seamlessly pivot to mathematically adjacent alternative models when unforeseen experimental roadblocks occur. Embedded within an open-source Python graphical interface, this framework mitigates methodological ambiguity, prevents the over-allocation of research resources, and fosters greater reproducibility within the energy-aware UAV research community.
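The second-stage AHP ranking follows the standard eigenvector method. A minimal sketch with a hypothetical three-criterion comparison matrix (not the paper's expert judgments):

```python
import numpy as np

def ahp_priorities(M, iters=100):
    # Priority weights from a pairwise-comparison matrix via the principal
    # eigenvector (power iteration), plus Saaty's consistency index.
    w = np.ones(len(M)) / len(M)
    for _ in range(iters):
        w = M @ w
        w /= w.sum()
    lam = (M @ w / w).mean()          # principal eigenvalue estimate
    n = len(M)
    ci = (lam - n) / (n - 1)          # consistency index
    return w, ci

# Hypothetical judgments: accuracy vs interpretability vs development cost
M = np.array([[1,   3,   5],
              [1/3, 1,   2],
              [1/5, 1/2, 1]])
w, ci = ahp_priorities(M)
```

In the framework described above, each feasible modeling regime would be scored against such weights, and a low consistency index confirms the expert judgments are internally coherent.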

Article
Public Health and Healthcare
Other

Max Schmeling, Tomáš Fürst, Vibeke Manniche, Peter Riis Hansen, Jonathan D. Gilthorpe

Abstract: Background: Variation in suspected serious adverse events (SAEs) linked to different batches of COVID-19 vaccines has been reported in several countries, including the Czech Republic, Denmark, Sweden, and the USA. However, SAE data come from spontaneous reporting systems and are subject to under-reporting and other biases. We examined all-cause mortality (ACM) data to explore the temporal relationship between COVID-19 vaccine type and batch, and death, up to three months after vaccination. Methods: We analyzed nationwide data from the Czech Republic on vaccine type and batch, along with the corresponding three-month ACM data. Cluster analysis was used to assess differences in ACM across vaccine types and batches. Cluster-specific mortality rates were adjusted for age and sex and compared, with a focus on the timing of batch administration. We also investigated the relationship between ACM and SAEs for the same batches. Results: During a 21-month period (December 2020 to September 2022), vaccine batches were grouped according to three-month ACM rates for the four products administered (Comirnaty, SPIKEVAX, Vaxzevria, and Jcovden). For Comirnaty, SPIKEVAX, and Vaxzevria, a clear temporal pattern appeared, with earlier batches showing significantly higher ACM rates, even after adjusting for age and sex. A strong correlation was found between batches clustered by mortality and those previously identified to cluster by reported SAEs for all products except Jcovden. Conclusions: Data from the Czech Republic reveal a clear link between the most recently administered COVID-19 vaccine batch and short-term ACM rates. For three of the four vaccines, earlier batches were associated with notably higher ACM. The similar pattern observed between batch-associated mortality and SAE rates supports the existence of batch-related safety signals that warrant further investigation using individual-level patient data.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Hsiu-Chi Tsai

Abstract: We deploy a spiking neural network (SNN)-equivalent intrusion detection system (IDS) on the STM32N6570-DK, a commodity ARM Cortex-M55 MCU with the Neural-ART NPU. Exploiting the approximate equivalence between single-timestep (T=1) SNN inference and INT8 quantized ANN inference, we compile a lightweight MLP classifier to the NPU without neuromorphic hardware. Evaluated on NSL-KDD (5-class) and UNSW-NB15 (10-class) with 10 random seeds, the ReLU model achieves 78.86±1.32% and 64.75±0.61% overall accuracy, respectively. INT8 accuracy stays within 1 percentage point of FP32 across all 24 tested calibration configurations, and layer-wise analysis shows 99.0% final prediction agreement between FP32 and INT8 models. On the NPU, the INT8 model infers in 0.46 ms on NSL-KDD and 0.29 ms on UNSW-NB15 (100% NPU execution), 2.7–4.2× faster than the same model on the Cortex-M55 CPU, while occupying 120.6–137.7 KB Flash and 0.5–1.25 KB RAM. Tree-based baselines (Random Forest, XGBoost) achieve higher overall accuracy on UNSW-NB15 but cannot be compiled for the NPU at all. To our knowledge, this is the first publicly documented IDS deployment on an ARM Cortex-M NPU and the first publicly documented empirical validation of T=1 SNN–ANN equivalence on commercial NPU silicon.
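The quantization step underlying the claimed T=1 SNN–ANN equivalence can be illustrated with a single ReLU layer compared in FP32 and INT8. Sizes, scales, and the agreement metric below are illustrative, not the deployed model's.

```python
import numpy as np

def quantize(x, scale):
    # Symmetric per-tensor INT8 quantization (illustrative calibration).
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 8)).astype(np.float32) * 0.1
x = rng.standard_normal((32, 16)).astype(np.float32)

# FP32 reference: one ReLU layer
ref = np.maximum(x @ W, 0.0)

# INT8 path: quantize inputs and weights, accumulate in int32, dequantize
sx, sw = np.abs(x).max() / 127, np.abs(W).max() / 127
acc = quantize(x, sx).astype(np.int32) @ quantize(W, sw).astype(np.int32)
out = np.maximum(acc * (sx * sw), 0.0)

# fraction of samples where FP32 and INT8 pick the same top unit
agreement = np.mean(np.argmax(out, axis=1) == np.argmax(ref, axis=1))
```

The paper's 99.0% final-prediction agreement is the full-network analogue of this per-layer check, carried out on the Neural-ART NPU rather than in numpy.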

Article
Engineering
Other

Zuo Tang, Xiaoheng Wang, Yefei Mao, Ruochen Zhao, Baozhen Zhao, Huicong Chang, Chang Yang, Lin Xiao

Abstract: Strong light interference severely degrades imaging system performance. This paper presents a novel Digital Micromirror Device (DMD)-based imaging system for robust strong light suppression and long-distance detection. Our design strategically places the DMD at the primary image plane, utilizing a large F-number objective for extended depth of field. The relay imaging system employs a tilted image plane in a near-symmetric configuration to effectively balance DMD-induced aberrations, simplifying alignment and achieving a compact, high-performance layout. The DMD's regional flipping capability enables precise, dynamic suppression of strong light. Experimental results from a fabricated prototype demonstrate superior imaging quality (MTF > 0.3 at 167.3 lp/mm) and exceptional suppression of intense laser interference, ensuring clear image acquisition in challenging lighting. This system offers an efficient solution for high-quality, long-range imaging in strong light environments.

Article
Biology and Life Sciences
Insect Science

Gaetan LeClair, Peter Mayo

Abstract: Insect attractant lures come in many formats, one of which utilizes tapered rubber sleeve stoppers normally used to seal laboratory glassware openings. Their cup-shaped top is ideal for pipetting a solution into the cavity and, through permeation, loading quantities of active ingredients. Expansion or swelling of the rubber facilitates permeation of the active into its matrix, a role that dichloromethane performs well; dichloromethane is also favored for its volatility and broad chemical compatibility. However, this solvent is possibly on the verge of retirement, which would mean finding alternatives. It was found that several other common laboratory solvents could serve as replacements, and of those tested, tetrahydrofuran outperformed dichloromethane in terms of overall volume uptake and swelling. When loading the septum/sleeve with larger amounts of active, a full-soaking methodology can disperse the active throughout the rubber sleeve and reduce labor, since batches can be processed rather than pipetting a solution into individual sleeves manually.

Review
Engineering
Automotive Engineering

Vanchha Chandrayan, Ignacio Alvarez

Abstract: In recent years, Large Language Models (LLMs) have demonstrated robust reasoning capabilities comparable to human performance. This makes them increasingly appealing for driver assistance, where adaptation to dynamic human context is essential. Yet research in this area remains fragmented, often focusing on isolated applications and failing to utilize LLMs' full potential to deliver integrated, context-specific support and action. This survey synthesizes recent advancements in LLM-driven occupant monitoring systems, focusing on their capabilities for interpreting driver states and acting appropriately, enabling a new generation of intelligent driver assistance. We critically examine pioneering frameworks, benchmarks, and foundational datasets that employ techniques like reasoning chains, multimodality, and human-in-the-loop feedback to create personalized and safe driving experiences. We lay out current trends, limitations, and emerging patterns, in addition to a novel human-centered evaluation of the field, providing researchers with a roadmap towards transparent and trustworthy in-cabin systems that bridge safety with driver experience.

Article
Medicine and Pharmacology
Gastroenterology and Hepatology

Ella Findling, Terrence Bissoondial, Prakash Narayan

Abstract: Metabolic dysfunction-associated steatotic liver disease (MASLD) is the most common chronic liver disease in children and is strongly associated with obesity and insulin resistance. In this study, we evaluated the clinical effects of glucagon-like peptide-1 receptor agonist (GLP-1 RA) therapy in a de-identified cohort of pediatric patients with MASLD and investigated potential molecular mechanisms using publicly available transcriptomic datasets from models of liver disease. Longitudinal FibroScan measurements from seven pediatric patients treated with GLP-1 RAs demonstrated significant reductions in controlled attenuation parameter scores, transient elastography scores and AST levels, indicating improvements in hepatic steatosis, liver stiffness and the liver inflammatory profile, respectively. To explore potential mechanisms underlying these observations, we analyzed transcriptomic datasets from methionine-choline deficient (MCD) and high-fat diet (HFD) mouse models of liver disease. A pattern-matching algorithm identified a core set of ten genes consistently upregulated in both models and downregulated following GLP-1 RA treatment in the HFD model. These genes are enriched in extracellular matrix remodeling, inflammatory signaling, and fibrogenic pathways associated with hepatic stellate cell activation. Collectively, these findings suggest that GLP-1 RA therapy may improve pediatric MASLD by attenuating fibrogenic and inflammatory transcriptional programs. Although limited by a small cohort size, this integrated clinical-transcriptomic approach supports further investigation of GLP-1 receptor agonists as a therapeutic strategy for pediatric MASLD.

Article
Engineering
Architecture, Building and Construction

Przemysław Konopski, Wojciech Bonenberg, Roman Pilch

Abstract: Despite advances in engineering, fire safety improvements have plateaued in developed nations, necessitating a reassessment of resource allocation. This study develops a comprehensive fire safety assessment model for the Polish context using the Analytic Hierarchy Process (AHP). A panel of ten experts—comprising fire safety inspectors, State Fire Service officers, and architects—evaluated safety through a two-dimensional framework: the Fire Hazard Index (FHI) and Fire Safety Index (FSI). The results reveal a critical asymmetry: human factors (0.228) and combustible materials dominate the hazard landscape, whereas intelligent AI/IoT systems (0.4133) and passive protection (0.2113) offer the highest potential for safety enhancement. A novel "convergence-divergence" phenomenon was identified: hazard-focused assessments produce convergent priorities across building types (span 0.116), implying universal mitigation needs (e.g., education), while protection-focused assessments yield divergent priorities (span 0.250), justifying targeted investment. Specifically, healthcare facilities (ZL II) require disproportionate protection investment (priority 0.310). The study concludes that sustainable fire safety strategies must combine universal hazard mitigation with targeted technological interventions, offering a data-driven framework for policy optimization in Poland.

Review
Physical Sciences
Other

Roberto Alvarez-Martinez, Pedro Miramontes

Abstract: Ecosystems can undergo abrupt, often irreversible transitions between alternative states —phenomena termed critical transitions or regime shifts— with profound consequences for biodiversity, ecosystem services, and human well-being. Early warning signals (EWS) derived from time series analysis offer the prospect of anticipating such transitions before they occur, potentially enabling preventive management intervention. This review provides a comprehensive synthesis of EWS methods for ecological systems, encompassing theoretical foundations, statistical indicators, empirical applications, and emerging methodological frontiers. We examine the dynamical basis of EWS in critical slowing down theory, wherein systems approaching bifurcation points exhibit characteristic statistical signatures including rising autocorrelation, increasing variance, and spectral reddening. We present a systematic overview of proposed indicators (Table 1), discuss moving-window frameworks for their computation, and critically evaluate preprocessing requirements and sensitivity to analytical choices. Empirical applications across major ecosystem types, including lakes, coral reefs, grasslands, forests, and marine fisheries, reveal both successes and limitations, with EWS performance depending critically on data quality, transition mechanism, and system-specific dynamics (Table 2). We address recent advances including machine learning approaches, non-equilibrium thermodynamic indicators, multivariate extensions, and the important distinction between bifurcation-induced, noise-induced, and rate-induced tipping. We conclude with recommendations for specialists, emphasizing the integration of EWS within broader monitoring frameworks, systematic sensitivity analysis, and the interpretation of indicators as probabilistic assessments of changing resilience rather than deterministic predictions of imminent collapse.
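The moving-window indicators discussed (rising lag-1 autocorrelation and variance under critical slowing down) can be sketched on a synthetic series whose AR(1) coefficient ramps toward 1, a standard toy model of an approach to a bifurcation; window length and the ramp are illustrative choices.

```python
import numpy as np

def ews_indicators(x, window):
    # Rolling lag-1 autocorrelation and variance, the two classic
    # critical-slowing-down indicators, computed in a moving window.
    ac1, var = [], []
    for i in range(len(x) - window + 1):
        w = x[i:i + window]
        var.append(w.var())
        w0 = w - w.mean()
        ac1.append((w0[:-1] * w0[1:]).sum() / (w0 * w0).sum())
    return np.array(ac1), np.array(var)

# AR(1) series whose coefficient ramps toward 1, mimicking loss of resilience
rng = np.random.default_rng(2)
n = 2000
phi = np.linspace(0.2, 0.97, n)
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi[i] * x[i - 1] + rng.standard_normal()
ac1, var = ews_indicators(x, window=400)
```

On such a series both indicators trend upward toward the end, which is exactly the probabilistic resilience signal (rather than a deterministic prediction) that the review recommends interpreting with care.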



Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.

© 2026 MDPI (Basel, Switzerland) unless otherwise stated