Review
Biology and Life Sciences
Biochemistry and Molecular Biology

Kyeong-Man Kim

Abstract: G protein-coupled receptors (GPCRs) form the largest family of cell-surface signaling proteins and remain the most exploited drug targets. Their regulation depends on desensitization, the attenuation of receptor responsiveness following sustained or repeated stimulation. The canonical framework, validated for rhodopsin and the β2-adrenergic receptor, describes desensitization as GRK-mediated receptor phosphorylation, arrestin recruitment, and steric occlusion of G protein coupling. This receptor-centric view has been supported primarily by time-course assays of second messenger generation, which capture rapid, reversible attenuation at the plasma membrane. Recent studies, however, reveal a receptor-distal program in which arrestin deubiquitination, regulated by EGFR–Akt–USP33 signaling, converts arrestin into a Gβγ-sequestering state and drives nuclear redistribution of signaling components. This perspective has been advanced through washout–rechallenge assays (pulse‑chase‑pulse), which demonstrate diminished responsiveness upon repeated stimulation even after receptor recovery. Integrating these complementary approaches, we propose a two-level framework for GPCR desensitization. Level 1 (proximal) reflects rapid receptor-core uncoupling, suited to transient stimulation. Level 2 (distal) reflects slower, trafficking-dependent redistribution of signaling machinery, producing more durable attenuation. This layered model reconciles divergent experimental findings, explains receptor subtype diversity, and highlights how desensitization extends beyond receptor blockade to spatial reorganization of signaling components. By linking methodological differences to mechanistic insights, this framework provides a unified view of GPCR desensitization. It informs future directions in biased agonism, acute tolerance, and the design of therapeutics with tailored desensitization profiles.

Article
Biology and Life Sciences
Agricultural Science and Agronomy

Teodor Ioan Trasca

,

Ioana Mihaela Balan

,

Nicoleta Mateoc-Sirb

,

Aisha Simbiat Hussaini

,

Annie Gaise Magomboane

,

Sorin Mihai Cimpeanu

Abstract: Background/Objectives: Food system assessments in Sub-Saharan Africa have predominantly focused on caloric availability, often overlooking the structural and nutritional quality of the food supply. This study aims to analyze long-term changes in caloric availability and the structure of the protein supply, focusing on adjusted protein density, the balance between animal and plant proteins, and the diversification of animal protein sources. Methods: The analysis is based on data from the Food and Agriculture Organization of the United Nations for the period 1961–2023, covering Nigeria, the Republic of the Congo, the Democratic Republic of the Congo, Kenya, South Africa, and Ethiopia. A longitudinal approach was applied where data were available, complemented by a recent comparative analysis (2010–2023). Derived indicators include total protein supply, adjusted protein density, the share of animal and plant proteins, and the Herfindahl-Hirschman index (HHI) to assess the diversification of animal protein sources. Results: Findings show that increases in caloric availability are not consistently associated with improvements in adjusted protein density or nutritional quality. Significant differences are observed across countries in protein supply levels, structural composition, and diversification patterns. Results reveal heterogeneous trajectories of food systems, from diversified to structurally constrained or highly concentrated systems. Conclusions: Beyond caloric availability, the structure and quality of the food supply are essential for assessing food system performance. Quantitative gains alone do not ensure improved nutritional or sustainable outcomes. The study highlights the importance of considering protein supply structure, diversification, and nutritional efficiency in the context of SDG 2 (Zero Hunger) and broader food system transformation.
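The Herfindahl-Hirschman index used in this study is the standard concentration measure (the sum of squared shares). A minimal sketch of how it could be computed from animal protein source shares; the shares below are illustrative, not the study's data:

```python
def hhi(shares):
    """Herfindahl-Hirschman index: sum of squared shares.

    `shares` are proportions of each animal protein source in the
    total animal protein supply. Shares are renormalized defensively.
    Returns a value in (0, 1]: 1/n for n equal sources (maximally
    diversified), 1 for a fully concentrated, single-source supply.
    """
    total = sum(shares)
    return sum((s / total) ** 2 for s in shares)

# Hypothetical country with four uneven animal protein sources
print(hhi([0.55, 0.25, 0.15, 0.05]))  # concentrated -> 0.39
print(hhi([0.25, 0.25, 0.25, 0.25]))  # evenly diversified -> 0.25
```

Lower values indicate a more diversified animal protein supply; a rising HHI over the study period would signal structural concentration even if total protein supply grows.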

Article
Environmental and Earth Sciences
Water Science and Technology

Antonina P. Malyushevskaya

,

Olena Mitryasova

,

Michał Koszelnik

,

Ivan Šalamon

,

Andrii Mats

,

Andżelika Domoń

,

Eleonora Sočo

Abstract: Electric discharge cavitation is an effective method for water treatment that combines physical and chemical effects within a single process. It enables water disinfection, extraction acceleration, dispersion of solid particles, and enhancement of porous material permeability. Compared to conventional chemical treatment, it reduces the demand for reagents and minimizes secondary pollution. This new and developing technology significantly contributes to the preservation of natural aquatic ecosystems by providing a sustainable alternative to traditional decontamination methods, thereby reducing the overall anthropogenic pressure on the environment. This study focuses on developing a reliable method for assessing electric discharge cavitation intensity and controlling water purification processes. The proposed approach is based on the oxidation of iodide ions to molecular iodine by reactive species generated during electric discharge cavitation. The adapted iodometric method is sensitive, reproducible, and does not require complex optical or acoustic equipment. Experimental results confirmed that iodometry provides accurate evaluation of cavitation intensity, allowing control of specific energy consumption and optimization of treatment parameters. Optimal operating conditions were established for controlling water processing by electric discharge cavitation: stainless-steel electrodes, specific input energy not exceeding 280 kJ·L⁻¹, the presence of a free liquid surface in the working chamber, and a discharge pulse frequency below 10 Hz. The proposed method supports the development of energy-efficient, low-waste technologies for wastewater and natural water treatment and facilitates the integration of electric discharge systems into existing water treatment infrastructure, particularly under resource-limited conditions.

Article
Engineering
Mechanical Engineering

A. Syed

,

M.K. Samal

Abstract: Pressure tubes (channels containing fuel in CANDU-type nuclear reactors) of Indian pressurized heavy water reactors are made from quadruple-melted Zr-2.5Nb alloy. Owing to their texture and crystal structure, these tubes exhibit anisotropy in mechanical properties. During a postulated severe accident scenario such as a loss-of-coolant accident, the temperature of the pressure tube may rise rapidly due to disruption of heat removal from the fuel bundles. The deformation behavior of the pressure tube at such high temperatures affects the integrity of the coolant channel. Hence, it is crucial to model the high-temperature deformation behavior of the pressure tube under these conditions, and the high-temperature properties are required for its design and safety analysis. In this work, tensile tests were carried out on specimens cut from a quadruple-melted Zr-2.5%Nb pressure tube along the longitudinal, transverse, and radial directions at temperatures from 25 °C to 800 °C. Shear properties were also evaluated using specimens machined from the longitudinal-circumferential orientation of the tube. It was observed that specimens oriented along the circumferential direction have the highest strength, while radial specimens have the lowest strength, at all temperatures. With the increase in temperature above 600 °C, the material undergoes superplastic deformation, with strain values reaching above 400% at 800 °C. In addition, an algorithm has been developed to determine the anisotropic parameters of Hill's yield function as a function of temperature and equivalent plastic strain using the experimental data. The equivalent stress-strain curves considering anisotropy have also been evaluated as a function of temperature. These data will be useful for design and safety analysis of the pressure tube under different types of postulated loading and accident conditions.
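For reference, Hill's quadratic yield criterion mentioned in this abstract has the standard 1948 form (the notation below follows the common textbook convention; F, G, H, L, M, N are the anisotropy parameters that the study determines from the directional test data):

```latex
\bar\sigma^{2} = F(\sigma_{22}-\sigma_{33})^{2} + G(\sigma_{33}-\sigma_{11})^{2}
               + H(\sigma_{11}-\sigma_{22})^{2}
               + 2L\,\sigma_{23}^{2} + 2M\,\sigma_{31}^{2} + 2N\,\sigma_{12}^{2}
```

For an isotropic material the parameters reduce to F = G = H = 1/2 and L = M = N = 3/2, recovering the von Mises equivalent stress; the directional strength differences reported here correspond to unequal parameter values.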

Concept Paper
Engineering
Electrical and Electronic Engineering

Divyasree Bellary

Abstract: Blockchain networks have emerged as foundational infrastructure for decentralised finance, supply chain management, healthcare data exchange, and digital identity systems. Despite their cryptographic foundations and distributed consensus mechanisms, blockchain deployments remain susceptible to a growing spectrum of security threats—including Sybil attacks, selfish mining, eclipse attacks, smart contract vulnerabilities, and routing-layer exploits [14]. This paper explores applications of graph theory for modeling blockchain networks to evaluate decentralization, security, privacy, scalability, and NFT mapping. We use graph metrics like degree distribution and betweenness centrality to quantify node connectivity, identify network bottlenecks, trace asset flows, and detect communities. Traditional security evaluation methodologies, largely inherited from centralised network security assessment, fail to capture the topological, temporal, and game-theoretic complexity inherent to blockchain architectures. This paper proposes the Blockchain Security Evaluation Framework using Graph Models (BSEG), a structured five-component framework that applies graph-theoretic formalisms to model, analyse, and evaluate security properties across the full blockchain protocol stack. The proposed framework integrates transaction graph analysis, peer-to-peer network topology modelling, smart contract dependency graphs, consensus mechanism simulation through game-theoretic overlays, and a temporal anomaly detection layer that identifies structural deviations indicative of adversarial behaviour. A formal pseudo-algorithm details the core vulnerability propagation scoring pipeline, and two architectural diagrams illustrate the framework's layered structure and end-to-end evaluation workflow. The framework is evaluated against fifteen representative studies spanning blockchain security, graph-based anomaly detection, smart contract analysis, and distributed system resilience.
Key contributions include a unified graph-model vocabulary applicable across heterogeneous blockchain platforms, a scoring function that integrates structural centrality, edge entropy, and consensus deviation metrics into a composite security index, and a governance model that accommodates privacy-preserving audit mechanisms. Critical challenges including computational scalability over billion-edge transaction graphs, adversarial graph poisoning, and cross-chain interoperability are discussed alongside directions for empirical validation. This work provides a replicable architectural blueprint for security auditors, protocol designers, and researchers seeking to apply rigorous graph-theoretic methods to blockchain security assessment.
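The degree-distribution and betweenness-centrality metrics this abstract relies on can be sketched in pure Python on a toy overlay graph; the topology and node names below are illustrative assumptions, not data from the paper:

```python
from itertools import combinations
from collections import deque

# Toy peer-to-peer overlay standing in for a blockchain network
# topology; edges are illustrative only.
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("D", "E"), ("E", "F")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def degree_distribution(adj):
    """Node -> number of peers: how connectivity is spread."""
    return {n: len(nbrs) for n, nbrs in adj.items()}

def all_shortest_paths(adj, s, t):
    """Enumerate all shortest s-t paths by BFS (fine for toy graphs)."""
    paths, best = [], None
    q = deque([[s]])
    while q:
        path = q.popleft()
        if best is not None and len(path) > best:
            break  # queue lengths are nondecreasing; longer paths remain
        node = path[-1]
        if node == t:
            best = len(path)
            paths.append(path)
            continue
        for nbr in adj[node]:
            if nbr not in path:
                q.append(path + [nbr])
    return paths

def betweenness(adj):
    """Unnormalized betweenness: for each pair, the fraction of
    shortest paths passing through a node, summed over all pairs.
    High scores flag bottlenecks / eclipse-attack targets."""
    score = {n: 0.0 for n in adj}
    for s, t in combinations(adj, 2):
        paths = all_shortest_paths(adj, s, t)
        for p in paths:
            for mid in p[1:-1]:
                score[mid] += 1 / len(paths)
    return score

deg = degree_distribution(adj)
bc = betweenness(adj)
print(deg)
print(max(bc, key=bc.get))  # most likely structural bottleneck
```

On real billion-edge transaction graphs this brute-force enumeration is infeasible; Brandes-style algorithms (as implemented in libraries such as NetworkX) are the practical route, but the definitions are the same.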

Article
Engineering
Transportation Science and Technology

Aurelian Horia Nicola

,

Mihai Sorin Radu

,

Csaba Lorint

,

Mila Ilieva Obretenova

,

Nicolae Daniel Fita

Abstract: The rapid evolution of urban environments and the growing demand for efficient transportation systems have accelerated the transition toward smart cities. In this context, traffic modeling and urban mobility analysis play a critical role in understanding, predicting, and optimizing complex transportation dynamics. This study explores contemporary approaches to traffic modeling, integrating data-driven methodologies, simulation techniques, and intelligent transportation systems to enhance urban mobility in the city of Petroșani, Romania. Emphasis is placed on the use of big data, Internet of Things (IoT) technologies, and machine learning algorithms for real-time traffic monitoring, demand forecasting, and adaptive traffic management. The paper examines the interaction between traditional modeling frameworks and emerging smart city infrastructures, highlighting how advanced analytics can improve congestion mitigation, reduce environmental impact, and support sustainable mobility solutions. Furthermore, it discusses multimodal transportation integration, user behavior analysis, and policy implications for urban planners and decision-makers. A conceptual framework is proposed to bridge the gap between theoretical models and practical implementations within smart city ecosystems. The findings suggest that the convergence of digital technologies and traffic modeling significantly enhances the resilience, efficiency, and sustainability of urban mobility systems. The study contributes to the ongoing discourse by identifying key challenges, opportunities, and future research directions in the development of intelligent, data-driven transportation networks.

Article
Engineering
Other

Alekhya Mutyala

Abstract: Peer-to-peer (P2P) systems have emerged as a fundamental paradigm for decentralized resource sharing, communication, and computation across diverse application domains such as energy trading, distributed storage, social networking, and federated learning. Unlike traditional client–server architectures, P2P networks operate without centralized control, enabling nodes to act as both resource providers and consumers, thereby improving scalability, robustness, and system efficiency. However, the decentralized and dynamic nature of P2P systems introduces significant challenges in ensuring reliability, performance, and security, making testing a critical research area. This paper presents a comprehensive survey of peer-to-peer network testing techniques, integrating insights from existing literature on P2P architectures, simulation frameworks, and application-specific implementations. Early research primarily relied on analytical models and simulations to evaluate P2P systems, as real-world experimentation is often impractical due to large-scale deployment requirements. Simulation-based testing remains a widely adopted approach, though it suffers from limitations such as scalability constraints and lack of reproducibility in complex environments. Modern testing approaches for P2P systems include model-based testing, fault injection, stress testing, fuzzing, differential testing, and concurrency testing. These techniques address key challenges such as validating global system properties, handling peer churn, ensuring consistency, and managing heterogeneous network conditions. Additionally, load testing and performance evaluation methods have evolved to leverage decentralized testing frameworks, eliminating bottlenecks associated with centralized testing systems.
Measurement-based techniques and sampling methods are also used to analyze large-scale P2P networks efficiently, though issues such as bias and incomplete data collection persist. Emerging application domains such as P2P energy trading and federated learning introduce new testing requirements, including real-time system validation, privacy preservation, and integration with physical systems. Advanced approaches like hardware-in-the-loop testing and AI-driven evaluation mechanisms are being explored to bridge the gap between simulation and real-world deployment. This survey consolidates key testing methodologies, identifies open challenges, and highlights future research directions in P2P network testing, emphasizing its critical role in enabling reliable, scalable, and intelligent decentralized systems.

Article
Engineering
Energy and Fuel Technology

Barbara Marchetti

,

Francesco Corvaro

,

Guido Castelli

,

Alberto Cavallito

Abstract: The management of European mountain landscapes is increasingly threatened by rural abandonment and escalating environmental risks. This study investigates an innovative Stewardship-Renewable Energy Communities model for the Central Apennines, exploring how post-seismic public reconstruction can serve as a financial engine for territorial maintenance. Utilizing Open Data Sisma administrative records and Photovoltaic Geographical Information System irradiation metrics, the research assesses the solar potential of 20 municipalities within the Sibillini seismic crater. To ensure a reliable baseline, a Building Suitability Coefficient was introduced as a conservative proxy for the public reconstruction sector. Results indicate that the establishment of public Renewable Energy Communities could generate approximately €1.08 million in annual revenue from 325 identified energy nodes. This economic surplus provides a Stewardship Capacity sufficient to fund the active maintenance of 789.77 hectares per year through Nature-based Solutions, assuming a standardized rate of 1,200 €/ha. The study concludes that distributed rooftop solar portfolios represent a non-invasive, self-funding mechanism for mountain resilience. By leveraging the Anthropo-systemic capital of reconstructed public hubs, mountain territories can transition from passive management neglect to active, energy-backed stewardship, offering a reproducible template for high-value cultural landscapes.

Article
Medicine and Pharmacology
Gastroenterology and Hepatology

Diana-Elena Floria

,

Andrei Neamțu

,

Radu Iliescu

,

Brîndușa Alina Petre

,

Batuhan Uzunoglu

,

Oana-Bogdana Bărboi

,

Vasile-Liviu Drug

Abstract: Background: Pepsin, a component of gastric refluxate, has been investigated as a potential salivary biomarker for gastro-esophageal reflux disease (GERD), although its diagnostic accuracy remains uncertain. This proof-of-concept study aimed to characterize the effects of pepsin on the human salivary peptidome using matrix-assisted laser desorption/ionization–time of flight (MALDI-ToF) mass spectrometry. Methods: Whole saliva samples were collected from ten healthy adult volunteers under fasting conditions and divided into untreated controls and aliquots digested with pepsin at acidic pH. MALDI-ToF MS was used to profile digestion-induced changes in peptide mass patterns. Spectral data were analyzed using multivariate statistical approaches, including principal component analysis (PCA), linear discriminant analysis (LDA), and hierarchical clustering. Results: Pepsin digestion increased peptide signal intensity and spectral complexity compared with controls. PCA demonstrated clear separation between native and digested samples along the first principal component. Ten peptide m/z features showed the strongest association with pepsin exposure based on PCA loadings. LDA and hierarchical clustering further supported this distinction, with the top 15 discriminative m/z features showing consistent enrichment in digested samples despite inter-individual variability. Conclusions: Pepsin exposure induces reproducible remodeling of the salivary peptidome detectable by MALDI‑ToF MS. Although this peptide‑level approach cannot resolve the full diversity of salivary proteoforms, the resulting signatures support the feasibility of identifying markers of reflux‑associated enzymatic activity and provide a basis for future validation in clinical GERD cohorts.
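The PCA separation described in this abstract (digested vs. native samples splitting along the first principal component, with loadings pointing at enriched m/z features) can be sketched with synthetic stand-in spectra; the data below are simulated assumptions, not the study's MALDI-ToF measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for intensity profiles: 10 "native" and 10
# "pepsin-digested" samples over 50 m/z bins (illustrative only).
native = rng.normal(1.0, 0.1, size=(10, 50))
digested = rng.normal(1.0, 0.1, size=(10, 50))
digested[:, 10:20] += 1.5  # digestion-enriched peptide peaks

X = np.vstack([native, digested])

# PCA via SVD of the mean-centered data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]  # sample scores on the first principal component

# The two groups separate along PC1 ...
print(pc1[:10].mean(), pc1[10:].mean())

# ... and the largest-|loading| bins recover the enriched features,
# analogous to selecting discriminative m/z features from loadings
top = np.argsort(np.abs(Vt[0]))[::-1][:10]
print(sorted(top))
```

The sign of PC1 is arbitrary (an SVD ambiguity), so it is the magnitude of the between-group score difference and the loading pattern that carry the information, mirroring how the study ranks m/z features by PCA loadings.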

Article
Medicine and Pharmacology
Pharmacology and Toxicology

Anna W. Sobańska

,

Andrzej M. Sobański

,

Elżbieta Brzezińska

Abstract: Selected organic sunscreens from different chemical families were investigated in the context of their ability to inhibit butyrylcholinesterase (BChE) using novel Multiple Linear Regression, Artificial Neural Network and Support Vector Regression models based on a set of six independent variables commonly associated with compounds' absorption and distribution properties. It was established that the descriptors that have a particularly strong, positive influence on the ability of compounds to inhibit BChE, expressed as pIC50, are the count of rotatable bonds (nRot) and lipophilicity (log D); pIC50 is negatively correlated with flexibility (Flex), fraction of sp3 carbon atoms (Fsp3), Caco-2 permeability (caco2) and plasma protein binding ability (PPB). The sunscreens that are likely to be particularly strong BChE inhibitors are Ethylhexyl Triazone (ET), Diethylhexyl Butamido Triazone (DOBT), Octocrylene (OCR) and Diethylamino Hydroxybenzoyl Hexyl Benzoate (DHHB), although it must be stressed that ET and DOBT are outside the chemical space of the reference compounds.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Harris Wang

Abstract: The engineering of complex intelligent systems faces a fundamental challenge: theoretical frameworks for general intelligence and practical methodologies for enterprise system development have evolved along separate trajectories, creating a fragmentation that hinders progress toward unified solutions. This paper presents a comprehensive comparative analysis of three interrelated frameworks developed by the author: the Constrained Object Hierarchies (COH) 9-tuple, a mathematically rigorous formalism for modeling general intelligence; the Zoned Role-Based (ZRB) framework, a methodology for designing, implementing, and maintaining secure enterprise information systems; and the Constrained Zone-Object Architecture (CZOA), a unified formalism that integrates COH and ZRB. Through comparative analysis, we demonstrate that these frameworks are not competing alternatives but complementary tools at different levels of abstraction: COH provides a universal theory of intelligence, ZRB offers an engineering methodology for organizational systems, and CZOA bridges the two by enabling intelligent behavior within zoned enterprise architectures. Formal mapping theorems establish the relationships between frameworks, and a decision matrix guides practitioners in selecting the appropriate framework for their specific needs. We also provide an analytical study of key framework properties, including constraint enforcement mechanisms, to support evidence-based framework selection.

Article
Physical Sciences
Astronomy and Astrophysics

Ukshin Q. Rexhepi

Abstract: This work analyses 164 galactic rotation curves from the SPARC database and develops a field-based interpretation of the dark matter effect within the framework of the Universal Quantum Foam Hypothesis (UQSH). The empirical excess term C(r) = v²_obs(r) − v²_bar(r) reveals, after normalisation, a consistent structure of preferred dynamical regimes. Global fits identify two dominant states: a peak regime with scale parameter q ≈ 0.5–1.0, encompassing mainly low-surface-brightness galaxies and dwarf galaxies, and a diffuse regime with q ≈ 3.0, dominated by more massive spiral galaxies. Individual fits yield a distribution of roughly 62% peak systems, 26% diffuse systems, and 12% in the transition zone. An analysis of the dynamic factor D = g_obs/g_bar as a function of the maximum rotation curve radius reveals a statistically significant negative correlation (r = -0.31, p = 0.0001). Beyond approximately 50–80 kpc, D converges systematically toward 1. This empirical instability boundary marks the spatial range within which coherent field organisation produces measurable amplification. In the UQSH, light is interpreted as a spherically propagating tension front that follows the accumulated field geometry. In this picture, the convergence κ does not measure the instantaneous mass density, but the projected field curvature. A UQSH model of the Bullet Cluster reproduces the characteristic order of magnitude of the offsets between gas centres and κ-peaks of 219 kpc and 228 kpc without requiring an additional non-baryonic matter component. In the UQSH, the dark matter effect is not a sign of missing particles but an intrinsic property of the field medium. Baryonic structures are stable field configurations that spatially pre-stress the field medium. Through continuous radiation they excite the field and generate persistent deformations that do not fully relax.
The nonlinear superposition of these three sources — bound baryonic mass, continuous radiation, and the accumulated field pre-stress — produces a large-scale field tension that appears observationally as the dark matter effect. On galactic scales, the empirical instability boundary at approximately 50–80 kpc sets a natural spatial limit on this field tension. In galaxy clusters, the individual contributions of many saturated structures superpose into a collective field tension that systematically raises the lensing signal above the baryonic expectation. The universal fits show high internal consistency within each regime, with mean squared errors of MSE ≈ 0.016 in the peak regime and MSE ≈ 0.06–0.13 in the diffuse regime. This universality stands in contrast to the expectation from continuous halo models and supports the field-based interpretation of preferred dynamical states.
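The two empirical quantities this abstract is built on are straightforward to compute from a rotation curve. A minimal sketch with illustrative arrays (the values below are invented for demonstration, not SPARC data):

```python
import numpy as np

# Illustrative rotation-curve samples (not SPARC data):
r = np.array([2.0, 5.0, 10.0, 20.0])            # radius, kpc
v_obs = np.array([90.0, 120.0, 140.0, 150.0])   # observed speed, km/s
v_bar = np.array([80.0, 95.0, 100.0, 95.0])     # baryonic prediction, km/s

# Empirical excess term C(r) = v_obs^2(r) - v_bar^2(r)
C = v_obs**2 - v_bar**2

# Dynamic factor D = g_obs / g_bar with g = v^2 / r;
# the radius cancels, leaving (v_obs / v_bar)^2
D = (v_obs**2 / r) / (v_bar**2 / r)

print(C)
print(D)
```

D → 1 at large radii, as reported beyond roughly 50–80 kpc, would mean the observed acceleration converges to the baryonic expectation; D > 1 at smaller radii is the amplification attributed in the paper to field tension.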

Article
Environmental and Earth Sciences
Geophysics and Geology

Klaudia Oleschko

,

María de Jesús Correa López

,

Andrey Khrennikov

,

Qiuming Cheng

,

José Luis Landa

,

Ramiro Guillermo Paz Cruz

,

Alejandro Romero

,

Paulina Patiño

,

Yesica Guerrero Amador

Abstract: Fracture networks strongly control fluid flow, reservoir connectivity, and production performance in carbonate systems, yet their multiscale architecture of complexity remains difficult to characterize from heterogeneous geological and geophysical datasets. Here, we introduce the Digital Transformer (DiT), a physics-informed computational framework that automatically analyzes and classifies fracture systems using spatially encoded visuonumerical primitives derived directly from physical measurements. Instead of relying on textual tokenization, the approach performs attention primitives tokenization of multiscale geophysical data. Clusters of absolute integer values act as computational tokens while preserving the spatial topology and scale-invariant structure of the original system. The framework integrates two complementary environments: Muuk'il Kaab (MIK) for multidimensional metadata fusion and visualization, and SYM-Fractron, a hybrid binary-symbolic transformer for two-dimensional image analysis. Within this architecture, Digital Twins provide coupled visual and statistical representations of geological systems and their computational counterparts, enabling an interpretable taxonomy of natural fracture patterns while supporting well-trajectory optimization in the exploration of dolomitized carbonate reservoirs. In this view, fracture architectures become visuonumerical primitives whose physics-informed tokenization opens a pathway from the architecture of natural complexity to its computational realization through Digital Twins.

Article
Medicine and Pharmacology
Internal Medicine

Ixchel Salter

,

Michaele-Francesco Corbisiero

,

Daniel B. Chastain

,

Chia-Yu Chiu

,

Leland Shapiro

,

Jose G. Montoya

,

Raymund R. Razonable

,

Andrés F. Henao-Martínez

Abstract: Cytomegalovirus (CMV) infects most of the population and remains dormant after a self-limited illness at the time of infection. It can cause severe illness in immunocompromised patients. CMV DNAemia in non-HIV-infected, non-solid organ/bone marrow transplant (NHNT) hosts is poorly characterized, with limited clinical insights. We aim to describe the clinical presentation, prognostic indicators, and outcomes of CMV DNAemia among NHNT patients. We used the TriNetX international patient database to identify adult patients diagnosed with CMV DNAemia from 2016 until March 2023. We evaluated hospitalization, intensive care unit (ICU) level care, and all-cause mortality at 30 days and 1 year. We also completed a post-propensity score analysis comparing clinical characteristics of survivors versus non-survivors at 90 days. We identified 1123 NHNT patients with CMV DNAemia, most commonly those with neoplasms (63%). Venous thromboembolism occurred in 31% of the population. The 30-day hospitalization and all-cause mortality rates were 35% and 14%, respectively. After propensity score matching, dyspnea, weakness, purpura, acute respiratory failure, malnutrition, encephalopathy, hypotension, CMV viral load, and ferritin were associated with increased 90-day all-cause mortality. CMV DNAemia in NHNT patients is associated with substantial morbidity and all-cause mortality. Further studies are warranted to clarify whether CMV DNAemia is a causative factor or an incidental finding in this population.

Article
Public Health and Healthcare
Public, Environmental and Occupational Health

Akhona Victress Mazingisa

,

Charles Shey Wiysonge

,

Moeti Kgware

Abstract: Water, sanitation, and hygiene (WASH) services are essential for learner health and equitable education. Persistent gaps in WASH infrastructure and hygiene provision, particularly those affecting girls, remain a major challenge in low- and middle-income countries. We assessed WASH interventions, learner knowledge and perceptions, and implementation challenges and opportunities in primary schools in eThekwini District, South Africa. We conducted a cross-sectional study among Grade 7 learners using a structured questionnaire adapted from the WHO Surveillance of WASH in Schools Tool, complemented by observational checklists. Stratified random sampling yielded 129 participants (76 girls, 53 boys), a 72% response rate. Quantitative data were analysed using Chi-square, Fisher's exact, and Kruskal–Wallis tests as appropriate. Although drinking water access was generally reliable, significant gaps were observed in sanitation privacy, soap and toilet paper availability, cleanliness, and menstrual hygiene facilities. Female learners consistently reported poorer conditions than males (p < 0.05). The Hygiene Access Index differed significantly across gender and age groups (p < 0.05), reflecting inequitable provision of hygiene materials. Despite educational initiatives, substantial shortcomings persist in school WASH infrastructure and hygiene provision, disproportionately affecting girls' dignity, well-being, and school participation. Sustaining gender-responsive WASH systems is essential for improving learner health and promoting equitable educational environments.

Article
Computer Science and Mathematics
Mathematics

Igor Durdanovic

Abstract: Structural Execution Sequence (The Deductive Itinerary): This manuscript does not propose a continuous physical theory; it executes a formal mathematical theorem. By strictly bounding the scientific enterprise to the thermodynamic limits of finite computation, it outlines the exact deductive sequence that isolates the minimal necessary architectural class capable of compiling empirical reality. The theorem unrolls across four sequential proofs: 1. The Ontological Anchor (Axiom I: The Embedded Observer): The observer is formally defined as a finite physical sub-system. Scientific prediction evaluates strictly as the act of physical computation. It is governed by the Universal Cost Ledger (C_univ), which algorithmically penalizes static memory allocation (S) and dynamic execution trace (T). 2. The Syntactic Anchor (Axiom II: The Computable Boundary): Because the embedded agent is finite, any valid generative framework must execute safely on finite hardware. Empirical science evaluates exclusively as a formal closed system operating within the Computable Domain (M_TTG): the strict mathematical intersection of computability (Turing 1936), semantic exteriority (Tarski 1936), and bounded scope (Gödel 1931). 3. The Semantic Anchor (Axiom III: Data Supremacy): The raw observational array (D), extracted via thermodynamic collision with the environment, evaluates as the absolute ground truth. The syntactic injection of uncomputable continuous parameters or infinite-precision fields (Δθ) triggers a catastrophic memory leak (C_univ -> ∞) and is structurally forbidden by the Zero-Patch Standard. 4. The Hardware Compilation (m*): By enforcing these three axioms against the macroscopic Evidence Vector (E), we algorithmically deduce the exact hardware interface of reality. The minimal necessary architectural class (m*) compiles strictly as a discrete, local, deterministic Base-72 symplectic state-machine. 
The Checksum Protocol: We dynamically unroll this architectural class to verify that emergent Lorentz invariance, quantum measurement bounds, fractal self-similarity, objective causality, and thermodynamic irreversibility execute natively as the deterministic compiler artifacts of the discrete hardware itself.

Review
Biology and Life Sciences
Neuroscience and Neurology

Bruk Getachew, Matthew R. Miller, Harold E. Landis, Robert E. Miller, Yousef Tizabi

Abstract: Multiple Sclerosis (MS), a chronic, immune-mediated disease of the central nervous system (CNS), is typified by leukocyte infiltration into the CNS, inflammation, demyelination, and neurodegeneration. Risk factors include genetic predispositions involving HLA-DR15 and various single-nucleotide polymorphisms affecting T cell function. Early signs such as blurred/double vision, numbness, fatigue, and impaired balance and coordination are later accompanied by cognitive as well as bladder and bowel dysfunction. Genetic models of neuroinflammation have aided the development of drugs with significant effects on the progression and relapse rate of MS. Nonetheless, MS continues to pose major challenges, as its pathological mechanisms remain unclear. Recent studies highlight the crucial role of the quality and organization of cytoskeletal proteins in maintaining complex cellular functions such as neuronal excitability and neuroinflammation. Understanding how changes in these proteins impact demyelination is key to drug development for MS. Systems biology, an interdisciplinary field of study, posits that complex interactions within biological systems contribute to inflammatory processes and suggests that Cav-1, an integral membrane protein of caveolae with a crucial role in cell signaling, may provide a novel target in MS. Herein, we examine potential genetic influences on Cav-1 and its role in inflammation and demyelination in relation to MS. Specifically, its roles in oxidative stress, inflammation, blood–brain barrier (BBB) integrity, and autophagy are discussed. Nonetheless, we conclude that the translational aspects of Cav-1, and hence its specific therapeutic targeting in MS, require further exploration.

Article
Medicine and Pharmacology
Orthopedics and Sports Medicine

Ahmet Serhat Aydin, Emre Kocazeybek, Ahmet Mücteba Yıldırım, Onur Kutlu, Serkan Bayram, Turgut Akgül

Abstract: Background: Adolescent idiopathic scoliosis (AIS) may influence pelvic orientation and lower limb alignment, but data after completion of spinal correction are limited. Methods: In this retrospective study, 70 consecutive AIS patients (61 females, 9 males; mean age 17.0 ± 3.2 years, range 13–30 years) treated surgically (n = 52) or with brace therapy (n = 18) between 2010 and 2020 were analyzed. Patients were grouped by main curve location as thoracic (n = 28), lumbar (n = 21), or thoracolumbar (n = 21). Pre-treatment standing full-spine radiographs were used to measure Cobb angles, coronal balance, and pelvic coronal obliquity angle (PCOA). After completion of spinal correction, full-length weight-bearing lower limb radiographs were obtained to assess femoral and tibial lengths, mechanical axis deviation (MAD), femoral neck–shaft angle (NSA), and distal/proximal femoral mechanical and anatomical angles. Results: Mean PCOA for the whole cohort was 2.3 ± 1.9°, and mean MAD was −0.41 ± 10.2 mm on the right and −0.7 ± 8.0 mm on the left. PCOA, coronal balance, MAD, right anatomical lateral distal femoral angle (aLDFA), and right mechanical lateral distal femoral angle (mLDFA) differed significantly among the three groups (p < 0.05). Thoracolumbar versus thoracic curves showed higher PCOA and greater coronal imbalance (p = 0.011 and p = 0.004). The lumbar group demonstrated bilateral valgus alignment with more negative MAD values than the thoracic group (right MAD −5.88 ± 8.8 mm, left MAD −3.5 ± 7.5 mm; p = 0.004 and p = 0.005). The thoracic group had higher right aLDFA and mLDFA than the lumbar and thoracolumbar groups (all p < 0.05). No between-group differences were found in femoral or tibial lengths or NSA (p > 0.05). Conclusions: After spinal correction, AIS patients show subtle but measurable differences in coronal lower limb alignment according to curve location. Pelvic obliquity and MAD are more pronounced in lumbar and thoracolumbar curves, whereas limb lengths and NSA remain comparable among groups. These small deviations may influence long-term load distribution and should be considered in the clinical assessment of AIS, particularly in patients with distal curve patterns.
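A three-group comparison of the kind reported above can be sketched as a one-way ANOVA. This is an illustrative assumption: the abstract does not state which statistical test was used, and the values below are synthetic placeholders, not the study's raw measurements.

```python
# Hedged sketch: a one-way ANOVA F statistic over three groups of
# mechanical axis deviation (MAD) values. The test choice and the
# numbers are illustrative assumptions, not the study's actual data.
from statistics import mean

def one_way_f(groups):
    """F statistic for a one-way ANOVA over lists of measurements."""
    all_vals = [x for g in groups for x in g]
    grand = mean(all_vals)
    k = len(groups)            # number of groups
    n = len(all_vals)          # total sample size
    # Between-group sum of squares (group means vs. grand mean)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares (values vs. their group mean)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Synthetic right-side MAD values (mm), loosely echoing the sign of the
# reported group means; purely hypothetical.
thoracic      = [4.1, 2.5, 6.0, 3.2, 5.5, 1.8]
lumbar        = [-6.2, -5.1, -7.0, -4.8, -6.5, -5.9]
thoracolumbar = [-1.0, 0.5, -2.2, -0.8, 1.1, -1.5]

f = one_way_f([thoracic, lumbar, thoracolumbar])
print(f"F = {f:.2f}")
```

The F statistic would then be compared against the F distribution with (k − 1, n − k) degrees of freedom to obtain a p-value.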

Article
Environmental and Earth Sciences
Environmental Science

Brigette C. Hinagdanan, Sonnie A. Vedra, Jaime Q. Guihawan, Peter S. Suson, Hilly Ann Maria Roa-Quiaoit

Abstract: Water and land are critical natural resources that require effective management, particularly in rapidly urbanizing areas such as Cagayan de Oro City, Philippines. This study aims to assess erosion susceptibility and prioritize conservation needs across nine watersheds using GIS-based morphometric analysis. The entire extent of each watershed was analyzed beyond political boundaries to ensure comprehensive evaluation of geomorphological characteristics. Key morphometric parameters, including drainage density, stream frequency, slope, and basin shape, were computed to determine watershed behavior and erosion risk. Results indicate that the Cagayan de Oro River Basin is the most erosion-prone, followed by the Umalag, Iponan, and Cugman watersheds, while the Bugo–Alae watershed exhibits the lowest susceptibility. Higher slope gradients and elongated basin shapes were associated with increased erosion risk, whereas higher drainage density and stream frequency corresponded to lower susceptibility. These findings provide a scientific basis for prioritizing watershed management and conservation strategies, supporting sustainable land use planning and erosion mitigation in the study area.
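The morphometric parameters named in the abstract follow standard definitions (Horton's drainage density and stream frequency, Schumm's elongation ratio for basin shape). A minimal sketch of those formulas, with placeholder values rather than figures from the study:

```python
# Hedged sketch of standard morphometric parameters; the input numbers
# are placeholders, not values from the Cagayan de Oro watersheds.
import math

def drainage_density(total_stream_length_km, basin_area_km2):
    # Dd = total stream length / basin area (km per km^2)
    return total_stream_length_km / basin_area_km2

def stream_frequency(num_streams, basin_area_km2):
    # Fs = number of stream segments / basin area (per km^2)
    return num_streams / basin_area_km2

def elongation_ratio(basin_area_km2, basin_length_km):
    # Re = diameter of a circle with the basin's area / basin length;
    # values well below 1 indicate an elongated basin.
    return (2.0 / basin_length_km) * math.sqrt(basin_area_km2 / math.pi)

# Example watershed (hypothetical values)
dd = drainage_density(185.0, 92.0)
fs = stream_frequency(140, 92.0)
re = elongation_ratio(92.0, 18.5)
print(f"Dd = {dd:.2f} km/km^2, Fs = {fs:.2f} /km^2, Re = {re:.2f}")
```

In a GIS workflow, the stream lengths, segment counts, and basin geometry would be extracted from a delineated drainage network before applying these formulas.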

Article
Environmental and Earth Sciences
Water Science and Technology

Markus Köhli, Jannis Weimar

Abstract: Cosmic-Ray Neutron Sensing (CRNS) has become a standard method for non-invasive soil moisture monitoring at the field scale. With most CRNS sensors being derivatives of scientific nuclear instrumentation, developing instruments based on alternative neutron detection technologies is a major goal for CRNS. We present a modular instrument family based on boron-10-lined proportional counters, specifically designed for long-term autonomous field operation. The system is controlled by a data logger supporting various telemetry options and external SDI-12 environmental sensors, while the front-end electronics, through pulse-shape analysis, effectively separates neutron signals from background and electronic noise. Our results show high energy efficiency, with the latest generation consuming close to 50 mW, allowing solar-powered operation even in challenging environments. The performance of the instruments has been validated in long-term field deployments in different settings, showing that boron-10-based systems provide a scalable, cost-effective, and reliable alternative for the next generation of CRNS monitoring networks.
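The idea behind pulse-shape discrimination can be sketched with a toy classifier: genuine boron-10 capture pulses are broad, whereas electronic noise tends to produce narrow spikes. The pulse model and thresholds below are illustrative assumptions, not the instrument's actual firmware logic.

```python
# Hedged sketch of amplitude-and-width pulse discrimination; the
# thresholds and sample waveforms are hypothetical, not taken from
# the instrument described in the abstract.

def is_neutron_pulse(samples, amp_threshold=0.25, min_width=4):
    """Accept a digitized pulse if it stays above the amplitude
    threshold for at least `min_width` consecutive samples."""
    run = 0
    for s in samples:
        if s > amp_threshold:
            run += 1
            if run >= min_width:
                return True
        else:
            run = 0
    return False

# A broad capture-like pulse vs. a narrow noise spike
capture = [0.0, 0.1, 0.4, 0.8, 0.9, 0.7, 0.5, 0.3, 0.1]
spike   = [0.0, 0.0, 0.9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
print(is_neutron_pulse(capture), is_neutron_pulse(spike))
```

Real front-end electronics would apply such criteria (typically with charge integration and more elaborate shape metrics) directly on the analog or digitized signal chain.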

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.
