
Article
Medicine and Pharmacology
Endocrinology and Metabolism

Bogdan Mihai Pascu, Ana Maria Cula, Anca Bălănescu, Paul Cristian Bălănescu, Ioan Gherghina

Abstract: Background: Childhood obesity is associated with important alterations in body composition that may impair muscular strength and functional capacity. While higher body mass is often accompanied by greater absolute strength, the independent impact of adiposity on muscular strength after accounting for lean tissue remains insufficiently understood. The aim of this study was to examine the associations between adiposity, body composition, and muscular strength in children and adolescents, with particular focus on the independent effects of fat mass after adjustment for growth- and maturation-related factors. Methods: This cross-sectional study included 84 children and adolescents aged 5–18 years. Anthropometric measurements were used to calculate body mass index, waist-to-hip ratio, and waist-to-height ratio, with weight status classified according to World Health Organization BMI-for-age criteria. Body composition was assessed using bioelectrical impedance analysis (Tanita), providing estimates of body fat percentage and Tanita-derived muscle mass. Pubertal stage was assessed using Tanner classification. Muscular strength was evaluated using dominant handgrip strength, and habitual physical activity was recorded as hours per week. Associations between adiposity-related indices and muscular strength were explored using correlation and multiple linear regression analyses, with adjustment for age and Tanita-derived muscle mass. Results: Body mass index showed a positive association with handgrip strength, reflecting the contribution of overall body mass. Central adiposity indices demonstrated weak to modest associations with muscular strength. Body fat percentage showed only a limited association with handgrip strength in unadjusted analyses. However, in multivariable regression models adjusting for age and Tanita-derived muscle mass, higher body fat percentage emerged as an independent negative predictor of handgrip strength. 
Age did not show an independent association with muscular strength in adjusted models. Conclusions: Excess adiposity is independently and negatively associated with muscular strength in children and adolescents, even after accounting for age and Tanita-derived estimates of muscle mass. These findings suggest that increased fat mass may impair neuromuscular performance beyond its effects on body size or lean tissue. Pediatric obesity interventions should therefore focus not only on weight reduction but also on improving body composition and preserving functional strength.
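The adjustment step described above can be illustrated with a toy calculation. The sketch below uses entirely synthetic data (not the study's cohort; every coefficient and noise level is invented) to show how a fat-percentage effect that looks positive in an unadjusted model can turn negative once age and muscle mass are held constant:

```python
import numpy as np

# Synthetic illustration only: invented coefficients, not the study's data.
rng = np.random.default_rng(0)
n = 84                                          # same sample size as the study
age = rng.uniform(5, 18, n)                     # years
muscle = 5 + 1.8 * age + rng.normal(0, 2, n)    # muscle mass grows with age
fat_pct = 10 + 1.2 * age + rng.normal(0, 5, n)  # fat% also rises with size here
# assumed "true" model: grip rises with muscle mass, falls with fat percentage
grip = 2 + 0.9 * muscle - 0.15 * fat_pct + rng.normal(0, 1.5, n)

def ols(X, y):
    """Least-squares fit with an intercept column prepended; returns coefficients."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

b_unadj = ols(fat_pct[:, None], grip)                       # fat% alone
b_adj = ols(np.column_stack([age, muscle, fat_pct]), grip)  # adjusted model

print(f"unadjusted fat% slope: {b_unadj[1]:.3f}")  # positive: fat% tracks body size
print(f"adjusted fat% slope:   {b_adj[3]:.3f}")    # negative once size is held fixed
```

Because fat percentage co-varies with overall growth, the unadjusted slope absorbs the body-size effect; only the adjusted slope isolates the adiposity signal, which mirrors the regression logic the abstract describes.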

Review
Medicine and Pharmacology
Other

Filippo Tilli, Giorgio Tamborrini, Felix Margenfeld

Abstract: Background: Human cadaveric models provide a controlled experimental setting to investigate the anatomical basis and mechanical behavior underlying musculoskeletal ultrasound findings. In recent years, both B-mode ultrasound and shear wave elastography have been applied in cadaveric studies to explore muscle architecture, aponeurotic structures, and passive mechanical properties under standardized conditions [3]. Objective: The aim of this scoping review was to map and synthesize cadaveric studies using ultrasound and shear wave elastography to investigate lower-limb muscles and their aponeurotic structures, with emphasis on methodological applications, anatomical insights, and limitations relevant to clinical interpretation. Material and Methods: A scoping review was conducted according to PRISMA-ScR principles. Studies were included if ultrasound imaging (B-mode and/or shear wave elastography) was applied directly to human cadaveric lower-limb muscles or aponeurotic structures. Data were extracted and synthesized descriptively by anatomical region and ultrasound technique [8]. Results: A total of 11 studies met the inclusion criteria and were included in the final qualitative synthesis, all of which applied ultrasound imaging, with or without shear wave elastography, directly to human cadaveric muscle tissue (Table 1). Among these, seven studies specifically investigated lower-limb skeletal muscles and their aponeurotic structures using ultrasound-based techniques to describe muscle architecture, internal connective tissue anatomy, or passive mechanical behavior [5]. These studies focused on the quadriceps femoris, hamstrings, adductor longus, and the gastrocnemius–soleus complex [1].
The remaining four studies were considered relevant and therefore included in the scoping review because, although they did not focus on a specific lower-limb muscle group, they addressed key methodological factors influencing ultrasound and elastography-derived measurements in cadaveric muscle tissue [2,4]. These investigations examined the effects of tissue layering, specimen-related characteristics, and measurement conditions, thereby providing essential methodological context for the interpretation of ultrasound-based outcomes across different anatomical regions. Conclusion: Cadaveric ultrasound studies provide essential anatomical context for interpreting musculoskeletal ultrasound, while cadaveric shear wave elastography supports controlled exploration of passive muscle mechanics. At the same time, these studies highlight important methodological sensitivities that should be acknowledged before translating elastography findings to clinical decision-making [2].

Article
Business, Economics and Management
Business and Management

Tatjana Apanasevic, Anna Fjällström

Abstract: Urban freight transport is responsible for creating negative transport externalities in the form of noise and congestion and has a significant environmental impact. One solution is to establish a freight consolidation centre, which could offer benefits such as shorter delivery distances and fewer delivery routes. However, this would require collaboration between actors with conflicting interests and goals. In this study, we propose a collaborative business model framework for freight consolidation centres. This framework was tested through a pilot project in Gothenburg, using the principles of engaged scholarship. Our results show that last-mile consolidation significantly increases efficiency and enables sustainability gains to be achieved. However, a number of structural, economic and organisational barriers need to be addressed in order to realise the full benefit of the collaborative business model. There is a need for a deeper institutionalisation of new norms, procedures and policies in the business models of the individual actors involved.

Article
Engineering
Other

SungJin Jeon, Woojun Jung, Keuntae Cho

Abstract: The mobile industry has experienced long-run changes in its knowledge structure, including identifiable transition points observable through meaning-based analysis. Using abstracts from 86,674 mobile-industry publications published between 2005 and 2024, we embed documents with SPECTER2, build year-specific embedding distributions, and derive knowledge regimes by combining change-point detection with inter-year distribution distances. We then extract regime-specific topics via clustering and reconstruct topic lineages by aligning topic similarities to classify inheritance, differentiation, convergence, and disappearance. The analysis delineates three regimes spanning 2005 to 2012, 2013 to 2019, and 2020 to 2024, with pronounced transitions around 2012 to 2013 and 2019 to 2020. Regime 1 centers on foundational technologies such as wireless communication, power, sensors, and reliability. Regime 2 expands toward platforms, apps, and data analytics alongside cross-domain convergence. Regime 3 is characterized by strengthened 5G operations and data-driven services, together with the independent rise of policy, governance, and regulation topics. Transitions reflect recombination built on inherited knowledge rather than abrupt replacement, and post-transition topics display distinct growth typologies by network position and growth pattern. By integrating embedding-based change-point detection with topic-lineage reconstruction, we provide a reproducible account of regime transitions and quantitative evidence to inform the timing of corporate R&D, standard and platform strategies, and policy and regulatory design.
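The core of the pipeline (year-level embedding distributions, inter-year distances, change-point detection) can be sketched with synthetic vectors. This is not the authors' code: SPECTER2 embeddings are replaced by random vectors, the regime boundaries are hard-coded to mimic the reported 2012–2013 and 2019–2020 transitions, and a simple largest-jump rule stands in for a full change-point method:

```python
import numpy as np

# Synthetic stand-in for SPECTER2 document embeddings; regime centers invented.
rng = np.random.default_rng(1)
years = list(range(2005, 2025))
regime = {y: (0 if y < 2013 else 1 if y < 2020 else 2) for y in years}
basis = rng.normal(size=(3, 64))                # one centroid per regime
yearly_mean = {y: basis[regime[y]] + rng.normal(0, 0.05, 64) for y in years}

def cosine_dist(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# distance between consecutive years' mean embeddings
dists = np.array([cosine_dist(yearly_mean[y], yearly_mean[y + 1])
                  for y in years[:-1]])
# toy change-point rule: take the two largest inter-year jumps
# (a real pipeline would run a proper change-point detector on this sequence)
jump_idx = np.argsort(dists)[-2:]
transitions = sorted(years[i] + 1 for i in jump_idx)
print(transitions)
```

With these synthetic centroids the largest-jump rule recovers 2013 and 2020 as regime starts, matching the structure the abstract reports for the real corpus.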

Article
Medicine and Pharmacology
Clinical Medicine

Dora Intagliata, Maria Luisa Garo

Abstract: Background: Cellulite is a highly prevalent aesthetic concern characterized by structural remodeling of subcutaneous adipose tissue and fibrous septa, resulting in visible skin irregularities. Despite the availability of many injectable treatments with documented efficacy, most standard approaches adopt uniform protocols that overlook interindividual anatomical variability, potentially limiting treatment precision and clinical outcomes. This retrospective case–control study evaluated the Modulated Insertion of Regenerative Activation (MIRA), a technique that individualizes needle length and injection angle according to ultrasound findings, modulating insertion parameters to stimulate regenerative responses within dermal and subcutaneous layers. Methods: Clinical and ultrasonographic data from 120 women with stage 3 cellulite were analyzed. Stage 3a patients received carbon dioxide therapy (CDT), whereas stage 3b patients underwent injectable solution therapy (IST). Within each treatment, patients were allocated to MIRA or control groups. Results: Compared with controls, MIRA showed greater reductions in adipose tissue thickness (CDT: −1.6 mm; IST: −1.5 mm; padj = 0.002), nodules, pain, edema, and fibrosis, with improved fascia regularity. Patient satisfaction was higher in MIRA (CDT: 8.1 ± 1.6; IST: 8.5 ± 1.4; padj = 0.002), and over 76% reported improved skin quality. Conclusion: Ultrasound-guided modulation of needle parameters with MIRA may enhance structural and esthetic outcomes compared with standard approaches.

Review
Business, Economics and Management
Economics

Feng Wang, Hongzhe Cao

Abstract: Against the backdrop of accelerating low-carbon transformation in the global energy system and decarbonization in the transportation sector, the widespread adoption of electric vehicles has intensified grid load imbalances and highlighted challenges in integrating intermittent renewable energy generation. Vehicle-to-Grid (V2G) technology has emerged as a key solution to these challenges. This paper systematically traces the global evolution of V2G technology from conceptualization to large-scale deployment, focusing on localized practices in China's scaled V2G applications. It dissects the logic behind policy evolution, identifies three distinct Chinese V2G models (centralized, distributed, and battery-swapping), and validates the practical outcomes of representative pilot projects. The research reveals three core constraints hindering China's large-scale V2G adoption: the absence of battery capacity degradation management mechanisms, fragmented standardization systems, and rigid market mechanisms. Based on this, the paper proposes recommendations for scaling V2G in China across three dimensions: power battery second-life utilization, standardization system construction, and market mechanism optimization. Furthermore, aligning with the global demand for large-scale V2G implementation, this paper proactively proposes innovative market models. These include establishing a coordinated trading mechanism between green power and V2G, developing a digitally driven distributed trust and transaction system, and exploring financialization and risk hedging models for battery assets. These concepts provide theoretical foundations and decision-making references for achieving high-quality, large-scale V2G applications worldwide.

Hypothesis
Medicine and Pharmacology
Otolaryngology

Franklyn R. Gergits

Abstract: Objective: To propose Posterior Sinonasal Syndrome (PSS) as the etiological precursor to a defined subset of chronic rhinosinusitis (CRS), establish pepsin as a field carcinogen across the upper aerodigestive mucosal surface, and define the biological imperative for mucosal-preserving surgery in PSS-CRS patients. Methods: Synthesis of peer-reviewed evidence across four domains: pepsin endocytosis mechanisms in upper airway epithelium; pepsin detection in sinonasal, nasopharyngeal, and middle ear tissue; epidemiological trends in pediatric upper airway disease; and clinical outcomes in refractory CRS. Evidence is stratified as established, strongly inferred, or proposed requiring confirmatory study. Results: Pepsin, delivered via laryngopharyngeal reflux along a defined anatomical concentration gradient, produces receptor-mediated intracellular injury in posterior nasal epithelium — a mechanism established in laryngeal cells and strongly inferred in nasal cells. This injury lowers the posterior nasal mucosal inflammatory threshold, creating PSS as a priming state preceding clinical CRS. Pepsin has been detected within malignant tissue at two anatomically distinct sites: laryngeal and hypopharyngeal carcinoma, and nasopharyngeal carcinoma in 85.7% of cases versus 17.2% of controls — the two-site molecular fingerprint of a field carcinogen across the full upper aerodigestive surface. Pepsin detection in 83% of pediatric middle ear effusions and its correlation with adenoid hypertrophy grade establish that this process begins in childhood. PSS represents a third inflammatory driver of CRS, independent of allergy and anatomy, unrecognized by the 2025 AAO-HNS guideline. Five confirmatory studies and a nasal lavage pepsin assay validation pathway are defined. Conclusion: PSS is the etiological precursor to a misidentified subset of treatment-resistant CRS. 
Pepsin is both the primary driver of posterior nasal mucosal priming and a field carcinogen across the upper aerodigestive surface. Aggressive tissue-resecting FESS in this population is biologically counterproductive. The confirmatory studies are named, the clinical tools are within reach, and the patients are in rhinology practices now.

Article
Environmental and Earth Sciences
Environmental Science

Wellinton Cupozak-Pinheiro, Francine Piubeli, Kauanny Plenz, Maricy Bonfá, Rodrigo Pereira

Abstract: Fipronil is a phenylpyrazole agrochemical widely used in agriculture and livestock production, posing persistent challenges of environmental contamination due to its toxicity and the formation of stable transformation products. Genome-based analyses provide a powerful framework for exploring the biotechnological potential of environmental microorganisms. The G2.8 isolate, obtained from fipronil-contaminated soil, was initially classified as Enterobacter chengduensis; however, taxonomic reassessment based on whole-genome sequencing combined with average nucleotide identity and digital DNA-DNA hybridization (ANI/dDDH ≈97%) reclassified this strain as Enterobacter pseudoroggenkampii. The occurrence of this species in a contaminated environmental niche highlights its relevance beyond previously reported clinical or plant-associated contexts and supports its potential role in bioremediation. The draft genome of E. pseudoroggenkampii G2.8 was assembled and subjected to rigorous quality assessment and functional annotation using genome-scale approaches. Functional analyses revealed 14 biosynthetic gene clusters, including non-ribosomal peptide synthetases, hybrid NRPS/polyketide synthases, and siderophore-related clusters, indicating potential for secondary metabolite production. In addition, genes encoding oxidoreductases, hydrolases, and esterases associated with xenobiotic transformation were identified, supporting the experimentally observed capacity of this strain to degrade fipronil and its toxic metabolites. Within a One Health framework, the genome exhibited only intrinsic antimicrobial resistance determinants, mainly related to efflux systems and chromosomal β-lactamases, with no evidence of mobile resistance elements, supporting an environmental safety profile. Overall, genome-guided functional and comparative analyses provide a robust foundation for identifying metabolic pathways involved in both biosynthesis and biodegradation, positioning E. pseudoroggenkampii G2.8 as a promising genome-guided candidate for metabolite-driven environmental biotechnology and reinforcing the value of microbial genomics in the development of sustainable bioprocesses.

Article
Public Health and Healthcare
Public Health and Health Services

Qianyi (Sinyee) Lu, Yaqiang Qi

Abstract: The income inequality hypothesis (IIH) posits that greater income dispersion harms individual health through psychosocial pathways. Yet decades of empirical research—especially in cross-national settings—have yielded inconsistent findings. This study revisits the IIH by distinguishing three temporal dimensions of inequality: immediate (current levels), cumulative (long-run averages), and comparative (recent change). Using harmonized Gini series linked to repeated cross-sections of the World Values Survey (1981-2016) across more than 90 countries and regions, we estimate multilevel models that adjust for individual and national covariates. Results reveal a consistent negative association between worsening inequality over the prior decade and self-rated health—supporting a comparative, time-sensitive specification of the IIH. In contrast, immediate and cumulative inequality often show null or even positive associations, particularly in less developed contexts and under random-effects estimation. These patterns suggest that inequality’s health consequences are temporally contingent, and that long-run deterioration in distributional conditions poses a particular threat to population health and should be closely monitored in future research and policy.

Article
Computer Science and Mathematics
Computational Mathematics

Basker Palaniswamy

Abstract: In 1972, computer scientist Richard Karp made a remarkable discovery: twenty-one very different problems—from routing networks and planning schedules to packing items efficiently—are all equally difficult in a deep mathematical sense. These problems are now called NP-complete, and for more than fifty years researchers have shown their connection by carefully transforming one problem into another step by step. While this approach proves that the problems are related, it often hides the bigger picture of why they share the same level of difficulty. This paper proposes a new way of understanding these problems through geometry. We introduce the Karp Algebraic Reduction Manifold Architecture (KARMA), a framework that places all 21 problems inside a single mathematical “landscape.” In this landscape, each problem describes a different region of the same terrain of computational difficulty, and moving from one problem to another becomes like traveling smoothly across this terrain. The framework naturally groups the problems into three families—graph-theoretic, set-theoretic, and number-theoretic problems. In this geometric interpretation, distances represent how difficult it is to transform one problem into another, while the curvature of the landscape reflects their inherent computational hardness. By revealing this hidden geometric structure, the KARMA framework provides a new perspective on computational complexity. Instead of studying hard problems individually, researchers can explore the entire landscape of computational difficulty at once, potentially inspiring new algorithms, better hardness predictions, and intelligent systems that can automatically reason about problem transformations.

Article
Medicine and Pharmacology
Oncology and Oncogenics

Mariam Sh. Manukyan, Valeriya I Pavlova, Maxim S. Kirsanov, Aydar Akhmetzyanov, Rukiyat Sh. Abdulaeva, Marianna O. Mandrina, Yana V. Belenkaya, Ivan S Stilidi, Tigran G. Gevorkyan, Sergey S. Gordeyev

Abstract: Background: Accurate prediction of outcomes in colorectal cancer (CRC) is essential for personalized treatment. Conventional prognostic tools, including TNM staging, have limited accuracy. Machine learning (ML) may better capture complex prognostic patterns. Methods: In a retrospective multicenter cohort of 7,253 non-metastatic CRC patients after radical surgery, we compared prognostic accuracy for predicting recurrence and mortality using: a baseline TNM stage model; a logistic regression model with six clinicopathological variables; and ML algorithms (Logistic Regression, Random Forest, XGBoost, LightGBM, CatBoost) with hyperparameter optimization (Optuna) and iterative feature selection. Binary outcomes (recurrence and all-cause mortality at 1 and 3 years) were used for ML training. Performance was assessed using area under the ROC curve (AUC). Results: The stage-only model showed poor discrimination (weighted AUC: 0.541 for mortality, 0.528 for recurrence). Logistic regression improved predictions (AUC: 0.759 and 0.645, respectively). Among ML models, CatBoost achieved the best performance. After iterative feature selection, the optimized CatBoost model utilizing 17 clinical variables demonstrated superior cross-validated AUCs of 0.81 for mortality and 0.84 for recurrence, consistently outperforming both baseline models across all time horizons. External validation on 1,452 held-out patients confirmed robustness with AUCs of 0.83 for mortality and 0.91 for recurrence. Conclusion: An optimized CatBoost model significantly outperforms traditional TNM staging and logistic regression in predicting recurrence and mortality in CRC using 17 routinely available variables. This parsimonious, data-driven tool offers improved individualized risk assessment for guiding post-operative management. Prospective validation is warranted.
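The evaluation metric used throughout this comparison, ROC AUC, has a compact rank-based form. The sketch below uses synthetic data (not the CRC cohort; the "stage" and "rich" scores are invented) to compute AUC via the Mann–Whitney identity and to show why a coarse stage-like score discriminates worse than a richer continuous risk score:

```python
import numpy as np

# Synthetic illustration: invented risk scores, not the study's patients.
rng = np.random.default_rng(2)
n = 1000
risk = rng.normal(size=n)                       # latent patient risk
outcome = (risk + rng.normal(0, 1, n)) > 0.8    # e.g. recurrence indicator
stage_score = np.digitize(risk, [-0.5, 0.5])    # coarse 3-level "stage"
rich_score = risk + rng.normal(0, 0.3, n)       # model using many variables

def auc(score, y):
    """ROC AUC via the Mann-Whitney U identity: P(score_pos > score_neg).
    Note: heavily tied scores would need a midrank correction for an
    exact value; this plain version suffices for the illustration."""
    m = len(score)
    order = np.argsort(score)
    ranks = np.empty(m)
    ranks[order] = np.arange(1, m + 1)
    n_pos = y.sum()
    n_neg = m - n_pos
    return (ranks[y].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(f"stage-only score AUC: {auc(stage_score, outcome):.2f}")
print(f"rich score AUC:       {auc(rich_score, outcome):.2f}")
```

This mirrors, in miniature, the study's finding that a stage-only baseline discriminates markedly worse than a multivariable model scored on the same outcome.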

Article
Biology and Life Sciences
Biochemistry and Molecular Biology

Venkadesh Sarkarai Nadar, Dinesh Devadoss, Thiruselvam Viswanathan, Barry P Rosen, Hitendra S Chand

Abstract: Background: Cancer cells exhibit metabolic reprogramming characterized by increased dependence on glutamine to sustain rapid proliferation and biosynthetic demands. Kidney-type glutaminase (KGA), which catalyzes the first and rate-limiting step of glutamine metabolism, represents a promising therapeutic target, particularly in triple-negative breast cancer (TNBC), an aggressive subtype lacking effective targeted therapies. This study evaluated 2-amino-4-boronobutyric acid (ABBA), a boronic acid-containing glutamine analog, as a potential KGA inhibitor with anticancer activity. Methods: KGA inhibition was assessed using a fluorometric enzymatic assay. Cytotoxic effects were examined in multiple TNBC cell lines. Covalent docking analysis was performed to characterize interactions between ABBA and the KGA active site. Results: ABBA potently inhibited KGA activity, with an IC₅₀ of approximately 1.0 μM, demonstrating greater efficacy than several non-proteinogenic amino acid analogs. ABBA induced dose-dependent cytotoxicity across multiple TNBC cell lines, with pronounced sensitivity observed in basal subtype cells, and cellular sensitivity correlated with KGA expression levels. Expression of γ-glutamyl transpeptidase 1 (GGT1) was negligible, indicating that the observed anticancer effects are primarily mediated through KGA inhibition. Docking analysis predicted that ABBA forms a reversible covalent adduct with the catalytic Ser286 residue of KGA, adopting a boronate tetrahedral geometry consistent with transition-state mimics and stabilized by hydrogen bonding and electrostatic interactions. Conclusion: ABBA is a potent boron-based glutaminase inhibitor with therapeutic potential for targeting glutamine metabolism in TNBC. Further structural optimization and in vivo evaluation are warranted to advance ABBA toward therapeutic development.

Article
Engineering
Marine Engineering

Yingjie Liu, Peng Zhou, Feng Xiao, Chenyang Li, Junhui Li, Jiawang Chen, Ziqiang Ren

Abstract: To address the accuracy divergence problem of the integrated navigation system caused by drilling slippage and mismatch between the tail cable encoder and the robot's motion when a seafloor drilling robot operates in deep-sea soft sedimentary layers, this paper proposes a robust navigation method based on a robust square-root cubature Kalman filter (RSRCKF). Considering the large deformation mechanical characteristics of the seabed under drilling conditions, a unified state-space model including the time-varying odometer scaling factor error is first established. To solve the numerical instability of the nonlinear system under non-Gaussian noise interference, the square-root cubature Kalman filter (SRCKF) framework is introduced, and the positive definiteness of the error covariance matrix is dynamically maintained using QR decomposition. Based on this, an online fault detection mechanism based on the novel chi-square test is designed, and an adaptive variance expansion factor is constructed by combining a two-segment IGG weight function to realize the real-time identification and weight reduction processing of abnormal observations caused by slippage. Field drilling and turning tests on the mudflats off the coast of Zhoushan show that, under typical soft clay slippage conditions, this method can effectively identify "false displacement" interference. Compared with the traditional EKF and standard SRCKF, the position error is reduced by approximately 82.4%, and the heading angle error is controlled within ±0.5°, verifying the high robustness and engineering practicality of the algorithm under complex seabed topography.
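The IGG-style down-weighting idea can be shown in isolation. The sketch below is not the paper's RSRCKF (the threshold k0, noise levels, and innovation values are all invented); it only illustrates how a two-segment weight function leaves consistent innovations untouched while inflating the assumed variance of slippage-corrupted observations:

```python
# Illustrative two-segment IGG weighting; all numeric values are invented.

def igg_weight(v, sigma, k0=1.5):
    """Weight for an innovation v with nominal std sigma:
    full weight inside k0*sigma, reciprocal decay outside."""
    t = abs(v) / sigma
    return 1.0 if t <= k0 else k0 / t

def inflated_variance(v, sigma, k0=1.5):
    """Equivalent adaptive variance: sigma^2 scaled by the expansion factor 1/w."""
    return sigma ** 2 / igg_weight(v, sigma, k0)

sigma = 0.1            # nominal odometer noise std (assumed)
normal_innov = 0.12    # consistent observation -> kept at full weight
slip_innov = 0.9       # "false displacement" during drill slippage

print(igg_weight(normal_innov, sigma))       # 1.0
print(igg_weight(slip_innov, sigma))         # 1.5/9 ≈ 0.167
print(inflated_variance(slip_innov, sigma))  # 0.06: variance inflated 6x
```

Inside the filter, the inflated variance enters the measurement update, so a slipping odometer reading pulls the state estimate far less than a trusted one.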

Article
Biology and Life Sciences
Insect Science

Raisa Sukhodolskaya, Igor Solodovnikov, Teodora Teofilova, Vladimir Langraf, Alexander Borisovskiy, Sergey Luzyanin, Alexander Ruchin, Dominic Stočes, Anatoliy Anciferov, Roman Gorbunov, +4 authors

Abstract: The study was based on a large database of morphometric measurements of the ground beetle Carabus granulatus. It was compiled between 2006 and 2025 and includes over 10,000 individuals of this species, captured in 14 major regions of Russia and Western Europe. Beetles were captured with Barber traps across a spectrum of anthropogenic impacts—urban areas, suburbs, agricultural lands, and natural biotopes. They were then transported to the Institute of Ecology of the Academy of Sciences of the Republic of Tatarstan, where they were measured using a unified method for six linear traits. Sexual size dimorphism (SSD) was assessed using two methods. Using the standard Lovich formula, SSD was, on average across all six traits, significantly higher in beetle populations from suburban areas. Application of the second method, RMAII, showed that the slope of the regression curve is generally higher in females, indicating greater sensitivity of Carabus granulatus females to environmental factors. At the same time, a comparison of the results obtained by the aforementioned methods did not support the thesis that SSD increases with beetle size. The curves for SSD variability in both urban and non-urban populations were sawtooth-shaped. This conclusion may be due to the fact that the variability of both structural traits and SSD is not described by a monotonic curve. This necessitates studying the variability of SSD in other ground beetle species (or genera) using the same data set and a unified methodology.

Article
Biology and Life Sciences
Anatomy and Physiology

Douglas Roy, Jody Roy

Abstract: Contemporary models of resistance training often treat repetitions within a set as interchangeable, emphasize only those performed near failure, or prescribe controlled tempos that moderate effort across repetitions. These perspectives leave unclear how moment-to-moment intent and movement quality interact to determine where fatigue and adaptation are localized. We introduce the Targeted Intensity Cumulation (TIC) model, a minimal mechanistic framework in which high voluntary intent combined with high purity technique progressively concentrates mechanical and metabolic stress within target musculature across repetitions and sets. In this formulation, the rate of performance decay (e.g., as measurable by decline in concentric velocity) serves as an observable proxy for stimulus localization. The model provides a unifying account for (1) hypertrophy equivalence across repetition ranges, (2) the continuous accumulation of training stimuli, (3) exercise-specific 'performance cliffs,' and (4) cross-load performance transfer. By shifting the focus from external load to the internal state-space of intent and constraint, TIC generates testable predictions for optimizing training execution and monitoring.

Article
Biology and Life Sciences
Biochemistry and Molecular Biology

Bernard Delalande, Hirohisa Tamagawa, Vladimir Matveev

Abstract: The membrane pump theory (MPT) attributes the resting membrane potential of neurons to ionic diffusion driven by transmembrane concentration gradients, maintained by the Na,K-ATPase. Despite decades of dominance, this model harbours fundamental thermodynamic, kinetic, and geometric inconsistencies that have remained unaddressed in mainstream biophysics. We present a systematic quantitative critique across five independent axes: (1)~the electrostatic force exceeds the diffusive force by $\sim$300-fold under physiological conditions; (2)~the peri-axonal space contains 10--100$\times$ fewer ions than required by channel-based models; (3)~the Na,K-ATPase carries an energy deficit of $\sim$26\% per cycle and operates 5000$\times$ too slowly to compensate measured leak fluxes; (4)~the Nernst and Goldman--Hodgkin--Katz equations are applied outside their domain of validity; and (5)~cell geometry invalidates plane-membrane approximations. In contrast, direct experimental evidence (Tamagawa experiment) demonstrates that a potential of $\approx -40$\,mV arises from fixed negative charges alone, without any ionic gradient. We formalise this result within a Poisson--Boltzmann/Grahame electrostatic framework, supplemented by Ling's ion adsorption model and Manoj's murburn concept, and obtain $\Delta\psi \approx -65$ to $-85$\,mV from first principles. Four specific experimental predictions distinguish the model from MPT.
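As a point of reference for the electrostatic framework invoked above, the classical Grahame equation for a symmetric 1:1 electrolyte (a standard textbook result, not a formula quoted from the paper) relates a fixed surface charge density $\sigma$ to the equilibrium surface potential $\psi_0$ without invoking any transmembrane ion gradient:

\[
\sigma = \sqrt{8\,\varepsilon\varepsilon_0 k_B T\, c_0}\;\sinh\!\left(\frac{z e \psi_0}{2 k_B T}\right)
\qquad\Longleftrightarrow\qquad
\psi_0 = \frac{2 k_B T}{z e}\,\operatorname{arcsinh}\!\left(\frac{\sigma}{\sqrt{8\,\varepsilon\varepsilon_0 k_B T\, c_0}}\right),
\]

where $c_0$ is the bulk ion number density. A negative fixed $\sigma$ thus yields a negative $\psi_0$ directly, which is the kind of gradient-free mechanism the authors formalise for the Tamagawa result.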

Article
Engineering
Architecture, Building and Construction

Mehmet Fatih Aydın

Abstract: This study presents the Structural–Typological–Value Sensitivity Model (STVSM), a multidimensional framework for evaluating vulnerability in historic buildings where physical fragility cannot be adequately captured through structural indicators alone. While existing approaches primarily prioritize load-bearing behaviour, they often overlook typological discontinuity, spatial fragmentation, and the erosion of architectural and cultural value. STVSM addresses this limitation through three weighted sub-indices: structural vulnerability (SV), typological degradation (TV), and heritage value (HV), each calibrated using expert-derived micro- and macro-level weighting coefficients. Field-based deterioration scores (0–1) are combined with these weights to generate SV, TV, and HV values, which are then integrated into a Conservation Priority Index (CPI). Although conceptually informed by building-scale seismic vulnerability literature, the model does not aim to simulate earthquake performance or replace numerical structural analysis. Instead, it operates as a comparative decision-support framework that incorporates seismic-informed deterioration patterns within a broader, conservation-oriented logic. The model is applied to twenty-five historic buildings across three heritage contexts: traditional houses in Cumalikizik, vernacular dwellings in Balıkesir–Karesi, and nineteenth-century Greek Orthodox churches in Bursa. The results demonstrate that integrating structural condition, typological integrity, and heritage value provides a transparent, repeatable, and scalable basis for conservation prioritization across diverse historic building stocks.

Article
Engineering
Civil Engineering

Stephen Mulundu

,

Moffat Tembo

,

Chabota Kaliba

Abstract: Land use planning plays an important role in advancing sustainable development by integrating environmental, social, and economic dimensions to optimize land utilization and bolster climate resilience. The adoption of efficient practices contributes to the mitigation of land degradation, while strategically planned agricultural systems enhance food security and promote ecological balance. This study focused on the development of an environmental conservation framework for sustainable land use planning in Zambia. Employing a mixed-methods research design, data were collected from a sample of 150 respondents. Quantitative data were analysed using descriptive and inferential statistics, including regression analysis, while qualitative data were subjected to thematic analysis. The research identified key conflicts between agriculture and environmental conservation, including unsustainable farming practices (30.8%), resource competition (24.2%), and deforestation (23.3%). Approximately 40.3% of respondents reported occasional conflicts, while 33% experienced them often. Major barriers to sustainable land development included inadequate financial support (35%) and lack of knowledge (30%). Awareness of sustainable agricultural practices varied, with 38% of respondents indicating high awareness and 35.8% reporting low awareness. Conventional agriculture (35.8%), crop rotation (30%), and conservation agriculture (11.7%) were the most common practices, with crop rotation being the easiest to implement (42.2%), and climate-smart agriculture being the most challenging (37.8%). A chi-square analysis revealed no significant association between awareness levels and perceived barrier impacts (p=0.327). 
Regression analysis indicated that age negatively correlated with the type of conflict (β=-0.0283, p<0.001), while location influenced conflict experiences, with certain areas, such as Section D (β=1.3799, p<0.001) and Section G (β=1.6554, p<0.001), reporting more frequent conflicts. Additionally, sex had a positive but marginally significant effect (β=0.2640, p=0.062). Qualitative findings highlighted the tension between agricultural production and environmental conservation, with economic pressures driving environmental degradation, such as deforestation and water pollution. Participants also pointed to limited knowledge, training, and financial barriers, including high costs and restricted access to credit, as key obstacles. The study proposed an environmental conservation framework to address these conflicts, integrating sustainable agricultural practices with effective land use planning. The framework advocates a multi-stakeholder approach involving policymakers, farmers, and environmental experts to promote balanced sustainable land use. The findings enhance the body of knowledge by providing empirical evidence on the conflicts between agriculture and environmental conservation in land use planning, highlighting key socio-economic and spatial factors influencing sustainability challenges. The proposed environmental conservation framework offers a practical guide for policymakers and stakeholders to integrate sustainable agricultural practices into land use planning.

Article
Environmental and Earth Sciences
Remote Sensing

Eva Savina Malinverni

,

Marsia Sanità

Abstract: Hybrid classification approaches, combining pixel-based and object-based classification models, are increasingly being adopted to overcome the inherent limitations of Very High Resolution (VHR) image analysis. This paper proposes a hybrid classification framework that integrates probabilistic pixel-based classification, object-based aggregation, and rule-based refinement to produce GIS-ready Land Use/Land Cover (LULC) maps specifically designed for urban and regional planning. WorldView-2 imagery is first processed using an AdaBoost classifier to derive pixel-level class memberships; these results are subsequently aggregated at the object level following segmentation. Beyond thematic labeling, a Stability Map is introduced to quantify intra-object classification reliability, enabling the spatial identification of unstable or heterogeneous objects. The novelty lies not only in the integration of pixel and object paradigms but also in the operational utility of this stability map. When combined with rule-based reasoning, it provides a decision-oriented GIS product. The results demonstrate superior classification accuracy and enhanced interpretability compared to standard pixel-based or object-based approaches, highlighting the framework's relevance for geospatial data analysis and planning-oriented applications.
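One plausible reading of the Stability Map described above is a per-object score: the share of an object's member pixels whose pixel-level label agrees with the object's majority class, so homogeneous objects score near 1 and mixed objects score low. The function name and the exact 0–1 definition are assumptions for illustration, not the authors' published metric:

```python
# Sketch of an intra-object stability score for a segmented object,
# assuming stability = fraction of pixels matching the majority class.
from collections import Counter

def object_stability(pixel_labels):
    """Return (majority class, fraction of pixels agreeing with it)."""
    counts = Counter(pixel_labels)
    majority_class, majority_count = counts.most_common(1)[0]
    return majority_class, majority_count / len(pixel_labels)

# A nearly homogeneous object vs. a heterogeneous (unstable) one
label, stab = object_stability(["urban"] * 9 + ["water"])
print(label, stab)  # urban 0.9
mixed_label, mixed_stab = object_stability(["urban", "soil", "soil", "urban", "veg"])
print(mixed_stab)   # 0.4
```

Thresholding such a score would flag the "unstable or heterogeneous objects" the abstract mentions, which could then be routed to the rule-based refinement step.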

Article
Biology and Life Sciences
Biochemistry and Molecular Biology

Abdulmohsen H. Alrohaimi

Abstract: Background: Advances in genomics over the past two decades have revealed a fundamental paradox in genome biology: the majority of genomic sequences remain transcriptionally inactive across most biological contexts. Early interpretations of this phenomenon described large portions of the genome as nonfunctional or evolutionary remnants, commonly referred to as “junk DNA” (Ohno, 1972; Gregory, 2005). However, subsequent research in functional genomics, epigenetics, and regulatory biology has increasingly demonstrated that genomic inactivity may represent dynamic regulatory states rather than permanent functional loss (ENCODE Project Consortium, 2012; Kellis et al., 2014). The persistence of pseudogenes, noncoding sequences, and conditionally expressed genes across evolutionary timescales suggests that genomic systems may preserve genetic elements whose functional roles are not immediately observable under standard biological conditions. Existing models of gene regulation explain many aspects of transcriptional control but provide limited theoretical explanation for why genomes maintain structurally intact yet inactive genetic information over long evolutionary periods (Lynch, 2007; Wagner, 2014). Understanding how genomes preserve latent functional potential has therefore become an important interdisciplinary research question spanning genomics, evolutionary biology, and systems biology. Aim: This study aimed to develop a conceptual theoretical framework explaining how genomes preserve structurally intact genetic elements that remain functionally inactive across extended biological or evolutionary periods. The study introduces the Gene Latency framework, proposed by Alrohaimi, which conceptualizes genomic systems as dynamic information architectures capable of maintaining latent genetic potential that may become functionally active under specific biological conditions. 
Methods: A conceptual research design was employed using integrative literature synthesis across genomics, evolutionary biology, pseudogene research, epigenetic regulation, and systems biology. Through a multi-stage conceptual modeling process, several analytical constructs were identified and integrated into a unified theoretical framework describing the architecture of gene latency within genomic systems. The conceptual modeling process involved three stages: identification of recurring patterns related to genomic inactivity across the empirical literature, development of theoretical constructs describing latent genetic states, and integration of these constructs into a systems-level model explaining transitions between active, silent, and latent gene states. Results: The analysis resulted in the formulation of a set of interacting constructs shaping the Gene Latency framework. Latency describes the condition in which genetic information remains structurally preserved while its functional execution is suspended. Recallability refers to the potential for latent genes to become activated under specific biological contexts. Biological context represents the regulatory environment, including developmental stage, cellular state, and environmental signals, that determines gene activation. Execution refers to the realization of genetic information through transcription and translation. Decision architecture describes the regulatory networks that integrate biological signals to determine gene activation. Latent genomic portfolio represents the collection of latent genetic elements preserved within the genome. 
Biological memory refers to the accumulation of preserved genetic information across evolutionary time, including duplicated genes, pseudogenes, and regulatory elements. Together, these constructs form a multi-layered genomic architecture through which biological systems preserve genetic information, regulate gene activation, and maintain reservoirs of latent functional potential. Conclusion: The proposed Gene Latency framework offers a new theoretical perspective for understanding genomic organization and the persistence of inactive genetic information within biological systems. By integrating insights from genomics, evolutionary biology, and systems biology, the framework expands existing models of gene regulation and proposes that genomes function not only as repositories of active genes but also as reservoirs of latent genetic potential. This perspective provides a conceptual foundation for future empirical and computational investigations into latent genomic systems and their potential roles in biological adaptation and evolutionary innovation.

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.

© 2026 MDPI (Basel, Switzerland) unless otherwise stated