Article
Biology and Life Sciences
Anatomy and Physiology

Douglas Roy, Jody Roy

Abstract: Contemporary models of resistance training often treat repetitions within a set as interchangeable, emphasize only those performed near failure, or prescribe controlled tempos that moderate effort across repetitions. These perspectives leave unclear how moment-to-moment intent and movement quality interact to determine where fatigue and adaptation are localized. We introduce the Targeted Intensity Cumulation (TIC) model, a minimal mechanistic framework in which high voluntary intent combined with high-purity technique progressively concentrates mechanical and metabolic stress within target musculature across repetitions and sets. In this formulation, the rate of performance decay (e.g., as measured by the decline in concentric velocity) serves as an observable proxy for stimulus localization. The model provides a unifying account for (1) hypertrophy equivalence across repetition ranges, (2) the continuous accumulation of training stimuli, (3) exercise-specific 'performance cliffs,' and (4) cross-load performance transfer. By shifting the focus from external load to the internal state-space of intent and constraint, TIC generates testable predictions for optimizing training execution and monitoring.
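The velocity-decay proxy lends itself to a simple computation. A minimal sketch, assuming per-repetition mean concentric velocities from a linear position transducer (names and baseline choice are illustrative, not from the paper):

```python
def velocity_loss(velocities):
    """Percent decline in mean concentric velocity across a set.

    velocities: per-rep mean concentric velocities (m/s), in rep order.
    Returns the relative decay from the fastest early rep to the last
    rep, one candidate observable for the decay rate described above."""
    if len(velocities) < 2:
        return 0.0
    v_peak = max(velocities[:2])   # fastest of the first two reps as baseline
    v_last = velocities[-1]
    return 100.0 * (v_peak - v_last) / v_peak

# Example: a set whose bar speed decays steadily rep to rep
print(velocity_loss([0.52, 0.50, 0.46, 0.41, 0.35]))  # ~32.7%
```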

Article
Biology and Life Sciences
Biochemistry and Molecular Biology

Bernard Delalande, Hirohisa Tamagawa, Vladimir Matveev

Abstract: The membrane pump theory (MPT) attributes the resting membrane potential of neurons to ionic diffusion driven by transmembrane concentration gradients, maintained by the Na,K-ATPase. Despite decades of dominance, this model harbours fundamental thermodynamic, kinetic, and geometric inconsistencies that have remained unaddressed in mainstream biophysics. We present a systematic quantitative critique across five independent axes: (1)~the electrostatic force exceeds the diffusive force by $\sim$300-fold under physiological conditions; (2)~the peri-axonal space contains 10--100$\times$ fewer ions than required by channel-based models; (3)~the Na,K-ATPase carries an energy deficit of $\sim$26\% per cycle and operates 5000$\times$ too slowly to compensate measured leak fluxes; (4)~the Nernst and Goldman--Hodgkin--Katz equations are applied outside their domain of validity; and (5)~cell geometry invalidates plane-membrane approximations. In contrast, direct experimental evidence (Tamagawa experiment) demonstrates that a potential of $\approx -40$\,mV arises from fixed negative charges alone, without any ionic gradient. We formalise this result within a Poisson--Boltzmann/Grahame electrostatic framework, supplemented by Ling's ion adsorption model and Manoj's murburn concept, and obtain $\Delta\psi \approx -65$ to $-85$\,mV from first principles. Four specific experimental predictions distinguish the model from MPT.
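For orientation, the Grahame step of the framework invoked here relates a fixed surface charge density to a surface potential with no transmembrane gradient required. A sketch in standard notation (symbols and the inversion are textbook conventions, not reproduced from the paper):

```latex
% Grahame equation, symmetric 1:1 electrolyte with bulk number density n_0:
\sigma
  = \sqrt{8\,n_0\,\varepsilon_r\varepsilon_0\,k_B T}\,
    \sinh\!\left(\frac{e\,\psi_0}{2k_B T}\right)
\quad\Longrightarrow\quad
\psi_0
  = \frac{2k_B T}{e}\,
    \operatorname{arcsinh}\!\left(
      \frac{\sigma}{\sqrt{8\,n_0\,\varepsilon_r\varepsilon_0\,k_B T}}\right)
```

A sufficiently dense fixed negative charge thus yields a negative surface potential of tens of millivolts without any ionic concentration gradient, which is the reading the authors give to the Tamagawa measurement.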

Article
Engineering
Architecture, Building and Construction

Mehmet Fatih Aydın

Abstract: This study presents the Structural–Typological–Value Sensitivity Model (STVSM), a multidimensional framework for evaluating vulnerability in historic buildings where physical fragility cannot be adequately captured through structural indicators alone. While existing approaches primarily prioritize load-bearing behaviour, they often overlook typological discontinuity, spatial fragmentation, and the erosion of architectural and cultural value. STVSM addresses this limitation through three weighted sub-indices: structural vulnerability (SV), typological degradation (TV), and heritage value (HV), each calibrated using expert-derived micro- and macro-level weighting coefficients. Field-based deterioration scores (0–1) are combined with these weights to generate SV, TV, and HV values, which are then integrated into a Conservation Priority Index (CPI). Although conceptually informed by building-scale seismic vulnerability literature, the model does not aim to simulate earthquake performance or replace numerical structural analysis. Instead, it operates as a comparative decision-support framework that incorporates seismic-informed deterioration patterns within a broader, conservation-oriented logic. The model is applied to twenty-five historic buildings across three heritage contexts: traditional houses in Cumalikizik, vernacular dwellings in Balıkesir–Karesi, and nineteenth-century Greek Orthodox churches in Bursa. The results demonstrate that integrating structural condition, typological integrity, and heritage value provides a transparent, repeatable, and scalable basis for conservation prioritization across diverse historic building stocks.
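The aggregation logic of the CPI can be illustrated compactly. A minimal sketch of a weighted-index computation of this kind, with made-up weights and field scores (the paper's actual coefficients are expert-derived and not shown here):

```python
# Illustrative weights for the three sub-indices; the paper's
# coefficients are expert-derived micro/macro weightings, not these.
W = {"SV": 0.45, "TV": 0.30, "HV": 0.25}

def conservation_priority_index(scores, weights=W):
    """Combine field deterioration scores (each in [0, 1]) per sub-index
    into a single Conservation Priority Index for one building."""
    return sum(weights[k] * scores[k] for k in weights)

# Two hypothetical buildings: higher CPI = higher conservation priority
b1 = {"SV": 0.80, "TV": 0.55, "HV": 0.90}
b2 = {"SV": 0.30, "TV": 0.40, "HV": 0.60}
ranking = sorted([("B1", conservation_priority_index(b1)),
                  ("B2", conservation_priority_index(b2))],
                 key=lambda t: -t[1])
print(ranking)  # B1 ~ 0.75 outranks B2 ~ 0.405
```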

Article
Engineering
Civil Engineering

Stephen Mulundu, Moffat Tembo, Chabota Kaliba

Abstract: Land use planning plays an important role in advancing sustainable development by integrating environmental, social, and economic dimensions to optimize land utilization and bolster climate resilience. The adoption of efficient practices contributes to the mitigation of land degradation, while strategically planned agricultural systems enhance food security and promote ecological balance. This study focused on the development of an environmental conservation framework for sustainable land use planning in Zambia. Employing a mixed-methods research design, data were collected from a sample of 150 respondents. Quantitative data were analysed using descriptive and inferential statistics, including regression analysis, while qualitative data were subjected to thematic analysis. The research identified key conflicts between agriculture and environmental conservation, including unsustainable farming practices (30.8%), resource competition (24.2%), and deforestation (23.3%). Approximately 40.3% of respondents reported occasional conflicts, while 33% experienced them often. Major barriers to sustainable land development included inadequate financial support (35%) and lack of knowledge (30%). Awareness of sustainable agricultural practices varied, with 38% of respondents indicating high awareness and 35.8% reporting low awareness. Conventional agriculture (35.8%), crop rotation (30%), and conservation agriculture (11.7%) were the most common practices, with crop rotation being the easiest to implement (42.2%), and climate-smart agriculture being the most challenging (37.8%). A chi-square analysis revealed no significant association between awareness levels and perceived barrier impacts (p=0.327). Regression analysis indicated that age negatively correlated with the type of conflict (β=-0.0283, p<0.001), while location influenced conflict experiences, with certain areas, such as Section D (β=1.3799, p<0.001) and Section G (β=1.6554, p<0.001), reporting more frequent conflicts. Additionally, sex had a positive but marginally significant effect (β=0.2640, p=0.062). Qualitative findings highlighted the tension between agricultural production and environmental conservation, with economic pressures driving environmental degradation, such as deforestation and water pollution. Participants also pointed to limited knowledge, training, and financial barriers, including high costs and restricted access to credit, as key obstacles. The study proposed an environmental conservation framework to address these conflicts, integrating sustainable agricultural practices with effective land use planning. The framework advocates a multi-stakeholder approach involving policymakers, farmers, and environmental experts to promote balanced sustainable land use. The findings enhance the body of knowledge by providing empirical evidence on the conflicts between agriculture and environmental conservation in land use planning, highlighting key socio-economic and spatial factors influencing sustainability challenges. The proposed environmental conservation framework offers a practical guide for policymakers and stakeholders to integrate sustainable agricultural practices into land use planning.
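The independence test reported here (awareness level versus perceived barrier impact, p=0.327) is a standard chi-square test of association. A minimal sketch with a made-up contingency table, not the study's data:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = awareness (low/medium/high),
# columns = perceived barrier impact (minor/moderate/severe).
table = [[18, 22, 14],
         [12, 20, 16],
         [15, 19, 14]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
# p > 0.05 would mirror the study's finding of no significant association.
```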

Article
Environmental and Earth Sciences
Remote Sensing

Eva Savina Malinverni, Marsia Sanità

Abstract: Hybrid classification approaches, combining pixel-based and object-based classification models, are increasingly being adopted to overcome the inherent limitations of Very High Resolution (VHR) image analysis. This paper proposes a hybrid classification framework that integrates probabilistic pixel-based classification, object-based aggregation, and rule-based refinement to produce GIS-ready Land Use/Land Cover (LULC) maps specifically designed for urban and regional planning. WorldView-2 imagery is first processed using an AdaBoost classifier to derive pixel-level class memberships; these results are subsequently aggregated at the object level following segmentation. Beyond thematic labeling, a Stability Map is introduced to quantify intra-object classification reliability, enabling the spatial identification of unstable or heterogeneous objects. The novelty lies not only in the integration of pixel and object paradigms but also in the operational utility of this stability map. When combined with rule-based reasoning, it provides a decision-oriented GIS product. The results demonstrate superior classification accuracy and enhanced interpretability compared to standard pixel-based or object-based approaches, highlighting the framework's relevance for geospatial data analysis and planning-oriented applications.
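The Stability Map idea can be made concrete: for each segmented object, aggregate the pixel-level class memberships and score how internally consistent they are. A minimal sketch, assuming per-pixel class-probability vectors and an object-label raster (array names are illustrative, not the paper's implementation):

```python
import numpy as np

def object_stability(pixel_probs, object_ids):
    """pixel_probs: (n_pixels, n_classes) pixel-level class memberships
    (e.g., from an AdaBoost classifier); object_ids: (n_pixels,) segment
    label per pixel. Returns {object_id: (majority_class, stability)},
    where stability is the fraction of the object's pixels whose argmax
    agrees with the majority class (1.0 = homogeneous, low = unstable)."""
    hard = pixel_probs.argmax(axis=1)
    out = {}
    for oid in np.unique(object_ids):
        labels = hard[object_ids == oid]
        counts = np.bincount(labels, minlength=pixel_probs.shape[1])
        major = counts.argmax()
        out[oid] = (int(major), counts[major] / labels.size)
    return out
```

Low-stability objects are exactly the ones a rule-based refinement pass would flag for re-examination.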

Article
Biology and Life Sciences
Biochemistry and Molecular Biology

Abdulmohsen H. Alrohaimi

Abstract: Background: Advances in genomics over the past two decades have revealed a fundamental paradox in genome biology: the majority of genomic sequences remain transcriptionally inactive across most biological contexts. Early interpretations of this phenomenon described large portions of the genome as nonfunctional or evolutionary remnants, commonly referred to as “junk DNA” (Ohno, 1972; Gregory, 2005). However, subsequent research in functional genomics, epigenetics, and regulatory biology has increasingly demonstrated that genomic inactivity may represent dynamic regulatory states rather than permanent functional loss (ENCODE Project Consortium, 2012; Kellis et al., 2014). The persistence of pseudogenes, noncoding sequences, and conditionally expressed genes across evolutionary timescales suggests that genomic systems may preserve genetic elements whose functional roles are not immediately observable under standard biological conditions. Existing models of gene regulation explain many aspects of transcriptional control but provide limited theoretical explanation for why genomes maintain structurally intact yet inactive genetic information over long evolutionary periods (Lynch, 2007; Wagner, 2014). Understanding how genomes preserve latent functional potential has therefore become an important interdisciplinary research question spanning genomics, evolutionary biology, and systems biology. Aim: This study aimed to develop a conceptual theoretical framework explaining how genomes preserve structurally intact genetic elements that remain functionally inactive across extended biological or evolutionary periods. The study introduces the Gene Latency framework, proposed by Alrohaimi, which conceptualizes genomic systems as dynamic information architectures capable of maintaining latent genetic potential that may become functionally active under specific biological conditions. Methods: A conceptual research design was employed using integrative literature synthesis across genomics, evolutionary biology, pseudogene research, epigenetic regulation, and systems biology. Through a multi-stage conceptual modeling process, several analytical constructs were identified and integrated into a unified theoretical framework describing the architecture of gene latency within genomic systems. The conceptual modeling process involved three stages: identification of recurring patterns related to genomic inactivity across empirical literature, development of theoretical constructs describing latent genetic states, and integration of these constructs into a systems-level model explaining transitions between active, silent, and latent gene states. Results: The analysis resulted in the formulation of a set of interacting constructs shaping the Gene Latency framework. Latency describes the condition in which genetic information remains structurally preserved while its functional execution is suspended. Recallability refers to the potential for latent genes to become activated under specific biological contexts. Biological context represents the regulatory environment—including developmental stage, cellular state, and environmental signals—that determines gene activation. Execution refers to the realization of genetic information through transcription and translation processes. Decision architecture describes the regulatory networks that integrate biological signals to determine gene activation. Latent genomic portfolio represents the collection of latent genetic elements preserved within the genome. Biological memory refers to the accumulation of preserved genetic information across evolutionary time, including duplicated genes, pseudogenes, and regulatory elements. Together, these constructs form a multi-layered genomic architecture through which biological systems preserve genetic information, regulate gene activation, and maintain reservoirs of latent functional potential. Conclusion: The proposed Gene Latency framework offers a new theoretical perspective for understanding genomic organization and the persistence of inactive genetic information within biological systems. By integrating insights from genomics, evolutionary biology, and systems biology, the framework expands existing models of gene regulation and proposes that genomes function not only as repositories of active genes but also as reservoirs of latent genetic potential. This perspective provides a conceptual foundation for future empirical and computational investigations into latent genomic systems and their potential roles in biological adaptation and evolutionary innovation.

Article
Biology and Life Sciences
Biochemistry and Molecular Biology

Brenda Palomar, Maria Ortega, Daniel G. Camblor, Francisco Gimeno-Valiente, Aitana Bolea, David Moro-Valdezate, Jose González, Marisol Huerta, Susana Roselló, Desamparados Roda, +3 authors

Abstract: Background/Objectives: MSS-CRC comprises a heterogeneous group of tumors generally considered “immune cold” due to limited neoantigen generation and T-cell exclusion or inactivation. Current evidence indicates that the composition of T and B immune cells within the tumor microenvironment represents a prognostically relevant factor, significantly associated with both tumor expression profiles and molecular subtypes. Methods: We conducted an exploratory analysis to identify prognostically relevant immune cell components in this group of tumors and to investigate corresponding differences in RNA-based bulk expression and high-resolution spatial transcriptomic profiles. Results: A total of 254 cases of localized mismatch repair-proficient colorectal cancer were evaluated. Our findings revealed PD-L1 expression as a robust, independent prognostic biomarker associated with favorable outcomes in this specific population. Bulk RNA expression analysis showed that PD-L1–negative tumors exhibited an expression profile consistent with abundant cancer-associated fibroblast infiltration, increased matrix stiffness, and impaired immune activation—features aligned with tumor progression and poorer clinical outcomes. In contrast, PD-L1–positive tumors displayed stromal programs enriched in immune activation and controlled remodeling, consistent with an immunologically active microenvironment. Spatial transcriptomics added an additional layer of evidence, revealing that epithelial-to-mesenchymal transition–related programs can dominate stromal niches in PD-L1–negative tumors, particularly within macrophage-enriched stromal regions. Conclusions: Our observations suggest a crosstalk between PD-L1 expression on immune cells and immune-activated versus mesenchymal-dominant states within tumor-associated macrophage-enriched stromal niches. These results provide insight into the biological mechanisms underlying disease progression and highlight tumor-associated macrophages as a potential therapeutic target to overcome immune resistance, particularly in PD-L1–negative MSS-CRC tumors.

Review
Biology and Life Sciences
Biochemistry and Molecular Biology

Bernard Delalande, Hirohisa Tamagawa, Vladimir Matveev

Abstract: The Hodgkin--Huxley model has provided an extraordinarily successful phenomenological description of the action potential for over seven decades. Its predictive power and mathematical elegance have made it a cornerstone of modern neuroscience. However, the model incorporates mechanistic assumptions about the nanoscale ionic environment of the axonal membrane that were necessarily simplified in 1952 and that modern biophysics allows us to examine critically. In this article, we identify eight independent physical inconsistencies in the mechanistic interpretation of the Hodgkin--Huxley model. These concern: the gel-phase nature of the axoplasm and its consequences for ionic activity; the insufficient ionic reservoir of the peri-membrane volume; the physical implausibility of ionic replenishment at physiological firing rates; ionic congestion and inter-species competition in confined spaces; the reductive representation of ion channels as single conductance parameters; the uncertain relationship between crystallographic channel structures and physiological reality; the osmotic paradox created by intra-pore ionic concentrations; and the systematic physical limitations of patch-clamp recordings. None of these arguments contests the experimental measurements on which the model is based. All of them contest the physical plausibility of the mechanistic interpretation placed on these measurements. The cumulative and mutually reinforcing nature of these inconsistencies suggests that the mechanistic foundations of the Hodgkin--Huxley model deserve serious and systematic reexamination.
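The reservoir argument lends itself to a back-of-the-envelope check. An illustrative order-of-magnitude sketch (all values are textbook-scale assumptions, not the paper's figures):

```python
E = 1.602e-19            # elementary charge, C
AVOGADRO = 6.022e23

# Ions available in an assumed 20 nm peri-membrane shell, per um^2:
volume_L = 20e-9 * 1e-12 * 1e3                 # thickness * area, m^3 -> L
ions_available = 0.15 * volume_L * AVOGADRO    # ~150 mM monovalent ions
print(f"reservoir: {ions_available:.1e} ions/um^2")        # ~1.8e6

# Minimum Na+ entry to depolarize ~100 mV across a 1 uF/cm^2 membrane
# (real spikes move several times this minimum):
q_per_um2 = (1e-6 * 0.1) / 1e8 / E             # C/cm^2 -> ions per um^2
print(f"minimum per spike: {q_per_um2:.1e} ions/um^2")     # ~6.2e3
print(f"shell holds roughly {ions_available / q_per_um2:.0f} spikes' worth")
```

Whether a few hundred spikes' worth of ions in a confined cleft constitutes a sufficient reservoir at physiological firing rates is precisely the question the review raises.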

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Hossein Malekinezhad, Roya Rafati

Abstract: This study investigates the application of Markov and Hidden Markov Models (HMMs) for detecting latent market regimes in cryptocurrency markets, with a particular focus on Bitcoin. Cryptocurrency markets are characterized by high volatility, structural breaks, and non-stationary behavior, which often limit the effectiveness of traditional linear time-series models. Hidden Markov Models provide a probabilistic framework capable of identifying unobservable market states that generate observed price dynamics. In this research, a regime-switching framework is developed to classify Bitcoin market conditions into distinct latent states characterized by different statistical properties of returns and volatility. The proposed methodology extends standard homogeneous HMMs by incorporating non-homogeneous transition probabilities and Bayesian estimation techniques to better capture dynamic market behavior. Time-varying transition probabilities allow the model to reflect evolving market conditions influenced by trading activity and external factors. Additionally, extensions addressing duration dependence and long-memory volatility are considered to improve regime persistence modeling. Empirical evaluation using Bitcoin data demonstrates that regime-aware modeling effectively captures transitions between low-volatility consolidation phases and high-volatility turbulent periods. The results suggest that incorporating regime detection significantly improves the interpretability of market dynamics and provides a valuable foundation for risk-aware trading strategies and adaptive portfolio allocation in highly volatile digital asset markets. The findings highlight the potential of Hidden Markov frameworks as a robust tool for understanding structural shifts in cryptocurrency markets and improving predictive modeling of financial time series.
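As a baseline for the homogeneous case described here, a standard two-state Gaussian HMM over daily log-returns already separates calm from turbulent regimes. A minimal sketch with hmmlearn, using synthetic data as a stand-in for Bitcoin returns (the paper's non-homogeneous transitions and Bayesian estimation are not shown):

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Synthetic stand-in for daily log-returns: calm then turbulent segments.
calm = rng.normal(0.0005, 0.01, 400)
turb = rng.normal(-0.001, 0.05, 200)
returns = np.concatenate([calm, turb, calm]).reshape(-1, 1)

model = GaussianHMM(n_components=2, covariance_type="full",
                    n_iter=200, random_state=0)
model.fit(returns)
states = model.predict(returns)              # Viterbi state sequence
print("per-state volatility:", np.sqrt(model.covars_.ravel()))
print("transition matrix:\n", model.transmat_)
```

The diagonal of the fitted transition matrix captures regime persistence, the quantity the duration-dependence extensions in the paper aim to model more faithfully.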

Review
Medicine and Pharmacology
Oncology and Oncogenics

Osama AlOudat, Omar S. Al-Odat

Abstract: Pediatric gastrointestinal (GI) cancers are rare malignancies that differ fundamentally from their adult counterparts in molecular drivers, histology, and clinical behavior. While adult GI cancers are frequently driven by recurrent oncogenic mutations, pediatric tumors often exhibit pathway-level dysregulation involving developmental signaling networks. Among these, the RAS/MAPK pathway emerges as a central convergent axis integrating growth factor signaling, developmental programs, inflammatory cues, and post-translational regulatory mechanisms. Increasing evidence suggests that aberrant phosphorylation dynamics, arising from an imbalance between kinase activation and phosphatase-mediated signal attenuation, contribute to sustained MAPK signaling in pediatric GI malignancies, even in the absence of canonical RAS or RAF mutations. This review synthesizes current knowledge on RAS/MAPK signaling in pediatric GI cancers, emphasizing the role of kinase–phosphatase imbalance, signal duration, and regulatory failure in shaping oncogenic outcomes. We highlight how altered phosphorylation control may influence tumor differentiation, therapeutic responsiveness, and resistance mechanisms, and discuss emerging opportunities for targeting signaling dynamics rather than single genetic lesions. This signaling-centric framework provides a biologically grounded rationale for functional biomarker-driven precision therapy in pediatric GI malignancies.

Article
Medicine and Pharmacology
Pediatrics, Perinatology and Child Health

Massimo Crapis, Giangiacomo Nicolini, Andrea Lo Vecchio, Roberto Parrella

Abstract: Anti-inflammatory agents, antipyretics, and antibiotics are commonly used to manage fever and pain associated with infectious diseases in both adults and children. Despite their effectiveness, inappropriate and unnecessary prescriptions remain widespread, leading to adverse patient outcomes and, in the case of antibiotics, contributing to antimicrobial resistance. Addressing these issues requires effective stewardship programs focused on educating healthcare professionals and the public on evidence-based guidelines for optimal prescribing practices. This paper explores the five "A"s fundamental to infection management in pediatric and adult patients: appropriateness, abuse, antipyretics, anti-inflammatory agents, and antibiotics. Through a comprehensive literature review, expert perspectives, and clinical guidelines, the study evaluates the roles of anti-inflammatory agents (e.g., ibuprofen), antipyretics (e.g., paracetamol), and antibiotics in clinical practice, highlighting best practices for their use. Expert recommendations emphasize that antipyretics should only be administered when fever is accompanied by significant discomfort or pain, as fever itself plays a role in the immune response. Paracetamol is generally preferred as a first-line antipyretic due to its favorable safety profile, while ibuprofen should be used with caution, particularly during respiratory infections, varicella, and severe bacterial infections, due to its potential to exacerbate complications. Special consideration is also required for patients with renal or gastrointestinal comorbidities to prevent toxicity. Regarding antibiotics, prescription should be limited to cases with clear evidence of bacterial infection to avoid unnecessary patient exposure and the development of antimicrobial resistance. Stewardship programs underscore the importance of selecting the right agent, optimizing dosing, and introducing shorter treatment regimens where supported by evidence, to improve therapeutic outcomes while minimizing resistance risks. Ultimately, this paper provides practical, evidence-based recommendations to support rational prescribing of antipyretics, anti-inflammatory drugs, and antibiotics, aiming to optimize patient outcomes, prevent unnecessary toxicity, and contribute to global efforts against antimicrobial resistance.

Article
Medicine and Pharmacology
Clinical Medicine

Vu Tung Son, Bui Dang The Anh, Vu Ngoc Hoan, Hoang Van Than, Bui Kim Linh, La Thi Huong Giang, Nguyen Tien Manh, Luong Thi Thu Thao, Hoang Xuan Cuong, Dao Truong Giang, +3 authors

Abstract: Background: Pneumococcal conjugate vaccines (PCVs) prevent severe disease in children, but high costs limit access. PNEUMOSIL®, a 10-valent PCV prequalified by the World Health Organization (WHO) in 2019, offers a cost-effective alternative. This study assessed its safety and immunogenicity in Vietnamese children aged 6 weeks–24 months. Methods: An open-label, single-arm study enrolled 304 children in three age groups: 6 weeks–6 months (n=151), >6–12 months (n=76), and >12–24 months (n=77). Participants received two or three doses. Safety was evaluated through immediate reactions, adverse events (AEs), serious adverse events (SAEs), and withdrawals. Immunogenicity was measured 28 days after the final dose using serotype-specific IgG geometric mean concentrations (GMCs), opsonophagocytic activity (OPA) titers, and seroresponse rates. The trial was approved by the IRB of the National Ethics Council (code No. 75/CN-HĐĐĐ, dated June 4, 2021) and was registered with ClinicalTrials.gov, NCT05140720. Results: Of 304 enrolled participants, 294 (96.7%) completed follow-up. No immediate adverse events occurred. Unsolicited adverse events were reported in 17%, mainly respiratory, while serious adverse events occurred in 4%. Mild local/systemic reactions (e.g., injection site pain, crying) resolved without sequelae. Immunogenicity was strong, with GMCs of 1.8–9.11 µg/mL, OPA GMTs of 277.8–22,342, and >90% achieving seroresponse for all 10 serotypes. Conclusions: PNEUMOSIL® demonstrated favorable safety and robust immunogenicity, supporting its inclusion in national immunization programs as an affordable option for pneumococcal disease prevention.
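For readers unfamiliar with these endpoints: a GMC is the geometric mean of individual titers, and seroresponse for pneumococcal IgG is commonly defined against the WHO reference threshold of 0.35 µg/mL. A minimal sketch of both computations on made-up titers (not trial data):

```python
import numpy as np

titers = np.array([0.8, 2.1, 5.4, 1.2, 9.0, 0.4])  # hypothetical IgG, ug/mL

gmc = np.exp(np.log(titers).mean())        # geometric mean concentration
seroresponse = (titers >= 0.35).mean()     # WHO 0.35 ug/mL cut-off

print(f"GMC = {gmc:.2f} ug/mL, seroresponse = {seroresponse:.0%}")
```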

Article
Biology and Life Sciences
Neuroscience and Neurology

Masanori Shimono

Abstract: Translational neuroscience relies on both in vitro slice recordings and in vivo recordings. Their spontaneous population dynamics are observed under decisively different conditions, and across independent experiments there is typically no clear neuron-to-neuron correspondence. Here we formulate a time-resolved, bidirectional transfer task between in vitro and in vivo multineuronal spike trains and provide a standardized evaluation procedure for generation across markedly different recording preparations. We train an autoregressive Transformer on 1-ms binned, 128-unit binary sequences and introduce Dice loss to directly optimize spike-event overlap under extreme class imbalance, comparing it with Binary Focal Cross-Entropy (γ = 2.0). Across 12 mouse datasets (6 in vitro HD-MEA sessions and 6 in vivo Neuropixels sessions), the method achieves strong within-domain performance and remains above chance for cross-domain generation (ROC-AUC 0.70±0.09 for in vitro→in vivo; 0.80±0.10 for in vivo→in vitro). Because spike events are rare, we report Precision–Recall curves and PR-AUC alongside ROC-AUC to reflect minority-event quality. To our knowledge, this is the first demonstration of bidirectional, time-resolved generation between unpaired in vitro and in vivo population spike trains without assuming cell correspondence, and the framework can be adapted to other sparse neural event data and related event-based datasets when domain-specific validation criteria are defined.
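The Dice loss used here directly optimizes spike-event overlap. A minimal PyTorch sketch of a soft Dice loss for 1-ms binned binary spike targets (a generic formulation, not the authors' exact implementation):

```python
import torch

def soft_dice_loss(probs, targets, eps=1e-6):
    """probs:   (batch, time, units) predicted spike probabilities in [0,1]
    targets: same shape, binary spike indicators.
    Minimizing 1 - Dice rewards overlap on the rare positive class,
    which plain cross-entropy under-weights at sparse firing rates."""
    inter = (probs * targets).sum()
    denom = probs.sum() + targets.sum()
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

# Toy check: a perfect prediction drives the loss to ~0
t = (torch.rand(2, 100, 128) < 0.01).float()   # ~1% spike density
print(soft_dice_loss(t, t).item())             # ~0.0
```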

Article
Computer Science and Mathematics
Computer Science

Latifa Boubekri, Hassnae Aberkane, Mohammed Chaouki Abounaima, Loubna Lamrini

Abstract: The TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method is one of the most widely used multi-criteria decision-making (MCDM) approaches in industrial, financial, and scientific fields. However, its sequential computational cost of O(m×n), where m denotes the number of alternatives and n the number of criteria, becomes prohibitive when decision matrices have several million rows. To overcome this limitation, we propose GPU-TOPSIS, a fully vectorized and parallel reformulation of TOPSIS based on tensor execution on graphics processing units (GPUs), whose main contributions are: (i) a formally correct reformulation of TOPSIS as a GPU tensor pipeline preserving mathematical fidelity to the original method; (ii) a two-pass fragment-processing algorithm guaranteeing exact mathematical equivalence with monolithic TOPSIS, while reducing the memory footprint from O(m×n) to O(mₜ×n); (iii) three independent implementations, on CuPy, PyTorch, and TensorFlow, ensuring the framework's portability and genericity. Experimental evaluations on real data from the Amazon Products 2023 dataset, using matrices of up to 200 million alternatives (via the two-pass formulation), demonstrate speedups of up to 4.75× compared to the reference CPU implementation (NumPy). A perturbation sensitivity analysis of the criteria weights and cross-backend consistency tests confirm that GPU acceleration fully preserves robustness and decision reliability, making GPU-TOPSIS a practical, open, and reproducible solution for large-scale multi-criteria decision making in Big Data environments.
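The tensor pipeline itself is compact. A minimal vectorized TOPSIS sketch in NumPy (on a GPU backend such as CuPy the same array operations apply; the paper's two-pass fragment streaming is omitted here):

```python
import numpy as np

def topsis(X, w, benefit):
    """X: (m, n) decision matrix; w: (n,) weights summing to 1;
    benefit: (n,) bool, True where larger criterion values are better.
    Returns (m,) closeness-to-ideal scores in [0, 1], higher = better."""
    R = X / np.linalg.norm(X, axis=0)            # vector normalization
    V = R * w                                    # weighted normalized matrix
    ideal = np.where(benefit, V.max(0), V.min(0))
    anti  = np.where(benefit, V.min(0), V.max(0))
    d_pos = np.linalg.norm(V - ideal, axis=1)    # distance to ideal
    d_neg = np.linalg.norm(V - anti,  axis=1)    # distance to anti-ideal
    return d_neg / (d_pos + d_neg)

# Toy example: price (cost criterion), RAM and storage (benefit criteria)
X = np.array([[250., 16., 12.], [200., 16., 8.], [300., 32., 16.]])
w = np.array([0.2, 0.4, 0.4])
print(topsis(X, w, benefit=np.array([False, True, True])))
```

Every step is a whole-matrix array operation, which is what makes the method amenable to the tensor reformulation the paper describes.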

Article
Engineering
Aerospace Engineering

Jie Hu, Shuai Zhang, Xiaorong Feng, Xinglong Wang

Abstract: The Aircraft Landing Problem (ALP) poses significant challenges for traditional Monte Carlo Tree Search (MCTS) due to its vast search space and reliance on inefficient random simulations. To overcome these limitations, this paper proposes a novel Transformer-Augmented Monte Carlo Tree Search (TMCTS) algorithm. Our approach integrates a reinforcement learning framework that incorporates key operational constraints, including wake turbulence separation and time windows, and employs a cost function aimed at minimizing both delay time and fuel consumption. A core innovation is the replacement of the conventional random simulation phase in MCTS with a Transformer-based value predictor. This leverages the Transformer’s superior capability in sequence modeling and capturing global dependencies among flights, thereby dramatically accelerating search convergence. Specifically, we design a two-head Transformer network (comprising policy and value heads) to provide informed prior knowledge, which effectively guides the selection and expansion steps of the MCTS tree. The model is trained within an Actor-Critic framework, utilizing behavior cloning for pre-training followed by reinforcement learning for fine-tuning. Experimental evaluations on the standard OR-Library benchmark demonstrate that our TMCTS method significantly reduces scheduling deviation compared to state-of-the-art baselines (including DPALO+GA, DPALO+PSO, and DALP). Moreover, it achieves a 90.6% reduction in computation time relative to the DALP method, highlighting its superior efficiency and practical applicability for real-time scheduling.
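How a learned prior replaces random rollouts is visible in the selection rule itself. A minimal PUCT-style sketch in which the network's policy and value heads supply the prior P and the backed-up value W (structure and names are illustrative, not the authors' code):

```python
import math

def puct_select(children, c_puct=1.5):
    """children: list of dicts with visit count N, total backed-up value W,
    and a policy-head prior P for each candidate landing slot.
    Picks the child maximizing Q + U; when a leaf is expanded, a value
    head (rather than a random rollout) supplies its initial estimate."""
    total_n = sum(ch["N"] for ch in children) or 1
    def score(ch):
        q = ch["W"] / ch["N"] if ch["N"] else 0.0          # exploitation
        u = c_puct * ch["P"] * math.sqrt(total_n) / (1 + ch["N"])  # prior
        return q + u
    return max(children, key=score)

children = [{"N": 10, "W": 6.0, "P": 0.5},
            {"N": 2,  "W": 1.8, "P": 0.3},
            {"N": 0,  "W": 0.0, "P": 0.2}]   # unvisited but has a prior
print(puct_select(children))
```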

Article
Chemistry and Materials Science
Nanotechnology

Congyi Zhang, Haotian Wu, Xiaotong Chen, Wenze Yin, Shizhuan Huang, Dixiang Wen, Xueting Song, Xiaoyan Xu, Changmei Zhang, Sheng Tai

Abstract: This study developed a novel tumor-associated macrophage (TAM)-targeting nanoplatform: sialic acid–disulfide bond–camptothecin (SA-SS-CPT) nanowires. This system significantly improved the solubility and bioavailability of camptothecin (CPT) and achieved active targeted drug delivery by utilizing sialic acid as a targeting ligand to specifically recognize the highly expressed Siglec-E receptor on TAMs. Upon internalization into TAMs, the disulfide bond in the SA-SS-CPT nanowires was cleaved in response to intracellular glutathione (GSH), leading to the controlled release of CPT. SA-SS-CPT induced DNA damage in TAMs, thereby activating the cGAS-STING signaling pathway, promoting the polarization of TAMs toward the M1 phenotype, enhancing pro-inflammatory and anti-tumor immune responses, and effectively inhibiting tumor immune escape. Furthermore, the SA-SS-CPT nanowires synergistically enhanced the efficacy of PD-L1 blockade immunotherapy, collectively remodeling the tumor immune microenvironment and ultimately facilitating significant tumor clearance.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Mohsen Mostafa

Abstract: This paper introduces Bayesian R-LayerNorm, a normalization layer that extends the previously proposed R-LayerNorm with uncertainty quantification. Building upon R-LayerNorm, we draw connections to statistical field theory, renormalization group methods, and information geometry to motivate the design. The method incorporates uncertainty estimation through a stable ψ-function, enabling adaptive noise suppression based on local entropy estimates. We provide theoretical analysis of numerical stability, gradient stability, and training convergence under standard assumptions. A key practical contribution is the integration of uncertainty quantification directly into the normalization operation, providing confidence estimates for each normalized activation without additional cost. The method adapts to local noise, varying normalization strength spatially based on estimated noise levels. The implementation is simple, adding only two learnable parameters per layer, and serves as a drop-in replacement for existing normalization layers. Due to computational constraints (Kaggle P100 GPU, limited epochs), we evaluate Bayesian R-LayerNorm on CIFAR-10-C using 50 training epochs and 3 random seeds. Under these limitations, it achieves average accuracy gains of +0.49% over standard LayerNorm across four common corruptions, with the largest improvement of +0.74% on shot noise. While these gains are modest, they are consistent across seeds. The method requires minimal computational overhead (~10%) and we provide a complete open-source implementation. We further show that the learned λ parameters offer interpretability, revealing which layers adapt most strongly to different corruptions. The framework suggests promising directions for trustworthy normalization in safety-critical applications where uncertainty matters alongside accuracy.
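The exact ψ-function is defined in the paper; as a purely illustrative stand-in, the sketch below shows the general shape of such a layer: standard LayerNorm plus two extra learnable parameters that gate activations by a local-noise estimate (the gating function here is a hypothetical placeholder, not the authors' ψ):

```python
import torch
import torch.nn as nn

class GatedUncertaintyLayerNorm(nn.Module):
    """Illustrative only: LayerNorm whose output is damped where a crude
    per-sample variance proxy is high. The sigmoid gate below is a
    hypothetical stand-in, NOT the psi-function defined in the paper."""
    def __init__(self, dim, eps=1e-5):
        super().__init__()
        self.ln = nn.LayerNorm(dim, eps=eps)
        self.lam = nn.Parameter(torch.zeros(1))           # adaptation strength
        self.bias = nn.Parameter(torch.full((1,), 4.0))   # init near-identity

    def forward(self, x):
        y = self.ln(x)
        noise = x.var(dim=-1, keepdim=True)       # crude local-noise proxy
        gate = torch.sigmoid(self.bias - self.lam * noise)
        return gate * y                           # damp high-noise samples

x = torch.randn(8, 64)
print(GatedUncertaintyLayerNorm(64)(x).shape)     # torch.Size([8, 64])
```

At initialization the gate is ~0.98, so the module behaves as a near drop-in LayerNorm and learns how strongly to suppress noisy inputs during training.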

Article
Physical Sciences
Theoretical Physics

Jef Zerrudo

Abstract: We derive a quantum conjugacy between spacetime diffusivity and inertial mass from relativistic information-transport kinematics. Two Lorentz-invariant laws—(i)~an invariant-time gauge for timelike segments, \( ds=c\,dt \), and (ii)~diffusive evolution \( d\epsilon/ds=c \)—yield a first-order action whose canonical quantization gives \( [\hat\epsilon,\hat m]=i\hbar \) and the emergent Cosmological Uncertainty Principle~(CUP), \( \Delta\epsilon\,\Delta m\ge\hbar/2 \). Independence across coherence cells of size \( \ell_{\rm coh} \) amplifies the bound to \( \Delta\epsilon\,\Delta m\ge(\hbar/2)\,N_{\rm eff} \) with \( N_{\rm eff}=D/\ell_{\rm coh} \), extending quantum uncertainty to cosmic baselines. A single area-diffusion parameter provides an operational unification of Planck and Hubble times across \( {\sim}\,61 \) orders of magnitude. Applied to black-hole horizons, the CUP reproduces Hawking's temperature exactly. For de~Sitter space, a naive 1/H correlation window overshoots by a factor \( \pi \), while KMS/Unruh calibration (\( \tau=\pi/H \)) recovers the standard Gibbons–Hawking result \( T_{\rm dS}=\hbar H/(2\pi k_B) \). Unlike generalised or extended uncertainty principles that deform the position--momentum commutator, the CUP introduces a new conjugate pair (\( \epsilon,m \)) while leaving the Heisenberg sector intact. These results position the CUP as an emergent, testable quantum--informational constraint on cosmological observables rather than an added axiom.
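The ~61-orders claim is simple to verify with standard values (textbook numbers, not taken from the paper):

```latex
t_P = \sqrt{\hbar G / c^{5}} \approx 5.39\times10^{-44}\,\mathrm{s},
\qquad
t_H = H_0^{-1} \approx 4.4\times10^{17}\,\mathrm{s}
\quad (H_0 \approx 70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}),
\qquad
\frac{t_H}{t_P} \approx
\frac{4.4\times10^{17}}{5.39\times10^{-44}} \approx 8\times10^{60},
```

i.e., about 61 orders of magnitude between the Planck and Hubble timescales.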

Article
Biology and Life Sciences
Plant Sciences

Swetaleena Mishra, Suchismita Prusty, Sowmya Poosapati, Durga Madhab Swain, Ranjan Kumar Sahoo

Abstract: Salinity stress is one of the major obstacles worldwide to glycophytic crop production, including rice. It alters cellular metabolism, causing significant crop damage and substantial reductions in yield. Through genetic engineering, oxidative stress can be decreased and photosynthetic capability increased by using C3 transgenic plants that produce C4 enzymes such as phosphoenolpyruvate carboxykinase (PEPCK) at a high level. In this research, we evaluated the efficiency of transgenic rice plants (Oryza sativa L. cv. IR64) over-expressing PEPCK genes in countering salinity stress and increasing photosynthetic efficiency. The T1 transgenics showed increased levels of several biochemical markers, including ascorbate peroxidase (APX), malondialdehyde (MDA), glutathione reductase (GR), and guaiacol peroxidase (GPX) activities, suggesting an effective antioxidant defense mechanism that helps the plants cope with oxidative damage driven by salt stress. Photosynthetic parameters such as chlorophyll content, net photosynthetic rate, intercellular CO2 content, and stomatal conductance were elevated in transgenic plants compared with the control plants (null segregants). The transgenics also exhibited better agronomic characteristics than the control plants. Our findings add conclusive evidence of the PEPCK gene's potential role in regulating the salt stress response and tolerance of rice plants.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Mohsen Mostafa

Abstract: Deep learning classifiers deployed in scientific applications often encounter inputs that violate physical laws (e.g., due to sensor failure or corruption). Standard methods cannot detect such violations and may produce confident but wrong predictions. We propose UA-PBR, a framework that combines a physics-informed autoencoder (to detect physics violations) with a Bayesian CNN (to quantify predictive uncertainty). Inputs are rejected if either the PDE residual exceeds a threshold or the predictive entropy is too high. As a proof-of-concept, we evaluate UA-PBR on a synthetic Darcy flow dataset (32 × 32 grid) under severe computational constraints (Google Colab, 10 seeds). Despite these limitations, UA-PBR reduces classification risk by over 90% on heavily corrupted samples while accepting 89.7% of clean inputs with 99.99% accuracy on accepted samples. Ablation studies confirm that both components contribute synergistically. These preliminary results on a synthetic benchmark illustrate the potential of physics-aware rejection and motivate further investigation with larger-scale experiments. Code is available at: https://github.com/UA-PBR/UA-PBR.
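The two-gate rejection rule is straightforward to express. A minimal sketch assuming a residual score from the physics-informed autoencoder and MC-dropout class probabilities (thresholds and names are illustrative, not the repository's values):

```python
import numpy as np

def predictive_entropy(mc_probs):
    """mc_probs: (n_mc, n_classes) softmax outputs from MC-dropout passes.
    Entropy of the mean predictive distribution, in nats."""
    p = mc_probs.mean(axis=0)
    return -np.sum(p * np.log(p + 1e-12))

def accept(pde_residual, mc_probs, tau_res=0.05, tau_ent=0.5):
    """Reject if the input violates the PDE (high residual) OR the
    Bayesian CNN is uncertain (high entropy); accept otherwise."""
    return pde_residual <= tau_res and predictive_entropy(mc_probs) <= tau_ent

mc = np.array([[0.90, 0.10], [0.85, 0.15], [0.95, 0.05]])
print(accept(0.01, mc))   # True: physics-consistent and confident
print(accept(0.30, mc))   # False: physics violation -> reject
```

The OR-structure of the rejection is what the ablation studies probe: either gate alone misses failure modes that the other catches.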
