Article
Engineering
Other

Sofianos Panagiotis Fotias

,

Eirini Maria Kanakaki

,

Afzal Memon

,

Anna Samnioti

,

Jahir Khan

,

John Nighswander

,

Vassilis Gaganis

Abstract: Differential Liberation Expansion (DLE) and viscosity tests are core elements of the Pressure–Volume–Temperature (PVT) laboratory suite used to characterize reservoir oils under depletion and to support compositional modeling and reservoir simulation. Nevertheless, both DLE and viscosity testing remain expensive and time-consuming due to specialized equipment, strict operating procedures, and the need for experienced laboratory personnel. Building on our prior work that introduced the proximity-informed Local Interpolation Model (LIM) framework for Constant Composition Expansion (CCE), this study demonstrates how the same end-to-end, neighborhood-based workflow is applied to DLE and viscosity test data. A target fluid is embedded in a compositional–thermodynamic descriptor space and paired with a small set of thermodynamically similar fluids drawn from a PVT data archive. Within this locality, LIM is used to infer DLE behavior by combining local interpolation for key scalar quantities (e.g., saturation-point and endpoint PVT values) with shape-preserving reconstruction of pressure-dependent curves. For viscosity, the same approach reconstructs the oil-viscosity curve across the undersaturated and saturated regions. Evaluation on a proprietary database of DLE and viscosity tests shows strong agreement across diverse fluids for both DLE and oil-viscosity trends. This supports reducing reliance on new DLE and viscosity measurements while maintaining engineering-grade fidelity in reservoir-engineering and simulation workflows. The approach has been fully automated in software, so field operators can set it up and apply it directly to their own databases, significantly reducing fluid sampling and laboratory analysis costs. Moreover, the proposed AI model does not draw on other parties' data, respecting data privacy and data ownership.
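The abstract's "local interpolation for key scalar quantities" can be illustrated with a generic neighborhood scheme. The sketch below uses inverse-distance weighting over the k nearest archive fluids in a descriptor space; the descriptors, property values, and weighting rule are placeholders for illustration, not the paper's actual LIM formulation.

```python
import numpy as np

def local_idw_estimate(target, descriptors, values, k=3, eps=1e-12):
    """Estimate a scalar PVT property (e.g., saturation pressure) for a
    target fluid by inverse-distance weighting over its k most similar
    archive fluids in a compositional-thermodynamic descriptor space.
    Illustrative only; the paper's LIM details are not public here."""
    d = np.linalg.norm(descriptors - target, axis=1)  # similarity = distance
    idx = np.argsort(d)[:k]                           # k nearest neighbors
    w = 1.0 / (d[idx] + eps)                          # closer fluids weigh more
    return float(np.sum(w * values[idx]) / np.sum(w))

# Toy archive: five fluids with two descriptors and a known saturation
# pressure each (all numbers invented for the example).
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [5.0, 5.0]])
psat = np.array([100.0, 110.0, 105.0, 115.0, 300.0])
est = local_idw_estimate(np.array([0.5, 0.5]), X, psat, k=4)
```

The distant outlier fluid is excluded by the neighborhood selection, which is the core idea of restricting inference to thermodynamically similar fluids.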

Review
Engineering
Civil Engineering

Kaustav Chatterjee

,

Mohak Desai

,

Joshua Li

Abstract: Over the last two decades, there has been a paradigm shift in geotechnical engineering driven by advances in sensing, communication, and data-driven techniques. These advancements have enhanced the safety and reliability of geotechnical infrastructure through real-time monitoring and automated decision-making. More recently, Large Language Models (LLMs) have emerged as advanced data-driven techniques contributing to automated risk assessment of geotechnical infrastructure. LLMs are advanced deep learning models widely used to solve complex numerical problems, analyze large volumes of data, and generate human language. This paper presents a comprehensive review of the applications of LLMs in geotechnical engineering. The integration of LLMs into geotechnical engineering has demonstrated significant advances in slope stability analysis, bearing capacity computation, numerical analysis, soil-structure interaction, and underground infrastructure. By summarizing the latest research findings and practical applications, this paper underscores the potential of LLMs to advance and automate various processes in geotechnical engineering. The findings presented here not only provide insights into current LLM-based geotechnical practices but also emphasize the instrumental role LLMs can play in advancing geotechnical engineering, ultimately ensuring a safer and more sustainable future. Lastly, this paper highlights the different LLM capabilities that can be used to empower geotechnical engineers.

Article
Biology and Life Sciences
Parasitology

Everson R. de Souza Teles

,

Wanderley de Souza

Abstract: Toxoplasma gondii, the causative agent of toxoplasmosis, a widely distributed disease, is an intracellular parasite that invades host cells of different tissues using specialized endocytic activity. Recent studies suggest that tunneling nanotubes (TNTs), thin cell surface projections, may participate in the parasite-host cell interaction process. We report results on the involvement of host-cell TNTs in the adhesion of T. gondii tachyzoites to epithelial LLC-MK2 cells and their subsequent internalization. Microscopy analysis showed that incubating cells in 0.45 M sucrose induces reversible assembly of TNTs without affecting cell viability. The presence of extended TNTs correlated with an increase in parasite adhesion and a reduction in parasite entry, suggesting a structural or signaling role in mediating adhesion. TNTs assembled following sucrose incubation contain both actin and tubulin components. These results highlight the functional relevance of TNTs in the T. gondii-host cell interaction, especially in parasite adhesion, opening new perspectives for understanding this process.

Article
Medicine and Pharmacology
Neuroscience and Neurology

Bernard Delalande

Abstract: Electrotherapy and neurostimulation universally employ the rectangular (square) waveform as their standard stimulation signal. This article demonstrates that this choice constitutes a fundamental error of physical, mathematical, and neurophysiological nature, perpetuated since the mid-twentieth century through three converging factors: insufficient signal theory training in medical and paramedical curricula; technological drift toward ever-steeper wavefronts perceived as progress; and inadequate spectral disclosure by medical device manufacturers. We recall that the founders of electrical neurophysiology—Du Bois-Reymond (1843) and Helmholtz (1850)—stimulated with smooth-envelope signals, unintentionally close to membrane physiological requirements. We analyze the technological stratigraphy that progressively established the square wave as the unquestioned norm, and identify two erroneous assertions in the French foundational literature (Dumoulin & de Bisschop, 1987; Crépon, 1994) as crystallization points of the error in clinical practice. We present spectral and energetic calculations demonstrating the inadequacy of the rectangular signal relative to the biological bandwidth of the excitable membrane: for a 600 μs rectangular pulse at 50 Hz, Fourier harmonics extend to 81,650 Hz, wavefront components exceed 5 MHz, and the calculated peak power reaches 7.75 × 10⁸ W, against 6.1 × 10⁵ W for the equivalent sinusoidal signal. We propose an optimal biomimetic signal described by a parametric Bézier curve whose inflection points correspond to the conformational time constants of voltage-gated ion channels as described by the Hodgkin-Huxley model (1952). This zero-mean signal respects the natural opening and inactivation kinetics of sodium and potassium channels, concentrating its energy within the physiologically relevant bandwidth.
We discuss documented clinical consequences of the fundamental error: peri-electrode fibrosis in deep brain stimulation (DBS), progressive impedance drift, and the relative inefficacy of consumer TENS devices. This work is published open access under Creative Commons CC-BY 4.0. All parameters of the optimal signal are fully described herein, establishing permanent publication priority and excluding subsequent patent filing on this concept.
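The spectral argument rests on the standard Fourier series of a periodic rectangular pulse. The sketch below reproduces that textbook calculation for the 600 μs / 50 Hz case mentioned in the abstract; the 0.1% cutoff used to bound the spectrum is an arbitrary illustrative threshold, not the authors' criterion.

```python
import numpy as np

# Periodic rectangular pulse: amplitude A, width tau, repetition rate f0.
# Fourier magnitude of harmonic n is |c_n| = (A*tau/T) * |sinc(n*tau/T)|,
# using numpy's normalized sinc(x) = sin(pi*x)/(pi*x).
A, tau, f0 = 1.0, 600e-6, 50.0          # 600 us pulse at 50 Hz (from the abstract)
T = 1.0 / f0
n = np.arange(1, 200_001)               # harmonic indices up to 10 MHz
c = (A * tau / T) * np.abs(np.sinc(n * tau / T))

first_null_hz = 1.0 / tau               # first spectral null at f = 1/tau
# Highest harmonic whose magnitude still exceeds 0.1% of the fundamental's:
above = n[c > 1e-3 * c[0]]
f_max_hz = float(above[-1] * f0) if above.size else 0.0
```

Because the sinc envelope decays only as 1/n, significant harmonics persist far beyond the first null near 1.67 kHz, which is the qualitative point the abstract makes about the square wave's spectral spread.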

Review
Medicine and Pharmacology
Orthopedics and Sports Medicine

Fabian Poletti

Abstract: Background: Knee osteoarthritis (OA) is traditionally framed as a mechanical “wear-and-tear” disorder. Contemporary evidence supports OA as a whole-joint, immunometabolic and neurosensory disease in which low-grade inflammation modulates tissue homeostasis and pain. In midlife women, the menopause transition coincides with abrupt endocrine changes that plausibly amplify inflammatory tone, alter neuromuscular function, and increase pain sensitisation—often with symptoms disproportionate to imaging. Objective: To synthesise the biological rationale and clinical evidence linking menopausal hormonal decline with OA-relevant inflammatory and neuromuscular mechanisms, and to propose a collaborative orthopaedic model integrating menopause health expertise. Methods: Narrative synthesis of mechanistic, epidemiologic, and clinical trial data on OA inflammation, menopause-related musculoskeletal symptoms, and hormone therapy effects on pain/function and musculoskeletal resilience. Evidence is interpreted with attention to outcome type (symptoms vs structural progression), confounding in observational studies, and timing/continuity considerations. Key Findings: (1) OA pain and disability correlate imperfectly with radiographic severity, consistent with synovitis, adipose-derived mediators, subchondral remodelling, and peripheral/central sensitisation. (2) Perimenopause is associated with increased prevalence of musculoskeletal pain, suggesting a biological inflection period rather than linear age-related decline. (3) Oestrogen decline plausibly shifts immune signalling toward pro-inflammatory pathways (e.g., IL-6/TNF-α/NF-κB), while progesterone and androgen changes may influence sleep quality, recovery capacity, muscle strength, and neuromuscular control—factors strongly linked to knee OA outcomes. 
(4) Menopausal hormone therapy (when appropriately indicated and supervised) may reduce joint pain in some women and may improve musculoskeletal resilience; however, evidence for disease-modifying structural effects on OA remains limited and confounded. Clinical Implications: Orthopaedic care for midlife women with knee OA should include endocrine-aware phenotyping, screening for menopause-transition symptom clusters, and structured referral pathways to women’s hormonal health specialists. Optimising the systemic biological environment may enhance the durability of rehabilitation, regenerative strategies, and surgical outcomes. Conclusion: Menopause transition biology is a clinically relevant modifier of OA symptom expression and functional decline. Integrating hormonal health expertise into orthopaedic pathways is not scope expansion—it is precision care aligned with modern OA biology.

Review
Physical Sciences
Applied Physics

Tianxi Sun

Abstract: Human bodies evolved for Earth, not the cosmos. This paper argues that silicon-based intelligence offers the only viable path beyond this limitation. Recent breakthroughs in pressure-free silicon batteries enable true energy autonomy in space, while two-dimensional silicene provides radiation-hardened cognition. The convergence of energy and information in a single material has no analogue in biology and constitutes the definitive evidence for what we term the Silicene Event. We derive a figure of merit (F = E × T × R / M) showing that silicon outperforms biology by a factor of approximately 9 × 10⁶. We propose that this transition, which is already underway, constitutes a new planetary interval: the Silicene Event, defined by silicon-based intelligence. Far from ending the human story, it represents its most profound continuation.
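The figure of merit is stated as F = E × T × R / M, but the abstract does not define the units or the values entering it. A minimal sketch of the arithmetic, with openly invented placeholder numbers (not the paper's data):

```python
# Illustrative evaluation of the paper's figure of merit F = E * T * R / M.
# All input values below are hypothetical placeholders, NOT the paper's data.
def figure_of_merit(E, T, R, M):
    """F = energy density x temperature tolerance x radiation hardness / mass."""
    return E * T * R / M

F_silicon = figure_of_merit(E=2.0, T=400.0, R=100.0, M=1.0)  # hypothetical units
F_biology = figure_of_merit(E=1.0, T=40.0, R=1.0, M=1.0)
ratio = F_silicon / F_biology
```

Whatever the inputs, the claimed ~9 × 10⁶ advantage would have to emerge from this same multiplicative structure, so each factor's scale matters as much as any single one.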

Article
Public Health and Healthcare
Public Health and Health Services

Ntandazo Dlatu

,

Lindiwe Modest Faye

Abstract: Background: Treatment adherence and outcomes for drug-resistant tuberculosis (DR-TB) continue to be sub-par in rural South Africa, where structural health system limitations, comorbid conditions, and diverse resistance patterns make clinical management more challenging. This study aimed to assess how demographic, clinical, and programmatic factors, including a Community Engagement–Clinical Governance (CE–CG) implementation period, affect DR-TB treatment outcomes using explanatory predictive modelling. Methods: A retrospective cohort study was conducted using routine programme data from 694 DR-TB patients. Complete-case analysis was performed for multivariable modelling (n = 282). Logistic regression and decision tree models were used to examine the relationships between treatment success and selected predictors, including age, sex, treatment regimen, resistance phenotype, comorbidities, and the CE–CG implementation period. Model discrimination and performance were evaluated using receiver operating characteristic (ROC) curves, pseudo-R² statistics, likelihood ratio tests, and multicollinearity diagnostics. Results: The cohort had a mean age of 40.7 years, and 58.8% of patients were male. Overall treatment success was 59.9%. Severe resistance phenotypes were rare (1.7%) but clinically significant. Comparative analysis showed no notable demographic or outcome differences between included and excluded patients, indicating minimal selection bias. In adjusted models, treatment initiation during the CE–CG implementation period was significantly linked to lower odds of treatment success (adjusted odds ratio [aOR] = 0.443; 95% CI: 0.240–0.818; p = 0.009). Severe resistance phenotypes were strongly negatively associated with treatment success (aOR = 0.303; p = 0.056). Logistic regression models had limited discriminatory ability (AUC: 0.523–0.548), while the decision tree model showed modest improvement (AUC: 0.626).
Overall, the model’s explanatory power was limited (pseudo-R² = 0.029), although no evidence of multicollinearity was found. Conclusions: Programmatic implementation periods and resistance severity were key factors influencing treatment outcomes in this rural DR-TB cohort. Although the predictive accuracy was modest, the modelling approach revealed structural and programmatic vulnerabilities that impacted treatment success. Enhancing clinical governance, improving programme documentation, and expanding community-engaged adherence strategies may improve DR-TB treatment outcomes. Future predictive models should incorporate programmatic indicators alongside longitudinal adherence data and social determinants of health to boost explanatory power and guide targeted interventions.
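For readers unfamiliar with the odds-ratio statistics quoted above, the crude (unadjusted) version of the calculation can be sketched from a 2×2 table. The counts below are hypothetical, and the paper's aORs come from multivariable logistic regression, not from this simple formula.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and 95% Wald confidence interval from a 2x2 table:
        a = success & exposed,   b = failure & exposed,
        c = success & unexposed, d = failure & unexposed.
    Illustrative arithmetic only; adjusted ORs additionally condition on
    covariates via logistic regression."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)       # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only:
or_, lo, hi = odds_ratio_ci(a=40, b=60, c=90, d=60)
```

An OR below 1 with a confidence interval excluding 1, as for the CE–CG period in the study (aOR = 0.443, 95% CI 0.240–0.818), indicates lower odds of treatment success in the exposed group.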

Article
Medicine and Pharmacology
Obstetrics and Gynaecology

İnci Öz

,

Engin Ulukaya

Abstract: Background: Gynecologic cancers constitute a major public health burden worldwide, with marked regional and temporal variations influenced by demographic changes, healthcare access, and screening practices. In Turkey, contemporary nationwide data capturing recent temporal trends—particularly during the COVID-19 pandemic—remain limited. Methods: This nationwide, retrospective observational study integrated six independent gynecologic cancer datasets obtained from the İstinye University Dataset Sharing Platform, comprising 22,468 adult patients (≥18 years) diagnosed between 2014 and 2024 across 33 hospitals in Turkey. Cancer types included ovarian, endometrial, cervical, vulvar, vaginal, and fallopian tube malignancies. Annual case counts were analyzed, and normalized admission rates were calculated per 100,000 unique patient admissions to account for variations in healthcare utilization. Cancer-type–specific analyses and descriptive pandemic-period sensitivity analyses were performed. Results: The mean age of the study population was 62.75 ± 13.95 years, with most patients aged over 60 years (60.9%). Ovarian cancer was the most frequent diagnosis (40.8%), followed by endometrial (34.7%) and cervical cancers (20.7%), while rarer malignancies accounted for less than 4% of cases. Annual diagnoses increased progressively from 2014 to 2019, followed by a marked rise in 2020 and a pronounced peak in 2021, during which 30.6% of all cases were recorded. When normalized to total unique patient admissions, gynecologic cancer admission rates ranged from 45.2–55.1 per 100,000 in the pre-pandemic period and peaked sharply in 2021 at 239.1 per 100,000 admissions. Cancer-type–specific normalization revealed parallel temporal patterns across all malignancies, with the most pronounced increases observed for ovarian (104.1 per 100,000) and endometrial cancers (77.4 per 100,000) in 2021. 
Overall mortality was 2.8%, with a mean survival of 13.13 ± 17.79 months among exitus patients, and no significant differences in survival duration across cancer types. Conclusions: This nationwide multicenter analysis demonstrates a substantial increase in gynecologic cancer presentations in Turkey over the past decade, with a pronounced, system-wide surge during the COVID-19 pandemic period. Normalized analyses indicate that this increase exceeded overall healthcare utilization, suggesting pandemic-related diagnostic delays and backlog effects. Continuous surveillance using utilization-adjusted metrics is essential to accurately interpret temporal trends and to guide future cancer control and healthcare resilience strategies.
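The utilization-adjusted metric described above is simple arithmetic: diagnosed cases per 100,000 unique patient admissions in the same period. A sketch with hypothetical denominators chosen only to reproduce the reported 45.2 and 239.1 rates:

```python
# Utilization-adjusted rate from the abstract: cases per 100,000 unique
# patient admissions. The admission denominators below are invented for
# illustration; only the resulting rates match the reported figures.
def rate_per_100k(cases, unique_admissions):
    return cases / unique_admissions * 100_000

pre_pandemic = rate_per_100k(cases=452, unique_admissions=1_000_000)
peak_2021 = rate_per_100k(cases=2_391, unique_admissions=1_000_000)
```

Normalizing by admissions rather than reporting raw counts is what lets the authors argue that the 2021 surge exceeded overall healthcare utilization instead of merely tracking it.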

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Olena Litovska

,

Myroslav Mishchuk

,

Olena Pavliuk

Abstract: Wearable sensors enable continuous monitoring of human activity and physiological state, with applications in workplace health monitoring, occupational safety, sports performance analysis, and rehabilitation. However, effective use of these devices requires specialized data-processing algorithms and machine-learning (ML) methods. This work proposes a methodology for assessing workers’ physiological load using ML, combining accelerometer and photoplethysmography (PPG) signal processing. An ensemble Random Forest algorithm is used for activity recognition, while the KID-PPG deep learning model is applied for heart rate (HR) estimation. A personalized physiological load assessment framework normalizes effort indices against demographic-group-specific distributions defined by sex, age, and activity intensity. The methodology was validated on the PPG-DaLiA dataset comprising 15 participants with diverse demographic profiles across eight daily activities. Experiments demonstrated high accuracy in activity recognition (macro F1-score of 90.73%) and robust HR estimation even in the presence of motion artifacts (MAE below 10 bpm). The personalized assessment revealed that participant age substantially influences physiological-load patterns, confirming that demographic-aware normalization is essential for accurate workload interpretation. The main factors influencing system performance have been identified, and directions for improving the models across diverse user groups and limited-signal-quality conditions are discussed.
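The "demographic-aware normalization" idea can be sketched as an empirical-percentile lookup within a worker's demographic group. The effort values below are invented, and the paper's actual effort index and grouping scheme are not specified in the abstract.

```python
import numpy as np

def normalized_load(effort, group_efforts):
    """Percentile of a worker's effort index within the empirical
    distribution of their demographic group (sex / age / activity
    intensity). A sketch of demographic-aware normalization; the paper's
    exact index definition is not given in the abstract."""
    group = np.asarray(group_efforts, dtype=float)
    return float(np.mean(group <= effort))  # empirical CDF value in [0, 1]

# Hypothetical effort-index distribution for one demographic group:
group = [0.8, 1.0, 1.1, 1.3, 1.6, 2.0]
p = normalized_load(1.3, group)
```

The same absolute effort maps to different percentiles in different groups, which is why the abstract reports that age substantially influences physiological-load interpretation.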

Article
Social Sciences
Political Science

Boris Gorelik

Abstract: When tightly-knit communities suddenly show electoral volatility, does it signal weakening group identity, or does it reveal something deeper? This question matters wherever centralized authority structures shape bloc voting. Conventional wisdom interprets such shifts as boundary erosion. This paper presents evidence for the opposite. Drawing on an extreme-case design, I exploit a natural experiment — Israel’s 2019–2022 political deadlock — to track voter transitions within ultra-Orthodox communities, where ethnically distinct subgroups maintain near-total political separation despite shared religious practice. Using ecological inference on ballot-box data from five population centers across six elections (2019–2022), I find exceptionally high baseline party loyalty (90–95%), a dramatic disruption during the March 2020 – March 2021 transition when switching surged to 12–19%, and a swift return to high loyalty within 13 months — though the shifted voters remained with their new parties. The synchronized switch of voting loyalty across geographically dispersed cities, occurring without residential mobility, is consistent with elite-mediated bloc realignment rather than emerging voter independence. Paradoxically, the capacity for mass switching may reflect stronger, not weaker, institutional control. These findings challenge how scholars of party–voter linkages interpret electoral volatility in identity-based voting blocs: apparent instability may reflect disciplined coordination, and what looks like boundary erosion may actually reveal institutional strength operating through collective action.

Review
Public Health and Healthcare
Public Health and Health Services

Kelly Rocio Chacon-Acevedo

,

Ana Maria Castillo

,

John Alexander Castro-Muñoz

,

Yonatan Ferney Rojas

,

Andrea Bermudez-Rodriguez

,

Ana Maria Rojas Gomez

Abstract: Implicit bias (automatic attitudes or stereotypes held outside conscious awareness) may influence clinicians’ communication, diagnosis, and treatment decisions, contributing to inequities in care. We conducted a scoping review to map the measurement strategies used to assess implicit bias among health professionals and students in healthcare and training settings. Following Joanna Briggs Institute guidance and PRISMA-ScR, we searched PubMed, Embase, BVS, Google Scholar, and institutional repositories for studies published through November 2025; two reviewers independently screened studies and charted data (the protocol was developed a priori, initially submitted internally within the organization, and subsequently uploaded to OSF). Of 1,864 records, 93 studies from 28 countries were included. We identified 57 bias domains, most often race/ethnicity, weight, and sexual orientation. Across studies, 42 unique instruments were reported; the Implicit Association Test was most common, while psychometric validation and administration details were frequently limited, constraining comparability and interpretation. Evidence gap mapping showed concentration in academic and hospital settings, with fewer studies in primary care or community contexts and limited attention to age, disability, and intersectionality-related biases. The evidence base is growing but fragmented; future work should prioritize standardized administration and reporting, stronger validation, and tools that better capture automatic responding across diverse identities and care settings to support education and equity-oriented interventions.

Article
Physical Sciences
Theoretical Physics

Sacha Mohamed

Abstract: We introduce an operational notion of transport latency, which we call quantum information copy time: the earliest time at which a receiver confined to a region B can certify, with prescribed advantage, which of two global hypotheses was prepared by local operations in a distant sender region A. The benchmark object is information-theoretic—the Helstrom advantage on B, given by the trace distance between reduced states—and it also admits receiver-restricted refinements that make measurement constraints explicit, including few-body and moment-channel receivers. We derive the corresponding kinematic locality constraints for Hamiltonian and Lindbladian dynamics with Lieb–Robinson tails, as well as for circuits and quantum cellular automata with strict light cones. We then establish a diffusive benchmark in the quantum symmetric simple exclusion process (Q-SSEP): for locally prepared charge-biased hypotheses, the Helstrom copy time obeys an unconditional diffusion-limited lower bound expressed in terms of the diffusion constant D and the static susceptibility χ. For closed Hamiltonian systems, we formulate a projection-based route—with assumptions stated explicitly—that relates restricted copy times to a single slow transport pole on a diagnostically checkable time window. We complement the analytical framework with conservative exact-diagonalization diagnostics in the XXZ chain and with a bundled TEBD/MPS reference implementation plus convergence protocol (Supplementary S2 and Code SC1), validated against exact evolution at small sizes. Finally, we compare copy time with scrambling diagnostics based on out-of-time-ordered correlators and identify regimes in which conservation laws delay certifiability well beyond the ballistic operator-growth front.

Article
Computer Science and Mathematics
Applied Mathematics

Hua-Shu Dou

Abstract: Existence of global smooth solutions to the three-dimensional (3D) Navier-Stokes equations is disproved for pressure-driven flows with no-slip boundary conditions. This study is rigorously grounded in Sobolev space analysis. We show that the solution breakdown arises from regularity degeneration instead of velocity blowup. For disturbed laminar plane Poiseuille flow, the instantaneous velocity field is decomposed into a time-averaged flow and a disturbance flow, both characterized by their regularity in Sobolev spaces. When the Reynolds number is larger than the critical Reynolds number, the nonlinear interaction modifies the mean flow profile, and the disturbance amplitude grows exponentially. This amplification leads to a local cancellation between viscous terms of the mean flow and the disturbance flow, resulting in the total viscous term (i.e., the Laplacian term) vanishing locally at the critical point $(\boldsymbol{x}^*, t^*)$. The locally vanishing viscous term leads to zero velocity by the elliptic operator estimate, which contradicts the non-vanishing incoming velocity, leading to the formation of a singularity. This singularity induces a velocity discontinuity, which causes the $L^\infty$-norm of the velocity gradient to diverge, violating the definition of a global smooth solution in Sobolev spaces. The analysis is strictly grounded in partial differential equations (PDE) theory, with all key steps validated by Sobolev space properties and a priori estimates.

Review
Biology and Life Sciences
Biochemistry and Molecular Biology

Bernard Delalande

,

Hirohisa Tamagawa

Abstract: The origin of the transmembrane potential (TMP) in living cells is one of the foundational questions of cellular physiology. The dominant explanatory framework—the ion-pump model—attributes TMP to the active, ATP-dependent displacement of ions (principally Na⁺ and K⁺) across the plasma membrane by dedicated protein complexes. This view, consolidated through the seminal work of Hodgkin and Huxley and the structural characterisation of the Na,K-ATPase, has shaped decades of research in neuroscience, cardiology, and cell biology. A competing framework, the murburn concept developed by Manoj and colleagues, proposes a fundamentally different mechanism. According to this view, TMP is not the product of mechanical ion pumping but emerges spontaneously from asymmetric redox chemistry at the membrane interface. Diffusible reactive species (DRS), generated continuously during aerobic respiration, accumulate differentially on either side of the membrane, producing effective charge separation analogous to that observed in electrochemical cells. This perspective examines both frameworks critically, identifies the core points of disagreement, and evaluates the explanatory scope and empirical challenges of each. We argue that the murburn framework raises legitimate and underexplored questions about the thermodynamic sufficiency of the ion-pump model, and that a productive synthesis may lie in recognising redox chemistry as a primary contributor to membrane polarisation, rather than a secondary consequence of it.

Article
Social Sciences
Decision Sciences

Yuang-Hsiang Chao

,

Yao-ming Hong

,

Amit Kumar Sah

,

Mei-Chuan Lee

,

Su-Hwa Lin

Abstract: The global regulatory landscape is shifting from voluntary corporate social responsibility (CSR) to mandatory Environmental, Social, and Governance (ESG) disclosure. This study investigates the causal impact of mandatory ESG disclosure on firm value and operational decarbonization using a comprehensive balanced panel of 1,612 listed firms from the EU and the US between 2018 and 2025. Employing a Difference-in-Differences (DiD) design and an event study analysis, our empirical results yield three primary findings. First, consistent with Agency Theory, mandatory disclosure significantly increases firm value (Tobin’s Q) immediately following the 2021 regulatory shock (Post×Treat = 0.5212, p < 0.01), indicating that standardized transparency reduces information asymmetry (H1). Second, we document a progressive and cumulative reduction in carbon intensity, providing robust evidence of substantive execution rather than mere ceremonial compliance (H2). The "downward-sloping" trajectory in the event study confirms that the mandate drives real-world operational transitions over time, refuting Decoupling Theory. Third, we find that internal governance mechanisms play a crucial moderating role in this transition (H3); the reduction in carbon intensity is significantly more pronounced in firms with higher board independence and established ESG committees. These findings suggest that "hard-law" transparency mandates effectively align corporate incentives with global climate goals. The synergy between external regulatory pressure and internal governance oversight is essential for bridging the "talk-walk" gap, offering critical implications for global policymakers designing the next generation of climate-related reporting standards.
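The canonical two-group, two-period difference-in-differences estimator behind this design can be sketched as follows. The Tobin's Q values are invented, and the paper's panel DiD includes controls and firm/year structure well beyond this toy version.

```python
import numpy as np

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Canonical two-group, two-period difference-in-differences:
    (treated post-pre change) minus (control post-pre change).
    Illustrative only; the paper estimates a panel DiD with covariates."""
    return (np.mean(treat_post) - np.mean(treat_pre)) - \
           (np.mean(ctrl_post) - np.mean(ctrl_pre))

# Hypothetical Tobin's Q samples around the 2021 regulatory shock:
effect = did_estimate(treat_pre=[1.0, 1.2], treat_post=[1.7, 1.9],
                      ctrl_pre=[1.1, 1.3], ctrl_post=[1.2, 1.4])
```

Subtracting the control group's change nets out common time trends, which is what allows a coefficient like Post×Treat = 0.5212 to be read causally under the parallel-trends assumption.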

Article
Medicine and Pharmacology
Endocrinology and Metabolism

Chunlan Yao

,

Yuxing Liu

,

Mei Yang

,

Yanzu Wang

,

Yanshan Li

,

Caijin Yan

Abstract: Background: The purpose of this research is to explore the relationship between vitamin B6 status (measured by pyridoxal-5′-phosphate, PLP levels) and central obesity in children and adolescents. Additionally, the study seeks to examine how this relationship might influence the link between exposure to tobacco and central obesity in this age group. Methods: This cross-sectional study utilized data from the National Health and Nutrition Examination Survey spanning 2005 to 2010. Central obesity was identified by waist circumference measurements that were equal to or exceeded the 90th percentile, with adjustments made for age and gender. To assess the relationship between tobacco exposure and central obesity, both weighted univariate and multivariate logistic regression analyses were applied. An analysis was performed using PLP concentration categories (PLP ≥ 53.74 nmol/L and PLP < 53.74 nmol/L) to determine the impact of tobacco exposure on central obesity in each respective PLP group. Results: The final sample included 5,865 participants. Higher tobacco exposure [odds ratio (OR): 1.25, 95% confidence interval (CI): 1.03–1.53, P = 0.027] and lower PLP levels (OR: 1.28, 95% CI: 1.05–1.57, P = 0.016) were each independently linked to an increased risk of central obesity in children and adolescents. Among children and adolescents with lower PLP levels, cotinine exposure was related to an increased risk of central obesity (OR: 1.36, 95% CI: 1.05–1.76, P = 0.022), particularly in specific subgroups: individuals under the age of 12 years, males, and those with six hours or less of daily screen time. Conclusion: Our results underscore the critical role of nutritional status, specifically vitamin B6 levels, in modulating the relationship between environmental exposures and obesity risk. Future initiatives aimed at primary prevention could be enhanced by recognizing the link between central obesity and adjustable lifestyle elements.

Article
Engineering
Mechanical Engineering

Luis Costero Sánchez

,

Sagar Sadananda Bhat

,

Klaus Höschler

Abstract: This study presents, for the first time, a comprehensive multi-fidelity aero-thermo-fluid framework (spanning 0D-analytical, 1D, and 3D domains) applied to the analysis of a structural oil-to-air Fan Outlet Guide Vane Cooler (FOGVC) in a jet engine. Addressing the need for efficient thermal management in next-generation engines, a hierarchical approach is established to characterize both thermal dissipation and pressure drop performance. The framework compares five simulation levels—ranging from high-fidelity conjugate heat transfer to 0D analytical models—across two distinct internal geometries (a rectangular inverted-U and a circular coil) covering different flow regimes. The research quantifies the trade-offs between physical fidelity and computational cost, establishing a decision-making criterion for the design of complex structural coolers. Results demonstrate that while 0D analytical methods provide high accuracy-to-speed ratios for temperature prediction, they exhibit significant deviations in pressure drop estimation and fail to capture local thermal gradients critical to structural integrity, for which high-fidelity, fully coupled 3D simulations are indispensable. Furthermore, the analysis reveals fundamental limitations in current passive heat exchanger designs under extreme operating conditions, suggesting that a paradigm shift toward active or adaptive components is required to meet future dissipation targets.
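0D analytical heat-exchanger levels of the kind compared above typically build on relations such as the effectiveness-NTU method. The sketch below shows the standard counterflow relation as a generic building block; the FOGVC-specific correlations and operating points are not given in the abstract, so all numbers are placeholders.

```python
import math

def effectiveness_counterflow(NTU, Cr):
    """Effectiveness of a counterflow heat exchanger (standard e-NTU relation):
        eps = (1 - exp(-NTU*(1-Cr))) / (1 - Cr*exp(-NTU*(1-Cr))),
    with the Cr -> 1 limit eps = NTU / (1 + NTU).
    Generic 0D building block, not the paper's FOGVC model."""
    if abs(Cr - 1.0) < 1e-12:
        return NTU / (1.0 + NTU)
    e = math.exp(-NTU * (1.0 - Cr))
    return (1.0 - e) / (1.0 - Cr * e)

# Heat duty from effectiveness: q = eps * C_min * (T_hot_in - T_cold_in).
# C_min [W/K] and inlet temperatures [degC] below are hypothetical.
eps = effectiveness_counterflow(NTU=2.0, Cr=0.5)
q = eps * 500.0 * (120.0 - 40.0)
```

Such closed-form levels are cheap and accurate for bulk temperature duty, which matches the paper's finding that 0D methods predict thermal dissipation well while missing local gradients and pressure-drop detail.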

Article
Medicine and Pharmacology
Epidemiology and Infectious Diseases

Ninel Iacobus Antonie

,

Vlad Alexandru Ionescu

,

Gina Gheorghe

,

Crista-Loredana Tiuca

,

Camelia Cristina Diaconu

Abstract: Background/Objectives: Antimicrobial resistance (AMR) remains a major global health threat, strengthening the case for antimicrobial stewardship that limits unnecessary broad-spectrum empiric therapy while preserving timely coverage in severe infection. Large language models (LLMs) are being explored for decision support, but require rigorous offline evaluation before any clinical implementation. Methods: Single-center retrospective paired evaluation at Clinical Emergency Hospital of Bucharest (Internal Medicine, 2020–2024). The unit of analysis was the admission (N = 493), with paired 24 h empiric regimens (clinician-prescribed vs post hoc LLM-recommended via OpenAI API; not visible to clinicians; no influence on care). Local laboratory-derived epidemiology was precomputed from microbiology exports and provided as structured prompt context to approximate information parity with clinicians’ implicit local ecology knowledge. Primary (prespecified) endpoint: any contextual guardrail violation (unjustified carbapenem/antipseudomonal/anti-MRSA under prespecified structured severity/MDR-risk rules), exact McNemar. Key secondary (prespecified): Δ contextual guardrail penalty (LLM − Clin), sign test and Wilcoxon signed-rank (ties reported). Ethics committee approval was obtained. Results: Guardrail violations occurred in 17.0% of clinician regimens vs 4.9% of LLM regimens (paired RD −12.2%; matched OR 0.216, 95% CI 0.127–0.367; McNemar exact p = 1.60 × 10⁻¹⁰). Δ penalty had median 0 with 398/493 ties; among non-ties, improvements (Δ < 0) exceeded adverse shifts (79 vs 16; sign-test p = 3.47 × 10⁻¹¹). Conclusions: In this offline, non-interventional paired evaluation, LLM regimens were associated with fewer prespecified contextual guardrail violations compared to clinician empiric regimens under a rule-based stewardship benchmarking framework. These endpoints strictly quantify concordance with stewardship constraints rather than patient outcomes, necessitating cautious interpretation of secondary and subset analyses. Ultimately, reproducible guardrail-based benchmarking may support subsequent prospective, safety-governed evaluations.
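The primary endpoint is compared with an exact McNemar test, which depends only on the discordant pairs (admissions where exactly one of the two regimens violates a guardrail). A minimal sketch of the two-sided exact binomial form, with made-up discordant counts rather than the study's data:

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact two-sided McNemar p-value from discordant pairs.
    b = pairs where only regimen A violates, c = only regimen B.
    Under H0 (no difference), b ~ Binomial(b + c, 0.5)."""
    n = b + c
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)

# Illustrative discordant counts only (not the N = 493 cohort):
p = mcnemar_exact(2, 8)
```

The concordant pairs (both violate or both comply) drop out entirely, which is why the test remains valid even when, as here, the large majority of pairs are ties.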

Article
Business, Economics and Management
Econometrics and Statistics

Hongying Luo

,

Jian Xu

,

Li Zhu

,

Yifan Fu

Abstract: Against the backdrop of the high-quality development of the digital economy, exploring the impact of data assetisation on corporate investment efficiency with fintech as the core tool is of great significance for driving the steady and orderly development of enterprises. Taking A-share listed companies from 2012 to 2023 as the research sample, and further classifying them into self-use data assets and transactional data assets based on the types of data assets, the study finds: (1) Data assetisation significantly enhances corporate investment efficiency, with self-use data assets demonstrating a stronger driving effect. (2) Mechanism analysis reveals that data assetisation alleviates underinvestment by easing financing constraints and leveraging the "talent effect". Concurrently, it mitigates overinvestment by reducing agency problems and accelerating digital transformation, thereby enhancing investment efficiency. (3) Heterogeneity tests indicate that the positive impact of data assetisation on investment efficiency is more pronounced among growth-stage enterprises, technology-intensive firms, and companies operating in regions with high bank liquidity. (4) Banking fintech positively moderates the enhancement of corporate investment efficiency through data assetisation, with a more pronounced effect on alleviating underinvestment. However, it may also exacerbate overinvestment. Consequently, enterprises should vigorously develop data assetisation, applying different types of data assets to specific use cases to unlock data dividends. This approach supports the scientific development of corporate investment decisions and enhances investment efficiency.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Mohsen Mostafa

Abstract: The choice of activation function is a fundamental design decision in deep learning, yet most popular options like ReLU, GELU, or Swish are static and treat all inputs uniformly. This one-size-fits-all approach breaks down in the presence of noisy or corrupted data, where the optimal non-linearity should depend on the input’s statistical context. In this paper, we introduce Bayesian Probabilistic Adaptive Sigmoidal Activation (Bayesian PASA), a novel activation function that dynamically adapts its behavior based on the input’s uncertainty. Bayesian PASA is not just a new function, but a new paradigm. It frames activation selection as a Bayesian model averaging problem, adaptively mixing sigmoidal, linear, and noise-aware behaviors. The mixing weights are derived from a principled variational evidence lower bound (ELBO), regularized by a stable ψ-function that guarantees bounded influence from noise estimates. We provide three formal theorems proving its Lipschitz continuity, gradient stability, and convergence under standard training assumptions. On the challenging CIFAR-100 benchmark, Bayesian PASA achieves a state-of-the-art test accuracy of 76.38%, outperforming ReLU (75.68%), GELU (75.98%), and the original PASA (75.53%). On the corrupted CIFAR-10-C dataset, the full Bayesian PASA model combined with Bayesian R-LayerNorm achieves an average accuracy of 53.91%, a +1.87% improvement over the ReLU+LayerNorm baseline. This work provides a drop-in replacement for existing activations, offering not only improved performance but also built-in uncertainty quantification for more robust deep learning systems.
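The core idea above — mixing sigmoidal, linear, and noise-aware behaviors with input-dependent weights — can be caricatured in a few lines. This is a toy heuristic gate, not the variational ELBO-derived Bayesian PASA; the candidate behaviors and the logit scheme are illustrative assumptions:

```python
import math

def mix_activation(x, noise_var):
    """Blend sigmoidal, linear, and damped (noise-aware) behaviors
    with softmax weights that shift toward damping as estimated
    input noise grows. Purely illustrative of the mixing idea."""
    candidates = [math.tanh(x), x, 0.5 * x]
    # Illustrative logits: higher estimated noise favors the damped branch.
    logits = [1.0, 0.5, 2.0 * noise_var]
    m = max(logits)                                # stabilized softmax
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    weights = [e / total for e in exps]
    return sum(w + 0.0 if False else w * c for w, c in zip(weights, candidates))
```

With zero estimated noise the output is a fixed convex blend of tanh and linear behavior; as the noise estimate grows, the weights collapse onto the damped branch, mimicking the bounded-influence behavior the abstract attributes to the ψ-regularized weights.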



Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2026 MDPI (Basel, Switzerland) unless otherwise stated