Article
Engineering
Telecommunications

Massimo Celidonio, Fernando Consalvi

Abstract: The integration of satellite and terrestrial networks within the same spectrum is a key enabler for extending mobile connectivity in future communication systems. In this context, the Direct Connectivity between Mobile Satellite Service and International Mobile Telecommunications user equipment (DC-MSS-IMT) paradigm, currently under study within the International Telecommunication Union [1], foresees the use of terrestrial IMT frequency bands by satellite systems to directly serve conventional mobile devices. This paper presents an experimental study to assess the coexistence between a terrestrial 5G-NR receiver and a co-channel interfering signal representative of a Low Earth Orbit (LEO) satellite downlink. A controlled laboratory setup in conducted configuration was implemented to ensure repeatability and accurate control of interference conditions. Measurements were performed over four carrier frequencies representative of IMT bands (763 MHz, 1482 MHz, 2150 MHz, and 2635 MHz) [2], considering different traffic load conditions (100% and 50%) and Doppler shifts associated with satellite motion. The interference impact was evaluated in terms of receiver desensitization, defined as the increase in the total received power relative to the baseline noise level [3]. The results show that a 1 dB desensitization threshold is consistently reached when the interfering signal power is approximately 5–6 dB below the receiver noise floor, corresponding to an interference-to-noise ratio (I/N) of about −6 dB. This behavior is observed across all tested frequency bands, traffic conditions, and Doppler scenarios, indicating limited sensitivity to frequency offsets within the considered range. The findings confirm the validity of commonly adopted coexistence criteria and provide experimentally derived reference values to support ongoing regulatory and technical studies on spectrum sharing between satellite and terrestrial IMT systems.
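The abstract's headline numbers are internally consistent: if desensitization is the rise of total received power (noise plus co-channel interference) over the noise floor alone, an I/N near −6 dB produces roughly 1 dB of desensitization. A minimal sketch of that arithmetic (the definition follows the abstract; the function name is ours):

```python
import math

def desensitization_db(i_over_n_db):
    """Rise of total received power (noise + co-channel interference)
    over the noise floor alone, in dB, for a given I/N ratio in dB."""
    return 10 * math.log10(1 + 10 ** (i_over_n_db / 10))

# An interferer ~6 dB below the noise floor (I/N = -6 dB) desensitizes
# the receiver by about 1 dB, as the measurements report.
print(round(desensitization_db(-6.0), 2))  # → 0.97
```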

Article
Chemistry and Materials Science
Analytical Chemistry

Sami El Deeb, Mohammed Al Broumi, Reem K. Almarsafy, Maria Kristina Parr

Abstract: Transferring a classical liquid chromatography (LC) ultraviolet/visible (UV/Vis) method into a greener, more sustainable analytical method should start from the safety and toxicology of the organic solvent the method uses. The organic portion of the mobile phase may be replaced by a green solvent, ideally bio-based and biodegradable, to increase the greenness of the method. However, introducing a new solvent into high-performance liquid chromatography (HPLC-UV/Vis) requires consideration of its environmental and health impact, cost-effectiveness, user-friendliness, and effect on the analytical performance and suitability of the chromatographic method. Existing greenness, blueness, and redness metrics, combined into whiteness scores for evaluating the overall sustainability of methods after solvent replacement, overlook the chromatographic suitability of the selected solvent; this may lead to suboptimal solvent replacement and an incomplete view of its capabilities. In this work, the authors present a Universal Suitability and Sustainability Index (USSI), a sixteen-parameter scoring system that quantifies four main factors for the complete evaluation of a new solvent for implementation in HPLC. This index goes beyond the white analytical chemistry principle. The four main factors are chromatographic suitability, greenness, blueness, and redness. Three of these factors are based on available tools and metrics that evaluate the environmental and practical impact on health and the analytical performance of the method. The fourth factor is added as an important criterion to judge the suitability of the solvent for HPLC analysis and to give an overview of its analytical applicability. The new index has been used to evaluate traditional liquid chromatographic as well as green-solvent-based methods, giving a universal overview that helps users form a rapid impression of the strengths and weaknesses, and makes it easier to judge the selection of the solvent and the evaluation of the overall method sustainability.

Article
Physical Sciences
Thermodynamics

Jordan Barton

Abstract: This paper assumes that a thermodynamic system can be composed purely of coherence and information, and constructs a working model on that basis. We derive operational parameters for such systems using definitions of the Certainty Equation, semantic entropy, and semantic temperature, and we formulate five laws and three modes of coherence and information systems. This analysis is then compared to the features of black holes.

Article
Engineering
Mechanical Engineering

Ali Abughalia, Carsten Stechert

Abstract: This paper investigates how software configuration, hardware type, user background and context of use influence the usability of Virtual Reality (VR) systems in engineering product development. A VR usability assessment approach that combines task-based questionnaires, the System Usability Scale (SUS) and the NASA-TLX questionnaire was evaluated systematically across six experiments involving students, junior engineers and senior engineers in academic and industrial settings. The results demonstrate that user background and task context are at least as significant as the underlying hardware or software in influencing perceived usability and acceptance. Standalone headsets achieve higher usability scores with inexperienced users, whereas PC-based systems are still necessary for high-precision engineering tasks. Professional engineers primarily evaluate VR in terms of workflow integration, precision and return on investment, whereas students focus more on novelty and the interaction experience. Based on these findings, practical design recommendations have been derived for selecting a VR system, adapting interaction concepts, and implementing VR in product development processes. The study highlights that VR should not be deployed as a one-size-fits-all solution, but rather as a tool that is both context-specific and user-centered. It also demonstrates how systematic, iterative usability evaluation can directly support the successful industrial integration of VR technologies.

Review
Biology and Life Sciences
Animal Science, Veterinary Science and Zoology

Kathryn Ruth Connolly, Shane Maher, Torres Sweeney, John V. O'Doherty

Abstract: In commercial pig production systems, early weaning disrupts the coordinated maturation of the gastrointestinal tract, resulting in reduced feed intake, impaired digestive capacity, altered microbial ecology and increased susceptibility to post-weaning diarrhoea (PWD). Although enterotoxigenic Escherichia coli (ETEC) is frequently implicated, variation in disease expression is not explained by pathogen presence alone, but reflects interactions among host physiology, nutrient flow and microbial metabolism. This review examines the regulation of the gut microbiota in post-weaned pigs through the interaction between digestive capacity and dietary substrate supply. It proposes substrate flow as the organising principle linking digestive function, diet composition and microbial metabolism. Gut microbial function is regulated primarily by substrate availability, which is determined by the alignment between diet composition and host digestive capacity. When this alignment is disrupted, undigested nutrients are redistributed to the hindgut, driving a shift from saccharolytic to proteolytic fermentation. This transition generates metabolites that impair epithelial integrity, increase luminal pH and favour proliferation of opportunistic bacteria, thereby promoting intestinal dysfunction. Within this context, nutritional strategies, including optimisation of dietary protein, provision of fermentable carbohydrates and support of gastric function, act by regulating substrate flow rather than directly modifying microbial composition. Organic acids, functional ingredients and maternal influences operate through the same mechanisms, shaping nutrient digestion, microbial exposure and metabolic outcomes. The characteristic post-weaning increase in Enterobacteriaceae and reduction in microbial diversity are therefore best understood as consequences of altered substrate flow and luminal conditions, rather than primary initiating events. 
This interpretation provides a mechanistic basis for the design of integrated nutritional and management strategies to improve gut health and reduce antimicrobial reliance in pig production systems.

Review
Medicine and Pharmacology
Oncology and Oncogenics

Alberto Zaniboni

Abstract: Liposomal irinotecan (nal-IRI) has emerged as a cornerstone in the treatment of metastatic pancreatic ductal adenocarcinoma (mPDAC), particularly in combination with 5-fluorouracil and leucovorin (5-FU/LV) following progression on gemcitabine-based therapy. Despite its demonstrated survival benefit, gastrointestinal toxicity—most notably diarrhea—remains a clinically significant adverse event that can compromise dose intensity, treatment adherence, and patient quality of life. Diarrhea associated with irinotecan-containing regimens is mechanistically complex, encompassing both acute cholinergic and delayed secretory components mediated by mucosal injury and enterohepatic recirculation of the active metabolite SN-38. The liposomal formulation alters the pharmacokinetic profile of the drug, prolonging systemic exposure while maintaining toxicity risks. This review provides a comprehensive and updated overview of the pathophysiology, incidence, and clinical implications of diarrhea in nal-IRI–based regimens. Evidence-based management strategies are discussed, including pharmacologic interventions, supportive care, dose modifications, and patient education. Emerging therapeutic approaches—including microbiome modulation and pharmacogenomic-guided therapy—are also explored. A multidisciplinary and proactive management approach is essential to optimize outcomes and minimize toxicity.

Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Vinuka Silva, Nipuna Senanayake

Abstract: Search algorithms have long underpinned game-playing artificial intelligence; however, as game domains expanded from small deterministic board games to large-scale, partially observable, real-time environments, no survey has systematically organized this evolving literature into a unified taxonomy or evaluated algorithms through a consistent design space. This taxonomy-based survey addresses that gap through domain-scoped literature clustering, analyzing 61 works from 1946 to 2025 across six thematic clusters: spatial pathfinding and navigation, adversarial game-tree search, Monte Carlo tree search and bandit-based planning, metaheuristic optimization, learning-augmented search, and search under uncertainty and partial observability. A four-dimensional design space, covering interaction topology, information structure, computational regime, and source of search guidance, enables consistent cross-cluster comparison of hybrid approaches. Analysis reveals a paradigm shift from analytic correctness and proof-driven evaluation toward empirical benchmarking, sampling-based planning, and neural-guided search. Cross-cluster synthesis identifies fundamental tensions among decision quality, formal guarantees, and resilience under uncertainty, and documents an evolution in evaluation methodology from deterministic metrics to distributional robustness testing. Open challenges are identified, pointing toward principled frameworks for managing trade-offs among quality, optimality, and robustness. This survey provides artificial intelligence researchers and game developers with a structured reference for selecting and evaluating gaming search algorithms across diverse environments.

Article
Chemistry and Materials Science
Physical Chemistry

Xiangxi Zhang, Qing Zhou

Abstract: In this study, hydantoin (C₃H₄N₂O₂) was selected to investigate the photoluminescence mechanism of non-typical luminescent compounds. The emission spectra of single crystals were examined using a laser confocal microscope. Within the same crystal, the peak shape and position were consistent across different regions, while the intensity varied; this phenomenon is attributed to confinement-induced emission. For different crystal blocks, variations in molecular packing modes led to changes in both peak shape and position. Combined with theoretical calculations and analyses, the results show that: as the molecular number increases, the energy gap decreases and the excitation wavelength increases (lower excitation energy); the hole-electron attraction energy, delocalization index, and overlap degree all decrease, with the hole delocalization index decreasing faster than that of the electron; the spin-orbit coupling coefficients for high-lying triplet states are more sensitive to the molecular count; and the intersystem crossing rate increases sharply with increasing energy level. In summary, the number and mode of molecular packing in the crystal influence the excited-state electronic structure and hole-electron interactions, thereby determining the luminescence behavior of non-typical luminescent compounds.

Article
Physical Sciences
Condensed Matter Physics

N. Zen

Abstract: By making periodic through-holes in a suspended film, the phonon system can be modified. Motivated by the BCS theory, this technique, so-called phonon engineering, was applied to a metallic niobium sheet. It was found that its electrical resistance dropped to zero at 175 K, and the zero-resistance state persisted up to 290 K in the subsequent warming process. Despite the initial motivation, neither these high transition temperatures nor the phase transition with thermal hysteresis can be accounted for by the BCS theory. Therefore, we abandon the BCS theory. Instead, it turns out that the metallic holey sheet is partly oxidized to form a niobium-oxygen square lattice, which has points of resemblance to a copper-oxygen plane, the fundamental component of cuprate high-Tc superconductors. Therefore, the pairing mechanism underlying this study should be related to that of cuprate high-Tc superconductors, which we may not yet understand. In addition to the electrical results of zero resistance, the holey sheet exhibited a decrease in magnetization upon cooling, i.e., the Meissner effect. Moreover, remanent magnetization was clearly detected at 300 K, which can only be attributed to persistent currents flowing in a superconducting sample. Thus, this study meets the established criteria for a conclusive demonstration of true superconductivity. Finally, the superconducting transition with the unambiguous thermal hysteresis is discussed. According to Halperin, Lubensky, and Ma (HLM for short), any superconducting transition must always be first order with thermal hysteresis because of the intrinsic fluctuating magnetic field. The HLM theory is very compatible with the highly oriented system harboring two-dimensional superconductivity.

Article
Physical Sciences
Astronomy and Astrophysics

John Henderson

Abstract: A number of approaches to a theory of quantum gravity assume the cosmological fabric of spacetime is distinct from the spacetime of the material world of matter and energy. On a short time scale, one cannot distinguish between the fabric of spacetime expanding, nominally due to dark energy, or the scale of the material world contracting with respect to the fabric of spacetime. Contraction of the scale of the material world (length, time, mass, and charge contracting equally) maintains observable physical laws and results in a decreasing derived dark energy density matching that reported by the DESI Collaboration in March 2025. The DESI Collaboration fits to the dark energy density over time show a distinct difference between those using scale-dependent supernovae data and those using mostly scale-independent angular measurements, such as from CMB and BAO measurements. That difference is resolved by applying a scale contraction rate of ~3%/Gyr to the supernovae data. Scale contraction of the material world eliminates the need for dark energy to explain the apparent expansion of space, resolving the ~10^122 discrepancy between the dark energy density required to match observation and that calculated for the vacuum energy as the mechanism for dark energy. The large force of the vacuum energy is a potential mechanism for compression of the material world, and would explain why the observed expansion only occurs outside of gravitationally bound systems. A scale-contraction model for cosmological kinematics explains why the dark energy density appears to be decreasing without requiring the underlying vacuum energy to be changing with time.
Scale contraction of the material world predicts the observed directions and order of magnitude of the Hubble tension and the S8 tension, which has been a challenge to other proposed modifications of Lambda-CDM since those two tensions have opposite trends over time, the Hubble constant being about 10% larger in the late universe compared to the early universe, and the structure constant, S8, about 10% smaller. Scale contraction of the material world can be tested by modifying the Lambda-CDM cosmological model to include scale contraction over time, and assessing if the Hubble, S8, and other tensions are quantitatively reduced or resolved.

Brief Report
Medicine and Pharmacology
Oncology and Oncogenics

Sergey Tsurkan, Evgueni Klinski, Anna Prostyakova, Janneta Tcherkassova

Abstract: The CLIA-CA-62 assay is an in vitro diagnostic device registered in Russia and Kazakhstan for measuring a marker specific to epithelial carcinomas. This pilot project aimed to assess CA-62 utility for primary cancer screening in an asymptomatic cohort in Kazakhstan. The trial was interrupted in January 2022 for reasons unrelated to the scientific program before clinical outcomes could be obtained. Available baseline data were therefore used to characterize the CA-62 value distribution and perform a scenario-based assessment of estimated assay specificity at a reference value of 5,000 U/mL. The analysis included 1,214 quantitative CA-62 measurements from asymptomatic healthcare workers aged 45–70 years, collected during annual preventive examinations between September and October 2021. The distribution was markedly right-skewed, with 92.5% of samples in the normal zone (median: 3,371 U/mL; IQR: 1,965–4,415 U/mL; 95th percentile: 6,309 U/mL). At the 5,000 U/mL cutoff, 7.5% of results (91/1,214) were elevated. Scenario-based modeling assuming cancer prevalence of 0.5–2.5% and assay sensitivity of 65–95% yielded an estimated specificity of 92.79–94.75%. These findings provide an analytical foundation for prospective verification of CA-62 in primary screening settings.
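The reported specificity range can be reproduced from the stated figures under the scenario assumptions the abstract describes: elevated results are split into true positives (prevalence × sensitivity) and false positives, and specificity is computed over the non-cancer fraction. A sketch of that calculation (the function name and the exact decomposition are our assumptions):

```python
def estimated_specificity(elevated_rate, prevalence, sensitivity):
    """Scenario-based specificity estimate: elevated results are assumed
    to split into true positives (prevalence * sensitivity) and false
    positives; specificity is 1 minus the FP rate over the non-cancer
    fraction of the cohort."""
    fp_rate = (elevated_rate - prevalence * sensitivity) / (1 - prevalence)
    return 1 - fp_rate

elevated = 91 / 1214  # 7.5% of samples above the 5,000 U/mL reference value

# The abstract's prevalence (0.5-2.5%) and sensitivity (65-95%) bounds
# reproduce the reported 92.79-94.75% specificity range.
for prevalence, sensitivity in ((0.005, 0.65), (0.025, 0.95)):
    print(f"{100 * estimated_specificity(elevated, prevalence, sensitivity):.2f}%")
```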

Review
Medicine and Pharmacology
Cardiac and Cardiovascular Systems

Alok D. Shah, Amr Gamal, Hesham Abdelaziz, Matthew Luckie, Andrew Wiper, Ranjit More, Tawfiq Choudhury

Abstract: Transcatheter Aortic Valve Implantation/Replacement (TAVI/TAVR) has come a long way since the first-in-human implant performed by Prof. Cribier and colleagues in 2002. Initially reserved for inoperable or high-surgical-risk patients, TAVI is now indicated in patients aged 70 or older with severe tricuspid aortic stenosis and suitable anatomy. This has been possible due to improvements in pre-procedural planning, performance upgrades and evolution of transcatheter heart valve (THV) systems, and increasing operator experience. Considering improving longevity and relatively younger ages at index implantation, the complexities of redo-TAVI planning and methods to improve THV durability are the next frontiers. This review summarizes these advancements while emphasizing pre-procedural planning, current guidelines, and individualized device selection, with a brief note on polymeric heart valves, developed to overcome the bioprosthetic dysfunction seen with current THVs.

Hypothesis
Computer Science and Mathematics
Computer Networks and Communications

Robert Campbell

Abstract: Post-Quantum Cryptography (PQC) migration to NIST FIPS 203, 204, and 205 under NSA CNSA 2.0 is a multi-year, multi-domain transformation across cloud, enterprise, embedded, OT, tactical, and national-security systems. Anthropic’s Claude Mythos Preview (April 2026) introduces AI-accelerated cybersecurity capabilities that intersect this migration directly, performing autonomous reasoning against previously unknown vulnerabilities in production software — a qualitative departure from signature-based and SAST/DAST tooling. Drawing on federal guidance from NIST, NSA, OMB, and CISA, and on independent analyses from CETaS and the UK AI Security Institute, we present a lifecycle and architecture analysis of how Mythos-class models alter PQC migration timelines, risk surfaces, lifecycle dependencies, and architectural constraints. Modeling Mythos as both accelerator and destabilizer, we derive an analytic projection of a compressed two-to-four-year migration window for highest-exposure systems, against traditional baselines of five-to-ten years for small organizations and twelve-to-fifteen-plus years for large enterprises. The compression collapses human-labor bottlenecks in discovery, planning, and code modification, not cryptography itself. We propose a lifecycle-aligned migration model, an updated cost model, and governance requirements for frontier-model access. The binding constraint shifts domain-conditionally: defender capacity at adversary tempo governs software-analytical phases, while non-compressible external cadence governs embedded and regulated domains.

Article
Computer Science and Mathematics
Analysis

Dong Guo, Xin Wang, Xi Luo

Abstract: This paper introduces a novel class of convex functions associated with a strip domain and establishes the upper bounds for the coefficients of initial terms, as well as second and third-order Hankel determinants. It provides exact upper bounds for the third-order Hankel determinants of both the inverse of starlike functions and convex functions, along with the upper bounds for the second-order Hankel determinants of the logarithmic coefficients related to these functions.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Wenqi Gu, Yingtao Zhang, Alessandro Muscoloni, Carlo Vittorio Cannistraci

Abstract: Logistic and logit functions play important roles in modern science, serving as foundational tools in applications including artificial neural networks (ANNs). While functions exist that produce distinct logistic and logit curves, no single, unified framework has been developed to generate both. We introduce a generalized logistic–logit function (CMG-GLLF) to fill this gap. CMG-GLLF provides four interpretable and trainable parameters that allow explicit control over curve type and steepness, asymmetry, and the upper and lower limits of the x- and y-axes. CMG-GLLF's potential is explored in basic machine-intelligence tasks. As a proof of concept of how this function can improve deep-learning performance, we propose a trainable input feature modulator (IFM) that learns the parameters of the CMG-GLLF for each input-layer node during backpropagation in a multi-layer perceptron (MLP), a fundamental building block of many complex network architectures. Compared with various other learnable functions, across three different optimizers, CMG-GLLF yields superior MLP accuracy and stable training behavior on CIFAR-10 and CIFAR-100 image classification, but at the cost of increased computational time. Hence, we identified limitations to address in future studies, notably the need to derive an explicit mathematical expression for the logit phase, which could: (i) mitigate numerical instability in more complex architectures (e.g., CNNs) while reducing computational overhead, and (ii) enable a systematic evaluation of CMG as an activation function across all layers. Furthermore, CMG-GLLF adopted as a data-transformation function enhances the accuracy of affinity-graph-based neuron segmentation. CMG-GLLF combines in a single framework the ability of the logistic and logit functions to modulate signals or variables, covering a full spectrum of attenuation or amplification transformations. CMG-GLLF is flexible and trainable, has the potential to advance machine learning models, and can inspire further applications in other data-analysis challenges across domains of science.
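The abstract does not give the CMG-GLLF formula, so as a rough illustration of what a four-parameter logistic with controllable output limits, steepness, and asymmetry looks like, here is a Richards-style generalized logistic. It is a stand-in under our own parameterization, not the authors' function:

```python
import math

def generalized_logistic(x, lower=0.0, upper=1.0, steepness=1.0, asymmetry=1.0):
    """Richards-style generalized logistic: four interpretable parameters
    controlling the output limits, the steepness, and the asymmetry of the
    curve. Illustrative stand-in only; the CMG-GLLF formula itself is not
    given in the abstract."""
    return lower + (upper - lower) / (1 + math.exp(-steepness * x)) ** asymmetry

# Symmetric case: the midpoint value is halfway between the limits.
print(generalized_logistic(0.0))                         # → 0.5
print(generalized_logistic(0.0, lower=-1.0, upper=1.0))  # → 0.0
```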

Article
Medicine and Pharmacology
Medicine and Pharmacology

Adelina Tanevski, Ludusanu Andreea, Bogdan Mihnea Ciuntu, Ștefan Lucian Toma, Gheorghiță Balan, Raul-Vasile Lupusoru, Cristina Strobescu, Raluca Dragomir, Bogdan Florin Toma, Ciprian Gavrilă Ilea

Abstract: Chronic surgical mesh rejection is a complex pathological process characterized by persistent inflammation, foreign body reaction, and progressive extracellular matrix (ECM) remodeling, leading to impaired tissue integration. Although histopathological features are well described, the structural organization of the ECM at the mesh–tissue interface remains insufficiently understood. This study aimed to investigate multiscale morphological alterations associated with chronic mesh rejection, with emphasis on ECM disorganization and loss of structural coherence. Seven mesh–tissue complexes explanted due to clinically confirmed chronic rejection were analyzed. Histological evaluation (hematoxylin–eosin and Van Gieson staining) was combined with atomic force microscopy (AFM), including topographical imaging, directional analysis, coherence mapping, and roughness quantification according to ISO 25178-2. Histology revealed chronic inflammatory infiltrates, foreign body reaction, and disorganized collagen deposition. AFM analysis showed pronounced surface heterogeneity, fragmented fibrillar architecture, and absence of preferential orientation. Roughness parameters (increased Sa and Sq, elevated Sku, negative Ssk) indicated a structurally irregular surface dominated by depressions and isolated peaks. Directional and coherence analyses confirmed loss of organized fibrillar architecture. These findings suggest that chronic mesh rejection is associated with marked ECM disorganization and loss of structural coherence at the mesh–tissue interface, reflecting impaired tissue integration. Multiscale morphological analysis provides insight into the structural basis of mesh failure and highlights the importance of ECM organization in implant–tissue interactions.
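The roughness parameters the abstract interprets (increased Sa and Sq, elevated Sku, negative Ssk) have standard definitions in ISO 25178-2 as moments of the surface height distribution. A minimal sketch over a height map flattened to a list (function name ours, for illustration):

```python
import math

def roughness_params(heights):
    """Height-distribution roughness parameters in the spirit of
    ISO 25178-2, computed over a height map flattened to a list:
    Sa (mean absolute deviation), Sq (RMS deviation),
    Ssk (skewness), Sku (kurtosis)."""
    n = len(heights)
    mean = sum(heights) / n
    dev = [z - mean for z in heights]
    sq = math.sqrt(sum(d * d for d in dev) / n)
    sa = sum(abs(d) for d in dev) / n
    ssk = sum(d ** 3 for d in dev) / (n * sq ** 3)
    sku = sum(d ** 4 for d in dev) / (n * sq ** 4)
    return sa, sq, ssk, sku

# A flat surface with one isolated peak skews positive; one dominated by a
# depression skews negative, matching the abstract's reading of negative Ssk.
print(roughness_params([0.0, 0.0, 0.0, 10.0])[2] > 0)   # → True
print(roughness_params([0.0, 0.0, 0.0, -10.0])[2] < 0)  # → True
```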

Article
Environmental and Earth Sciences
Soil Science

Sito-Obong Ukeme Udofia, Lisa K. Williams, Alison P. Wills, Tim Bevan, Matt J. Bell

Abstract: The aim of this study was to assess potential uplift in soil organic carbon (SOC) levels within different types of agricultural land. A total of 1,032 soil samples were collected from 43 fields across six farms during the same year. Fields were used for arable, temporary grass and permanent grass production. The study compared SOC levels (in g/kg, ratio to clay and ratio to nitrogen) between the field boundary and within field areas. The field boundary was classed as either open (boundary with fence and/or wall) or covered (with hedgerow and/or trees). From the fields sampled, 69% of within field samples and 88% of boundary samples were categorized as having ‘very good’ levels of SOC. On average, the SOC in g/kg and ratio to clay were higher for permanent grass and boundary field areas compared to temporary and within field areas, with no difference between open or covered boundary areas. Benchmarking fields against the field boundary area or based on SOC to clay ratio can be used by land managers to identify fields for potential SOC uplift.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Tianji He, Yulin Shao, Fen Hou

Abstract: Deploying large language models (LLMs) at the network edge is hindered by their enormous cost, yet the reasoning quality they provide remains indispensable. Heterogeneous collaboration between edge small models and a server LLM has emerged as a promising direction, but existing methods fail under the dynamic conditions of multi-user contention, autoregressive generation, and time-varying resources. This paper puts forward a process reward model (PRM)-aided two-stage decoupled acceleration (PRADA) framework, which is built on a fundamental change of perspective: instead of querying a PRM online, which cripples multi-user systems with prohibitive latency, we use the PRM solely as an offline teacher. Its reasoning-quality intuition is fully distilled into a lightweight policy that screens each step locally, without any context upload, while a Lagrangian scheduler at the server resolves resource contention through a threshold-structured policy. Across diverse reasoning benchmarks, PRADA retains the vast majority of the LLM's accuracy while substantially reducing end-to-end latency. The results further reveal threshold effects for both server parallel capacity and total bandwidth: performance saturates beyond critical resource levels, after which the system bottleneck shifts from queuing to computation or from communication to contention. These structural findings provide actionable guidance for joint provisioning of computation and communication resources without requiring per-benchmark tuning.

Article
Medicine and Pharmacology
Pharmacy

Ana Alarcia-Lacalle, Miguel Ángel Morán-Rodríguez, Laura Morata, Arantxa Isla, Andrés Canut-Blasco, Alicia Rodríguez-Gascón

Abstract: Background/Objectives: Oritavancin therapy for complex infections remains challenging due to the lack of well-established dosing regimens. The objective of this work was to apply PK/PD modeling and Monte Carlo simulation considering different PK/PD targets to identify multiple dosing regimens that may ensure effective concentrations of oritavancin for the treatment of long-term infections. Methods: Plasma concentration–time profiles were simulated for different regimens (single dose of 1200 mg, 1200 mg followed by 800 mg every 7 days, 1200 mg followed by 800 mg every 10 days, 1200 mg q7d, 1200 mg q10d, 1200 mg every 14 days, 1200 mg every 21 days, and 1200 mg followed by 1200 mg on day 8, then 1200 mg q14d), and the probability of target attainment (PTA), indicative of treatment success, was estimated. Results: All dosing regimens provided probabilities of target attainment of 100% up to MICs of 0.5 mg/L when AUC0-24/MIC and Cmax/MIC were applied. Considering AUC0-72/MIC, the regimens would be adequate up to a MIC of 0.125 mg/L. For fCmin > MIC, all except 1200 mg q21d proved adequate for a MIC of 0.125 mg/L, and 1200 mg day 1 + 800 mg q7d, and 1200 mg q10d may be useful to treat infections due to bacteria with a MIC of 0.25 mg/L. Conclusions: More studies involving patients with complex infections are needed to better establish the relationships among plasma concentrations, MIC values, and clinical outcomes. fCmin > MIC should be investigated as a potential PK/PD target for the treatment of these infections with oritavancin.
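The probability-of-target-attainment logic the abstract applies can be sketched generically: simulate drug exposure across a population and count the fraction meeting the PK/PD target. The lognormal exposure model and all numbers below are illustrative assumptions, not the paper's population PK model:

```python
import math
import random

def probability_of_target_attainment(auc_median, auc_cv, mic, target_ratio,
                                     n_sim=10_000, seed=42):
    """Monte Carlo PTA: fraction of simulated exposures whose AUC/MIC
    meets the PK/PD target. AUC is drawn from a lognormal distribution;
    the population parameters here are illustrative, not the paper's."""
    rng = random.Random(seed)
    sigma = math.sqrt(math.log(1 + auc_cv ** 2))  # lognormal shape from CV
    mu = math.log(auc_median)                     # median -> log-scale location
    hits = sum(1 for _ in range(n_sim)
               if math.exp(rng.gauss(mu, sigma)) / mic >= target_ratio)
    return hits / n_sim

# PTA drops as MIC doubles, mirroring the abstract's MIC-dependent adequacy.
for mic in (0.125, 0.25, 0.5):
    print(mic, probability_of_target_attainment(1000, 0.3, mic, 2000))
```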

Article
Social Sciences
Education

Asmar Yulastri, Ganefri, Remon Lapisa, Feri Ferdian, Elfizon, Marwan, Arief Maulana, Yudha Aditya Fiandra

Abstract: Effective asset management is critical for university sustainability, yet the mechanisms linking governance and digitalization to asset performance remain unclear, particularly in developing-country higher education. This study investigates how governance and digitalization influence sustainable asset performance in Indonesian public universities, focusing on the mediating role of partnership capabilities and the moderating roles of organizational readiness and environmental dynamism. Survey data were collected from 113 staff involved in asset management, governance, and partnerships across three Indonesian universities. Partial least squares structural equation modeling (PLS-SEM) was employed to test the hypothesized relationships. The results show that partnership capabilities fully mediate the effects of both governance and digitalization on sustainable asset performance. Neither governance nor digitalization exhibits significant direct effects. Organizational readiness moderates the governance-partnership capabilities relationship, while environmental dynamism does not significantly moderate the digitalization-partnership capabilities link. These findings extend dynamic capabilities theory to public university asset management and suggest that universities should prioritize building partnership capabilities, align digitalization investments with collaborative needs, and assess organizational readiness before implementing governance reforms.



Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2026 MDPI (Basel, Switzerland) unless otherwise stated