Article
Biology and Life Sciences
Food Science and Technology

Peilun Li

,

Juk-Sen Tang

Abstract: Machine learning (ML) models for predicting food recall severity could accelerate regulatory triage, yet no systematic benchmark exists on the U.S. Food and Drug Administration (FDA) open-access database. We construct the first comprehensive ML benchmark for FDA food recall severity classification (Class I / II / III) using 28,448 enforcement records spanning 2012–2025. A 1,437-dimensional feature space is engineered from TF-IDF and Sentence-BERT embeddings of recall narratives, structured categorical attributes, and temporal indicators. Five classifiers (Logistic Regression, Random Forest, XGBoost, LightGBM, CatBoost) are trained with Optuna-tuned hyperparameters. Under standard random splitting, XGBoost achieves Macro-F1 = 0.89; however, a multi-layer leakage audit reveals that this figure is inflated by entity-level autocorrelation. When firm-aware group splitting, temporal splitting, or their combination is applied, Macro-F1 drops to approximately 0.57. A firm-mode baseline (assigning each company's historically most frequent severity class) reaches 0.82 under random splitting, demonstrating that 92% of the apparent performance stems from firm-level memorisation. Identity-masking experiments confirm that the leakage is structural rather than attributable to explicit company-name tokens. A 2 × 2 factorial decomposition shows that firm overlap and temporal continuity are highly collinear; removing either suffices to expose the true generalisation floor. A hazard-type decomposition reveals that pathogen–severity associations transfer across firms, whereas labelling and GMP violations are highly firm-specific, explaining the disproportionate collapse of Class III prediction under group splitting. SHAP analysis, feature ablation, and a nine-year continuous-learning simulation provide additional insights into model behaviour and retraining strategies.
We recommend that food-safety ML studies adopt group-aware or temporal evaluation protocols, report entity-overlap statistics, and include entity-prior baselines to prevent overstated conclusions.
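
The firm-mode (entity-prior) baseline described above can be sketched in a few lines; the data layout (lists of firm/severity pairs) and names here are our own illustrative assumptions, not the paper's code.

```python
from collections import Counter, defaultdict

def firm_mode_baseline(train, test):
    """Predict each firm's historically most frequent severity class.

    `train` and `test` are lists of (firm, severity) pairs. Firms unseen
    in training fall back to the global majority class.
    """
    by_firm = defaultdict(Counter)
    overall = Counter()
    for firm, severity in train:
        by_firm[firm][severity] += 1
        overall[severity] += 1
    fallback = overall.most_common(1)[0][0]
    preds = []
    for firm, _ in test:
        counts = by_firm.get(firm)
        preds.append(counts.most_common(1)[0][0] if counts else fallback)
    return preds
```

Under a random split, most test firms also appear in training, so this memorisation-only predictor looks strong; under a firm-aware group split every test firm is unseen and it degenerates to the majority class, which is exactly the gap the benchmark measures.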

Article
Social Sciences
Other

George Johnson

,

Wendy Carter

Abstract: Mental disorders are among the leading causes of disability worldwide and impose substantial economic costs on individuals, healthcare systems, and national economies. While the clinical rationale for early identification of mental disorders is well established, the economic implications of systematic early screening and detection remain underemphasized in policy discourse. This paper examines the economic advantages of early screening and early detection of common and severe mental disorders, integrating findings from epidemiology, cost-of-illness studies, cost-effectiveness analyses, and health systems research. Evidence consistently demonstrates that delayed diagnosis is associated with increased healthcare utilization, reduced labor force participation, lower lifetime earnings, and higher social welfare expenditures. Conversely, early detection—particularly when integrated into primary care and early intervention services—has been shown to improve functional outcomes and, in many contexts, to be cost-effective or cost-saving from a societal perspective. The analysis supports the conclusion that early mental health screening constitutes not only a clinical priority but also a fiscally responsible strategy for health system sustainability and economic productivity.

Article
Biology and Life Sciences
Biochemistry and Molecular Biology

Hemeng Wang

,

Yuhao Hu

,

Ziyi Yang

,

Zhijie Wang

,

Fuling Wang

,

Fengjiao Wang

,

Mengmeng Jia

Abstract: In the present study, an investigation was carried out on the molecular, morphological, and anatomical mechanisms underpinning the differential cold tolerance of two cotton cultivars, Xinluzhong 61 (C61) and Tahe 2 (C2). The seedlings of these cultivars were exposed to 0 °C treatments for 12 and 24 hours. Comparative transcriptomic analysis (RNA-Seq) was conducted to identify differentially expressed genes (DEGs). Simultaneously, comprehensive anatomical examinations of cotyledons, true leaves, and stems were carried out to evaluate morphological alterations. Transcriptomic analysis showed that by the 24-hour time point the transcriptional profile had changed, with trichome differentiation and phloem/xylem histogenesis being the most significantly enriched biological processes in C61. This result was verified by phenotypic observations, as C61 developed dense glandular trichomes on its stems, a characteristic not observed in C2. Anatomical investigations demonstrated that although cold stress led to a reduction in tissue thickness in both cultivars, C61 maintained significantly greater leaf thickness, palisade tissue thickness, and a higher palisade-to-spongy tissue ratio in true leaves after stress. Moreover, C61 exhibited greater xylem thickness in the stem under cold conditions, implying superior structural integrity and water transport capacity. These findings highlight key adaptive traits and offer valuable targets for the genetic improvement of cold tolerance in cotton.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Zhenheng Tang

,

Xin He

,

Tiancheng Zhao

,

Fanjunduo Wei

,

Xiang Liu

,

Peijie Dong

,

Qian Wang

,

Qi Li

,

Huacan Wang

,

Ronghao Chen

+11 authors

Abstract: Large language models (LLMs) face significant challenges in sustaining long-term memory for agentic applications due to limited context windows. To address this limitation, much work has proposed diverse memory mechanisms to support long-term, multi-turn interactions, leveraging different approaches tailored to distinct memory storage objects, such as KV caches. In this survey, we present a unified taxonomy that organizes memory systems for long-context scenarios by decoupling memory abstractions from model-specific inference and training methods. We categorize LLM memory into three primary paradigms: natural language tokens, intermediate representations, and parameters. For each paradigm, we organize existing methods by three management stages, including memory construction, update, and query, so that long-context memory mechanisms can be described in a consistent way across system designs, with their implementation choices and constraints made explicit. Finally, we outline key research directions for long-context memory system design.
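
The three management stages of the taxonomy (construction, update, query) can be illustrated for the natural-language-token paradigm with a toy store; the class name and the keyword-overlap retrieval are our own placeholder choices, not an interface from the survey.

```python
class TokenMemory:
    """Toy natural-language-token memory organised by the survey's three
    management stages: construction, update, and query."""

    def __init__(self):
        self.entries = []  # stored text snippets

    def construct(self, text):
        """Construction: admit a new snippet into the store."""
        self.entries.append(text)

    def update(self, index, text):
        """Update: revise a previously stored snippet in place."""
        self.entries[index] = text

    def query(self, question, k=1):
        """Query: return the k snippets sharing the most words with the
        question (a crude stand-in for learned retrieval)."""
        q = set(question.lower().split())
        ranked = sorted(self.entries,
                        key=lambda e: len(q & set(e.lower().split())),
                        reverse=True)
        return ranked[:k]
```

The same three-stage skeleton applies to the other paradigms, with the stored object swapped for KV caches or parameter deltas instead of raw tokens.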

Article
Engineering
Mining and Mineral Processing

Gregorii Iovlev

,

Andrey Katerov

,

Anna Andreeva

,

Alisa Ageeva

Abstract: Maintaining the integrity of waterproof strata (WPS) between mine workings and overlying aquifers is critical, because water-conducting cracks (WCC) may cause mine flooding and surface subsidence. In the Upper-Kama potash deposit, the WPS is a 50–140 m thick stratified sequence of evaporites and clays overlying mined-out chambers. Under long-term loading, salt rocks tend to creep, soften, and localize damage, which can cause WPS failure. In this paper, the Concrete damage-plasticity model, supplemented by the N2PC-MCT viscoplastic creep model, is applied to simulate WCC initiation and evolution in the Upper-Kama WPS. Model parameters are obtained from published laboratory tests, including uniaxial and triaxial compression and tension, and then validated using observed ground-surface subsidence. A plane-strain finite-element model incorporates stratified lithology, interface elements between layers, and stepwise excavation. Long-term simulations up to 50 years investigate two operational scenarios: with and without backfilling. The calibrated model reproduces the main stages of surface subsidence and chamber closure. Without backfilling, simulations indicate that tensile damage localizes mainly in a stiff central salt layer of the WPS. Most cracks appear approximately between 33 and 37 years after the beginning of mining. With backfill, tensile crack propagation stops and damage remains stable. A hypothetical homogeneous WPS case confirms that the observed central-layer cracking is associated with stiffness contrasts and composite bending in the stratified system. An approximate analytical multilayer beam solution, based on energy minimization, predicts bending stress concentration in stiff intermediate layers and is consistent with the numerical stress distribution.
The combined numerical and analytical results clarify the mechanisms of long-term WCC initiation in stratified WPS and may be used for hazard assessment and planning of mitigation measures, including backfilling and focused monitoring of stiff central layers.
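
The stress-concentration mechanism in stiff intermediate layers can be sketched with classical laminated-beam kinematics (all layers forced to one curvature, plane sections remaining plane); this is an illustrative toy, not the paper's calibrated N2PC-MCT model, and the layer values in the test are hypothetical.

```python
def fiber_stresses(layers, curvature):
    """Bending stresses at the bottom and top fibre of each layer in a
    layered beam sharing a single curvature.

    `layers` is a list of (E, thickness) pairs from bottom to top, per
    unit width. Returns a list of (sigma_bottom, sigma_top) tuples.
    """
    # Layer boundary coordinates
    z, bounds = 0.0, []
    for _, t in layers:
        bounds.append((z, z + t))
        z += t
    # Modulus-weighted neutral axis of the composite section
    EA = sum(E * t for E, t in layers)
    z_n = sum(E * t * (zb + zt) / 2.0
              for (E, t), (zb, zt) in zip(layers, bounds)) / EA
    # sigma = E * kappa * (z - z_n) at each layer's extreme fibres
    return [(E * curvature * (zb - z_n), E * curvature * (zt - z_n))
            for (E, _), (zb, zt) in zip(layers, bounds)]
```

For a compliant-stiff-compliant stack, the largest stress magnitude appears at the fibres of the stiff middle layer even though the outer layers sit farther from the neutral axis, which is the qualitative behaviour the analytical multilayer solution attributes to stiffness contrast.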

Article
Computer Science and Mathematics
Computational Mathematics

Paola Cabascango-Flores

,

Erick P. Herrera-Granda

Abstract: This study integrated Item Response Theory (IRT) models with ordinal survey instruments to assess academic performance trajectories and identify multidimensional factors associated with academic achievement among first-semester leveling students (N=1,558 pre-test; N=1,676 post-test) at the Escuela Politécnica Nacional, Ecuador. A dual-component methodology was employed: (1) an 80-item ordinal survey measuring eight latent constructs (socioeconomic, academic, motivational, vocational, social integration, psychological/emotional, institutional, and biological/health factors), validated through Confirmatory Factor Analysis (CFI > 0.95, RMSEA < 0.06); and (2) structured diagnostic assessments in mathematics, physics, chemistry, geometry, and language, calibrated using three-parameter logistic (3PL) IRT models via Expected A Posteriori (EAP) estimation. Results demonstrated high internal consistency (r = 0.93 between IRT and raw scores), with mean IRT-scaled ability θ̄ = 10.45 (SD = 3.51) on a 1–20 scale. Item parameters indicated adequate discrimination (ā = 1.92) and centered difficulty (b̄ = 0.05), though 13.75% of items exhibited poor model fit (S-X² p < 0.01), concentrated in physics and chemistry domains. Factorial scores and performance outcomes were statistically contrasted against 24 categorical demographic variables, revealing differential performance patterns across student subgroups. This research provides validated psychometric instruments, reproducible IRT-LMS integration protocols, and empirical evidence supporting targeted interventions to strengthen university transition in resource-constrained contexts.
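
The 3PL item response function underlying the calibration is standard and can be written directly; this is the textbook form on the usual standardized ability scale, before any rescaling to the 1–20 range reported above.

```python
import math

def p_correct_3pl(theta, a, b, c):
    """Three-parameter logistic (3PL) IRT model: probability that a
    person of ability theta answers an item correctly, given
    discrimination a, difficulty b, and pseudo-guessing c.

        P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
```

At theta = b the probability is midway between the guessing floor c and 1, i.e. c + (1 − c)/2, and as ability falls the curve flattens onto c rather than onto zero.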

Article
Business, Economics and Management
Econometrics and Statistics

Omar Abu Risha

,

Jifan Ren

,

Mohammed Ismail Alhussam

,

Mohamad Ali Alhussam

Abstract: Northeast China’s rust-belt cities have faced persistent concerns about stagnating labor productivity amid structural change. This paper studies how urban agglomeration benefits depend on local economic structure and ownership composition using an annual city-level panel. We estimate two-way fixed-effects models with city and year effects and city-clustered standard errors, complemented by dynamic specifications that account for productivity persistence. Results show a robust positive within-city association between population density and labor productivity. This density premium is structure-conditioned: the productivity payoff to density is significantly larger in city-years that are more industry-oriented. In contrast, an information-theoretic measure of sectoral imbalance (KL divergence from an industry–services balance benchmark) adds limited explanatory power once fixed effects, structural orientation, and controls are included, suggesting that directional orientation matters more than balance per se in this two-sector setting. Ownership composition is also informative. While SOE and private employment shares correlate with labor productivity in the fixed-effects models, the strongest and most stable finding emerges from ownership-mixing entropy: binary SOE–private employment entropy is positively associated with labor productivity in dynamic specifications, with meaningful heterogeneity across provinces. Overall, the evidence supports a conditional agglomeration view in which productivity dynamics in Northeast China reflect the interaction of density, structural orientation, and ownership complexity. The results highlight the importance of aligning urbanization with higher-value structural transformation and improving the institutional environment that enables productive SOE–private coexistence.
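
The information-theoretic imbalance measure can be illustrated as KL divergence of a city's two-sector employment shares from an even industry–services benchmark; this is one plausible reading of the construction, and the authors' exact formulation may differ.

```python
import math

def kl_from_balance(p_industry, p_services):
    """KL divergence D_KL(p || q) of two-sector employment shares from
    the balanced benchmark q = (0.5, 0.5). Zero means perfectly
    balanced; larger values mean more sectoral imbalance."""
    q = 0.5
    kl = 0.0
    for p in (p_industry, p_services):
        if p > 0.0:  # 0 * log(0) is taken as 0 by convention
            kl += p * math.log(p / q)
    return kl
```

Note the measure is symmetric in direction: a 90/10 industry-heavy city and a 10/90 services-heavy city score identically, which is why the abstract distinguishes "balance" from the directional orientation that actually carries explanatory power.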

Article
Engineering
Other

Mukul Badhan

,

Majid Bavandpour

,

Kasra Shamsaei

,

Dani Or

,

George Bebis

,

Neil P. Lareau

,

Qunying Huang

,

Hamed Ebrahimian

Abstract: Monitoring the progression of large wildfires in near-real-time is essential for active-fire situational awareness and emergency response management. Current satellite-based wildfire monitoring systems face a trade-off between temporal and spatial resolution: geostationary satellites such as GOES offer frequent (~5 minutes) but coarse observations (~2 km), while low earth orbit (LEO) instruments such as VIIRS provide fine spatial detail (∼375 m) with limited temporal coverage (twice per day). To bridge this gap, this study introduces a deep learning (DL) approach that enables near real-time, high-resolution wildfire monitoring using GOES data. The proposed approach consists of two main steps: a segmentation step to distinguish active fire regions from background areas and a regression step to estimate the active fire pixels brightness temperature (BT) across a region of interest. The output of these steps is combined to generate a high-resolution fire location and BT maps. To train the DL model, multi-spectral GOES inputs are paired with VIIRS-derived fire observations from several wildfires across the United States. Spatial consistency between GOES and VIIRS data is achieved through parallax correction, reprojection, resampling, and per-image normalization. Ablation studies are performed to demonstrate the impact of different assumptions (e.g., background values in the VIIRS ground truth) and strategies (e.g., loss functions) throughout the development process. The results show that the proposed DL approach effectively enhances GOES imagery, improving both BT estimation and fire boundary localization. Overall, the proposed method offers a practical and scalable solution for wildfire boundary detection and thermal mapping using existing satellite systems.
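
The final fusion of the two heads (segmentation mask gating the regressed brightness temperature) can be sketched as below; the function name, list-of-lists layout, and background convention are our own assumptions, not the paper's implementation.

```python
def fuse_fire_maps(fire_mask, bt_map, background_bt=0.0):
    """Keep the regressed brightness temperature (BT) only where the
    segmentation head flags an active-fire pixel; elsewhere emit the
    background value. Both inputs are 2-D grids of equal shape."""
    return [[bt if flagged else background_bt
             for flagged, bt in zip(mask_row, bt_row)]
            for mask_row, bt_row in zip(fire_mask, bt_map)]
```

In practice the same gating would be applied on the parallax-corrected, resampled grid so that the fused product aligns with the VIIRS-derived ground truth.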

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Chih-Hsiung Chen

,

Kuang-Yu Hsieh

,

Kuo-En Huang

,

Chang-Wei Chen

Abstract: Cloud-based large language models (LLMs) have demonstrated near-human performance in medical applications; however, their clinical deployment is constrained by concerns regarding patient privacy, data security, and network dependence. Locally deployable, open-weight LLMs may provide a privacy-preserving alternative for resource-limited or security-sensitive environments. We evaluated two families of locally deployed models, Google Gemma3 (1B, 4B, 12B, and 27B parameters; vision enabled from the 4B model upward) and GPT-OSS-20B, using 1,200 multiple-choice questions from the Taiwan Pulmonary Specialist Board Examinations (2013–2024), including 1,156 text-only and 44 text-and-image items across 26 categories. A cloud-based GPT-4 Turbo model served as a reference. Models were queried locally via Ollama. Accuracy was analyzed by year and category using repeated-measures ANOVA with Tukey-adjusted pairwise comparisons. GPT-OSS-20B achieved the highest overall accuracy (58–78 correct answers per 100 questions) and significantly outperformed all Gemma-3 variants (p < 0.001), while Gemma3-27B ranked second. No statistically significant difference was observed between GPT-OSS-20B and GPT-4 Turbo after Tukey adjustment. Larger models showed improved accuracy but longer inference time. These findings suggest that selected open-weight LLMs deployed on-device can approach the performance of cloud-based models in structured medical examinations, with trade-offs between accuracy, modality support, and computational efficiency.

Article
Medicine and Pharmacology
Urology and Nephrology

Kelly Chong

,

Igor Litvinovich

,

Christos Argyropoulos

,

Yiliang Zhu

Abstract: Background: Rising kidney discard rates and uncertainty around accepting higher-risk donor kidneys highlight the need for decision-support tools that integrate donor and recipient factors and communicate risk in ways that are understandable and usable at the time of offer. Conventional indices (e.g., KDPI/KDRI) provide population-level signals but do not deliver individualized, cognitively accessible information aligned with real-time clinical workflows. Objective: To describe how key transplant stakeholders—patients, coordinators, and providers—interpret and evaluate a prototype Kidney Risk Calculator app that generates donor–recipient–specific survival projections, and to identify the content, format and features, and functionality needed for clinically meaningful, patient-centered decision support. Design: Qualitative study using focus groups and individual interviews. Setting: University of New Mexico Hospital (UNMH) Kidney Transplant Center. Participants: Five patients (four transplant candidates and one patient advocate), three transplant coordinators, and five transplant providers (3 attending physicians and 2 advanced practice practitioners). Methods: Semi-structured sessions (45–60 minutes) with 13 stakeholders (patients, coordinators, and providers) included a live app demonstration and explored usability, interpretability, contextual information needs, perceived clinical utility, and anticipated barriers/facilitators. Data were collected via one coordinator focus group, one patient focus group, and five provider interviews; sessions were recorded, transcribed, de-identified, and analyzed using inductive reflexive thematic analysis. Results: Stakeholders affirmed the value of personalized projections as an adjunct to clinical judgment, particularly for higher-risk offers. 
Participants prioritized: 1) Content—clear education on hepatitis C virus (HCV)-positive donors and Public Health Service (PHS) risk criteria; plain explanations of Calculated Panel Reactive Antibody (CPRA); and framing that makes time on dialysis and trade-offs salient; 2) Format & Features—plain-language narratives, percentages rather than decimals, simple visuals, minimized acronyms, U.S. customary units, and a stepwise (“TurboTax‑like”) input flow preferred by patients; and 3) Functionality—attention to cognitive load and workflow alignment, given phone-based time pressure and digital-access constraints. Stakeholders emphasized that the tool’s value hinges on clarity, context, and workflow fit—not predictive accuracy alone. Limitations: Single‑center, formative prototype study with a modest sample; findings are illustrative and may have limited transferability. Participants reacted to a demonstration rather than using the app during real‑time offer calls; convenience/email recruitment and Zoom‑only English sessions may introduce selection bias; team involvement in app development may contribute residual confirmation bias despite mitigation. Conclusions: Early stakeholder input suggests that a kidney offer decision support tool should integrate individualized predictions with plain language explanations, contextual information that addresses common misconceptions, workflow aligned functionality, and accessible outputs. Tools designed and implemented with these features may support acceptance of medically complex kidneys and may help reduce offer bypass and organ discard. These inferences reflect stakeholder perceptions in a formative qualitative study and warrant prospective evaluation.

Article
Engineering
Aerospace Engineering

Nico Liebers

,

Sven Ropte

Abstract: The significant heat generation during refueling of hydrogen pressure tanks might exceed the permissible 85 °C temperature limit for type IV tanks consisting of a thermoplastic liner and a carbon fiber composite overwrap. Common countermeasures like hydrogen pre-cooling or long filling times are energy- and time-consuming; hence, in this paper passive means through thermally better suited materials are examined. To this end, state-of-the-art and alternative materials are first characterized and then compared using a transient heat model. The different material combinations are compared for maximum temperature and weight in a typical filling scenario. As alternative liner materials, thermoplastics filled with short carbon fibres, minerals, and graphite were chosen; for the composite overwrap, copper-coated carbon fibres were chosen to improve thermal properties. The findings show that the liner is the bottleneck when transferring heat from the inner to the outer tank surface. Using graphite-filled thermoplastic as the liner material shows the highest potential for thermal optimization with only a small weight increase. Additionally using copper-coated carbon fibres reduces the maximum temperature further, but at a high weight increase. This article is a revised and expanded version of a paper, which was presented at the 15th EASN International Conference, in Madrid, Spain, in October 2025 [1].
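
Why a layer becomes the thermal bottleneck can be illustrated with steady-state series conduction: the temperature drop across each wall layer is q·t/k, so the layer with the largest thermal resistance t/k dominates. The thickness and conductivity figures in the example are hypothetical placeholders, not the paper's measured data, and the real analysis is transient rather than steady-state.

```python
def layer_temperature_drops(layers, heat_flux):
    """Temperature drop across each layer of a plane wall under 1-D
    steady conduction with a common heat flux.

    `layers` is a list of (name, thickness_m, conductivity_W_per_mK);
    `heat_flux` is in W/m^2. Drop per layer = q * t / k.
    """
    return {name: heat_flux * t / k for name, t, k in layers}
```

With a liner whose t/k exceeds the overwrap's, most of the wall's temperature difference falls across the liner, matching the qualitative finding that improving liner conductivity pays off most.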

Article
Computer Science and Mathematics
Software

Daniel M. Muepu

,

Yutaka Watanobe

,

Md Faizul Ibne Amin

,

Md. Shahajada Mia

Abstract: Recent advances in large language models (LLMs) have made it feasible to use them as automated debugging tutors, but it remains unclear how much can be gained by moving from single-model tutors to multi-agent councils with separated roles. We study this question in an offline simulation on 200 debugging cases drawn from an online judge, spanning 20 problems split into course-style and contest-style challenge tracks. We compare four single-model tutors based on current frontier models with four councils that assign models to Architect, Skeptic, Secretary, Pedagogue, and Mentor roles and operate in both Blind and Guided modes. Single-model tutors achieve near-perfect repair on course problems but perform less reliably on challenge cases and often rewrite large portions of student code, show non-negligible false positive rates, and leak full or near-full solutions in a substantial share of hints. Councils designed around measured model strengths improve both technical and pedagogical behaviour. On the challenge track, the best council raises patch success by 12.2 percentage points over the best single tutor, while reducing false positives, shrinking median patch size, improving hint localisation, and cutting solution leakage in Blind mode from about one fifth of hints to under ten percent. Councils also exhibit higher stability across reruns and produce hints that two independent instructors consistently rate as more useful and better scaffolded. Guided mode, where internal components see a reference solution, yields further technical gains but introduces leakage risks that require prompt tightening and a sanitising Secretary to control the flow of ground truth. Additional trap experiments with poisoned reference solutions show a mix of resistance and fail-safe collapse rather than systematic poisoning of hints. 
These results indicate that orchestration and information flow are powerful levers and that well-designed councils can provide more reliable and pedagogically aligned debugging support than strong single-model tutors alone.

Article
Medicine and Pharmacology
Oncology and Oncogenics

Yun Wang

,

Yafei Wang

,

Dongqi Yuan

,

Shenge Liu

,

Peng Chen

Abstract: Background/Objectives: Head and neck squamous cell carcinoma (HNSCC) frequently exhibits resistance to targeted therapies, including cetuximab. Identifying key drivers of tumor progression and elucidating the mechanisms underlying therapeutic resistance are essential for improving clinical outcomes. This study aimed to investigate the role of Caveolin-2 (CAV2) in HNSCC proliferation and cetuximab resistance. Methods: Prognosis-associated genes in HNSCC were screened using the TCGA database. The functional role of CAV2 in cell proliferation and apoptosis was assessed via CCK-8, colony formation, and flow cytometry assays. Mechanistic insights were obtained through co-immunoprecipitation, ubiquitination assays, and proteomic analysis. The impact of CAV2 on cetuximab sensitivity was evaluated both in vitro and in a xenograft mouse model. Results: CAV2 emerged as a top prognostic candidate. Knockdown of CAV2 significantly suppressed HNSCC cell proliferation and induced apoptosis. Mechanistically, CAV2 interacted with and stabilized the PACT protein, thereby inhibiting PKR activation via the ubiquitin–proteasome pathway. Notably, CAV2 deficiency markedly enhanced the sensitivity of HNSCC cells and tumor xenografts to cetuximab treatment. Conclusions: These findings establish CAV2 as a critical driver of HNSCC progression and cetuximab resistance through post-translational regulation of the PACT–PKR axis. Targeting CAV2 may therefore represent a promising strategy to potentiate the efficacy of EGFR-targeted therapy in HNSCC.

Review
Biology and Life Sciences
Agricultural Science and Agronomy

Fabián Pérez-Labrada

,

Antonio Juárez-Maldonado

,

Paola Fincheira

,

Froylán Rincón-Sánchez

,

Gonzalo Tortella

,

Susana González-Morales

,

Adalberto Benavides-Mendoza

Abstract: In agricultural practice, botanical extracts have emerged as promising biostimulants that can modulate key metabolic and redox processes in crops, thereby increasing stress resistance and productivity. This review provides a comprehensive synthesis of current knowledge on how botanical extracts influence plant metabolism and redox homeostasis, with particular emphasis on their role in adaptive cellular responses. Additionally, it examines how agronomic practices, such as nutritional strategies, water availability, light regimes, and preharvest biostimulant applications, can be utilized to increase the bioactive composition and efficacy of these extracts. By integrating recent advances in metabolomics and transcriptomics, this review outlines the biochemical and molecular reprogramming triggered by botanical extracts, identifies knowledge gaps, and outlines future research directions to optimize their use in sustainable agriculture. The sections comprising the review are an introduction that establishes the context and objective of the manuscript. The second section describes the bioactive constituents found in botanical extracts from different species, along with their metabolic and redox effects. The third section describes the plant response to the botanical extracts. The fourth section describes the metabolic and gene expression reprogramming that occurs following the application of a botanical extract. The last section presents the conclusion and future directions envisioned by the authors.

Review
Public Health and Healthcare
Other

Ignas Lapeikis

,

Vincas Urbonas

Abstract: Background: Cutaneous melanoma remains a highly lethal malignancy once metastatic. Current prognostic stratification relies primarily on staging and serum lactate dehydrogenase (LDH), which incompletely captures inter-patient biological heterogeneity. Increasing evidence highlights the importance of tumour–immune interactions in melanoma progression and response to therapy. Aim: This narrative review summarises and critically evaluates current evidence on circulating cytokines as prognostic and biologically informative biomarkers in melanoma, with particular emphasis on the immunotherapy era. Main findings: Several circulating cytokines—most consistently interleukin-6 (IL-6) and interleukin-8 (IL-8)—are associated with adverse outcomes in advanced melanoma. However, baseline elevations predominantly reflect tumour burden and systemic inflammation, indicating prognostic rather than treatment-specific predictive value. In contrast, early on-treatment changes, particularly decreases in IL-8, may better capture evolving tumour–immune interactions during immune checkpoint inhibitor therapy. C-reactive protein (CRP), a downstream marker of IL-6 signalling, similarly reflects systemic inflammatory status and carries reproducible prognostic significance. Early circulating tumour DNA (ctDNA) dynamics demonstrate strong associations with response and survival and may provide complementary insight into tumour burden kinetics. Conversely, cytokines central to effective antitumour immunity, such as interferon-γ (IFN-γ), are more reliably characterised at the tumour transcriptional level than by circulating protein measurements. Conclusions: Circulating cytokines represent biologically meaningful but methodologically challenging biomarkers in melanoma. Their most realistic clinical role lies in complementing established prognostic factors within integrated biomarker frameworks rather than functioning as standalone tests. 
Standardization of pre-analytical handling, assay platforms, and sampling time points, together with prospective validation, is essential before broader clinical implementation.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Davide Venditti

,

Elena Sofia Ruzzetti

,

Giancarlo A. Xompero

,

Cristina Giannone

,

Andrea Favalli

,

Raniero Romagnoli

,

Fabio Massimo Zanzotto

Abstract: Because of their text-generation capabilities, large language models (LLMs) require a significant redesign of privacy-preserving solutions in data-intensive applications. Indeed, LLMs tend to memorize and emit private information when maliciously prompted. In this paper, we introduce Private Association Editing (PAE) as a novel defense against private data leakage. PAE is designed to effectively remove Personally Identifiable Information (PII) without retraining the model. Experimental results demonstrate the effectiveness of PAE with respect to alternative baseline methods. We believe PAE will serve as a critical tool in the ongoing effort to protect data privacy in LLMs, encouraging the development of safer models for real-world applications.

Review
Environmental and Earth Sciences
Remote Sensing

Andrew Manu

,

Jeff Dacosta Osei

,

Thomas Lawler

Abstract: Unmanned aerial vehicle (UAV) remote sensing has evolved from experimental imaging into an operational diagnostic infrastructure supporting climate-smart agriculture through high-resolution, flexible, and timely crop observation. This review synthesizes advances in UAV platforms, multisensor payloads, artificial intelligence (AI) analytics, and multisource data fusion to evaluate their combined potential for monitoring heterogeneous smallholder systems. A PRISMA-guided analysis of 59 studies (2013–2024) classified sensing architectures, analytical approaches, and application domains across diverse agroecological contexts. Integrated UAV–AI frameworks improve detection of crop stress, yield variability, biomass distribution, and phenological dynamics compared with conventional monitoring, particularly when multimodal sensor data are fused with satellite and ground observations. Predictive performance and diagnostic reliability increase when spectral, thermal, and structural datasets are analyzed jointly using machine-learning or deep-learning models. However, scalability remains constrained by operational, infrastructural, and regulatory factors, especially in resource-limited systems. These findings demonstrate that integrated sensing–analytics systems form a critical foundation for scalable climate-smart agricultural transformation and data-driven decision support across farm, landscape, and institutional scales.

Article
Engineering
Aerospace Engineering

Lu Haoran

Abstract: This paper provides a rigorous examination of eight fundamental architectural deficiencies that render the Linux kernel unsuitable for deployment in safety-critical avionics. These deficiencies include inadequate temporal determinism, the absence of physical memory isolation, driver-induced contamination of global kernel state, an excessively large and unbounded Trusted Computing Base (TCB), open and nondeterministic system semantics, insufficient inter-process fault containment, unstable kernel behavior due to continuous patching, and a highly complex toolchain that imposes prohibitive DO-330 qualification burdens. Through a technical and standards-aligned analysis, this paper demonstrates that Linux cannot satisfy the determinism, verifiability, isolation, and lifecycle stability required for airworthiness certification, making it inherently incompatible with certifiable airborne platforms.

Review
Biology and Life Sciences
Virology

Kenneth Lundstrom

Abstract: Translational virology, characterized as “from bench to bedside”, covers the entire pathway from basic research through clinical evaluation to final registration and drug/vaccine approval. It comprises identification of the cause of disease, screening of potential prophylactic or therapeutic agents, evaluation in animal models, confirmation of activity in human clinical trials, and registration and approval. The recent COVID-19 pandemic represents a prime example of translational virology, demonstrating unprecedented cooperation from the identification of SARS-CoV-2 to the rapid development of repurposed and novel drugs and vaccines for both prophylactic and therapeutic applications. After confirmation of therapeutic and prophylactic efficacy in animal models, clinical phase I–III evaluation was carried out in an overlapping strategy, reducing development time significantly. To maximize the chances of success, vaccines based on whole viruses, protein and peptide subunits, viral vectors, and nucleic acids were developed in parallel. Based on good safety profiles and robust immune responses, COVID-19 vaccine candidates were granted emergency use authorization worldwide, allowing the start of mass vaccinations. More than 13.6 billion COVID-19 vaccine doses have been administered, and although severe adverse events have been registered, millions of lives have been saved. Due to emerging SARS-CoV-2 variants, vaccine re-engineering has been required as part of translational virology. Vaccine production, storage, transport, and distribution have also received attention.

Article
Engineering
Other

Amit Rangari

Abstract: This paper presents a conceptual framework, the AI-Augmented Interview Framework (AAIF), requiring empirical validation before deployment. No interviews have been conducted; all thresholds, weights, and KPI linkages are conjectures pending empirical testing. The accelerating adoption of AI-powered development tools (GitHub Copilot, ChatGPT, Claude) is transforming software engineering practice. Industry surveys indicate that over 75% of professional developers now use AI coding assistants regularly (noting potential self-selection bias in survey samples), yet fewer than one in four organizations assess AI fluency during technical interviews. AAIF proposes a structured five-stage interview methodology (Stage 0 fundamentals gate plus four AI-augmented stages) for evaluating developer competencies in AI-mediated environments. The framework assesses: (1) toolchain fluency and prompt engineering, (2) AI output evaluation and critical reasoning, (3) system-oriented problem solving with AI integration, and (4) meta-reasoning about AI limitations, ethics, and failure modes. We develop evaluation rubrics with behaviorally anchored rating scales, propose configurable decision thresholds, and provide an integrated risk framework addressing bias, fairness, legal compliance, and ethical dimensions. The novelty lies in the systematic integration of established methods from industrial-organizational psychology, software engineering, and risk management for the specific and underexplored problem of assessing developers who use AI tools. A detailed four-phase empirical validation protocol is proposed as a key contribution.

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.

© 2026 MDPI (Basel, Switzerland) unless otherwise stated