
Article
Physical Sciences
Astronomy and Astrophysics

Raheb Ali Mohammed Saleh Aoudh

Abstract: We present a mathematically rigorous formulation of the Fundamental Speed Theory (FST), a vector-tensor theory of gravity featuring a dimensionless vector field νᵐ. The theory introduces characteristic scales M₀ = ħ/(cL₀) and L₀ = 10 kpc to ensure complete dimensional consistency, with explicit inclusion of ħ and c in all physical expressions. Galactic dynamics obey d²ν̃/dξ² + (2/ξ) dν̃/dξ = β_eff ν̃³, where ξ = r/L₀ and β_eff = (λν₀²)/(6c₁) = 2.0×10⁷. We perform a hierarchical validation at three distinct levels of parameter freedom:
• Level 3 (Zero Free Parameters): Fixed M = 1.0×10¹⁰ M⊙ and r_d = 3.0 kpc for all 175 galaxies. Even with no galaxy-specific parameters, FST correctly describes 65.7% of galaxies with mean χ²_ν = 0.809.
• Level 2 (Estimated Parameters): Mass and scale length estimated from scaling relations (no fitting). The success rate reaches 93.6% with mean χ²_ν = 0.347 for the 160 galaxies with χ²_ν < 3.0.
• Level 1 (Fully Fitted): Mass and scale length fitted per galaxy. The success rate reaches 100% with mean χ²_ν = 0.170.
This hierarchical validation demonstrates that FST captures the essential physics of galactic rotation without overfitting. The theory achieves a mean reduced chi-squared of ⟨χ²_ν⟩ = 0.170 across all 171 SPARC galaxies, with 91.2% of galaxies having χ²_ν < 0.5 (excellent fit) and only 1.8% (three galaxies) having χ²_ν > 1.0. The characteristic transition scale is ξ_c = √(2/β_eff) = 3.16×10⁻⁴, corresponding to a fundamental scale r_c = ξ_c L₀ ≈ 3.16 pc. Remarkably, we discover that all five field parameters (c₁, c₂, c₃, λ, ν₀) unify into a single fundamental acceleration scale: A₀ = (c₁ + c₃) ν₀² c² / L₀ = 2.42 × 10⁻¹⁰ m/s². This unified parameter reproduces the full 5-parameter theory identically for all 171 galaxies, demonstrating that FST is fundamentally a one-parameter theory. Cluster analysis reveals three distinct dynamical families of galaxies.
Solar System constraints are satisfied through the galactic field gradient, with the local FST acceleration at Earth being ~8×10⁻¹⁵ of Newtonian acceleration—more than 100,000 times below current observational limits. Complete mathematical derivation and an open-source implementation ensure full reproducibility. Extension to cosmological scales is planned for future work.
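The transition scale quoted above follows directly from the numbers in the abstract; a quick consistency check (values taken from the abstract, kpc-to-pc conversion assumed):

```python
import math

BETA_EFF = 2.0e7   # beta_eff = lambda * nu0^2 / (6 c1), value from the abstract
L0_KPC = 10.0      # characteristic length L0 = 10 kpc, from the abstract

xi_c = math.sqrt(2.0 / BETA_EFF)   # transition scale xi_c = sqrt(2 / beta_eff)
r_c_pc = xi_c * L0_KPC * 1000.0    # r_c = xi_c * L0, converted kpc -> pc

print(f"xi_c = {xi_c:.3e}")        # 3.162e-04
print(f"r_c  = {r_c_pc:.2f} pc")   # 3.16 pc
```

Both values reproduce the ξ_c and r_c quoted in the abstract.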

Article
Medicine and Pharmacology
Otolaryngology

Meghna Kumar

,

Srinjeeta Garg

,

Zikki Hasan Fatima

,

Gaurav Kumar

,

Burhanuddin Qayyumi

,

Vanita Noronha

,

Kumar Prabhash

,

Pankaj Chaturvedi

Abstract: Background: Neo-adjuvant chemotherapy (NACT) has shown promise in reducing tumor size and in rendering oncologically safe resections in borderline resectable head and neck cancers. In patients with squamous cell carcinoma (SCC) of the tongue, NACT may facilitate less extensive surgeries and preserve critical structures, one such being the hypoglossal nerve. Methods: A retrospective audit was conducted of patients with tongue SCC who underwent NACT followed by surgery at our centre between May 2022 and December 2024. Outcomes of interest included tumor response, hypoglossal nerve preservation, and pathological response. Results: 31 patients (median age 48 years) requiring potential bilateral hypoglossal nerve sacrifice were included in the analysis. All patients presented with advanced stage (Stage IVa/IVb/III) and 80.6% had clinical nodal involvement. Following NACT, 45.2% (14/31) of these patients showed sufficient tumor regression to allow for unilateral hypoglossal nerve preservation. The most common chemotherapy regimen was DCF, with 83.9% of patients experiencing no grade III/IV toxicities. Post-NACT histopathology showed that 32.3% of patients had no residual tumor, and 93.6% achieved uninvolved margins. 32.3% of the patients achieved complete regression (Mandard Grade I). Conclusion: Functional preservation of at least one hypoglossal nerve in advanced OSCC of the tongue is feasible. In this study, NACT achieved hypoglossal nerve preservation in nearly half of the patients with midline-crossing tumors. The favourable histopathological outcomes and manageable toxicity profiles suggest that NACT may be a viable approach for improving functional outcomes in locally advanced squamous cell carcinoma of the tongue.

Communication
Chemistry and Materials Science
Organic Chemistry

Yu-En Huang

,

Shigekazu Ito

Abstract: Fluoroalkyl-substituted organoboron compounds are valuable building blocks for organic synthesis and for the development of functional molecules in medicinal chemistry, agrochemicals, and materials science. Building on our previous work on difluoromethyl-substituted borates, we report the synthesis and structural characterization of trifluoromethylated borates, 4,4,5,5-tetramethyl-2-aryl-2-(trifluoromethyl)-1,3,2-dioxaborolan-2-uide salts ([pinB(Aryl)CF3]). Treatment of pinB–Aryl boronates (pinB = 4,4,5,5-tetramethyl-1,3,2-dioxaborolane) with trimethyl(trifluoromethyl)silane (Ruppert–Prakash reagent) in the presence of potassium tert-butoxide and 18-crown-6 (18-cr-6) afforded the corresponding trifluoromethylated borates as isolable crystalline compounds. Compared with the related difluoromethylated borates, the CF3 substituent increases the tendency of [pinB(Aryl)CF3] to exhibit hygroscopic behavior, as supported by a hydrated crystal structure and the formation of a hygroscopic product. The isolable trifluoromethylborates can serve as reservoirs of electrophilic trifluoromethyl radicals upon oxidation.

Article
Computer Science and Mathematics
Algebra and Number Theory

Ugur Duran

Abstract: In recent years, Duran (Fundam. J. Math. Appl., 8(2) (2025), 55-64) introduced the central Bell-based type 2 Bernoulli polynomials of order \( \beta \) given by \( \left( \frac{t}{e^{\frac{t}{2}}-e^{-\frac{t}{2}}}\right)^{\beta }e^{xt+z\left( e^{\frac{t}{2}}-e^{-\frac{t}{2}}\right) }=\sum_{m=0}^{\infty }{}_{CB}b_{m}^{\left( \beta \right) }\left( x;z\right) \frac{t^{m}}{m!} \) \( \left( \left\vert t\right\vert <2\pi \right) \) and derived many formulas and relations, covering several symmetric properties, derivative properties, summation formulas, and addition formulas. In this paper, we aim to derive some new properties of the central Bell-based type 2 Bernoulli polynomials of order \( \beta \). We first investigate some new properties involving central Bell polynomials, classical Bernoulli polynomials and numbers, and central factorial numbers of the second kind. Moreover, we show that the central Bell-based type 2 Bernoulli polynomials of order \( \beta \) are solutions of some higher-order differential equations. Further, we give a determinantal representation for the central Bell-based type 2 Bernoulli polynomials of order \( \beta \).
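For the specialization \( \beta = 1 \), \( x = z = 0 \), the generating function above reduces to \( t/(e^{t/2}-e^{-t/2}) \), whose Taylor coefficients give the type 2 Bernoulli numbers. A short exact-arithmetic sketch (not from the paper) recovers the first few by power-series inversion:

```python
from fractions import Fraction
from math import factorial

N = 8  # truncation order

# Series of 2*sinh(t/2) = e^{t/2} - e^{-t/2}: only odd powers survive,
# coefficient of t^k is 2*(1/2)^k / k! for odd k.
sinh2 = [Fraction(0)] * (N + 2)
for k in range(1, N + 2, 2):
    sinh2[k] = Fraction(2, 2**k) / factorial(k)

# Write 2*sinh(t/2) = t * g(t) with g(0) = 1, then invert g(t) by the
# standard power-series recurrence to obtain t / (2*sinh(t/2)).
g = [sinh2[k + 1] for k in range(N + 1)]       # g[k] = coeff of t^k
inv = [Fraction(0)] * (N + 1)
inv[0] = Fraction(1)
for k in range(1, N + 1):
    inv[k] = -sum(g[j] * inv[k - j] for j in range(1, k + 1))

# Type 2 Bernoulli numbers b_m = m! * [t^m] (t / (e^{t/2} - e^{-t/2}))
b = [inv[m] * factorial(m) for m in range(N + 1)]
print(b[0], b[2], b[4])   # 1 -1/12 7/240
```

The values agree with the known identity \( b_m = B_m(1/2) \) (classical Bernoulli polynomials evaluated at \( 1/2 \)): \( B_2(1/2) = -1/12 \), \( B_4(1/2) = 7/240 \).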

Article
Computer Science and Mathematics
Computer Vision and Graphics

Sergio Villanueva

,

Emilio Soria-Olivas

,

Manuel Sánchez-Montañés

Abstract: Automotive inspection in real production lines requires robust detection of rare and diverse defects. Fully supervised methods are often unfeasible because defective samples are scarce and heterogeneous. This work benchmarks recent unsupervised anomaly detection (UAD) methods on AutoVI, a real industrial dataset covering six automotive inspection tasks with challenging lighting, cluttered backgrounds, and multiple viewpoints. We establish RGB and pseudo-depth baselines for seven UAD models under a unified training and evaluation protocol, training exclusively on defect-free samples with z-score calibration for fair comparison. On top of these baselines we study late-fusion ensembles that combine complementary detectors within RGB and across modalities, at both image-score and pixel-map level, reporting AUROC, AP, TPR@TNR, and pixel-level sPRO/AUsPRO at 5% false positive rate. The main finding is that RGB-only late-fusion ensembles consistently improve pixel-level localization, often recovering defect coverage where all individual models fail. Combining RGB with monocular pseudo-depth through the same scheme, by contrast, does not yield systematic gains and is highly sensitive to the quality of the estimated depth channel. These results, validated with statistical significance testing across three random seeds, provide practical guidance for composing UAD pipelines in automotive inspection.
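The z-score calibration and late fusion described above can be sketched as follows, assuming per-detector mean/std statistics computed on defect-free calibration scores and simple averaging as the fusion rule (the paper's exact scheme may differ):

```python
# Late-fusion sketch: standardize each detector's anomaly scores with
# statistics from defect-free calibration data, then average the
# calibrated scores so detectors on different scales are comparable.
def zscore_fuse(score_lists, calib_lists):
    fused = [0.0] * len(score_lists[0])
    for scores, calib in zip(score_lists, calib_lists):
        mu = sum(calib) / len(calib)
        sd = (sum((c - mu) ** 2 for c in calib) / len(calib)) ** 0.5 or 1.0
        for i, s in enumerate(scores):
            fused[i] += (s - mu) / sd
    return [f / len(score_lists) for f in fused]

# Two detectors score three test images; their raw scales differ wildly,
# but after calibration the anomalous image (index 1) dominates both.
fused = zscore_fuse([[0.1, 0.9, 0.2], [10.0, 95.0, 12.0]],
                    [[0.1, 0.2, 0.15, 0.12], [10.0, 14.0, 11.0, 12.0]])
assert fused[1] == max(fused)   # the anomalous image ranks highest
```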

Article
Public Health and Healthcare
Public Health and Health Services

Taiwo Opeyemi Aremu

,

Olihe Nnenna Okoro

,

Caroline Gaither

,

S. Bruce Benson

,

Drissa M. Toure

,

Jon C Schommer

Abstract: Background: Local vaccine manufacturing is being pursued across Africa to improve pandemic preparedness and reduce reliance on imports. In Nigeria, where COVID-19 vaccines were largely imported, willingness to accept locally produced vaccines is important for sustainable domestic production. The objective of this study was to estimate willingness to accept a locally manufactured COVID-19 vaccine among adults in Lagos, Nigeria, and to identify demographic predictors. Methods: We conducted a cross-sectional survey of adults in Lagos State using a questionnaire administered in four open-air markets. The primary outcome was willingness to accept a COVID-19 vaccine manufactured in Nigeria (yes/no). We summarized respondent characteristics, tested bivariate associations with chi-square, and estimated adjusted odds ratios (AORs) using multivariable logistic regression. Model calibration and discrimination were assessed using Hosmer-Lemeshow testing and the area under the ROC curve (AUC). Results: Of 388 consenting respondents, 335 (86.3%) provided complete data; 75.8% reported willingness to accept a Nigerian-made COVID-19 vaccine. Willingness differed by age group (p=0.0028; trend p=0.0002) and religion (p=0.0403). In adjusted models, respondents aged 45-54 years (AOR 6.54; 95% CI 1.73-24.79) and 55-64 years (AOR 4.97; 95% CI 1.05-23.55) had higher odds of acceptance than those aged 18-24 years. Christian affiliation was associated with lower odds than Muslim affiliation (AOR 0.41; 95% CI 0.20-0.83). Discrimination was acceptable (AUC 0.75; 95% CI 0.69-0.80). Conclusions: Most respondents were willing to accept a Nigerian-made COVID-19 vaccine, suggesting demand-side readiness. Confidence-building strategies tailored to younger adults and implemented with faith-based and community institutions may support uptake of locally produced vaccines.

Article
Business, Economics and Management
Economics

Hu Xuhua

,

Ernest Kay Bakpa

,

Josephine Adwoa Yeboah

Abstract: This paper examines the dynamic relationship between innovation, total factor productivity (TFP), and economic growth in Ghana using annual data for the period 1965–2021. Although Ghana has recorded relatively strong economic growth, concerns remain regarding the sustainability of this performance in the absence of consistent productivity improvements. The study combines growth accounting techniques with time-series econometric methods, including the autoregressive distributed lag–unrestricted error correction model (ARDL–UECM), vector error correction modelling (VECM), Granger causality tests, and two-stage least squares estimation. The results provide robust evidence of a stable long-run equilibrium relationship among innovation, productivity, and output. Innovation exerts a positive and statistically significant effect on economic growth, primarily through productivity-enhancing channels, while TFP emerges as the dominant long-run driver of growth. Short-run dynamics reveal feedback effects between innovation, productivity, and economic growth. However, growth accounting results indicate substantial volatility in TFP growth, suggesting that Ghana’s expansion has been driven largely by factor accumulation rather than sustained efficiency gains. The findings offer policy-relevant insights for productivity-centred growth strategies in Sub-Saharan Africa.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Ting Liu

Abstract: Forecasting market volatility matters for risk management, portfolio allocation, and financial monitoring [1]. This paper studies whether interpretable machine-learning methods can improve forecasts of future realized volatility under a realistic walk-forward design. Using SPY as the benchmark asset and cross-asset predictors from QQQ, IWM, TLT, GLD, and VXX, I compare HAR, GARCH(1,1), GJR-GARCH(1,1), Elastic Net, Random Forest, and XGBoost at the 5-, 10-, and 21-day horizons. Forecasts are evaluated with RMSE, MAE, QLIKE, and prediction–realization correlation under rolling re-estimation with training-sample preprocessing only. Across all three horizons, the tree-based models outperform the linear and GARCH benchmarks on the main loss metrics, with Random Forest ranking first overall and XGBoost remaining close behind. Feature-importance and diagnostic results show that these gains are tied to economically plausible predictors, especially measures of volatility persistence, Treasury-market conditions, and market stress. A stylized volatility-targeting exercise suggests that the forecasting gains also have practical value, although the best statistical model is not always the best under every economic criterion. Performance is less uniform during high-volatility episodes, and the largest realized-volatility spikes remain difficult to predict. Overall, the results suggest that interpretable machine-learning methods can improve volatility forecasting in a disciplined out-of-sample setting.
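Of the loss functions listed, QLIKE is the least standard outside the volatility literature; one common parameterization (conventions vary, and the paper's may differ) is sketched below:

```python
import math

def qlike(forecast_var, realized_var):
    # QLIKE loss for a variance forecast f against realized variance r:
    # L(f, r) = log(f) + r / f  (one common convention; others rescale it)
    return math.log(forecast_var) + realized_var / forecast_var

# The loss is minimized when the forecast equals the realized variance:
assert qlike(2.0, 2.0) < qlike(1.0, 2.0)   # under-forecasting is penalized
assert qlike(2.0, 2.0) < qlike(4.0, 2.0)   # over-forecasting is penalized
```

Unlike RMSE on variances, QLIKE penalizes under-forecasts asymmetrically, which is why it is favoured for risk-management applications like the volatility-targeting exercise in the abstract.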

Review
Engineering
Automotive Engineering

Krisztian Horvath

Abstract: Considerable progress has been made in predicting nominal NVH behavior in electric drivetrains, but the acoustic scatter observed across manufactured units remains insufficiently understood. In practice, nominally identical drive units may still exhibit noticeably different tonal behavior because small deviations in gears, shafts, bearings, fits, centering features, or assembly phase modify the excitation, transfer, and radiation mechanisms of the system. This review examines how manufacturing and assembly variability influences NVH performance in electric drive units and e-axles, with particular focus on the rotor–shaft–gear–bearing–housing system. Unlike broader EV NVH reviews, the present work focuses specifically on variability-induced acoustic scatter and its propagation along the drivetrain NVH generation and transmission path. To support transparency and consistency, the literature search and selection process followed a structured, PRISMA-inspired approach across Scopus, Web of Science, Google Scholar, and SAE Mobilus for the 2015–2026 period. From 387 identified records, 50 studies were retained after duplicate removal, screening, and full-text assessment. The selected literature was synthesized into eight thematic categories: imbalance; run-out and eccentricity; bearing clearance and preload; spline and pilot centering; thermal effects; phase indexing; transmission error and sidebands; and end-of-line NVH diagnostics. The reviewed literature shows that manufacturing- and assembly-induced deviations can significantly alter transmission error, sideband structure, shaft-order content, and final tonal response, even when individual components remain within nominal tolerance limits. 
Beyond synthesizing the evidence base, the review proposes a general simulation methodology for variability-aware NVH prediction based on explicit deviation parameterization, hierarchical model fidelity, intermediate excitation metrics, thermal-state awareness, and closer integration with production and measurement data. Overall, the findings support a shift from nominal NVH assessment toward robustness-oriented, production-representative prediction of acoustic scatter, and establish a structured methodology for variability-aware NVH engineering in electric drivetrains.

Article
Computer Science and Mathematics
Mathematics

Sukran Uygun

Abstract: In this study, we propose a matrix-based transformation framework constructed from special integer sequences, including Fibonacci, Lucas, Pell, and Jacobsthal numbers. The approach is based on block-wise 2×2 matrix transformations that preserve key structural invariants, particularly the determinant, ensuring explicit invertibility of the scheme. By combining multiple recurrence-based matrices within a unified framework, the method provides flexible forward and inverse transformations without increasing matrix dimensions or introducing additional redundancy. The determinant-preserving property enables intrinsic consistency checking and supports an analytic error-detection and correction mechanism at the block level. Several illustrative examples are presented to demonstrate the applicability of the proposed scheme and its computational characteristics. The framework is purely algebraic and can be extended to other matrix families generated by linear recurrence relations, making it suitable for a wide range of applications in applied and computational mathematics.
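As a minimal illustration of the determinant-preserving idea, assuming a power of the Fibonacci Q-matrix as the 2×2 block key (the paper's exact construction may differ):

```python
# Sketch of a determinant-preserving 2x2 block transform built from the
# Fibonacci Q-matrix; det(Q^n) = (-1)^n, so the key is unimodular and
# exactly invertible over the integers.
def mat_mul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def det(A):
    return A[0][0]*A[1][1] - A[0][1]*A[1][0]

def q_power(n):
    # Q^n = [[F_{n+1}, F_n], [F_n, F_{n-1}]] for Q = [[1, 1], [1, 0]]
    Q, M = [[1, 1], [1, 0]], [[1, 0], [0, 1]]
    for _ in range(n):
        M = mat_mul(M, Q)
    return M

def inverse_unimodular(A):
    # Exact integer inverse when det(A) = +-1: A^{-1} = det(A) * adj(A)
    d = det(A)
    return [[A[1][1]*d, -A[0][1]*d], [-A[1][0]*d, A[0][0]*d]]

K = q_power(5)                                      # key matrix, det = -1
block = [[7, 2], [4, 9]]                            # a 2x2 data block
coded = mat_mul(K, block)                           # forward transform
restored = mat_mul(inverse_unimodular(K), coded)    # exact inversion
assert restored == block and abs(det(K)) == 1
```

The determinant check (|det| = 1) is the block-level consistency test the abstract alludes to: any corruption of a transformed block that changes the implied determinant is detectable.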

Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Frank Vega

Abstract: We present the \textbf{Hvala} algorithm, a linear-time ensemble approximation method for the Minimum Vertex Cover problem. Hvala combines three complementary heuristics --- a maximal-matching 2-approximation, a linear-time maximum-degree greedy implemented via a bucket-queue, and the degree-1 weighted-reduction ``Hallelujah heuristic'' studied in a companion work --- with a redundant-vertex pruning post-processing step, and returns the smallest of the four resulting covers.\\ \textbf{Theoretical guarantees.} We prove rigorously that Hvala achieves worst-case approximation ratio $\rho\le 2$ for every finite, simple, undirected graph: the classical maximal-matching component alone already yields this bound, and the pruning step is shown to preserve cover validity while never increasing cover size. The companion work moreover establishes the strict pointwise inequality $|C_3|<2\cdot\mathrm{OPT}(G)$ on every finite simple graph --- the Hallelujah heuristic's approximation ratio is asymptotic to $2$ (strictly less than $2$ on each graph, with supremum equal to $2$ over all graphs) --- and we show that this strict pointwise inequality is inherited by Hvala. Hvala runs in $\mathcal{O}(n+m)$ time and $\mathcal{O}(n+m)$ space.\\ \textbf{Empirical performance.} We validate Hvala on two independent experimental studies totalling $239$ instances. The first uses $109$ vertex-cover instances of the public NPBench collection ($41$ FRB hard instances and $68$ DIMACS clique-complement graphs, both with known optima), completed in $126.97$ seconds: Hvala attains mean approximation ratio $1.021$, with maximum $1.192$ on a single Sanchis adversarial instance. 
The second evaluates Hvala on $130$ real-world large graphs from the Network Data Repository (Cai's undirected simple graph collection), reaching up to $3$ million vertices and $15$ million edges, completed in approximately $95.5$ minutes of cumulative solve time; on the $51$ instances with published best-known cover sizes, mean ratio is $1.006$ and maximum $1.036$.\\ \textbf{Prospects for a $\sqrt{2}-\epsilon$ bound.} Across the combined $160$ instances with known optima, every approximation ratio lies below $1.414$; $93.8\%$ lie below $1.05$ and $96.9\%$ below $1.10$. The natural open problem we propose as the continuation of this work is whether there exists a \emph{fixed} constant $\epsilon>0$ such that Hvala achieves uniform ratio $\sqrt{2}-\epsilon$ --- either on all graphs (which, by SETH-based hardness, would imply $\mathrm{P}=\mathrm{NP}$) or, more realistically, on broad but restricted graph classes (bounded degree, bounded clique number, bounded treewidth, or structural families such as power-law and expander-like graphs). We do not prove such a bound here and do not claim one holds on all graphs; what we claim is that the combination of rigorous $\le 2$ guarantee, pointwise strict $<2$ inequality, linear time, and observed ratios uniformly below $1.414$ makes Hvala a plausible vehicle for such a refined analysis. The algorithm is publicly available via PyPI as the \texttt{hvala} package.
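The maximal-matching component that secures the ρ ≤ 2 guarantee can be sketched in a few lines (a textbook version, not the hvala package itself):

```python
# Classical 2-approximation for Minimum Vertex Cover: greedily build a
# maximal matching and take both endpoints of every matched edge. Any
# cover must contain at least one endpoint per matched edge, so the
# result is at most twice the optimum.
def matching_cover(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge not yet covered
            cover.add(u)
            cover.add(v)
    return cover

# Star K_{1,3}: the optimum cover is {0} (size 1).
edges = [(0, 1), (0, 2), (0, 3)]
c = matching_cover(edges)
assert all(u in c or v in c for u, v in edges)   # valid cover
assert len(c) <= 2 * 1                           # within 2x OPT
```

A single pass over the edge list runs in O(n + m) time, matching the complexity claimed for the full ensemble.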

Article
Public Health and Healthcare
Nursing

Elena Violeta Iborra-Palau

,

Elena García-Redondo

,

Carlos Blasco-García

,

Raquel Alabau-Dasi

Abstract: Background: Psoriasis is a chronic, systemic and immune-mediated disease that affects over 60 million people worldwide. Although phototherapy is a safe and effective treatment, its demanding thrice-weekly clinical regimen imposes a significant treatment burden that can disrupt the patient’s life narratives. Despite its clinical importance, little is known about how patients navigate the logistical and emotional complexities of this therapy within specialised nursing-led units. Objective: To explore the lived experiences, disease management strategies, and healthcare expectations of patients with psoriasis undergoing phototherapy. Methods: A descriptive phenomenological design was adopted. Between 2019 and 2022, purposive and exhaustive sampling was used to recruit 72 participants receiving treatment at a specialised nursing-led phototherapy unit in a tertiary hospital in Spain. Data were collected through semi-structured interviews and analysed using inductive thematic content analysis supported by NVivo 12 Pro. 
Results: Eleven subtopics emerged within four main thematic areas: (1) Knowledge about the disease and treatment options: A striking dichotomy exists between a well-recognised psychological burden and a persistent lack of awareness regarding systemic physical comorbidities and biological therapies; (2) Triggers of the disease and flare-ups: Psychological stress was identified as the primary driver of disease activity, overshadowing secondary external factors; (3) Functional and logistical stressors: The rigorous frequency of sessions creates an "adherence-stress cycle," where the effort to maintain therapeutic continuity paradoxically generates the stress that exacerbates clinical flares, leading to profound biographical disruption and a perceived incapacity to fulfil professional and family roles; (4) Healthcare expectations and systemic barriers: Participants identified diagnostic delays and inequities in the financing of supportive care, manifesting as a collective demand for a permanent professional nursing referent to act as an anchor for integrated care. Conclusions: Phototherapy functions as a "double-edged sword" where clinical efficacy frequently conflicts with the logistical rigour of the treatment. Clinical skin clearance is insufficient if the biographical and systemic gaps remain unaddressed. This investigation advocates for a paradigm shift toward integrated care models where specialised dermatology nurses provide the necessary clinical navigation to support patients "beyond the surface" of the disease.

Review
Biology and Life Sciences
Aging

Sae Sanaka

,

Asumi Kubo

,

Sara Kamiya

,

Kenyu Nakamura

,

Tetsuya Sasaki

Abstract: Interleukin-17A (IL-17A) is a proinflammatory cytokine that plays a crucial role in immune responses and tissue homeostasis. The expression of IL-17A is strictly regulated by transcription factors including RORγt and is mainly produced by Th17 cells, γδT cells, and innate lymphoid cells. IL-17A signals through a heterodimeric receptor complex consisting of IL-17RA and IL-17RC, leading to the activation of NF-κB and MAPK pathways. Recent studies have highlighted its functions in the central nervous system, with reported associations with multiple sclerosis and autism spectrum disorder. Furthermore, the development of IL-17A inhibitors has progressed significantly, showing high therapeutic efficacy particularly in autoimmune diseases. This review provides an overview of current knowledge regarding IL-17A, from its molecular characteristics to clinical applications.

Article
Engineering
Automotive Engineering

Zoltán Rózsás

,

István Lakatos

Abstract: Pedestrian safety at urban intersections requires risk-aware mechanisms that extend beyond binary collision detection toward comparative prioritization among multiple agents. This study introduces the Intelligent Pedestrian Model (IPM), a reference-normalized scalar framework that represents pedestrian risk as a function of trajectory, contextual, infrastructural, and behavioral factors, decomposed into Exposure and Severity components. Building on IPM, the Safety-Prioritized Trajectory Model (SPTM) operationalizes the Exposure component using an observation-only, leakage-free kinematic proxy embedded into a cost-aware negative log-likelihood objective. Evaluation on the ETH/UCY benchmark under a strictly inductive protocol shows that moderate prioritization (β ≈ 1.0) improves best-of-K multimodal performance (ALL FDE@K: 0.979 → 0.970 m) while maintaining mean displacement accuracy within seed-level variability. The results indicate that Exposure-based weighting does not act as a global accuracy enhancer but redistributes predictive capacity toward safety-relevant motion regimes. Validation is limited to a single benchmark fold; cross-fold generalization and full IPM instantiation remain future work.
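The FDE@K metric quoted above is conventionally computed as a best-of-K endpoint error; a minimal sketch under that standard multimodal convention (not the authors' code):

```python
import math

def fde_at_k(samples, ground_truth):
    # samples: per-agent list of K predicted endpoints (x, y)
    # ground_truth: per-agent true endpoint (x, y)
    # For each agent take the minimum endpoint error over the K samples,
    # then average across agents (best-of-K convention).
    errs = []
    for preds, (gx, gy) in zip(samples, ground_truth):
        errs.append(min(math.hypot(px - gx, py - gy) for px, py in preds))
    return sum(errs) / len(errs)

# Two agents, K = 3 sampled endpoints each; each agent's best sample
# lands 1.0 m from the true endpoint, so FDE@K = 1.0.
samples = [[(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)],
           [(5.0, 5.0), (4.0, 4.0), (6.0, 5.0)]]
gt = [(1.0, 0.0), (5.0, 4.0)]
print(fde_at_k(samples, gt))   # 1.0
```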

Article
Engineering
Mining and Mineral Processing

Thomas Beingessner

,

Davide Elmo

Abstract: Progressive slope failures in open pit mining are characterized by accelerating deformations that can be monitored and potentially forecast. While current monitoring practice emphasizes velocity-based parameters and the inverse velocity method for failure prediction, the role of acceleration in understanding failure mechanisms and improving early warning systems remains underexplored. This paper presents a conceptual and analytical framework for characterizing acceleration in progressive slope failures. We introduce the concept of slope damage as a cumulative measure of positive accelerations over time, and demonstrate its utility in identifying the Onset of Acceleration (OOA), defined as the critical transition from regressive to progressive failure. We further examine the geotechnical conditions necessary for the inverse velocity method to be valid, proposing that a fully or nearly fully mobilized failure surface is required for sustained acceleration. The distinction between hazard-relevant velocity exceedance and failure-indicative progressive acceleration is discussed in the context of Trigger Action Response Plan (TARP) frameworks. This work contributes to the fundamental understanding of progressive failure mechanisms and provides practical guidance for acceleration-based slope monitoring.
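The inverse velocity method examined in the abstract is conventionally a linear extrapolation of 1/v to zero (Fukuzono's model); a textbook sketch of that extrapolation, not the authors' procedure:

```python
def inverse_velocity_forecast(times, velocities):
    # Under the classical model, 1/v decays linearly toward zero as the
    # slope approaches failure; fit a least-squares line to 1/v and
    # return the time at which the fitted line crosses zero.
    inv = [1.0 / v for v in velocities]
    n = len(times)
    tm = sum(times) / n
    im = sum(inv) / n
    slope = (sum((t - tm) * (i - im) for t, i in zip(times, inv))
             / sum((t - tm) ** 2 for t in times))
    intercept = im - slope * tm
    return -intercept / slope        # fitted 1/v hits zero here

# Synthetic accelerating displacement record: v = 1/(10 - t), so the
# model's failure time is t = 10.
times = [0.0, 2.0, 4.0, 6.0, 8.0]
velocities = [1.0 / (10.0 - t) for t in times]
print(round(inverse_velocity_forecast(times, velocities), 6))  # ~10.0
```

As the abstract argues, this extrapolation is only meaningful once the failure surface is fully or nearly fully mobilized and sustained acceleration has begun, i.e. after the Onset of Acceleration.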

Hypothesis
Social Sciences
Psychology

Marcelo R. S. Briones

Abstract: Why does research on Neanderthals attract public attention far beyond its immediate scientific relevance? Such fascination reflects not merely intellectual curiosity but the activation of deep symbolic structures, what Carl Gustav Jung termed the collective unconscious. Neanderthals occupy a psychologically distinctive position as an "incorporated other": an extinct human lineage that remains genetically present in the genomes of non-African modern humans, collapsing intuitive boundaries between self and other, past and present, familiarity and extinction. This symbolic ambiguity is intensified by ancient pathogen evidence and the largely genomic but morphologically invisible presence of Denisovans. Integrating perspectives from evolutionary biology, ancient genomics, paleoanthropology, and analytical psychology, I address a question Jung did not explicitly pose: when, along the human evolutionary lineage, did the collective unconscious originate? I argue that this structure did not emerge suddenly. Homo erectus established the cognitive floor, providing basic universal schemas of fear, group cohesion, and hierarchy, without strong evidence of symbolic elaboration. Homo heidelbergensis, the common ancestor of both Neanderthals and Homo sapiens, is the strongest candidate for the emergence of proto-archetypal structures, given its enlarged brain, complex social behavior, and early funerary practice. The symbolic system was operational in Neanderthals and archaic Homo sapiens and became fully and unambiguously visible only with the Upper Paleolithic explosion approximately 40,000 to 50,000 years ago. Neanderthals are therefore not merely objects of curiosity; they are co-inheritors of the same deep symbolic architecture still operating in every modern mind that encounters them.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Nursultan Kuldeyev

,

Orken Mamyrbayev

,

Ainur Akhmediyarova

,

Assel Yerzhan

Abstract: Identifying insider threats in modern enterprise environments presents a unique cybersecurity challenge: because malicious activity often appears to be legitimate user activity, the two are difficult to distinguish. This study presents an innovative approach to insider threat detection by analyzing enterprise activity logs for continuous authentication along with behavioural biometrics. Behavioural patterns, such as logins, file accesses, network interactions, and emails, are analyzed to determine abnormal user behaviours. The proposed system utilizes a hybrid deep learning architecture that includes a Long Short-Term Memory (LSTM) network and an autoencoder model to capture temporal dependence in a user's behaviour and to identify anomalies through reconstruction error analysis. The LSTM network captures the user's sequential activity, and the autoencoder measures deviation from the user's typical behavioural profile. The outputs of both models are aggregated using a unified behavioural risk scoring mechanism for continuous authentication and ongoing assessment of insider threats. Experimental results on the Insider Threat Dataset for Corporate Environments demonstrate that the proposed approach is effective in classifying normal versus malicious user behaviours. The model achieves an accuracy of 97.65%, a precision of 96.35%, a recall of 99.05%, an F1-score of 97.68%, and a Receiver Operating Characteristic - Area Under Curve (ROC-AUC) score of 99.20%, indicating a high level of detection capability with very few false positives. The findings support the developed model as a viable solution that integrates behavioural modelling and anomaly detection.

Article
Social Sciences
Psychology

Junjie Wu

,

Ruoling Hang

,

Pingping Xin

,

Guoli Yan

,

Chanyuan Gu

,

Luyao Chen

Abstract: Proficient second language (L2) reading relies on complex neurocognitive processes. Neuroimaging studies have identified key brain regions recruited during L2 reading, including the left inferior parietal lobule (LIPL) and the calcarine cortex (CAL). The LIPL has been suggested to be involved in phonological decoding during L2 reading, whereas the CAL has been implicated in early-stage visual processing. However, given the correlational nature of neuroimaging techniques, it remains unclear whether these regions play causal roles in L2 reading or are merely epiphenomenal. To address this issue, the present study used transcranial magnetic stimulation (TMS) to modulate neural activity in these regions and eye-tracking technology to assess subsequent reading performance in Chinese-English bilinguals. Specifically, ninety-seven participants were randomly assigned to one of three offline TMS conditions: LIPL, CAL, or vertex (as a control site) stimulation, after which they performed a natural sentence reading task in English. The results showed that, compared to the control condition, TMS over the LIPL significantly reduced first fixation duration, whereas no significant effects emerged on gaze duration, regression path reading time, or total reading time. TMS over the CAL produced no significant effects on any eye movement measures. These findings suggest that the LIPL plays a causal role in L2 reading for early-stage lexical processing through phonological decoding. Overall, this study is the first to employ TMS and eye-tracking to investigate the neural mechanisms underlying natural L2 reading.

Article
Environmental and Earth Sciences
Remote Sensing

Denilson Pereira Passo

,

Rodrigo Rodrigues Antunes

,

Edilson de Souza Bias

,

Gilson Alexandre Ostwald Pedro da Costa

,

Raul Queiroz Feitosa

,

Thanan Walesza Pequeno Rodrigues

,

Omar Roberto da Silveira

Abstract: Amaranthus palmeri has become established in agricultural areas of the Brazilian Cerrado, where it threatens soybean and cotton yields. Conventional field scouting cannot cover the large properties typical of the region fast enough to detect infestation foci before seed set. We tested an automated detection approach using aerial images from a remotely piloted aircraft (RPA), a DJI Matrice 300 RTK with a Zenmuse P1 camera (45 megapixels, MP), processed with YOLOv11x (You Only Look Once, version 11, extra-large). Four field campaigns in Sapezal, Mato Grosso, produced roughly 40,000 images over soybean and cotton at different weed growth stages; flight tests at 90, 20, and 12 m showed that 12–20 m altitude is needed to resolve individual plants. Two specialists annotated 382 Amaranthus individuals (A. palmeri and A. hybridus), split 70/30 for training and validation. Overall performance reached 84% precision, 84% recall, and 88% mean average precision at Intersection-over-Union 0.5 (mAP@0.5); for A. palmeri alone the figures were 95%, 93%, and 99%, respectively, with 98% accuracy in the confusion matrix and virtually no cross-class confusion. Within these limits, RPA imagery and deep learning can replace manual scouting for A. palmeri at the farm scale.

Review
Engineering
Civil Engineering

Samira Mirzavand

,

Joaquim Tinoco

,

José C. Matos

,

Joao Amado

Abstract: Earth-retaining structures are engineered systems designed to hold back soil, rock, or other materials and prevent landslides or the collapse of earth onto roadways. These structures are essential for stabilizing slopes and supporting excavations. They play an important role, especially in transportation infrastructure systems. One of the biggest challenges asset holders face is the maintenance of these important assets. Several techniques have been used recently to monitor the health level of these geotechnical systems. Although there are some works reviewing structural health monitoring techniques for civil structures, none of them specifically focuses on earth-retaining structures, which leads to an overall lack of knowledge in this field. Therefore, this survey aims to conduct a comprehensive review of health monitoring methods used for the assessment of these important geotechnical assets, to highlight the current state of research, identify gaps and limitations, and suggest future directions. In particular, this paper outlines the importance of timely maintenance for earth-retaining structures, presents the types of structural health monitoring methods used for predictive or preventive maintenance of these assets, and finally, identifies the challenges and new solutions to help with a more efficient assessment and monitoring of these assets.



Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2026 MDPI (Basel, Switzerland) unless otherwise stated