Article
Environmental and Earth Sciences
Environmental Science

Thiago José Lima Rosa, Jorge Luís de Oliveira Pinto Filho

Abstract: The retail fuel sector in urban areas presents significant environmental risks, requiring systematic sustainability assessments. This study aims to assess the socio-environmental performance of fuel stations in Mossoró/RN using the Corporate Sustainability Index (ISE). It is a descriptive and exploratory study with a quantitative approach, based on questionnaires administered to the managers of 12 licensed fuel stations. The ISE was calculated from 17 equally weighted environmental, legal, social, and operational indicators. The results indicated a predominance of high sustainable performance, with 91.7% of the enterprises presenting an ISE above 75%, associated with operational organization, preventive practices, and compliance with legal requirements. However, some actions remain primarily tied to regulatory compliance, revealing a predominantly reactive environmental management profile. The study provides insights for enhancing strategic environmental management in the urban context of the Brazilian Semi-Arid region.
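The equal-weighting scheme described above can be sketched in a few lines. This is an illustrative reconstruction only: the abstract does not publish the indicator names or scoring rubric, so the binary 0/1 scoring and the example values below are assumptions.

```python
# Illustrative sketch of an equal-weight sustainability index like the ISE
# described above: 17 indicators, each scored 0 (not met) or 1 (met),
# averaged and expressed as a percentage. Indicator values are hypothetical.

def ise_score(indicators):
    """Return the index as the percentage of satisfied indicators."""
    if len(indicators) != 17:
        raise ValueError("expected 17 equally weighted indicators")
    return 100.0 * sum(indicators) / len(indicators)

station = [1] * 15 + [0] * 2  # a station meeting 15 of 17 indicators
score = ise_score(station)
band = "high" if score > 75 else "moderate/low"
print(f"ISE = {score:.1f}% ({band} sustainable performance)")
```

With 15 of 17 indicators met, the station scores above the 75% threshold the study associates with high performance.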

Article
Medicine and Pharmacology
Neuroscience and Neurology

Arisa Tamura, Marie Noguchi, Naoko Nozawa, Emiko Suzuki, Kanae Ando

Abstract: Mitochondrial dysfunction is believed to contribute to the pathogenesis of tauopathies, a group of neurodegenerative diseases marked by abnormal accumulation of the microtubule-associated protein tau. The combination of 5-aminolevulinic acid (5-ALA) and sodium ferrous citrate (SFC) is known to improve mitochondrial function. Here, we report that 5-ALA combined with SFC (5-ALA/SFC) improves mitochondrial function and mitigates neurodegeneration in transgenic Drosophila expressing human tau. We found that tau reduces ATP levels, decreases mitochondrial distribution to neurites, and increases mitochondrial reactive oxygen species (ROS). Expression of oxidative phosphorylation (OXPHOS) genes was upregulated, and the activities of complexes I and IV were elevated. Feeding 5-ALA/SFC to tau flies lowered oxidative damage without correcting OXPHOS activities or mitochondrial distribution, suppressed pathological tau phosphorylation, and mitigated tau-induced neurodegeneration. Our results suggest that 5-ALA/SFC attenuates a neurodegenerative pathway involving tau, mitochondria, and ROS.

Article
Medicine and Pharmacology
Neuroscience and Neurology

Ahmed Negida, Moaz Elsayed Aboelmagd, Belal Mohamed Hamed, Yousef Hawas, Aya Dziri, Yasmin Negida, Brian D. Berman, Matthew J. Barrett

Abstract: Background: Parkinson’s disease (PD) is clinically heterogeneous, yet the genetic architecture underlying this heterogeneity remains incompletely understood. We examined the genetic correlates of four complementary PD subtyping frameworks: clinical motor subtype (tremor-dominant [TD] vs. postural instability/gait difficulty [PIGD]), alpha-synuclein seed amplification assay status (SAA+ vs. SAA−), pathological subtype (brain-first vs. body-first, based on REM sleep behavior disorder), and data-driven subtype (diffuse malignant [DM] vs. mild-motor predominant [MMP] vs. intermediate [IM]). Methods: We analyzed 1,597 PD patients from the Parkinson’s Progression Markers Initiative (PPMI) with genetic testing for seven PD-associated genes (LRRK2, GBA, SNCA, PRKN, PINK1, PARK7, VPS35), including specific variant resolution (LRRK2 G2019S, R1441G/C/H; GBA N409S, severe variants; SNCA A53T), and APOE genotyping (ε2/ε3/ε4 alleles). Genetic variant frequencies were compared across subtypes using chi-square or Fisher’s exact tests with Benjamini–Hochberg false discovery rate (FDR) correction. Effect sizes were quantified using Cramér’s V. Multivariable logistic regression (statsmodels) estimated adjusted odds ratios with Wald-based 95% confidence intervals. Results: Among 1,390 genotyped PD patients, LRRK2 carriers constituted 13.7% (190/1,390; 170 G2019S, 18 R1441G/C/H), GBA 8.6% (119/1,390; 96 N409S, 23 severe), and SNCA 2.0% (28/1,390; all A53T). APOE ε4 carriers comprised 23.4% (323/1,380). SAA-negative patients were markedly enriched for LRRK2 variants (37.1% vs. 10.2%, P = 3.7 × 10⁻¹⁹, q < 0.001, V = 0.25), driven by G2019S (28.5% vs. 9.6%, P = 4.9 × 10⁻¹¹, q < 0.001) and R1441G/C/H (7.9% vs. 0.5%, P = 2.7 × 10⁻¹², q < 0.001). Body-first PD was enriched for GBA carriers (12.3% vs. 6.7%, P = 0.004, q = 0.021) and depleted for LRRK2 (7.9% vs. 15.0%, P = 0.002, q = 0.013). The DM subtype carried the highest GBA frequency (14.0% vs. MMP 5.9%, P < 0.001, q = 0.003).
After FDR correction, 10 of 48 univariate tests remained significant. Clinical subtypes (TD vs. PIGD) showed only nominal LRRK2 differences that did not survive FDR correction. APOE genotype did not differ across any framework. Conclusions: PD subtypes defined by alpha-synuclein pathology (SAA), pathological onset pattern (brain-first/body-first), and data-driven classification (DM/MMP/IM) show distinct genetic profiles that survive multiple-comparison correction. LRRK2 variants are strongly associated with SAA negativity (V = 0.25); GBA variants are associated with body-first onset and the diffuse malignant subtype.
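The statistical pipeline named in the Methods (chi-square tests, Cramér's V effect sizes, Benjamini–Hochberg FDR correction via statsmodels) can be sketched as follows. The contingency tables here are invented for illustration and are not the PPMI data.

```python
# Sketch of the subtype-association testing described above (illustrative
# data, not the study cohort): chi-square test per 2x2 table, Cramér's V
# effect size, then Benjamini-Hochberg FDR correction across all tests.
import numpy as np
from scipy.stats import chi2_contingency
from statsmodels.stats.multitest import multipletests

def cramers_v(table):
    """Cramér's V for an r x c contingency table."""
    chi2, _, _, _ = chi2_contingency(table)
    n = table.sum()
    k = min(table.shape) - 1
    return float(np.sqrt(chi2 / (n * k)))

tables = [                              # rows: subtype A/B, cols: carrier/non-carrier
    np.array([[52, 88], [127, 1123]]),  # hypothetical "LRRK2 by SAA status" table
    np.array([[30, 214], [89, 1057]]),  # hypothetical "GBA by onset pattern" table
    np.array([[20, 300], [70, 1000]]),
]
pvals = [chi2_contingency(t)[1] for t in tables]
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for t, p, q in zip(tables, pvals, qvals):
    print(f"V = {cramers_v(t):.2f}, P = {p:.1e}, q = {q:.1e}")
```

The BH step never decreases a p-value, so each reported q is at least its raw P, matching the P/q pairs quoted in the Results.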

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Jiarui Qi, Haoyu Bian

Abstract: Large Reasoning Models (LRMs) face a fundamental challenge in balancing efficient "fast thinking" with accurate "slow thinking," often struggling to adaptively trigger deeper reasoning without incurring significant computational overhead. This paper introduces \( \textit{MetaThink (MT)} \), a novel inference-time adaptive refinement framework designed to imbue LRMs with conditional self-correction capabilities, without requiring any additional training. \( \textit{MetaThink} \) operates by an initial "fast thinking" phase, followed by a lightweight self-monitoring mechanism that assesses confidence through uncertainty markers. When low confidence or potential errors are detected, a refinement token triggers a targeted "slow thinking" phase, guided by domain-specific prompts. This allows the model to introspectively review and correct its reasoning, culminating in a more accurate final answer. Our comprehensive evaluation across diverse and challenging benchmarks—spanning mathematical reasoning, code generation, and scientific problem-solving tasks—demonstrates that \( \textit{MetaThink} \) consistently achieves substantial and robust improvements in Pass@1 accuracy. Crucially, these gains are realized while maintaining competitive or even improved inference efficiency, outperforming existing inference-time baselines. Our findings underscore that \( \textit{MetaThink} \) offers an effective, training-free approach to enhance the reliability and accuracy of LRMs in complex reasoning tasks by striking a superior balance between performance and efficiency.

Review
Medicine and Pharmacology
Clinical Medicine

Gustavo Lorenzo Moretta, Rosana Claudia Chaud Covarrubias

Abstract: Background: Biosimilars represent a paradigm shift in access to biological therapies, yet physician understanding of their pharmacological basis, regulatory framework, and clinical implications remains heterogeneous. This review addresses the knowledge gap from a pharmacological perspective relevant to internists and clinical specialists. Objective: To provide a comprehensive, evidence-based narrative review of biosimilars for clinical physicians, covering definitions, the comparability exercise, interchangeability, therapeutic applications, global development, pharmacovigilance considerations, and the Latin American regulatory landscape. Methods: A narrative review was conducted using PubMed, WHO technical reports, FDA and EMA regulatory documents, and IQVIA market data. Studies published between 2009 and 2025 were included, with emphasis on systematic reviews, meta-analyses, WHO guidelines, and regional regulatory analyses. Results: The totality-of-evidence approach demonstrates that biosimilars exhibit no clinically meaningful differences from reference products in efficacy, safety, or immunogenicity. Meta-analyses of switching studies (>10,800 patients) confirm comparable outcomes. The FDA (2024) eliminated switching study requirements for interchangeability designation, while the EMA declared universal interchangeability in 2022. The global market reached USD 30.3 billion in 2024. In Latin America, regulatory heterogeneity, limited technical capacity, and prescriber misconceptions remain barriers despite advancing frameworks in Brazil, Argentina, Mexico, and Colombia. Conclusions: Biosimilars are pharmacologically and clinically equivalent therapeutic alternatives supported by robust evidence. Clinical physicians should integrate biosimilars into their prescribing decisions with confidence, while advocating for strengthened pharmacovigilance systems and regulatory harmonization in their regions.

Article
Medicine and Pharmacology
Oncology and Oncogenics

Hüseyin Pülat, Oğuzhan Söyler, Ünal Öner, Deniz Öztaşan, Cüneyt Akyüz, Cemil Yuksel

Abstract: Objective: Soft tissue sarcomas (STS) are biologically heterogeneous malignancies with unpredictable clinical behavior. Although tumor size, histological grade, and surgical margin status remain the main determinants of prognosis, additional biomarkers that integrate tumor biology and host-related factors are needed. The hemoglobin × albumin × lymphocyte/platelet (HALP) score reflects systemic inflammation and nutritional status. This study aimed to evaluate the association between preoperative HALP score and oncological as well as surgical outcomes in patients undergoing curative resection for STS. Materials and Methods: A retrospective cohort study was conducted including 46 consecutive patients who underwent surgery for STS between 2017 and 2025. HALP scores were calculated using preoperative laboratory parameters, and patients were stratified into low and high HALP groups according to the cohort median (24.9). Overall survival (OS) and disease-free survival (DFS) were analyzed using the Kaplan–Meier method and Cox proportional hazards models. Surgical margin status and postoperative complications were also compared. Results: Patients with low HALP scores had significantly larger tumors, higher rates of non-R0 resection, and increased major complications (p<0.05). Recurrence and mortality were more frequent in the low HALP group. Kaplan–Meier analysis demonstrated significantly shorter OS (log-rank p=0.0034) and DFS (log-rank p=0.0318) in patients with low HALP scores. In univariate Cox analysis, HALP was significantly associated with survival outcomes; however, in multivariate analysis, histological grade and surgical margin status remained independent prognostic factors, while HALP lost independent significance. Conclusion: A low preoperative HALP score is associated with aggressive tumor characteristics, increased surgical morbidity, and poorer survival in STS patients. 
Although HALP did not retain independent significance in multivariable analysis, its strong association with tumor aggressiveness and survival suggests that it may reflect the systemic manifestation of high-risk tumor biology. As a simple and cost-effective biomarker derived from routine laboratory parameters, HALP may support preoperative risk stratification and help identify patients with biologically aggressive disease.
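The HALP score defined above is a simple ratio of routine laboratory values. A minimal sketch follows; unit conventions for HALP vary between studies, and the input values below are illustrative, not patient data.

```python
# Minimal sketch of the HALP score described above:
# HALP = hemoglobin x albumin x lymphocyte count / platelet count.
# The 24.9 cut-off is the cohort median reported in the study; the
# laboratory values below are hypothetical.

def halp_score(hemoglobin, albumin, lymphocytes, platelets):
    """Compute the HALP score from preoperative laboratory values."""
    return hemoglobin * albumin * lymphocytes / platelets

COHORT_MEDIAN = 24.9
score = halp_score(hemoglobin=130, albumin=40, lymphocytes=1.8, platelets=320)
group = "high" if score >= COHORT_MEDIAN else "low"
print(f"HALP = {score:.2f} -> {group}-HALP group")
```

Stratification at the cohort median, as in the study, splits patients into the low-HALP group (worse OS/DFS in the Kaplan–Meier analysis) and the high-HALP group.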

Review
Medicine and Pharmacology
Medicine and Pharmacology

Rediet Guta Mideksa, Alazar Amare Amdiyee, Alemayehu Godana Birhanu

Abstract: Antimicrobial resistance has emerged as a significant global challenge in combating bacterial diseases. Pseudomonas aeruginosa is a major opportunistic pathogen that causes acute, chronic, and nosocomial infections. The WHO has designated P. aeruginosa as a member of the ESKAPE group owing to its high rate of resistance to multiple existing treatments. The rapid rise of extensively drug-resistant (XDR), pan-drug-resistant (PDR), and multidrug-resistant (MDR) strains significantly increases morbidity and mortality. In response to the escalating challenge of antimicrobial resistance (AMR), phage therapy has emerged as a promising alternative to conventional antibiotics. Lytic phages are viruses that specifically infect and lyse bacterial cells, offering targeted antibacterial activity while minimizing disruption of the normal microbiota. Recent progress in the isolation of specific bacteriophages, optimized phage cocktail formulation, and combination therapy with antibiotics has demonstrated significant therapeutic potential in both laboratory and clinical studies. This review provides an overview of the current molecular mechanisms of antimicrobial resistance in P. aeruginosa and discusses the therapeutic potential of bacteriophages, highlighting their advantages, limitations, and future perspectives as an alternative therapy.

Review
Medicine and Pharmacology
Oncology and Oncogenics

Giovanni Tarantino, Vincenzo Citro, Ciro Imbimbo, Felice Crocetto

Abstract: Growing evidence suggests that insulin resistance (IR) might be a core, unifying mechanism linking various established risk factors for bladder cancer (BC). While factors like smoking, central obesity, sedentary lifestyle, and high-fat diets are known to increase BC risk, a common thread among them is their role in driving IR due to chronic hyperinsulinemia. Hyperinsulinemia promotes BC development in several ways. It acts as a potent growth factor, stimulating the proliferation and inhibiting the programmed cell death of malignant cells by activating the insulin/IGF signaling pathway. Furthermore, IR is closely associated with chronic low-grade inflammation and oxidative stress, both of which contribute to a pro-tumorigenic microenvironment. This convergence of growth-promoting and inflammatory signals highlights the central role of IR. While more research is needed to fully elucidate these complex interactions, the available data suggest that metabolic interventions aimed at improving insulin sensitivity could be a valuable, modifiable strategy for BC prevention.

Article
Social Sciences
Urban Studies and Planning

Reyhaneh Ahmadi, Kaveh Ghamisi

Abstract: Smart city governance increasingly relies on AI-enabled planning systems, digital twins, vulnerability scoring tools, and capital investment prioritization platforms to allocate climate-resilient housing and infrastructure investments. Yet existing smart-urbanism and adaptation frameworks under-specify how such systems should encode (i) well-being, (ii) equity, and (iii) climate uncertainty in the decision logic that translates urban data into ranked projects and funded portfolios. This paper develops a governance-centered framework, Caring Urban AI, through a replicable conceptual synthesis that integrates research on (a) climate risk decision-making under deep uncertainty, (b) built-environment pathways relevant to psychosocial well-being, and (c) algorithmic accountability and fairness for public-sector decision infrastructures. The framework specifies a five-layer architecture linking (1) urban form and infrastructure, (2) climate exposure and environmental resources, (3) psychosocial mediators of well-being, (4) algorithmic design choices (data, objective functions, equity constraints, uncertainty handling, documentation), and (5) institutional governance (procurement, auditing, participation, redress), with explicit feedback loops. The primary outputs are: (i) the five-layer Caring Urban AI architecture operationalized as auditable decision infrastructure; (ii) eight mechanism-based propositions that render the framework empirically testable via audits and quasi-experimental policy evaluations; and (iii) an operational specification guide illustrating objective-function forms, equity constraints, robustness logic, and documentation artifacts for prioritization workflows. The analysis concludes that aligning Urban AI with SDG 11 requires treating well-being-supportive living conditions as a decision objective, constraining optimization with equity conditions, and institutionalizing auditability and contestability to prevent distributive and psychosocial harm in climate-resilient investment planning.

Article
Computer Science and Mathematics
Signal Processing

Xuchao Gao, Mingqiang Li, Kai Guan, Jianjun Ge

Abstract: To address the high computational complexity and insufficient real-time performance of traditional multi-radar trajectory planning methods in complex electromagnetic interference environments, this paper proposes an imitation learning-based trajectory planning method for multi-radar systems. The method designs a trajectory policy neural network architecture based on multiple types of semantic information and proposes a training data construction method with coverage rate as the optimization objective. The trajectory policy network is then trained using an imitation learning algorithm with an auxiliary target. Simulation results show that the proposed method achieves an average coverage rate of 93.95% and improves single-step decision efficiency by a factor of 6.7 compared with heuristic-based trajectory optimization methods.

Article
Biology and Life Sciences
Biology and Biotechnology

Basker Palaniswamy

Abstract: What if cotton could grow already colored — eliminating the need for dyes altogether? Today nearly all cotton is harvested white and later dyed using chemical processes that account for roughly 17–20% of global industrial water pollution. Billions of liters of water and large quantities of synthetic chemicals are used each year simply to give fabrics their color. This work explores a transformative alternative: cotton that produces its own colors while growing. We present a unified biological design framework for cotton fibers capable of naturally producing six shades — the existing brown and green, along with engineered pink, blue, and, for the first time, black cotton. Instead of dyeing fabric after harvest, the plant itself is programmed to create pigments directly inside the fiber. A key innovation is a dual-pigment strategy that enables the production of black cotton by combining two natural pigment systems commonly found in plants and biological materials. By carefully activating these pathways only in the developing fiber, the plant can generate stable coloration without affecting normal growth. Beyond proposing the concept, this study provides a practical roadmap for turning naturally colored cotton into a real agricultural technology. The framework outlines the full journey from laboratory design to field deployment, including gene construction, plant transformation, greenhouse testing, field trials, regulatory approval, and large-scale seed production. Methods for combining color traits with existing pest-resistant cotton varieties are also discussed to ensure compatibility with modern farming. If successfully implemented, naturally colored cotton could dramatically reduce the environmental footprint of the textile industry by eliminating large portions of the dyeing process. In the long term, this approach points toward a future where the colors of clothing are not manufactured in factories but grown directly in the field.

Article
Social Sciences
Education

Adeeb Obaid Alsuhaymi, Fouad Ahmed Atallah

Abstract: Open Educational Resources (OER) are widely promoted as mechanisms for expanding access to knowledge and supporting sustainability in higher education. Yet their long-term viability remains constrained by fragmented governance, unstable funding arrangements, weak faculty incentives, policy gaps, and uneven digital infrastructure. This article develops a conceptual and policy-oriented framework that reconceptualizes OER as sustainable knowledge commons embedded within higher education systems rather than merely repositories of open content. Using an integrative review and thematic synthesis of global scholarship on OER sustainability, commons governance, and higher education policy, the study identifies four interrelated governance dimensions: institutional embedding, participatory stewardship, equitable access and inclusion, and long-term resource sustainability. The analysis shows that sustainable OER ecosystems depend not only on open licensing and technological platforms but also on coherent policy design, institutional alignment, academic recognition structures, and collaborative governance arrangements. Each dimension is associated with indicative governance mechanisms and policy indicators such as institutional OER strategies, faculty incentive programs, and shared digital infrastructure. The framework also recognizes institutional diversity, emphasizing that governance models must be adapted to different policy environments, academic cultures, and stages of OER adoption across higher education systems. By conceptualizing OER as governable knowledge commons, the article clarifies how open knowledge initiatives can contribute to social equity, educational resilience, and sustainable transformation in higher education.

Article
Computer Science and Mathematics
Computer Science

A Manoj Prabaharan

Abstract: The proliferation of misinformation in real-time digital media demands innovative solutions for verifiable journalism. This paper introduces SolanaNet-Journal, a pioneering framework leveraging Solana's high-throughput blockchain and multi-agent AI networks to enable immutable, real-time news dissemination with embedded credibility assurance. Autonomous agents, specialized in sourcing, cross-verification, and provenance tracking, collaborate via Solana smart contracts to process breaking stories at over 2,000 verifications per second, achieving sub-second finality unattainable on legacy blockchains. Key innovations include a hybrid proof-of-history consensus fused with agent Byzantine agreement, cryptographic hashing for tamper-evident content streams, and a dynamic credibility scoring model that adapts to evolving narratives using stake-weighted incentives. Implemented on Solana devnet, the system demonstrates 92% accuracy in fact-checking live datasets from global events, outperforming centralized tools by 4x in latency and resilience to adversarial inputs. Evaluations across scalability, security, and real-world case studies affirm its robustness against deepfakes and viral falsehoods. By decentralizing trust, SolanaNet-Journal redefines journalistic integrity in hyper-dynamic media landscapes, paving the way for ethical, scalable AI-blockchain hybrids in inclusive communication ecosystems.

Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Frank Vega

Abstract: We present a polynomial-time algorithm for Minimum Vertex Cover achieving an approximation ratio strictly less than 2 for any finite undirected graph with at least one edge, thereby disproving the Unique Games Conjecture. The algorithm reduces the problem to a minimum weighted vertex cover on a degree-1 auxiliary graph using weights \( 1/d_v \), solves it optimally via Cauchy-Schwarz-balanced selection, and projects the solution back to a valid cover. Correctness and the strict sub-2 ratio are rigorously proved. Runtime is \( O(|V|+|E|) \), confirming practical scalability and opening avenues for revisiting UGC-dependent hardness results across combinatorial optimization.

Article
Environmental and Earth Sciences
Sustainable Science and Technology

Luciana T. Rattaro, Yehia F. Khalil

Abstract: In Latin America, sustainable commitments towards decarbonizing hard-to-abate industrial sectors have identified hydrogen (H2) as a key enabler for the energy transition. This study develops a policy analytical framework to enhance the green H2 economy, using Argentina as the central case study. Key insights from the study include identifying often-overlooked social challenges within the H2 economy and proposing the integration of social indicators into policy design, with a particular focus on the territorial dynamics of Patagonia, labor conditions, indigenous participation, governance, and community impacts. Drawing on Social Life Cycle Assessment (S-LCA) guideline standards and the H2 justice approach, this study highlights key social hotspots that existing S-LCA tools overlook due to their lack of specific focus on regional territories and their communities. The analysis combines six social impact categories, namely human rights, working conditions, health and safety, cultural heritage, governance, and socio-economic repercussions, as recommended by the United Nations Environment Programme (UNEP), analyzed across three levels and complemented by the H2 justice approach for Argentina's potential green H2 production sector. These policy recommendations aim to foster a more resilient and sustainable development of the green H2 industry.

Article
Physical Sciences
Astronomy and Astrophysics

Huang Hai

Abstract: General Relativity (GR) has long been confronted with a fragmentation dilemma regarding black hole singularities and galaxy rotation curves: the former requires undetectable higher-dimensional quantum gravity to circumvent infinite curvature, while the latter similarly relies on undetectable dark matter to provide additional gravitational force. In this paper, we abandon the hypothesis of undetectable entities and reveal that the two challenges may share an intrinsic geometric solution: the universal asymptotic behavior of mainstream dark matter halo models is equivalent to a logarithmically corrected gravitational potential \( \Phi(r) \sim -(\ln r + 1)/r \), which originates from the self-response of the curvature divergence at the GR singularity \( (R_{trt}^{r} \propto r^{-3}) \) via Poisson integration. At the microscopic scale, the sign reversal of \( \ln r \) generates a repulsive effect, thereby avoiding the singularity. The constructed logarithmically corrected Schwarzschild metric is rigorously solved via the Lambert W function, revealing a layered internal structure determined by the black hole mass \( M \) (with thickness \( \propto 1/M \)), which realizes the holographic screen of the renormalization group flow under the AdS/CFT correspondence. On this basis, we present parameter-free a priori predictions for the black hole shadows of Sgr A* and M87* that are consistent with Event Horizon Telescope (EHT) observations, and provide rigid falsifiable predictions for unobserved black holes, especially the crucial discriminative prediction for NGC315. On the galactic scale, the logarithmic term can fit the galaxy rotation curves of the Milky Way, Andromeda, and NGC2974 without the additional gravitational force from dark matter, and also passes the test of the gravitational lensing phenomenon of the Bullet Cluster with good agreement with observations. On the other hand, the calculated Solar System tidal difference \( (\Delta g \sim 10^{-18}\ \mathrm{m/s^2}) \) is far below the current experimental limit, ensuring the validity of the equivalence principle without the need for a shielding mechanism; meanwhile, the Solar System Parameterized Post-Newtonian (PPN) tests are also consistent with GR. This work demonstrates that gravitational phenomena from black holes to galaxies are governed by the spacetime self-response triggered by the GR singularity. It further reveals that macroscopic gravitational systems may be "holographic projections" of quantum topological structures (quantum vortices). This framework thus pulls quantum gravity research from pure mathematical modeling back to the energy scales accessible to contemporary observations, and provides a new direction for thinking about the unification of General Relativity and quantum mechanics.

Article
Computer Science and Mathematics
Computational Mathematics

Ibar Federico Anderson

Abstract: This paper introduces a novel investigation into the additive structure of primes that transcends the classical framework of the Goldbach Conjecture. Standing on the shoulders of Christian Goldbach's foundational insight regarding the sum of primes (1742) and Sophie Germain's pioneering work on special prime pairs, our inquiry addresses a qualitatively distinct problem: the existence of three simultaneously prime numbers (p, q, r) satisfying the relation p = q + r − 1. Unlike classical Goldbach verifications---notably the monumental work of Oliveira e Silva et al. up to 4 × 10¹⁸, which confirms decompositions for all even integers without filtering for the primality of the predecessor---this work isolates the specific subsequence where p is also prime. This restriction reveals a hidden arithmetic architecture previously unexplored, demonstrating that the behavior of primes within this subset diverges significantly from the general case of even numbers. Far from being a mere replication of existing results, this study leverages the asymptotic machinery of G.H. Hardy and J.E. Littlewood (1923) to uncover structural anomalies within this prime subsequence. By applying their singular factor S(n) to this restricted domain, we discover that it does not average to unity as implicitly assumed in the general literature, but converges to a previously unknown constant S̄∞ ≈ 1.74273. We provide a rigorous proof of this convergence using Dirichlet's theorem on arithmetic progressions (1837) and the Chinese Remainder Theorem, demonstrating that the distribution of primes in this context possesses a unique "additive richness" distinct from the general case due to a biased density of divisors.
Furthermore, we integrate the analytic depth of Bernhard Riemann (1859) and Hans von Mangoldt (1895) by utilizing the explicit Riemann–von Mangoldt formula to define a restricted Chebyshev function Ψ*(x), linking the oscillatory behavior of our multiplicity function to the zeros of the Riemann zeta function ζ(s). This synthesis of classical analysis and new combinatorial data yields seven original contributions, now supported by a robust, bug-corrected computational verification using a modular Google Colab framework with checkpoint recovery. The execution analyzed 664,574 primes up to 10⁷, correcting previous algorithmic filters that generated false positives, and conclusively confirming: (1) A groundbreaking taxonomy classifying primes into three disjoint classes: Mirror (M), Anchor-3 (A), and Orphan (O), formally characterized via the von Mangoldt function Λ(n), (2) The unconditional Mirror Gap Theorem, proving that gaps between consecutive Mirror Primes (> 3) are divisible by 12, (3) The Prime Multiplicity Conjecture (N(p) ≥ 2 for p > 11), computationally verified for all 664,574 primes in the range with zero violations, (4) The derivation of a new prediction law (Law 3) with a Root Mean Square Error (RMSE) of 0.0205, representing an improvement of over 94% compared to the classical Hardy–Littlewood formula (RMSE ≈ 0.374), and achieving 99.84% coverage within a ±30% threshold, (5) The identification of the universal constant S̄∞ = ∏_{q>2}(1 + 1/((q−1)(q−2))) ≈ 1.74273, resolving why empirical constants in this domain systematically deviate from standard twin-prime predictions, (6) A systematic characterization of these classes via the von Mangoldt function, including a class-conditional decomposition of the Goldbach–Λ sum, (7) A formal proof establishing the finiteness and exact Euler product form of S̄∞, converting an empirical discovery into a theorem.
Statistical analysis via bootstrap resampling (n = 2000) confirms that the observed empirical constant Ĉ ≈ 1.3301 lies strictly outside the 95% confidence interval of the classical twin-prime constant 2C₂ ([1.3289, 1.3314] vs. 1.3203), with the deviation rigorously explained by the structural inequality S̄∞ > 1. In conclusion, this work does not merely verify old conjectures but defines a new territory in additive number theory. By repurposing the legendary tools of Goldbach, Germain, Hardy, Littlewood, Dirichlet, Riemann, and von Mangoldt, we demonstrate that the subsequence of primes holds its own unique constants, laws, and structural theorems, marking a significant and quantifiable departure from the behavior of generic even integers.
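The two central objects of the abstract, the multiplicity N(p) of representations p = q + r − 1 with p, q, r all prime and the Euler product for S̄∞, can both be probed directly. This is a self-contained sketch, not the paper's Colab framework, and the truncation bound is an arbitrary choice.

```python
# Standalone sketch (not the authors' code) of the paper's two central
# objects: the multiplicity N(p) of representations p = q + r - 1 with
# p, q, r all prime, and a truncated version of the Euler product for
# S-bar-infinity = prod over odd primes q of (1 + 1/((q-1)(q-2))).

def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(n + 1) if sieve[i]]

PRIMES = primes_up_to(100_000)
PRIME_SET = set(PRIMES)

def multiplicity(p):
    """N(p): number of ordered prime pairs (q, r) with p = q + r - 1."""
    return sum(1 for q in PRIMES if q < p and (p + 1 - q) in PRIME_SET)

print([(p, multiplicity(p)) for p in (13, 31, 61)])  # N(p) >= 2 for p > 11

# Truncated Euler product over odd primes up to 10^5; the paper reports
# the limit as approximately 1.74273.
product = 1.0
for q in PRIMES:
    if q > 2:
        product *= 1 + 1 / ((q - 1) * (q - 2))
print(f"truncated S-bar ~= {product:.5f}")
```

For p = 13, for instance, the ordered representations are (3, 11), (7, 7), and (11, 3), so N(13) = 3, consistent with the Prime Multiplicity Conjecture stated above.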

Article
Engineering
Architecture, Building and Construction

Khuloud Ali

,

Ghayth Tintawi

,

Mohamad Khaled Bassma

,

Aftab Haider

Abstract: Environmental governance is no longer shaped only by expert judgement or statutory procedure. In recent years, algorithmic systems have begun to mediate how data are interpreted, to shape the scoring of risk, and to influence how policy priorities are established; they now affect regulatory analysis, inform climate adaptation modelling, guide land-use decisions, and support sustainability monitoring. Although artificial intelligence (AI) is often presented as a means to improve environmental outcomes, its deployment introduces lifecycle emissions, raises concerns about institutional opacity, and exposes risks to public legitimacy that remain insufficiently embedded in current governance frameworks. This article advances the concept of algorithmic sustainability, treating it as a condition of governance rather than a technical attribute of computational tools. Drawing on a structured qualitative synthesis of interdisciplinary research, the study identifies three conditions required for sustainable AI use in environmental decision systems: lifecycle carbon integrity, institutional accountability, and alignment with public value. These conditions are translated into a tiered Environmental AI Impact Assessment (EAIA) model designed to support regulatory oversight while remaining institutionally feasible. By separating computing-related effects from operational consequences and from wider systemic implications, the framework clarifies how algorithmic applications may improve environmental performance while still generating rebound pressures that threaten broader sustainability goals.

Article
Social Sciences
Cognitive Science

Nikesh Lagun

Abstract: Background: Motivation research has generated many constructs, yet many theories remain structurally under-specified, relying on flexible verbal accounts or models whose functional form is optimized to data rather than fixed in advance. This limits falsifiability, cross-domain comparison, and principled failure. Theory: Lagun’s Law proposes a fixed six-variable structural equation of volitional drive specifying ignition gating, nonlinear amplification, divisive resistance, and an explicit variability term. The law is defined by its functional architecture rather than by any particular semantic interpretation or measurement instantiation. Objective: This study evaluates Lagun’s Law using straight structural validation: assessing whether a pre-specified equation exhibits recurring empirical signatures when applied without reparameterization, optimization, or post hoc modification. The aim is to test structural admissibility. Method: The equation was instantiated using pre-defined proxies across four independent secondary datasets spanning learning analytics, intelligent tutoring systems, naturalistic smartphone sensing, and laboratory neurophysiology. All proxies respected temporal precedence and outcome non-overlap. Where full instantiation was not possible, analyses were treated as reduced-form tests. Results: Recurring structural signatures were observed across all four datasets. Readiness functioned as a prerequisite rather than a graded predictor, divisive resistance effects were observed in three of four datasets, and independent behavioral variability persisted across contexts. Nonlinear amplification was directly testable in two datasets and attenuated or untestable elsewhere due to measurement constraints. 
Conclusion: These findings provide empirical grounding for Lagun’s Law as a structurally admissible constraint on volitional drive, clarifying its scope conditions and falsification pathways while avoiding claims of causality, universality, or optimal measurement.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Usman Naseem

,

Tanmoy Chakraborty

,

Kai-Wei Chang

,

Mark Dras

,

Preslav Nakov

,

Nanyun Peng

,

Soujanya Poria

Abstract: Large Language Models are transforming communication, research, and decision-making, but misalignment – when models diverge from human values, safety requirements, or user intent – poses serious risks. In this position paper, we argue that many alignment failures stem from operational choices in training and deployment. We posit that alignment should shift from static, post-training constraints toward dynamic, participatory approaches that safeguard pluralism, autonomy, and human flourishing. We outline forward-looking directions, including pluralistic evaluation, transparency, and the Flourishing–Justice–Autonomy (FJA) framework, and present a roadmap for advancing alignment research and practice.


Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2026 MDPI (Basel, Switzerland) unless otherwise stated