Computer Science and Mathematics

Article
Computer Science and Mathematics
Other

Felipe Oliveira Souto

Abstract: We propose that the first four non-trivial zeros of the Riemann zeta function satisfy the exact relation \(8\pi^2(\gamma_4/\gamma_1)^2 = 366\), equivalently \(\gamma_4/\gamma_1 = \sqrt{183}/(2\pi)\). This relation emerges from three fundamental considerations: (1) the geometric framework of the Riemann-Möbius-Enneper (RME) triad, (2) the constructive interference condition derived from the pendulum-zeta isomorphism with harmonic parameter \(k=3\), and (3) the self-consistency condition \(K_g \cdot C = 1\) where \(K_g\) and \(C\) are explicit functions of the zeros. We further explore a connection to modular forms, noting that the ratio \(\gamma_4/\gamma_1\) equals the ratio of logarithms of the Dedekind eta function evaluated at the Heegner points \(\tau_{163} = (1 + i\sqrt{163})/2\) and \(\tau_{43} = (1 + i\sqrt{43})/2\). The numbers 163 and 43 are the two largest Heegner numbers, famously associated with Ramanujan's observation that \(e^{\pi\sqrt{163}}\) is almost an integer. Where Ramanujan found striking approximations, we find exact equalities — transforming near-integer phenomena into precise identities that link the zeros of the zeta function to modular forms and, through the geometric framework, to fundamental physical constants. This connection reveals that the identity \(8\pi^2(\gamma_4/\gamma_1)^2 = 366\) is equivalent to a profound relation between these special values of the eta function. Numerical verification with 200+ digit precision confirms the exact nature of all identities. This result would provide a mathematical foundation for the geometric origin of fundamental physical constants, including the fine-structure constant \(\alpha^{-1}=137.035999084\), the Planck length \(\ell_P = 1.616255\times 10^{-35}\,\text{m}\), and the hydrogen Lamb shift correction \(\Delta\nu_{\text{Lamb}} = 7.314\,\text{kHz}\).
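
The stated relation can be checked directly at high precision. A minimal sketch follows (Python with mpmath, whose zetazero routine returns the n-th non-trivial zero on the critical line; the precision setting and printed comparisons are illustrative choices, not the paper's verification code):

    from mpmath import mp, zetazero, pi, sqrt

    mp.dps = 60  # working precision in decimal digits

    # Imaginary parts of the 1st and 4th non-trivial zeros of zeta
    gamma1 = zetazero(1).imag
    gamma4 = zetazero(4).imag

    # Claimed identity: 8*pi^2 * (gamma4/gamma1)^2 = 366
    print(8 * pi**2 * (gamma4 / gamma1)**2)

    # Equivalent form: gamma4/gamma1 = sqrt(183)/(2*pi)
    print(gamma4 / gamma1 - sqrt(183) / (2 * pi))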

Article
Computer Science and Mathematics
Other

João Ferreira-Santos

,

Lúcia Pombo

Abstract: City-scale, in-the-wild Augmented Reality (AR) learning paths must remain operable under Bring Your Own Device (BYOD) heterogeneity, outdoor tracking degradation, public-space safety, and interruption recovery. This study conceptualizes the Art Nouveau Path as an AR learning service and makes a theoretical contribution by proposing a Determinant-driven Requirements traceability model that treats implementation Determinants as Requirements signals and links them to testable Requirements, transfer Artefacts, and evidence anchors for replication. Methods combined profiling of 8 Points of Interest (POIs) and 36 tasks, group-session logs (118 sessions), and teacher-facing records from a validation workshop (T1-VAL, N=30) and in situ observation (T2-OBS, N=24). Teachers' open-text fields were segmented into meaning units and coded with an eight-Determinant taxonomy, with intercoder reliability assessed on a stratified subset (Krippendorff’s alpha = 0.83). Logs and a post-path student questionnaire (S2-POST, N=439) bounded enactment feasibility and data integrity, without learning-outcome inference. Dominant determinants concerned onboarding and legibility, marker robustness and recovery, and curriculum framing, alongside safety and fallback constraints. These signals were translated into 18 “shall” Requirements with acceptance criteria and bidirectional trace links to 6 transfer Artefacts. The resulting transfer kit specifies routines, maintenance, incident handling, and fallback procedures to reduce replication fragility across teams.
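
A minimal sketch of the reported intercoder-reliability computation, assuming the third-party krippendorff Python package; the coder-by-unit matrix below is invented, standing in for the study's stratified coding subset:

    import numpy as np
    import krippendorff  # pip install krippendorff

    # Rows = coders, columns = meaning units; values = Determinant codes (1-8),
    # np.nan where a coder did not code a unit. Data here are hypothetical.
    reliability_data = np.array([
        [1, 3, 3, 2, 8, 5, np.nan, 4],
        [1, 3, 4, 2, 8, 5, 6,      4],
    ])

    alpha = krippendorff.alpha(reliability_data=reliability_data,
                               level_of_measurement="nominal")
    print(f"Krippendorff's alpha = {alpha:.2f}")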

Article
Computer Science and Mathematics
Other

Rexhep Mustafovski

,

Galia Marinova

,

Besnik Qehaja

,

Edmond Hajrizi

,

Shejnaze Gagica

,

Vassil Guliashki

Abstract: The transition from fifth-generation (5G) to sixth-generation (6G) mobile networks represents a fundamental shift in wireless communication paradigms, driven by the need for ultra-low latency, extreme data rates, native intelligence, and support for mission-critical and immersive applications. This paper presents the Rexhep Network Optimization Framework, a layered and AI-native architectural model designed to enable a smooth, efficient, and scalable evolution from 5G to 6G systems. The proposed framework integrates physical and spectrum intelligence, intelligent radio access networks (RAN) with edge computing, virtualized core networks with network slicing, and AI-driven optimization and control mechanisms. It further incorporates advanced service layers supporting extended reality (XR), digital twins, AI-based security, and mission-critical services. The framework explicitly addresses the coexistence of 5G and 6G technologies through phased deployment, hybrid optimization, and dynamic spectrum management, ensuring backward compatibility while enabling 6G-dominant capabilities. By positioning artificial intelligence as a cross-layer enabler rather than an auxiliary function, the proposed framework provides a systematic approach for network automation, resilience, and performance optimization in next-generation communication ecosystems. The presented model offers a conceptual foundation for future research, standardization, and practical deployment strategies toward 6G networks.

Article
Computer Science and Mathematics
Other

Linh Huynh

,

Danielle S. McNamara

Abstract: This study proposes a Natural Language Processing (NLP)-based evaluation framework to examine the linguistic consistency of Large Language Model (LLM)-generated personalized texts over time. NLP metrics were used to quantify and compare linguistic patterns across repeated generations produced using identical prompts. In Experiment 1, internal reliability was examined across 10 repeated generations from four LLMs (Claude, Llama, Gemini, and ChatGPT) applied to 10 scientific texts tailored for a specific reader profile. Linear mixed-effects models showed no effect of repeated generation on linguistic features (e.g., cohesion, syntactic complexity, lexical sophistication), suggesting short-term consistency across repeatedly generated outputs. Experiment 2 examined linguistic variation across model updates of GPT-4o (October 2024 vs. June 2025) and GPT-4.1 (June 2025). Significant variations were observed across outputs from different model versions. GPT-4o (June 2025) generated more concise yet cohesive texts, whereas GPT-4.1 (June 2025) generated outputs that were more academic and lexically sophisticated, with more complex syntax. Given the rapid evolution of LLMs and the lack of standardized methods for tracking output consistency, the current work demonstrates an application of NLP-based evaluation approaches for monitoring meaningful linguistic shifts across model updates over time.
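
A minimal sketch of the Experiment 1 analysis pattern, assuming the statsmodels package; the synthetic data frame and the feature name "cohesion" are placeholders for the study's actual linguistic metrics:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic long-format data: 10 source texts x 10 repeated generations,
    # one linguistic feature score per generated text (invented values).
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "text_id": np.repeat(np.arange(10), 10),
        "generation": np.tile(np.arange(1, 11), 10),
    })
    df["cohesion"] = 0.5 + 0.05 * rng.normal(size=len(df))

    # Random intercept per source text; fixed effect of the generation index.
    # A non-significant generation effect indicates short-term consistency.
    result = smf.mixedlm("cohesion ~ generation", df, groups=df["text_id"]).fit()
    print(result.summary())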

Article
Computer Science and Mathematics
Other

Sirui Han

,

Zhizhuo Kou

,

Ruoxi Li

,

Yuyao Zhang

,

Yujin Zhou

,

Chuxue Cao

,

Han Zhu

,

Kunhao Pan

,

Haoran Li

,

Conghui He

+2 authors

Abstract: As large language models are increasingly used for contract drafting, case research and even judicial work, a central question is how to make their outputs trustworthy. This survey addresses that question through the lens of verified generation for legal AI, focusing on systems that are robust against hallucinations and traceable to authoritative legal sources. First, we propose a unified framework for verified generation in legal AI, linking reasoning, retrieval, and validation around factual reliability. Second, we cast reliability methods into two paradigms of epistemic negotiation, by failure and by conflict, enabling models to recognize and act on their competence limits. Third, we survey the legal-AI landscape and identify challenges for verifiable, governance-native systems. This survey outlines a roadmap for trustworthy legal AI and for reliable reasoning beyond the legal domain.

Article
Computer Science and Mathematics
Other

Valentin Waeselynck

,

David Saah

Abstract: Background: Some widely used wildland fire behavior models like FARSITE propagate fire fronts by computing the front-normal velocity (spread rate) as a function of local inputs and the front-normal direction. Such models are sometimes observed to cause the collapse of crown fires into sharp wedge shapes that eliminate heading fire behavior. Aims: We set out to document this phenomenon, and more generally understand the relationships between fire shapes and spread rate functions. Methods: The phenomenon is studied both mathematically and through simulation experiments. Non-smooth fire fronts are theorized mathematically by an Eikonal partial differential equation ($H(x, \tau, D\tau) = 1$), where the unknown $\tau(x)$ is the time-of-arrival function and the Hamiltonian $H(x, t, p)$ is positively homogeneous and possibly non-convex in $p$; convex analysis is used to study viscosity solutions in constant conditions. Results: We show that a fire spread model preserves the smoothness of fire fronts if and only if it is equivalent to using the Huygens principle. Non-trivially, this is equivalent to a convexity criterion on the inverse spread rate profile, which is then the polar dual of the Huygens wavelet; this corresponds to Hamiltonian-Lagrangian duality. The relevance of smoothness-destroying models to crown fire is debated. Exact analytical formulas are derived for fire growth in spatially constant conditions. Conclusions: Our understanding of fire spread models is improved by solving the spread equations in more general ways than previously known. In particular, the collapse of heading crown fires into sharp shapes is now explained. Smoothness-destroying spread models cannot be simulated by algorithms based on travel time like cellular automata; their general well-definedness remains an open question. Fire modelers can use these findings to guide their search for improved crown fire models, and more generally to verify the accuracy of numerical implementations.
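
For the time-independent case, such a Hamiltonian arises in one standard way; the following sketch uses our notation and is not quoted from the paper. Assuming a front-normal spread rate $R(x, \mathbf{n})$ with unit normal $\mathbf{n} = D\tau/|D\tau|$, the time-of-arrival equation $R\,|D\tau| = 1$ becomes

\[
H(x, p) \;=\; R\!\left(x, \frac{p}{|p|}\right)\,|p| \;=\; 1
\qquad \text{at } p = D\tau(x),
\]

which is positively homogeneous of degree 1 in $p$, and is convex in $p$ exactly when the inverse spread-rate (slowness) profile is convex, matching the convexity criterion stated in the abstract.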

Article
Computer Science and Mathematics
Other

Zi-Niu Wu

Abstract: This paper introduces the Generalized Coordinate System (GCS) as a framework for analyzing and generating rhetorical modes---the conventional patterns of discourse. The GCS is composed of ten axes: Thing, Feature, Quantitative Attribute, Qualitative Attribute, Formal Attribute, Basic Element, Rhetorical Mode, Cognitive Function, Epistemic Purpose, and the Five-Level Expression Staircase. The first six axes represent lower-dimensional components, the seventh serves as the ontological axis for rhetorical modes, and the final three constitute higher-dimensional components. Three types of semantic or modal mapping are defined: low-dimensional mapping (from lower-dimensional axes to the ontological axis), high-dimensional mapping (from the ontological axis to higher-dimensional axes), and full-dimensional mapping. These mappings form a pyramidal hierarchy, progressing from foundational elements (things, features, and attributes) to higher-order cognitive functions and epistemic purposes. By employing three core logical structures---combinatory, parallel, and embedded---the GCS consolidates infinite expressive possibilities within the finite intersections of its axes. The system's generative capacity, quantifiable by the number of axis intersections (generalized mode number), enables the navigation of nearly infinite expressive variations while steering practical applications toward finite, purpose-driven goals. The GCS transitions rhetorical modes from a static taxonomy to a dynamic analytical system for discourse construction and analysis, potentially offering insights for the development of large language models through the integration of a programmable rhetorical mode system.
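
The generalized mode number admits a simple combinatorial reading: it counts the intersections of the axes. A minimal sketch follows (Python; the axis inventories below are drastically truncated toy stand-ins for the ten GCS axes):

    from itertools import product
    from math import prod

    # Toy axis inventories (invented); the GCS defines ten much richer axes.
    axes = {
        "Thing": ["object", "event"],
        "Feature": ["static", "dynamic"],
        "Rhetorical Mode": ["description", "narration", "exposition"],
        "Epistemic Purpose": ["inform", "persuade"],
    }

    # Generalized mode number = number of axis intersections.
    print(prod(len(values) for values in axes.values()))  # 24 for this toy set

    # Each element of the Cartesian product is one candidate generalized mode.
    for mode in product(*axes.values()):
        pass  # e.g. ('object', 'static', 'description', 'inform')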

Article
Computer Science and Mathematics
Other

Kamel Maaloul

,

Brahim Lejdel

,

Eliseo Clementini

Abstract: Background/Objectives: Hepatocellular carcinoma (HCC) is a leading cause of cancer-related mortality worldwide and is frequently associated with chronic hepatitis C virus (HCV) infection. Early prediction of HCC in HCV patients remains challenging due to complex clinical patterns. This study aims to develop and evaluate machine learning models for the early prediction of hepatocellular carcinoma in patients with HCV. Methods: Clinical and laboratory data from HCV patients were analyzed using a machine learning–based framework. The dataset was preprocessed, and relevant features were selected prior to model development. Six supervised machine learning algorithms—CatBoost, XGBoost, LightGBM, Gaussian Naive Bayes, Extra Trees, and Random Forest—were implemented. Hyperparameter optimization was performed using the Optuna framework. Model performance was assessed using standard evaluation metrics, including accuracy, precision, recall, and F1-score. Results: The experimental results demonstrate that machine learning techniques can effectively identify patterns associated with the progression to hepatocellular carcinoma in HCV patients. Among the evaluated models, ensemble-based algorithms achieved the highest predictive performance, outperforming baseline approaches across multiple evaluation metrics. Conclusions: The findings confirm that machine learning models can serve as valuable decision-support tools for the early detection of hepatocellular carcinoma in patients with HCV. Integrating such models into clinical workflows may enhance early diagnosis and improve patient outcomes. Future work will focus on expanding the dataset and validating the models in real-world clinical settings.
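
A minimal sketch of the tuning loop described above, assuming scikit-learn and Optuna; the synthetic data stand in for the HCV clinical dataset, and Random Forest represents the six evaluated models:

    import optuna
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic placeholder for the preprocessed HCV features and HCC labels.
    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    def objective(trial):
        clf = RandomForestClassifier(
            n_estimators=trial.suggest_int("n_estimators", 100, 800),
            max_depth=trial.suggest_int("max_depth", 3, 20),
            min_samples_leaf=trial.suggest_int("min_samples_leaf", 1, 10),
            random_state=42,
        )
        # Cross-validated F1 mirrors the paper's use of F1 among its metrics.
        return cross_val_score(clf, X, y, cv=5, scoring="f1").mean()

    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=50)
    print(study.best_params, study.best_value)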

Article
Computer Science and Mathematics
Other

Cheng Junru

,

Li Na

,

Toksobaev Bulat T.

,

Kambarova Zhumagul Ularbaevna

,

Baktygul Toksobaeva

Abstract: Despite regional integration frameworks like the Lisbon Recognition Convention, cross-border academic mobility in Central Asia remains constrained by fragmented credential verification systems. This inefficiency stems from the absence of interoperable infrastructure and asymmetric institutional capacity across national systems. This paper employs a comparative case study to analyze how Kazakhstan and Kyrgyzstan manage credential recognition. We identify that while Kazakhstan has centralized its digital governance, Kyrgyzstan operates under a bifurcated model where academic and scientific degrees are verified by separate bodies. To address this, we propose the Central Asian Blockchain Education Alliance (CABEA), a consortium blockchain framework acting as a "middleware" layer. This architecture allows for functional centralization without requiring administrative consolidation. Based on the technical specifications of Hyperledger Fabric, the proposed model has the potential to reduce cross-border verification time from weeks to near-instantaneous automated queries. We conclude that distributed ledger technology offers a scalable path for regional educational integration by bridging the gap between divergent state infrastructures.
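
The verification flow can be illustrated independently of any ledger product. A conceptual sketch follows (plain Python; a dict stands in for the Hyperledger Fabric channel state, and all names are invented rather than the CABEA API):

    import hashlib
    import json

    ledger = {}  # stand-in for consortium ledger state

    def anchor_credential(issuer_id: str, credential: dict) -> str:
        """Issuing body anchors a hash of the credential on the ledger."""
        digest = hashlib.sha256(
            json.dumps(credential, sort_keys=True).encode()
        ).hexdigest()
        ledger[digest] = {"issuer": issuer_id}
        return digest

    def verify_credential(credential: dict) -> bool:
        """A foreign verifier recomputes the hash and queries the ledger,
        replacing weeks of manual correspondence with one automated lookup."""
        digest = hashlib.sha256(
            json.dumps(credential, sort_keys=True).encode()
        ).hexdigest()
        return digest in ledger

    anchor_credential("KZ-ENIC", {"holder": "A. Student", "degree": "MSc"})
    print(verify_credential({"holder": "A. Student", "degree": "MSc"}))  # True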

Article
Computer Science and Mathematics
Other

Felipe Oliveira Souto

Abstract: This paper introduces a unified geometric framework connecting the distribution of prime numbers to the spectral geometry of Riemann zeta zeros through physical interference phenomena. We demonstrate that prime spirals in the Sacks spiral are interference patterns generated by zeta zeros, with precise mathematical isomorphism to wave pendulum dynamics. The Riemann-Moebius-Enneper geometric triad provides the fundamental stage for this interference, from which key physical constants emerge naturally: $\alpha^{-1} = 137.035999084$, $E_0 = 1820.469$ eV, $\ell_P = 1.616255\times10^{-35}$ m. We derive the master equation of geometric interference, demonstrate applications to DNA structure and cosmic web formation, and provide overwhelming statistical evidence ($p < 10^{-298}$) for the theory. This work establishes that physical reality emerges from harmonic interference on a geometric triad, with mathematical constants serving as fundamental frequencies of existence.

Article
Computer Science and Mathematics
Other

Felipe Oliveira Souto

Abstract: This paper explores a spectral isomorphism between wave pendulum dynamics and prime number patterns via Riemann zeta zeros. We demonstrate that both systems share mathematical structures based on superposition of discrete frequency components, leading to comparable interference phenomena. The temporal evolution of the wave pendulum relates to logarithmic scaling in prime distributions, with both patterns emerging from similar spectral principles mediated by Riemann zero contributions. Analytical derivation and numerical analysis support this correspondence, suggesting connections between mechanical systems and number-theoretical concepts through spectral geometry.
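
Both sides of the claimed isomorphism are finite superpositions of discrete frequencies, which a short numerical sketch can make concrete (Python with NumPy; the pendulum frequency ladder and unit amplitudes are illustrative simplifications, not the paper's parameters):

    import numpy as np

    # Imaginary parts of the first five non-trivial zeta zeros.
    gammas = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062]

    # Zeta side: oscillations in log x, as in the oscillatory part of the
    # explicit formula (amplitudes simplified to 1).
    x = np.linspace(2, 200, 2000)
    zeta_signal = sum(np.cos(g * np.log(x)) for g in gammas)

    # Pendulum side: oscillations in time t over a discrete frequency ladder.
    t = np.linspace(0, 60, 2000)
    freqs = [f / 60.0 for f in (51, 52, 53, 54, 55)]  # cycles per second
    pendulum_signal = sum(np.cos(2 * np.pi * f * t) for f in freqs)

    # Each signal's beat/interference structure comes from the spacing of its
    # frequency components; this shared form is what the correspondence compares.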

Article
Computer Science and Mathematics
Other

Harris Wang

Abstract: Modern enterprise information systems must simultaneously support complex organizational structures, ensure robust security, and remain scalable and maintainable over time. Traditional Role‑Based Access Control (RBAC) models, while effective for permission management, operate primarily as post‑design security layers and do not provide a unified methodology for structuring system architecture. This paper introduces the Zoned Role‑Based (ZRB) model, a mathematically formalized and comprehensive framework that integrates organizational modeling, system design, implementation, access control, and long‑term maintenance. ZRB models an organization as a hierarchy of zones, each containing its own roles, applications, operations, and users, forming a recursive Zone Tree that directly mirrors real organizational semantics. Through formally defined role hierarchies, zone‑scoped permission sets, and inter‑zone inheritance mappings, ZRB provides a context‑aware permission calculus that unifies authentication and authorization across all zones. The paper presents the theoretical foundations of ZRB, a multi‑phase engineering methodology for constructing integrated enterprise systems, and a complete implementation architecture with permission inference, navigation design, administrative subsystems, and deployment models. Empirical evaluations across several deployed systems demonstrate significant improvements in permission accuracy, administrative efficiency, scalability, and maintainability. ZRB thus offers a rigorously defined and practically validated framework for building secure, scalable, and organizationally aligned enterprise information systems.
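
A minimal sketch of the Zone Tree and a zone-scoped permission lookup, in Python; the class shape and the ancestor-inheritance rule are our simplification of the ZRB calculus, with invented names:

    from dataclasses import dataclass, field

    @dataclass
    class Zone:
        name: str
        roles: dict = field(default_factory=dict)     # role -> permission set
        children: list = field(default_factory=list)
        parent: "Zone" = None

        def add_child(self, child: "Zone") -> "Zone":
            child.parent = self
            self.children.append(child)
            return child

        def permissions(self, role: str) -> set:
            """Zone-scoped permissions plus those inherited from ancestor zones."""
            inherited = self.parent.permissions(role) if self.parent else set()
            return self.roles.get(role, set()) | inherited

    org = Zone("Company", roles={"admin": {"manage-users"}})
    dept = org.add_child(Zone("Engineering", roles={"admin": {"deploy"}}))
    print(dept.permissions("admin"))  # {'manage-users', 'deploy'}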

Article
Computer Science and Mathematics
Other

Penglei Sun

,

Song Tang

,

Jiawen Wen

,

Yuxuan Liang

,

Yang Yang

,

Xiaowen Chu

Abstract: Urban Embodied Agents (UrbanEAs) are emerging to interact with complex, large-scale city environments, generating vast, heterogeneous data streams. While embodied agent research has focused on controlled indoor environments, these settings lack the complexity of the physical world. In contrast, urban environments present distinct challenges, including environmental variability, limited observability, and interaction complexity. These challenges hinder the effectiveness of conventional agents. Therefore, establishing a comprehensive data lifecycle to fuse multi-domain data from terrestrial, aerial, and space sources is an essential strategy for developing actionable embodied capabilities from raw urban streams. Distinct from existing surveys that follow a model-centric paradigm for urban computing, we systematically propose and review a comprehensive Data Lifecycle from a multi-domain data perspective, which is essential for UrbanEAs. First, we propose a unified framework containing four key stages of this lifecycle: Data Perception, Data Management, Data Fusion, and Task Application. Next, we establish a taxonomy for each stage of the lifecycle. Finally, we outline the social impact of the data lifecycle of UrbanEAs and open research problems. Our survey provides a rigorous roadmap for designing the robust, high-performance data frameworks essential for these UrbanEAs.

Article
Computer Science and Mathematics
Other

Abdelmajid Benahmed

Abstract: This article examines the development of operator splitting methods in Soviet numerical analysis during 1955–1975, with particular focus on N.N. Yanenko’s formalization of the Method of Fractional Steps at the Siberian Branch of the USSR Academy of Sciences. While similar techniques were independently developed in the West (Peaceman-Rachford 1955, Douglas-Rachford 1956), the Soviet school pursued a distinct trajectory shaped by acute hardware constraints and deep epistemological commitments to operator theory. Through analysis of technical publications, archival materials, and comparative historiography, this study argues that material scarcity catalyzed a systematic research program emphasizing computational economy, while a pre-existing mathematical culture valorizing theoretical elegance reinforced this trajectory. The case illuminates how geopolitical constraints and intellectual traditions jointly shaped algorithmic innovation, contributing to methods that ironically became foundational for modern massively parallel computing. Significant archival gaps limit definitive claims about industrial applications, highlighting the need for further primary source research.
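
The splitting idea itself is compact enough to state in code. A minimal sketch of a fractional-step (Lie splitting) update for the 2D heat equation follows (Python with NumPy; explicit sweeps are used here for brevity, whereas Yanenko's schemes used implicit sweeps precisely to relax such stability limits):

    import numpy as np

    # u_t = u_xx + u_yy, solved one spatial direction at a time per step.
    n, dt, dx, steps = 64, 0.2, 1.0, 100   # explicit sweeps need dt/dx^2 <= 1/2
    r = dt / dx**2
    u = np.zeros((n, n))
    u[n // 2, n // 2] = 1.0                # point heat source, zero boundaries

    for _ in range(steps):
        # Fractional step 1: apply the x-direction operator alone.
        u[1:-1, :] += r * (u[2:, :] - 2 * u[1:-1, :] + u[:-2, :])
        # Fractional step 2: apply the y-direction operator alone.
        u[:, 1:-1] += r * (u[:, 2:] - 2 * u[:, 1:-1] + u[:, :-2])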

Article
Computer Science and Mathematics
Other

Esmam Khan Babu

Abstract: The accelerating pace of artificial intelligence research and deployment makes both extraordinary opportunity and profound peril increasingly apparent. This paper discusses the innovative proposition that AI can be marshaled, paradoxically, as a proactive guardian of human cognition against the harmful applications of the very technology on which it relies. The heuristic of “brain hacking”—an intentional deployment of AI-driven interventions that systematically augment mental capacities while fortifying neural substrates against adversarial incursions—emerges as a promising trajectory for both theoretical and practical inquiry. Central to the inquiry is the acknowledgment that the human brain, as a highly interactive and non-linear complex adaptive system, is susceptible to perturbations from sophisticated external agents. Nevertheless, leveraging the quasi-infinite adaptiveness of advanced AI algorithms may permit the engineering of defensive architectures that preserve both the integrity and the adaptive plasticity of neural circuits. This paper systematically reviews emergent scholarship across deep neural network design, reinforcement learning paradigms, and convergent advances in cognitive neuroscience to identify shared leverage points for human neural fortification. The research objective is to fabricate a multilayered AI-mediated cognitive firewall that autonomously surveys the brain’s operational state, diagnostically distinguishes anomalous patterns of activity, and pre-emptively desensitizes or reroutes them before they achieve disruptive penetration. Through rigorous simulation and empirical validation, the framework aspires to safeguard the epistemic domain of the human mind without impairing its intrinsic generative capacities. This study further addresses the essential ethical dimensions inherent in deploying artificial intelligence for the safeguarding of neural integrity, advocating for transparency, systematic safety, and the preservation of personal autonomy. Confronting these issues explicitly allows us to construct a future in which AI operates not only as a catalyst for remarkable technological advance, but also as a vigilant guardian of human cognition and psychological health.

Article
Computer Science and Mathematics
Other

Felipe Oliveira Souto

Abstract: This work presents a series of interconnected mathematical \emph{constructions} that take the zeros of the Riemann zeta function as primordial elements. Rather than seeking a conventional proof of the Riemann Hypothesis, we investigate: what kind of mathematical reality emerges when we \emph{postulate} that these zeros form the spectrum of an operator within a specific geometric arena? Our constructions reveal a remarkable chain of coherence, linking geometry (minimal surfaces), topology (M\"obius bands), statistics (GUE), and fundamental physical constants. Within the constructed framework, the critical line $\Re(s)=1/2$ appears as a \emph{necessary condition}, GUE statistics as an intrinsic geometric property, and relations between the first four zeros encode the fine structure constant $\alpha^{-1} = 137.035999084\ldots$ to experimental precision \cite{CODATA2018}. We present these constructions not as final theorems, but as substantive \emph{insights} from a perspective that treats the zeta function not merely as an object of analysis, but as a potential organizational principle of mathematical reality.

Article
Computer Science and Mathematics
Other

Khondokar Fida Hasan

,

William Hughes

,

Adrita Rahman Tory

,

Chris Campbell

,

Selen Turkay

Abstract: Serious games are increasingly recognized as powerful pedagogical tools, often offering engaging, interactive, and practical learning experiences. This paper presents the design, implementation, and evaluation of a 3D virtual serious game specifically tailored for cybersecurity governance and policy education. In particular, the nature of the game is an escape room, drawing on military training principles: players must solve a problem to escape one room before advancing to the next. Set within a virtual company environment, the game features three interactive zones that guide students through analyzing cyber risks, aligning security frameworks, and drafting appropriate policies. This structure cultivates critical thinking and decision-making skills and strengthens practical cybersecurity competencies. The primary contribution lies in the innovative integration of game-based learning and 3D virtual technology to create robust, hands-on educational materials. The design also includes AI-resilient assessment features to address challenges related to generative AI misuse, ensuring that the activities cannot be easily replicated and thereby supporting academic integrity. Survey results demonstrate that students found this approach both engaging and effective, reporting enhanced understanding and enthusiasm toward cybersecurity governance and policy concepts. These findings highlight the potential of gamified environments to bridge theory and practice in cybersecurity education, equipping learners with industry-relevant skills while fostering deeper engagement and active learning.

Article
Computer Science and Mathematics
Other

Bakhtiiar Tashbolotov

,

Burul Shambetova

Abstract: The transition of machine learning (ML) from experimental models to production-ready systems is hindered by the complexities of managing high-dimensional data and mitigating "train-serve skew." This paper presents an architectural framework for a high-performance Feature Store, designed as a centralized "missing data layer" that unifies feature engineering across the ML lifecycle. Utilizing a microservices approach, the system leverages Go for low-latency serving and Apache Spark for scalable distributed aggregations. We propose a dual-layer storage strategy integrating DragonflyDB for sub-millisecond online retrieval and Apache Iceberg for transactional offline persistence and historical time-travel. Experimental results demonstrate that this architecture achieves a p99 latency of less than 0.85ms at 50,000 requests per second while maintaining 100 percent data consistency. Finally, the research addresses the emerging shift toward embedding-centric pipelines, outlining the evolution required to manage high-dimensional vector spaces and drift in self-supervised models.
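
A conceptual sketch of the dual-layer read/write paths (plain Python; a dict and a list stand in for DragonflyDB and Apache Iceberg, and all names are invented rather than the framework's API):

    import time

    online_store = {}   # stand-in for DragonflyDB: entity -> latest features
    offline_log = []    # stand-in for Iceberg: append-only (ts, entity, features)

    def write_features(entity_id, features):
        """One write path keeps both layers consistent, avoiding skew."""
        online_store[entity_id] = features
        offline_log.append((time.time(), entity_id, features))

    def serve_features(entity_id):
        """Low-latency online read used at inference time."""
        return online_store.get(entity_id)

    def training_features(entity_id, as_of):
        """Point-in-time ('time travel') read for training set construction."""
        rows = [f for ts, e, f in offline_log if e == entity_id and ts <= as_of]
        return rows[-1] if rows else None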

Article
Computer Science and Mathematics
Other

Andrea Brites Marto

,

Philip Krauss

,

Katie Kalt

,

Vasundra Touré

,

Deepak Unni

,

Sabine Österle

Abstract: The Swiss Personalized Health Network developed a national federated framework for semantically described medical data, in particular hospital clinical routine data. Instead of centralizing patient-level information, hospitals perform semantic coding and standardization locally and store SPHN-compliant data in a triple store. These decentralized RDF datasets, following the FAIR (Findable, Accessible, Interoperable, Reusable) principles, together exceed 12 billion triples across more than 800,000 patients, all of whom signed a broad consent. In this work, we address the computational challenge of efficiently querying and integrating these distributed RDF resources through SPARQL. Our use cases focus on feasibility queries and value distributions, which allow researchers to assess the potential availability of patient cohorts across hospitals without disclosing sensitive patient-level information. We present methods for optimizing SPARQL querying, tailored to the characteristics of large-scale federated and complex clinical data. We evaluate these approaches by iteratively testing optimized queries on the SPHN Federated Clinical Routine Dataset, which spans 125 SPHN concepts including demographics, diagnoses, procedures, medications, laboratory results, vital signs, clinical scores, allergies, microbiology, intensive care data, oncology, and biological samples. With this approach, we built a set of rules to consider when gradually optimizing SPARQL queries. Our results demonstrate that optimized SPARQL query planning and execution can significantly reduce response times without compromising semantic interoperability.
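
A minimal sketch of a feasibility query in the spirit described above, using Python's SPARQLWrapper; the endpoint URL, ontology prefix, and diagnosis code are illustrative placeholders, not the project's actual configuration:

    from SPARQLWrapper import SPARQLWrapper, JSON

    endpoint = SPARQLWrapper("https://hospital.example.ch/sparql")  # placeholder
    endpoint.setReturnFormat(JSON)

    # Aggregate-only query: counts distinct patients for a diagnosis code, so
    # no patient-level rows leave the hospital.
    endpoint.setQuery("""
        PREFIX sphn: <https://biomedit.ch/rdf/sphn-ontology/sphn#>
        SELECT (COUNT(DISTINCT ?patient) AS ?n) WHERE {
          ?diag a sphn:Diagnosis ;
                sphn:hasSubject ?patient ;
                sphn:hasCode ?code .
          FILTER (STR(?code) = "E11")   # placeholder code
        }
    """)

    result = endpoint.query().convert()
    print(result["results"]["bindings"][0]["n"]["value"])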

Review
Computer Science and Mathematics
Other

Ângela Oliveira

,

Paulo Serra

,

Filipe Fidalgo

Abstract: Artificial intelligence has become fundamental to the advancement of digital gastronomy, a domain that integrates computer vision, natural language processing, graph-based modelling, recommender systems, multimodal learning, IoT and robotics to support culinary, nutritional and behavioural processes. Despite this progress, the field remains conceptually fragmented and lacks comprehensive syntheses that combine methodological insights with bibliometric evidence. To the best of our knowledge, this study presents the first systematic review dedicated to artificial intelligence in digital gastronomy, complemented by a bibliometric analysis covering publications from 2018 to 2025. A structured search was conducted across five major databases (ACM Digital Library, IEEE Xplore, Scopus, Web of Science and SpringerLink), identifying 233 records. Following deduplication, screening and full-text assessment, 53 studies met the predefined quality criteria and were included in the final analysis. The methodology followed established review protocols in engineering and computer science, incorporating independent screening, systematic quality appraisal and a multidimensional classification framework. The results show that research activity is concentrated in food recognition, recipe generation, personalised recommendation, nutritional assessment, cooking assistance, domestic robotics and smart-kitchen ecosystems. Persistent challenges include limited cultural diversity in datasets, annotation inconsistencies, difficulties in multimodal integration, weak cross-cultural generalisation and restricted real-world validation. The findings indicate that future progress will require more inclusive datasets, culturally robust models, harmonised evaluation protocols and systematic integration of ethical, privacy and sustainability principles to ensure reliable and scalable AI-driven solutions.
