Computer Science and Mathematics

Article
Computer Science and Mathematics
Other

Abdelmajid Benahmed

Abstract: This article examines the development of operator splitting methods in Soviet numerical analysis during 1955–1975, with particular focus on N.N. Yanenko’s formalization of the Method of Fractional Steps at the Siberian Branch of the USSR Academy of Sciences. While similar techniques were independently developed in the West (Peaceman-Rachford 1955, Douglas-Rachford 1956), the Soviet school pursued a distinct trajectory shaped by acute hardware constraints and deep epistemological commitments to operator theory. Through analysis of technical publications, archival materials, and comparative historiography, this study argues that material scarcity catalyzed a systematic research program emphasizing computational economy, while a pre-existing mathematical culture valorizing theoretical elegance reinforced this trajectory. The case illuminates how geopolitical constraints and intellectual traditions jointly shaped algorithmic innovation, contributing to methods that ironically became foundational for modern massively parallel computing. Significant archival gaps limit definitive claims about industrial applications, highlighting the need for further primary source research.
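
For orientation, the splitting idea behind the Method of Fractional Steps can be written in a few lines (simplest implicit two-operator form; our notation, not taken from the article). Each sub-step involves only one spatial direction and so reduces to a cheap one-dimensional solve, which is exactly the computational economy the abstract emphasizes:

    % Split the spatial operator A into one-dimensional parts:
    \frac{\partial u}{\partial t} = (A_1 + A_2)\,u
    % Fractional steps: advance each operator in turn over one time step,
    % so each implicit sub-step reduces to a one-dimensional (tridiagonal) solve.
    \frac{u^{n+1/2} - u^{n}}{\Delta t} = A_1 u^{n+1/2},
    \qquad
    \frac{u^{n+1} - u^{n+1/2}}{\Delta t} = A_2 u^{n+1}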

Article
Computer Science and Mathematics
Other

Esmam Khan Babu

Abstract: The accelerating pace of artificial intelligence research and deployment makes both extraordinary opportunity and profound peril increasingly apparent. This paper discusses the innovative proposition that AI can be marshalled, paradoxically, as a proactive guardian of human cognition against the harmful applications of the very technology on which it relies. The heuristic of “brain hacking”—an intentional deployment of AI-driven interventions that systematically augment mental capacities while fortifying neural substrates against adversarial incursions—emerges as a promising trajectory for both theoretical and practical inquiry. Central to the inquiry is the acknowledgment that the human brain, as a highly interactive and non-linear complex adaptive system, is susceptible to perturbations from sophisticated external agents. Nevertheless, leveraging the quasi-infinite adaptiveness of advanced AI algorithms may permit the engineering of defensive architectures that preserve both the integrity and the adaptive plasticity of neural circuits. This paper systematically reviews emergent scholarship across deep neural network design, reinforcement learning paradigms, and advances in cognitive neuroscience to identify convergent leverage points for human neural fortification. The research objective is to fabricate a multilayered AI-mediated cognitive firewall that autonomously surveys the brain’s operational state, diagnostically distinguishes anomalous patterns of activity, and pre-emptively desensitizes or reroutes them before they achieve disruptive penetration. Through rigorous simulation and empirical validation, the framework aspires to safeguard the epistemic domain of the human mind without impairing its intrinsic generative capacities. This study further addresses the essential ethical dimensions inherent in deploying artificial intelligence for the safeguarding of neural integrity, advocating for transparency, systematic safety, and the preservation of personal autonomy. Confronting these issues explicitly allows us to construct a future in which AI operates not only as a catalyst for remarkable technological advance, but also as a vigilant guardian of human cognition and psychological health.

Article
Computer Science and Mathematics
Other

Felipe Oliveira Souto

Abstract: This work presents a series of interconnected mathematical constructions that take the zeros of the Riemann zeta function as primordial elements. Rather than seeking a conventional proof of the Riemann Hypothesis, we investigate: what kind of mathematical reality emerges when we postulate that these zeros form the spectrum of an operator within a specific geometric arena? Our constructions reveal a remarkable chain of coherence, linking geometry (minimal surfaces), topology (Möbius bands), statistics (GUE), and fundamental physical constants. Within the constructed framework, the critical line Re(s) = 1/2 appears as a necessary condition, GUE statistics as an intrinsic geometric property, and relations between the first four zeros encode the fine structure constant α⁻¹ = 137.035999084… to experimental precision [CODATA 2018]. We present these constructions not as final theorems, but as substantive insights from a perspective that treats the zeta function not merely as an object of analysis, but as a potential organizational principle of mathematical reality.
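
For context, the sense in which the critical line becomes a necessary condition can be stated in the spirit of the Hilbert–Pólya heuristic; the gloss below is ours, not the authors' construction:

    % Postulate: the nontrivial zeros are \rho_n = 1/2 + i\gamma_n, with the
    % \gamma_n forming the spectrum of a self-adjoint operator H.
    H\,\psi_n = \gamma_n\,\psi_n, \qquad H = H^{\dagger}
    % Self-adjointness forces every eigenvalue \gamma_n to be real, so each
    % zero \rho_n = 1/2 + i\gamma_n necessarily lies on the line \Re(s) = 1/2.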

Article
Computer Science and Mathematics
Other

Khondokar Fida Hasan

,

William Hughes

,

Adrita Rahman Tory

,

Chris Campbell

,

Selen Turkay

Abstract: Serious games are increasingly recognized as powerful pedagogical tools, often offering engaging, interactive, and practical learning experiences. This paper presents the design, implementation, and evaluation of a 3D virtual serious game specifically tailored for cybersecurity governance and policy education. In particular, the game takes the form of an escape room, drawing on military training principles: players must solve a problem to escape one room before advancing to the next. Set within a virtual company environment, the game features three interactive zones that guide students through analyzing cyber risks, aligning security frameworks, and drafting appropriate policies. This structure cultivates critical thinking and decision-making skills and strengthens practical cybersecurity competencies. The primary contribution lies in the innovative integration of game-based learning and 3D virtual technology to create robust, hands-on educational materials. The design also includes AI-resilient assessment features to address challenges related to generative AI misuse, ensuring that the activities cannot be easily replicated and thereby supporting academic integrity. Survey results demonstrate that students found this approach both engaging and effective, reporting enhanced understanding and enthusiasm toward cybersecurity governance and policy concepts. These findings highlight the potential of gamified environments to bridge theory and practice in cybersecurity education, equipping learners with industry-relevant skills while fostering deeper engagement and active learning.

Article
Computer Science and Mathematics
Other

Bakhtiiar Tashbolotov

,

Burul Shambetova

Abstract: The transition of machine learning (ML) from experimental models to production-ready systems is hindered by the complexities of managing high-dimensional data and mitigating "train-serve skew." This paper presents an architectural framework for a high-performance Feature Store, designed as a centralized "missing data layer" that unifies feature engineering across the ML lifecycle. Utilizing a microservices approach, the system leverages Go for low-latency serving and Apache Spark for scalable distributed aggregations. We propose a dual-layer storage strategy integrating DragonflyDB for sub-millisecond online retrieval and Apache Iceberg for transactional offline persistence and historical time-travel. Experimental results demonstrate that this architecture achieves a p99 latency of less than 0.85ms at 50,000 requests per second while maintaining 100 percent data consistency. Finally, the research addresses the emerging shift toward embedding-centric pipelines, outlining the evolution required to manage high-dimensional vector spaces and drift in self-supervised models.
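
As a rough sketch of the dual-layer read path described above: an online store answers in sub-millisecond time and falls back to the offline layer on a miss. DragonflyDB speaks the Redis wire protocol, so a standard Redis client suffices; the offline dictionary below is a stand-in for Apache Iceberg, and all keys and feature names are hypothetical:

    # Dual-layer feature read path: online cache first, offline fallback,
    # cache backfill on miss. Assumes a local DragonflyDB/Redis on 6379.
    import json
    import redis  # DragonflyDB accepts standard Redis clients

    online = redis.Redis(host="localhost", port=6379)
    offline_table = {"user:42": {"avg_txn_7d": 18.3, "n_logins_24h": 5}}  # stand-in for Iceberg

    def get_features(entity_key: str) -> dict:
        cached = online.get(entity_key)
        if cached is not None:                   # online hit: the low-latency path
            return json.loads(cached)
        row = offline_table.get(entity_key, {})  # miss: read the offline layer
        online.set(entity_key, json.dumps(row), ex=3600)  # backfill with a 1h TTL
        return row

    print(get_features("user:42"))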

Article
Computer Science and Mathematics
Other

Andrea Brites Marto

,

Philip Krauss

,

Katie Kalt

,

Vasundra Touré

,

Deepak Unni

,

Sabine Österle

Abstract: The Swiss Personalized Health Network developed a national federated framework for semantically described medical data, in particular hospital clinical routine data. Instead of centralizing patient-level information, hospitals perform semantic coding and standardization locally and store SPHN-compliant data in a triple store. These decentralized RDF datasets, following the FAIR (Findable, Accessible, Interoperable, Reusable) principles, together exceed 12 billion triples across more than 800,000 patients, all of whom signed a broad consent. In this work, we address the computational challenge of efficiently querying and integrating these distributed RDF resources through SPARQL. Our use cases focus on feasibility queries and value distributions, which allow researchers to assess the potential availability of patient cohorts across hospitals without disclosing sensitive patient-level information. We present methods for optimizing SPARQL querying, tailored to the characteristics of large-scale federated and complex clinical data. We evaluate these approaches by iteratively testing optimized queries on the SPHN Federated Clinical Routine Dataset, which spans 125 SPHN concepts including demographics, diagnoses, procedures, medications, laboratory results, vital signs, clinical scores, allergies, microbiology, intensive care data, oncology, and biological samples. With this approach, we have built a set of rules to consider for gradually optimizing SPARQL queries. Our results demonstrate that optimized SPARQL query planning and execution can significantly reduce response times without compromising semantic interoperability.
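
A minimal example of a feasibility-style query, counting eligible patients without retrieving patient-level rows, issued through SPARQLWrapper; the endpoint URL, prefixes, and concept IRIs are hypothetical, not the SPHN schema:

    # Feasibility query: count distinct patients with a given diagnosis code.
    # Endpoint and IRIs are hypothetical placeholders.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://hospital.example.org/sparql")
    sparql.setQuery("""
    PREFIX ex: <https://example.org/concepts#>
    SELECT (COUNT(DISTINCT ?patient) AS ?n)
    WHERE {
      ?patient a ex:Patient ;
               ex:hasDiagnosis ?dx .
      ?dx ex:code "I21" .    # myocardial infarction, ICD-10
    }
    """)
    sparql.setReturnFormat(JSON)
    result = sparql.query().convert()
    print(result["results"]["bindings"][0]["n"]["value"])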

Review
Computer Science and Mathematics
Other

Ângela Oliveira

,

Paulo Serra

,

Filipe Fidalgo

Abstract: Artificial intelligence has become fundamental to the advancement of digital gastronomy, a domain that integrates computer vision, natural language processing, graph-based modelling, recommender systems, multimodal learning, IoT and robotics to support culinary, nutritional and behavioural processes. Despite this progress, the field remains conceptually fragmented and lacks comprehensive syntheses that combine methodological insights with bibliometric evidence. To the best of our knowledge, this study presents the first systematic review dedicated to artificial intelligence in digital gastronomy, complemented by a bibliometric analysis covering publications from 2018 to 2025. A structured search was conducted across five major databases (ACM Digital Library, IEEE Xplore, Scopus, Web of Science and SpringerLink), identifying 233 records. Following deduplication, screening and full-text assessment, 53 studies met the predefined quality criteria and were included in the final analysis. The methodology followed established review protocols in engineering and computer science, incorporating independent screening, systematic quality appraisal and a multidimensional classification framework. The results show that research activity is concentrated in food recognition, recipe generation, personalised recommendation, nutritional assessment, cooking assistance, domestic robotics and smart-kitchen ecosystems. Persistent challenges include limited cultural diversity in datasets, annotation inconsistencies, difficulties in multimodal integration, weak cross-cultural generalisation and restricted real-world validation. The findings indicate that future progress will require more inclusive datasets, culturally robust models, harmonised evaluation protocols and systematic integration of ethical, privacy and sustainability principles to ensure reliable and scalable AI-driven solutions.

Article
Computer Science and Mathematics
Other

Fabiola Boccuto

,

Ugo Lomoio

,

Salvatore Derosa

,

Daniele Torella

,

Pierangelo Veltri

,

Pietro Hiram Guzzi

Abstract: Objective: The rising incidence of myocardial infarction (MI) in individuals under 50 years underscores an urgent need for innovative rehabilitation strategies that extend beyond hospital care, empowering young patients to reclaim active lives through sustained physical activity and remote monitoring. Wearable health technologies hold transformative potential here, as studies demonstrate their ability to boost exercise capacity, daily steps, and reduce rehospitalizations in post-MI recovery. This study thus assesses the clinical value of wearable devices in remotely tracking motor activity among young adults during early MI rehabilitation. Methods: Using the SiDLY Care Pro wristband, continuous non-invasive measurements of heart rate, oxygen saturation, and physical activity were collected from 62 of 80 post-MI patients (<50 years) over seven days, alongside validated questionnaires (IPAQ, SF-36, DASS-21). Time-series clustering and principal component analysis characterized heart rate dynamics and activity patterns. Results: Most participants showed sedentary behaviour (2,000–4,000 steps/day), though self-reported health and psychological well-being were satisfactory. The device provided reliable, clinically meaningful data, particularly when linked to clinician feedback. Participants expressed interest in using such technologies, especially if supported by reimbursement and professional guidance. Despite limitations—short monitoring, small heterogeneous samples, and accuracy constraints—the findings suggest wearable systems can enhance remote monitoring, patient engagement, and early intervention in post-MI care. Broader studies and supportive policies are recommended. Conclusion: Overall, integrating wearable technologies with professional oversight and patient participation may substantially improve recovery and outcomes for young MI survivors.
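
A compact sketch of the analysis style named in the Methods, clustering daily heart-rate profiles and projecting them with PCA; the data below are synthetic stand-ins, not the study's measurements:

    # Cluster synthetic daily heart-rate profiles (24 hourly values per
    # patient) and inspect them with PCA. All data here are fake.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    hours = np.arange(24)
    active = 70 + 20 * np.sin(hours / 24 * np.pi)     # daytime-active profile
    sedentary = 68 + 4 * np.sin(hours / 24 * np.pi)   # flat, sedentary profile
    profiles = np.vstack([active + rng.normal(0, 2, (30, 24)),
                          sedentary + rng.normal(0, 2, (32, 24))])  # 62 "patients"

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
    coords = PCA(n_components=2).fit_transform(profiles)  # 2-D view of the cohort
    print(np.bincount(labels), coords.shape)              # cluster sizes, (62, 2)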

Article
Computer Science and Mathematics
Other

Suraiya Akter Sathi

,

Nashrah Hasan

,

Abdullah Al Mamun

,

Md Salim Sadman Ifti

,

Mohammad Shafiul Alam Khan

Abstract: 5G is the fastest-growing generation of telecommunications technology and the future of the field. In the coming years, once it is fully deployed, it will be used by a large number of individuals all over the world. However, 5G carries significant security and privacy issues, so it is critical to properly identify existing security weaknesses and provide effective solutions to address them. Some security and privacy issues still exist in the standard 5G AKA protocol, which have been identified and addressed in recent literature. By taking advantage of these vulnerabilities, an adversary can perform attacks such as the SUCI replay attack, parallel session attack, and linkability attack. As a result, this standard protocol cannot ensure the location confidentiality of the subscriber. Recent literature has provided some effective ways to mitigate the vulnerabilities of the protocol. However, the standard 5G AKA protocol still has a number of security issues that are either partially or entirely unsolved. In this paper, those issues are addressed, and a security-enhanced 5G AKA protocol is proposed to mitigate the vulnerabilities of the standard protocol. The proposed protocol is designed to prevent such attacks and ensure the location confidentiality of the subscriber.
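
For intuition about the replay class of attack mentioned above, here is a textbook nonce-plus-MAC replay defense: the verifier rejects any nonce it has already seen. This is a generic construction for illustration only, not the paper's security-enhanced 5G AKA protocol:

    # Generic replay protection: MAC a fresh nonce under a shared key and
    # reject repeated nonces. Illustrative only; not the proposed protocol.
    import hmac, hashlib, os

    KEY = os.urandom(32)      # long-term shared key (stand-in for K in AKA)
    seen_nonces = set()       # verifier state

    def make_message(payload: bytes) -> tuple[bytes, bytes, bytes]:
        nonce = os.urandom(16)
        tag = hmac.new(KEY, nonce + payload, hashlib.sha256).digest()
        return nonce, payload, tag

    def verify(nonce: bytes, payload: bytes, tag: bytes) -> bool:
        if nonce in seen_nonces:                    # replayed message
            return False
        expected = hmac.new(KEY, nonce + payload, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):  # forged message
            return False
        seen_nonces.add(nonce)
        return True

    msg = make_message(b"SUCI-like identifier")
    print(verify(*msg), verify(*msg))   # True, then False on replay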

Article
Computer Science and Mathematics
Other

Addy Arif Bin Mahathir

,

Sivamuganathan Mohana Dass

,

Jerry Wingsky

,

Kelvin Chang

,

Joshua Loh Tze Han

,

Sai Rama Mahalingam

,

Noor Ul Amin

Abstract: This paper presents an extensive exploration of quantum computing as an emerging paradigm that operates on the principles of quantum mechanics to process information in ways unattainable by classical systems. It traces the evolution of quantum theory from its early conceptual foundations to its present-day technological applications, highlighting key milestones such as the development of qubits, quantum gates, and essential algorithms including Shor’s and Grover’s. The study examines the fundamental mechanisms of quantum superposition and entanglement, alongside the hardware and software innovations driving scalability and performance. By analysing experimental progress, programming models, and comparative advantages over classical and cloud computing, this paper underscores how quantum computing can transform industries such as data security, medicine, and artificial intelligence. Furthermore, it outlines future prospects involving error correction, neuromorphic integration, and commercialization trends that are shaping the next generation of computational technology.
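
As a concrete taste of one algorithm mentioned above, a short NumPy simulation of Grover's search on two qubits; with N = 4 states, a single oracle-plus-diffusion iteration finds the marked state with certainty. This is a generic illustration, not drawn from the paper:

    # Grover's search on 2 qubits (N = 4): one oracle + diffusion step
    # drives all amplitude onto the marked state |11>.
    import numpy as np

    N = 4
    s = np.full(N, 1 / np.sqrt(N))              # uniform superposition H|00>
    oracle = np.diag([1, 1, 1, -1])             # flips the sign of |11>
    diffusion = 2 * np.outer(s, s) - np.eye(N)  # inversion about the mean

    psi = diffusion @ (oracle @ s)              # one Grover iteration
    print(np.round(psi ** 2, 6))                # measurement probabilities: [0 0 0 1]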

Article
Computer Science and Mathematics
Other

Dimitrios Liarokapis

Abstract: This paper introduces the concept of hyper transfer—a qualitatively distinct form of knowledge transfer that operates through metaphorical, analogical, and allegorical mappings. While existing literature distinguishes between near transfer (application in similar contexts) and far transfer (application in dissimilar contexts), we argue that metaphorical knowledge transfer represents a third dimension that cannot be adequately captured by a linear continuum. We propose a triangular framework where hyper transfer occupies the apex, connecting to but remaining fundamentally distinct from both near and far transfer. This distinction is crucial for understanding how abstract conceptual structures are transferred across domains through cognitive mechanisms that prioritize relational correspondence over contextual similarity. We illustrate this framework with a pedagogical example from computer science education, demonstrating how technical concepts can be hyperbolically transferred to socio-political domains through metaphorical reasoning.

Review
Computer Science and Mathematics
Other

Suresh Neethirajan

Abstract: Artificial intelligence is transforming digital livestock farming, yet the same systems that improve welfare, efficiency, and emissions monitoring can impose large carbon costs from training, continuous inference, and hardware manufacture. This PRISMA-guided systematic review examines how Green AI can realign performance with environmental responsibility. We searched IEEE Xplore, Scopus, Web of Science, and the ACM Digital Library (January 2019–October 2025), screening 1,847 records and including 89 studies (61 with quantitative data). We address three questions: (RQ1) How do energy-efficient model designs reduce computational footprints while preserving accuracy? (RQ2) Which low-carbon machine-learning frameworks minimize training and inference emissions? (RQ3) How do sustainable infrastructures enable climate-positive deployments? Meta-analysis shows strong decoupling of performance from impact. Compression (pruning, quantization, distillation) achieves 70–95% parameter reductions with <5% accuracy loss. Lightweight architectures (e.g., MobileNet, EfficientNet) deliver 10–50× energy savings versus conventional CNNs, while neuromorphic systems achieve 200–1000× power reductions. Carbon-aware scheduling cuts emissions by ~70% via temporal and spatial workload placement; federated learning reduces communication energy by ~85% while preserving privacy; edge–fog–cloud hierarchies lower inference energy by ~87% by localizing computation. Six representative deployments report mean energy savings of 90.3% (85.9–99.96%) and cumulative CO₂ reductions of 2,175 kg with >91% accuracy retained. Key gaps remain: no ISO-aligned carbon metrics for agricultural AI; embodied emissions are rarely counted (17% of studies); accessibility for smallholders is limited; rebound effects are unquantified. We propose a roadmap prioritizing ISO-compliant accounting, low-cost solar or neuromorphic edge devices, rebound analysis, field validation, and multi-stakeholder Pareto optimization.
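
As a minimal illustration of one compression technique the review quantifies, here is global magnitude pruning on a toy weight matrix (illustrative only; the surveyed studies' models and numbers are not reproduced here):

    # Magnitude pruning: zero out the smallest 90% of weights by absolute
    # value, the simplest of the compression methods surveyed. Toy tensor.
    import numpy as np

    rng = np.random.default_rng(1)
    weights = rng.normal(size=(256, 256))

    sparsity = 0.90
    threshold = np.quantile(np.abs(weights), sparsity)  # cut-off magnitude
    mask = np.abs(weights) >= threshold
    pruned = weights * mask

    print(f"kept {mask.mean():.0%} of parameters")      # ~10%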

Article
Computer Science and Mathematics
Other

Evi M.C. Sijben

,

Vanessa Volz

,

Tanja Alderliesten

,

Peter A.N. Bosman

,

Berit M. Verbist

,

Erik F. Hensen

,

Jeroen C. Jansen

Abstract: Background: Paragangliomas of the head and neck are rare, benign and indolent to slow-growing tumors. Not all tumors require immediate active intervention, and surveillance is a viable management strategy in a large proportion of cases. Treatment decisions are based on several tumor- and patient-related factors, with the tumor progression rate being a predominant determinant. Accurate prediction of tumor progression has the potential to significantly improve treatment decisions, by helping to identify patients who are likely to require active treatment in the future. It furthermore enables better-informed timing for follow-up, allowing early intervention for those who will ultimately need it, and optimization of the use of resources (such as MRI scans). Crucial to this is having reliable estimates of the uncertainty associated with a future growth forecast, so that this can be taken into account in the decision-making process. Methods: For various tumor growth prediction models, two methods for uncertainty estimation were compared: a historical-based one and a Bayesian one. We also investigated how incorporating either tumor-specific or general estimates of auto-segmentation uncertainty impacts the results of growth prediction. The performance of uncertainty estimates was examined both from a technical and a practical perspective. Study design: Method comparison study. Results: Data of 208 patients were used, comprising 311 paragangliomas and 1501 volume measurements, resulting in 2547 tumor growth predictions (a median of 10 predictions per tumor). As expected, the uncertainty increased with the length of the prediction horizon and decreased with the inclusion of more tumor measurement data in the prediction model. The historical method resulted in estimated confidence intervals where the actual value fell within the estimated 95% confidence interval 94% of the time. However, this method resulted in confidence intervals that were too wide to be clinically useful (often over 200% of the predicted volume), and showed poor ability to differentiate growing and stable tumors. The estimated confidence intervals of the Bayesian method were much narrower. However, the tumor volume fell only 78% of the time within its estimated 95% confidence interval. Despite this, the Bayesian method showed good results for distinguishing between growing and stable tumors, which has arguably the most practical value. When combining all growth models, the Bayesian method that uses tumor-specific auto-segmentation uncertainties resulted in an 86% correct classification of growing and non-growing tumors. Conclusions: Of the methods evaluated for predicting paraganglioma progression, the Bayesian method is the most useful in the considered context, because it shows the best discrimination between growing and non-growing tumors. To determine how these methods could be used and what their value is for patients, they should be further evaluated in a clinical setting.
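
The coverage statistic used to judge the interval estimates above (94% for the historical method, 78% for the Bayesian one) can be computed in a few lines; the numbers below are synthetic, not the study's data:

    # Empirical coverage of nominal 95% prediction intervals: the fraction
    # of observed volumes falling inside [lo, hi]. Synthetic numbers only.
    import numpy as np

    rng = np.random.default_rng(2)
    pred_mean = rng.uniform(1.0, 5.0, size=2547)           # predicted volumes (cm^3)
    pred_sd = 0.3 * pred_mean                              # predictive uncertainty
    observed = pred_mean + rng.normal(0, 0.3 * pred_mean)  # "true" future volumes

    lo = pred_mean - 1.96 * pred_sd
    hi = pred_mean + 1.96 * pred_sd
    coverage = np.mean((observed >= lo) & (observed <= hi))
    print(f"empirical coverage of nominal 95% intervals: {coverage:.1%}")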

Article
Computer Science and Mathematics
Other

Thuthukile Jita

,

Mamosa Thaanyane

Abstract: The rise of open distance e-learning (ODeL) has transformed the landscape of higher education, offering student-teachers flexible opportunities to pursue academic programs without constraints. Literature shows an increasing adoption of ODeL models in higher education to expand access. However, despite the expansion of ODeL, deep-rooted inequalities in technological access remain a pressing concern, particularly for student-teachers in rural and historically disadvantaged communities. Therefore, the current study aimed to assess the impact of technology access and equity on student-teachers’ learning in ODeL to inform more inclusive and context-sensitive policy and practice. Grounded in the digital divide framework, the study examined how access to technologies influences opportunities for participation in the digital age. This study adopted a qualitative approach with semi-structured interviews with student-teachers to understand their views on these issues. Data were analyzed using thematic analysis, with results revealing that some student-teachers do not own personal digital devices and depend on shared campus access. In contrast, student-teachers with a stable Internet connection and their own devices can access materials, attend virtual classes, and even meet academic deadlines. If these inequities are not addressed, implementing ODeL may inadvertently widen the educational divide.

Technical Note
Computer Science and Mathematics
Other

Daisuke Sugisawa

Abstract: This document is a Technical Note that reports on the design and proof-of-concept of a self-distributed Selective Forwarding Unit (SFU) architecture for real-time video streaming. Unlike a full research article, this note emphasizes the practical motivation, implementation details, and observed behaviors of the system. The central idea is to address the limitations of traditional prediction-based scaling by introducing a finite state machine (FSM)-based self-stabilizing method that enables autonomous scale-out and scale-in of distributed SFUs under dynamic traffic conditions.
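
A toy rendering of the FSM-based self-stabilizing idea, with hysteresis between scale-out and scale-in thresholds so the pool does not oscillate; states, thresholds, and the one-instance-per-step granularity are illustrative assumptions, not the note's actual design:

    # Toy FSM for autonomous scale-out/scale-in of SFU instances.
    # Hysteresis (scale out above 80% load, scale in below 30%) keeps the
    # system from oscillating. States and thresholds are illustrative.
    STABLE, SCALING_OUT, SCALING_IN = "STABLE", "SCALING_OUT", "SCALING_IN"

    class SfuScaler:
        def __init__(self):
            self.state, self.instances = STABLE, 1

        def step(self, load_per_instance: float) -> None:
            if self.state == STABLE:
                if load_per_instance > 0.8:
                    self.state = SCALING_OUT
                elif load_per_instance < 0.3 and self.instances > 1:
                    self.state = SCALING_IN
            elif self.state == SCALING_OUT:
                self.instances += 1        # spawn an SFU, then re-stabilize
                self.state = STABLE
            elif self.state == SCALING_IN:
                self.instances -= 1        # drain an SFU, then re-stabilize
                self.state = STABLE

    scaler = SfuScaler()
    for load in [0.5, 0.9, 0.9, 0.85, 0.2, 0.2]:
        scaler.step(load)
        print(scaler.state, scaler.instances)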

Article
Computer Science and Mathematics
Other

Paulo Serra

,

Ângela Oliveira

Abstract: The integration of Artificial Intelligence (AI) into educational environments is redefining how digital resources support teaching and learning, highlighting the need to understand how prompting strategies can enhance engagement, autonomy, and personalisation. This study explores the pedagogical role of prompt engineering in transforming static digital materials into adaptive and interactive learning experiences aligned with the principles of Education 4.0. A systematic literature review, conducted under the PRISMA protocol, examined the use of educational prompts and identified key AI techniques applied in education, including machine learning, natural language processing, recommender systems, large language models, and reinforcement learning. The findings indicate consistent improvements in academic performance, motivation, and learner engagement, while also revealing persistent limitations related to technical integration, ethical risks, and weak pedagogical alignment. Building on these insights, the article proposes a structured prompt engineering methodology encompassing interdependent components such as role definition, audience targeting, feedback style, contextual framing, guided reasoning, operational rules, and output format. A practical illustration demonstrates how embedding prompts into digital learning resources, exemplified through PDF-based exercises, enables AI agents to facilitate personalised, adaptive study sessions. The study concludes that systematic prompt design can reposition educational resources as intelligent, transparent, and pedagogically rigorous systems for knowledge construction.
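
A minimal sketch of how the interdependent components named above (role definition, audience targeting, feedback style, contextual framing, guided reasoning, operational rules, output format) might be assembled into one structured prompt; the template and field contents are our own illustration:

    # Assemble a structured educational prompt from the components the
    # article enumerates. Field contents are illustrative placeholders.
    from dataclasses import dataclass

    @dataclass
    class EducationalPrompt:
        role: str
        audience: str
        feedback_style: str
        context: str
        guided_reasoning: str
        rules: str
        output_format: str

        def render(self) -> str:
            return (
                f"Role: {self.role}\n"
                f"Audience: {self.audience}\n"
                f"Feedback style: {self.feedback_style}\n"
                f"Context: {self.context}\n"
                f"Reasoning guidance: {self.guided_reasoning}\n"
                f"Rules: {self.rules}\n"
                f"Output format: {self.output_format}"
            )

    prompt = EducationalPrompt(
        role="Patient tutor for an introductory programming course",
        audience="First-year students with no prior coding experience",
        feedback_style="Encouraging; hint first, full solution only on request",
        context="Exercise 3 of the attached PDF worksheet on loops",
        guided_reasoning="Ask the student to trace the loop before answering",
        rules="Never reveal answers to graded items",
        output_format="Short dialogue turns, at most 80 words each",
    )
    print(prompt.render())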

Short Note
Computer Science and Mathematics
Other

Christian R. Macedonia

Abstract: Kosmoplex Theory proposes that physical reality emerges from a finite computational substrate: an 8-dimensional octonionic space structured by the Fano plane, projecting into observable 4-dimensional spacetime through a discrete transformation mechanism. The framework is built on triadic closure over the alphabet {−1, 0, +1} and enforces reversibility through an affine geometric model of eternal cosmic transformation. From these axioms, the theory derives fundamental constants, including the fine structure constant α⁻¹ ≈ 137, the force hierarchy, and Planck-scale discreteness, as necessary consequences rather than free parameters. This work presents a systematic enumeration of falsifiable predictions, following the Popperian criterion that a scientific theory must specify conditions under which it could be experimentally refuted. Drawing on traditions from Cartesian methodological doubt to Popper’s demarcation principle, we demonstrate that theoretical strength derives not from unfalsifiable claims but from precise vulnerability to empirical test. Beginning with cosmological consistency checks (Olbers’ Paradox, dark matter/energy), we then detail decisive experimental protocols: altitude-dependent measurements of α at Δα/α ∼ 10⁻¹⁸ precision, tests of the 7ⁿ force coupling hierarchy, searches for Planck-scale discreteness via Lorentz violation, quantum information capacity bounds at 137 bits/cycle, and ultra-high-precision spectroscopic searches for granular structure in fundamental constants. Each prediction provides explicit falsification criteria; contradiction of any would necessitate substantial revision or abandonment of the framework.

Review
Computer Science and Mathematics
Other

Fabio Cumbo

,

Davide Chicco

,

Sercan Aygun

,

Daniel Blankenberg

Abstract: Vector-Symbolic Architectures (VSAs) provide a powerful, brain-inspired framework for representing and manipulating complex data across the biomedical sciences. By mapping heterogeneous information, from genomic sequences and molecular structures to clinical records and medical images, into a unified high-dimensional vector space, VSAs enable robust reasoning, classification, and data fusion. Despite their potential, the practical design and implementation of an effective VSA can be a significant hurdle, as optimal choices depend heavily on the specific scientific application. This article bridges the gap between theory and practice by presenting ten tips for designing VSAs tailored to key challenges in the biomedical sciences. We provide concrete, actionable guidance on topics such as encoding sequential data in genomics, creating holistic patient vectors from electronic health records, and integrating VSAs with deep learning models for richer image analysis. Following these tips will empower researchers to avoid common pitfalls, streamline their development process, and effectively harness the unique capabilities of VSAs to unlock new insights from their data.
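
One of the recurring design questions, encoding sequential genomic data, can be sketched with a generic VSA recipe: random bipolar hypervectors per base, position encoded by cyclic permutation, k-mers bound by elementwise multiplication, and the whole sequence bundled by summation. This is a common construction, not necessarily the article's exact prescription:

    # Encode a DNA sequence as a single hypervector: random bipolar codes
    # per base, position by cyclic permutation (np.roll), k-mer binding by
    # elementwise product, bundling by summation. Generic VSA recipe.
    import numpy as np

    D, K = 10_000, 3                               # dimensionality, k-mer size
    rng = np.random.default_rng(3)
    base_hv = {b: rng.choice([-1, 1], size=D) for b in "ACGT"}

    def encode(seq: str) -> np.ndarray:
        total = np.zeros(D)
        for i in range(len(seq) - K + 1):
            kmer = np.ones(D)
            for j, base in enumerate(seq[i : i + K]):
                kmer *= np.roll(base_hv[base], j)  # bind base at offset j
            total += kmer                          # bundle all k-mers
        return np.sign(total)

    a, b = encode("ACGTACGTAC"), encode("ACGTACGAAC")
    print(float(a @ b) / D)                        # similarity of near-identical reads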

Article
Computer Science and Mathematics
Other

Manale Boughanja

,

Zineb Bakraouy

,

Tomader Mazri

,

Ahmed Srhir

Abstract: With the increasing complexity of autonomous vehicle (AV) networks, ensuring enhanced cybersecurity has become a critical challenge. Traditional security techniques often struggle to adapt dynamically to evolving threats. This study proposes a novel domain ontology and assesses its coherence and effectiveness in structuring knowledge about AV security threats, intrusion characteristics, and corresponding mitigation techniques. Developed using Protégé 4.3 and the Web Ontology Language (OWL), the ontology formalizes cybersecurity concepts without directly integrating with an Intrusion Detection System (IDS). By providing a semantic representation of attacks and countermeasures, the ontology enhances threat classification and supports automated decision-making in security frameworks. Experimental evaluation demonstrated its effectiveness in improving knowledge organization and reducing inconsistencies in security threat analysis. Future work will focus on integrating the ontology with real-time security monitoring and IDS frameworks to enhance adaptive intrusion response strategies.
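
For readers who want a concrete picture, a tiny OWL ontology in this style can be written with owlready2 (the paper uses Protégé 4.3; the class and property names below are hypothetical stand-ins, not the proposed ontology):

    # Minimal OWL ontology for AV security concepts using owlready2.
    # Class and property names are hypothetical stand-ins.
    from owlready2 import get_ontology, Thing, ObjectProperty

    onto = get_ontology("http://example.org/av_security.owl")

    with onto:
        class Attack(Thing): pass
        class Countermeasure(Thing): pass

        class mitigatedBy(ObjectProperty):   # links an attack to its defenses
            domain = [Attack]
            range = [Countermeasure]

        spoofing = Attack("gps_spoofing")
        spoofing.mitigatedBy = [Countermeasure("signal_authentication")]

    onto.save(file="av_security.owl")
    print(list(onto.classes()))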

Short Note
Computer Science and Mathematics
Other

Evi Togia

Abstract: This research sets out to explore how data analytics can be harnessed to assess and promote diversity within cultural institutions. By examining patterns of engagement and representation across demographic groups, the study aims to provide a comprehensive understanding of inclusivity in cultural participation. The approach combines quantitative and qualitative methodologies, leveraging statistical analysis, machine learning, and thematic coding to extract insights from diverse data sources. Collaboration with cultural institutions will ensure the relevance and applicability of the findings. The anticipated outcomes include actionable recommendations for enhancing diversity and inclusion, as well as broader contributions to cultural informatics and policy development. Ultimately, the research aspires to foster more equitable and representative cultural ecosystems that reflect the richness of contemporary society.
