Computer Science and Mathematics


Article
Computer Science and Mathematics
Computer Science

Panagiotis Karmiris

Abstract: ExecMesh introduces cryptographically verifiable computation as a foundational primitive for regulatory compliance and audit trail requirements in AI/ML systems [1–3]. By combining commitment-based verification with secure multi-party oracles and a two-tier regulatory architecture, ExecMesh enables enterprises to meet FDA, SEC, and EU AI Act requirements while maintaining the benefits of decentralized infrastructure. Immediate Value Proposition: ExecMesh provides immediate value as an audit trail and provenance layer for regulated AI systems, independent of advances in zero-knowledge proof technology. Even without full verification of large neural networks, the system delivers cryptographic guarantees for data integrity, execution timestamps, and pipeline reproducibility—meeting core regulatory requirements today.
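As a rough illustration of the commitment-based audit-trail idea described above, the following minimal Python sketch binds pipeline inputs, a model version, and a timestamp into a salted hash commitment that an auditor can later re-verify; the function and field names are illustrative assumptions, not the ExecMesh API.

import hashlib
import time

def commit_pipeline_step(inputs: bytes, model_version: str, salt: bytes) -> dict:
    # Salted SHA-256 commitment binding the input data and the model version.
    # Illustrative sketch only; field names are assumptions, not ExecMesh's schema.
    digest = hashlib.sha256(salt + inputs + model_version.encode()).hexdigest()
    return {"commitment": digest, "model_version": model_version, "timestamp": time.time()}

def verify_commitment(record: dict, inputs: bytes, model_version: str, salt: bytes) -> bool:
    # An auditor holding the salt and the original inputs recomputes and checks the digest.
    expected = hashlib.sha256(salt + inputs + model_version.encode()).hexdigest()
    return record["commitment"] == expected
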
Article
Computer Science and Mathematics
Computer Science

Laxmi Kuravi

Abstract:

This research examines predictive modeling and explainable artificial intelligence (XAI) as agents of transformation in banking decision making, with emphasis on fairness and transparency. The Bank Marketing Dataset, acquired from the UCI Machine Learning Repository, forms the basis for developing predictive models that forecast term deposit subscriptions. We compare linear, tree-based, and ensemble methods and use SHAP (SHapley Additive exPlanations) to interpret model predictions. A fairness audit was also conducted across demographic groups to identify any biases that may be present. The results show that ensemble models, with XGBoost standing out in particular, achieve the highest predictive accuracy, while the XAI tools provide insight into feature contributions. The fairness analysis uncovered disparities in model outcomes across age, job, and marital status groups. These findings illustrate the digital-transformation potential for the banking industry: predictive performance can be enhanced while ethical dilemmas are managed through technological means.
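As a minimal sketch of the modeling-plus-interpretation step described in this abstract, the following Python example trains an XGBoost classifier on the UCI Bank Marketing data and computes SHAP feature contributions; the file name, separator, and target encoding are assumptions, not details taken from the paper.

import pandas as pd
import shap
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("bank-additional-full.csv", sep=";")    # assumed file name for the UCI dataset
X = pd.get_dummies(df.drop(columns=["y"]), drop_first=True)
y = (df["y"] == "yes").astype(int)                       # term deposit subscription label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = XGBClassifier(n_estimators=300, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)

# TreeExplainer yields per-feature SHAP contributions for each individual prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)                   # global view of feature contributions
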

Article
Computer Science and Mathematics
Computer Science

Noor Ul Amin

,

Addy Arif Bin Mahathir

,

Sivamuganathan Mohana Dass

,

Sai Rama Mahalingam

,

Priyanshu Das

Abstract: This study presents a comprehensive data visualization and statistical analysis of Singapore’s waste management trends to evaluate progress toward Sustainable Development Goal 12: Responsible Consumption and Production. Drawing on datasets from national agencies and international repositories, the research integrates information on waste generation, recycling, imports, and public behavior to produce a multi-dimensional understanding of the nation’s waste ecosystem. The study employs data preprocessing, transformation, and exploratory visualization techniques to address inconsistencies across diverse data sources and uncover significant temporal and sectoral patterns. Findings reveal increasing plastic dependency, stagnating recycling rates in domestic sectors, and varying degrees of public participation in sustainable practices. Furthermore, the analysis identifies the energy and resource savings achievable through material-specific recycling initiatives, particularly emphasizing non-ferrous metals and plastics. By consolidating visual narratives through Tableau-based dashboards, the study provides actionable insights for policymakers, sustainability researchers, and environmental agencies to design more data-informed strategies for achieving Singapore’s Zero Waste 2035 vision.
Article
Computer Science and Mathematics
Computer Science

Xiaopeng Li

,

Bo Chen

,

Junda She

,

Shiteng Cao

,

You Wang

,

Qinlin Jia

,

Haiying He

,

Zheli Zhou

,

Zhao Liu

,

Ji Liu

+20 authors

Abstract: The recommender systems community is witnessing a rapid shift from multi-stage cascaded discriminative pipelines (retrieval, ranking, and re-ranking) toward unified generative frameworks that directly generate items. Compared with traditional discriminative models, generative recommender systems offer the potential to mitigate cascaded error propagation, improve hardware utilization through unified architectures, and optimize beyond local user behaviors. This emerging paradigm has been catalyzed by the rise of generative models and the demand for end-to-end architectures that significantly improve Model FLOPS Utilization (MFU). In this survey, we provide a comprehensive analysis of generative recommendation through a tri-decoupled perspective of tokenization, architecture, and optimization, three foundational components that collectively define existing generative systems. We trace the evolution of tokenization from sparse ID- and text-based encodings to semantic identifiers that balance vocabulary efficiency with semantic expressiveness; analyze encoder–decoder, decoder-only, and diffusion-based architectures that increasingly adopt unified, scalable, and efficient backbones; and review the transition from supervised next-token prediction to reinforcement learning–based preference alignment enabling multi-dimensional preference optimization. We further summarize practical deployments across cascade stages and application scenarios, and examine key open challenges. Taken together, this survey is intended to serve as a foundational reference for the research community and as an actionable blueprint for industrial practitioners building next-generation generative recommender systems. To support ongoing research, we maintain a living repository at https://github.com/Kuaishou-RecModel/Tri-Decoupled-GenRec that continuously tracks emerging literature and reference implementations.
Article
Computer Science and Mathematics
Computer Science

Soyoon Kim

,

Jaehyun Park

Abstract: This study takes the road closure problem as a case of combinatorial optimization and proposes a hybrid method that combines a Graph Neural Network (GNN) with a Genetic Algorithm (GA). The proposed approach uses the GNN to predict a closure-potential score for each road (edge), and biases the GA’s initial solution generation and mutation operations accordingly. In a virtual road network environment, the hybrid method reduced average travel time by approximately 3% compared to using GA alone. These results suggest that combining learning-based heuristics with evolutionary search can be an efficient and practically viable approach to solving combinatorial optimization problems.
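To make the coupling concrete, below is a minimal Python sketch of how GNN-predicted closure-potential scores could bias a GA's population initialization and mutation, in the spirit of the approach described above; the data structures, weighting scheme, and parameter names are illustrative assumptions rather than the authors' implementation.

import random

def init_population(edges, scores, pop_size, k_closures):
    # Sample initial closure sets, weighting edges by their GNN-predicted scores.
    population = []
    for _ in range(pop_size):
        individual = set()
        while len(individual) < k_closures:
            individual.add(random.choices(edges, weights=[scores[e] for e in edges])[0])
        population.append(frozenset(individual))
    return population

def mutate(individual, edges, scores, rate=0.1):
    # Swap one closed edge for a new candidate, again favouring high-score edges.
    individual = set(individual)
    if individual and random.random() < rate:
        individual.remove(random.choice(list(individual)))
        candidates = [e for e in edges if e not in individual]
        weights = [scores[e] for e in candidates]
        individual.add(random.choices(candidates, weights=weights)[0])
    return frozenset(individual)
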
Article
Computer Science and Mathematics
Computer Science

P. Selvaprasanth

Abstract: Air pollution remains a critical global challenge, severely impacting environmental health and public well-being in urban areas. This article presents an integrated framework combining artificial intelligence (AI) with real-time IoT sensing networks for advanced air quality monitoring, predictive analytics, and enhanced public awareness. Leveraging machine learning models such as LSTM and Random Forest on datasets from urban sensor deployments, the system forecasts key pollutants (PM2.5, PM10, NO2, CO) with up to 98% accuracy and RMSE values as low as 5.2 μg/m³, outperforming traditional methods by 25-30% in temporal forecasting. The framework incorporates edge computing for low-latency data processing, anomaly detection for health risk alerts, and interactive dashboards for real-time public engagement, demonstrated through case studies in high-density cities showing a 40% increase in citizen-reported compliance with air quality advisories. Results validate the system's scalability, enabling proactive policy interventions and reduced healthcare burdens from pollution-related illnesses.
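As one hedged illustration of the forecasting component, the sketch below builds a small Keras LSTM that predicts the next hourly PM2.5 reading from a 24-hour window; the window length, layer sizes, and variable names are assumptions, not the deployed system's configuration.

import numpy as np
from tensorflow import keras

def make_windows(series, lookback=24):
    # Turn a 1-D PM2.5 series into (lookback-step window, next-step target) pairs.
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return np.array(X)[..., None], np.array(y)

model = keras.Sequential([
    keras.Input(shape=(24, 1)),
    keras.layers.LSTM(64),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
# pm25_values is an assumed 1-D array of hourly sensor readings:
# X_train, y_train = make_windows(pm25_values)
# model.fit(X_train, y_train, epochs=20, batch_size=32, validation_split=0.1)
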
Article
Computer Science and Mathematics
Computer Science

Yang Guang

,

Sho Sakurai

,

Takuya Nojima

,

Koichi Hirota

Abstract: In social virtual reality (VR) and metaverse platforms, users express identity through both avatar appearance and on-avatar textual cues, such as speech balloons. However, little is known about how the harmony between these cues influences self-representation and social impressions. We propose that when avatar appearance and text design, including color, font, and tone, are consistent, users experience stronger self-expression fit and elicit greater interpersonal affinity. A within-subject study (N = 21) in VRChat manipulated social context, color harmony between avatar hair and text, and style or content consistency between tone and font. Questionnaires provided composite indices for perceived congruence, self-expression fit, and affinity. Analyses included repeated-measures ANOVA, linear mixed-effects models, and mediation tests. Results showed that congruent pairings increased both self-expression fit and affinity compared to mismatches, with mediation analyses suggesting that self-expression fit partially carried the effect. These findings integrate theories of avatar influence and computer-mediated communication into a framework for metaverse design, highlighting the value of consistent avatar and text styling.
Article
Computer Science and Mathematics
Computer Science

P Meenalochini

Abstract: The deployment of Zero Trust security models in hybrid cloud infrastructures represents a transformative approach to cybersecurity, shifting away from traditional perimeter-based defenses to a model of "never trust, always verify." By continuously authenticating and authorizing all users and devices regardless of their location, Zero Trust minimizes lateral movement of threats within distributed environments. This framework leverages robust identity verification, micro-segmentation, and least privilege access to establish secure, granular control over access to resources. Continuous monitoring and dynamic verification mechanisms ensure that access privileges adapt in real time based on evolving risk profiles, enhancing resistance to sophisticated cyber threats. Implementation in hybrid clouds requires integration of cloud-native and on-premises controls, automated policy enforcement, and strong data protection measures, addressing the complexity and diversity of hybrid environments. Collectively, these strategies strengthen access control while significantly reducing the attack surface, thereby improving overall organizational security posture.
Article
Computer Science and Mathematics
Computer Science

Arsen Suranov

Abstract: A game's visual style strongly shapes how players experience it, influencing immersion, emotional engagement, cognitive commitment, and overall satisfaction. For years, studios were fixated on hyper-realistic graphics in the belief that realism meant better games. Yet the enormous and lasting success of games with non-photorealistic rendering styles, such as cel-shading, pixel art, minimalist design, and abstract aesthetics, challenges this conventional perspective. This paper examines how a game's visual style affects players by reviewing the existing literature, synthesizing its findings, and developing a methodological framework for subsequent empirical studies. The central argument is that what matters most is not how realistic or technically impressive a game's graphics are, but how well the visual style fits the game's narrative, its mechanics, and the emotions it aims to elicit. The paper draws on key psychological frameworks, including the Capacity Model of narrative comprehension and the Theory of Affective Response to Media, to explain the mechanisms by which different visual styles facilitate or hinder cognitive absorption, emotional connection, and long-lasting involvement. The proposed research would combine surveys with controlled experiments, using subjective self-report measures to capture key player experiences both quantitatively and qualitatively, including spatial presence, affective involvement, sentiment, aesthetics, and gameplay enjoyment. The aim is to help game developers make better-informed choices about how their games look, not merely to impress, but to create memorable and enjoyable experiences.
Article
Computer Science and Mathematics
Computer Science

Gulkaiyr Toktomusheva

Abstract: Efficient indexing remains a central factor in achieving predictable performance in modern relational database systems. PostgreSQL provides six native index types—B-Tree, Hash, GiST, SP-GiST, GIN, and BRIN—yet their relative behaviour under different workloads has been characterized in a variety of empirical studies, technical reports and official documentation rather than within a single unified benchmark. This paper presents a comparative, literature-based analysis of these index types across transactional (OLTP), analytical (OLAP), full-text, JSONB, spatial and time-series workloads. Drawing from existing benchmarks and evaluations, the study synthesizes reported insights on index build time, query latency, storage footprint, maintenance overhead, and index bloat across PostgreSQL deployments. Prior work consistently finds that B-Tree remains the most robust default choice for OLTP equality and range workloads; GIN provides the lowest latency for full-text and JSONB containment queries at the cost of substantial maintenance overhead and index bloat; GiST and SP-GiST dominate spatial workloads; and BRIN offers the best scalability for append-only analytical and time-series tables due to compact block-range summarization. The analysis highlights that index selection in PostgreSQL must be guided by workload semantics, update intensity, and storage constraints rather than by generic heuristics. Based on the synthesized findings, the paper proposes a practical recommendation matrix that maps workloads to suitable index types, providing actionable guidelines for database practitioners. By consolidating previously fragmented benchmark evidence into a single coherent review, the study clarifies when each native PostgreSQL index type is likely to be the most effective choice in practice.
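As a compact, hedged illustration of the workload-to-index mapping that the proposed recommendation matrix formalizes, the Python snippet below encodes the headline findings summarized in this abstract; the workload labels are assumptions, and the paper's actual matrix is richer.

INDEX_RECOMMENDATION = {
    "oltp_equality_and_range": "B-Tree",
    "full_text_search": "GIN",
    "jsonb_containment": "GIN",
    "spatial": "GiST or SP-GiST",
    "append_only_time_series": "BRIN",
}

def suggest_index(workload: str) -> str:
    # Fall back to B-Tree, the most robust default reported in prior work.
    return INDEX_RECOMMENDATION.get(workload, "B-Tree")
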
Article
Computer Science and Mathematics
Computer Science

Alikhan Alybaev

Abstract: Choosing an appropriate game engine, especially for independent developers, has become increasingly important as the video game sector has grown. This study offers a comparative evaluation of Unity and Godot for 2D game production. To assess three key elements, development process, performance, and usability, a basic 2D platformer prototype was built in both engines using the same assets and design. Performance metrics covered frame rate (FPS), memory usage, and build size, while usability was evaluated through qualitative aspects such as ease of installation, project organization, learning curve, community support, interface clarity, and documentation quality. The results clarify the advantages and disadvantages of each engine, offering useful guidance for developers, especially novices, in selecting a 2D game development platform.
Review
Computer Science and Mathematics
Computer Science

Nurzhibek Makushova

Abstract: Today, the digital world is changing incredibly fast: new technologies, apps, services, and AI tools appear every day. But as these technologies grow, so do cyber threats. Phishing, online fraud, and other attacks are becoming more common and more sophisticated. This work explores different ways to protect people and businesses from these threats and evaluates how effective these methods really are. In my research, I used a qualitative comparative method and examined four methods of protection against phishing and online fraud. It is important to regularly update and test security measures, as attackers constantly improve their schemes, create new ones, and adapt to existing protection tools.
Article
Computer Science and Mathematics
Computer Science

Timur Ibragimov

Abstract: As modern applications increasingly demand low-latency access, high availability and elastic scalability, traditional single-node databases fail to handle growing workloads. To address these limitations, distributed systems apply data sharding strategies that divide datasets across multiple nodes. This study compares two major sharding approaches—horizontal and vertical sharding—by examining their scalability, consistency guarantees, operational complexity and real-world deployment behavior. Using a comparative qualitative methodology supported by technical documentation and case evaluations of systems such as Google Spanner, Amazon Aurora, Cassandra, Vitess and PostgreSQL+Citus, the research highlights core performance trade-offs, fault-tolerance implications and cost considerations. Findings indicate that horizontal sharding provides superior throughput and availability under large-scale transactional workloads, while vertical sharding optimizes read-heavy operations and strict attribute-based consistency. The study concludes that hybrid sharding can balance these trade-offs for mixed workloads, and recommends workload-driven selection criteria for distributed database architecture.
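As a brief illustration of the horizontal-sharding mechanism discussed above, the Python sketch below routes rows to shards by hashing a shard key; the key format and shard count are assumptions used only for demonstration.

import hashlib

def shard_for(key: str, num_shards: int) -> int:
    # Hash the shard key (e.g. a user id) and map it onto one of num_shards nodes.
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

# Example: route writes for three users across 4 shards.
for user_id in ["u-1001", "u-1002", "u-1003"]:
    print(user_id, "-> shard", shard_for(user_id, 4))
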
Article
Computer Science and Mathematics
Computer Science

Gollapalli Venkata Vinod

,

Venkatrao Palacharla

,

Haraprasad Mondal

,

Mohammad Soroosh

,

Mohammad Javad Maleki

,

Sandip Swarnakar

Abstract: A responsive dual-core surface plasmon resonance-based photonic crystal fiber (DCSPRPCF) biosensor is presented in this study, designed for the early and accurate detection of carcinoma. It is capable of identifying different types of carcinoma cells, such as MD Anderson-Metastatic Breast-231 (MDA-MB-231), Michigan Cancer Foundation-7 (MCF-7), Pheochromocytoma (PC12), and Jurkat cells. It incorporates a titanium dioxide intermediate layer that improves adhesion between the silica fiber and the gold (Au) layer at the surface of the plasmonic material, resulting in excellent performance. Carcinoma is detected by calculating the variation in resonance caused by the difference in refractive index between healthy and carcinogenic cells, using the frequency-analysis method. For MDA-MB-231 cells, the sensor achieves a peak sensitivity of 16428.54 nm/RIU when the refractive index changes by 0.014. In addition, a high figure of merit, peaking at 74.29 RIU⁻¹ for PC12 cell detection, supports the reliability and accuracy of this sensor.
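For context, SPR wavelength sensitivity is conventionally defined as the resonance shift per unit change in analyte refractive index, S = Δλ_res / Δn_a, and the figure of merit as FOM = S / FWHM; these are the standard definitions, not values taken from the paper. Applying the abstract's own figures, a sensitivity of 16428.54 nm/RIU over a refractive-index change of 0.014 corresponds to a resonance shift of roughly 16428.54 × 0.014 ≈ 230 nm.
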
Article
Computer Science and Mathematics
Computer Science

Omar Anwar Zegama

,

Anas Albakar

,

Soobia Saeed

Abstract: Lung cancer remains among the most common causes of cancer-related death worldwide, largely because diagnosis is often delayed and treatment options in advanced stages are limited. Early recognition is therefore critical to improving survival, making predictive analytics a necessary tool in the healthcare sector. The present work employs data mining and machine learning methods to classify lung cancer patients into early and late disease stages, using a Kaggle dataset that contains 53,427 clinical and demographic records. After thorough data cleaning, handling of missing values, and encoding of categorical variables, three classification models, Logistic Regression, Random Forest, and XGBoost, were built and evaluated. Exploratory data analysis showed a balanced class distribution and slight multicollinearity among variables such as age, gender, tobacco usage, race, and days to diagnosis. Model performance was measured using accuracy, precision, recall, F1-score, and ROC-AUC metrics. The best performance was obtained by Logistic Regression (accuracy = 0.56, F1-score = 0.57, AUC = 0.58), which outperformed Random Forest and XGBoost. Although the overall predictive accuracy remained modest, the results point to the potential of data-driven modeling to help clinicians prioritize high-risk patients for early treatment. Recommendations for future work include advanced feature engineering, hyperparameter tuning, handling of class imbalance, and the incorporation of additional clinical variables to make the model stronger and more useful for diagnosis.
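As a hedged sketch of the model comparison described above, the Python example below trains the three classifiers and reports accuracy, F1-score, and ROC-AUC on a held-out split; the file name and the "stage" label column are assumptions, not details from the paper.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

df = pd.read_csv("lung_cancer.csv")                                  # assumed file name
X = pd.get_dummies(df.drop(columns=["stage"]), drop_first=True)      # "stage" label name is an assumption
y = (df["stage"] == "late").astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(n_estimators=300, random_state=0),
    "XGBoost": XGBClassifier(eval_metric="logloss"),
}
for name, clf in models.items():
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    proba = clf.predict_proba(X_test)[:, 1]
    print(name, accuracy_score(y_test, pred), f1_score(y_test, pred), roc_auc_score(y_test, proba))
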
Article
Computer Science and Mathematics
Computer Science

Soobia Saeed

Abstract:

Heart disease remains one of the leading causes of death worldwide, underscoring the need for early and accurate diagnostic methods that aid clinical decision-making. In this project, a machine learning-based predictive system for heart disease is developed and evaluated using a real-world Heart Failure Prediction dataset containing 918 anonymized patient records and 11 clinical attributes. As part of data preprocessing, medically impossible values were identified and treated: invalid cholesterol readings were replaced with the median, nonsensical entries were removed, categorical variables were encoded, and features were standardized to prepare the dataset for model training. Three supervised learning algorithms, Logistic Regression, Support Vector Machine (SVM) with an RBF kernel, and Random Forest, were then implemented and their binary-classification performance evaluated. To guarantee data quality and model trustworthiness, exploratory data analysis (EDA) and cross-validation were performed. Model performance was evaluated using accuracy, precision, recall, F1-score, confusion matrices, and ROC-AUC metrics. The results indicate that the Random Forest classifier produced the best overall performance, with an accuracy of 87.50%, precision of 91.59%, recall of 87.50%, F1-score of 89.50%, and an AUC of 0.9391, beating both SVM and Logistic Regression. Although Logistic Regression provided an interpretable baseline, its higher false-negative rate made it less suitable for high-risk clinical applications. SVM displayed strong non-linear classification power but required more computational tuning. Taken together, these results show that Random Forest is the most dependable and robust model for heart disease prediction on this dataset. Future work should incorporate broader lifestyle factors, improved data collection methods, more sophisticated outlier handling, additional machine learning models, and possibly deployment as a clinical decision-support tool through web or mobile applications.
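Below is a minimal sketch of the preprocessing route described above (treating medically impossible zero cholesterol readings as missing, median imputation, one-hot encoding, and standardization) followed by a cross-validated Random Forest; the column names follow the common public version of this dataset and are assumptions here.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("heart.csv")                                        # assumed file name
# A cholesterol reading of zero is medically impossible; treat it as missing and impute the median.
df["Cholesterol"] = df["Cholesterol"].replace(0, np.nan)
df["Cholesterol"] = df["Cholesterol"].fillna(df["Cholesterol"].median())
X = pd.get_dummies(df.drop(columns=["HeartDisease"]), drop_first=True)   # encode categorical variables
X = StandardScaler().fit_transform(X)                                # feature standardization
y = df["HeartDisease"]
clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("5-fold ROC-AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
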

Article
Computer Science and Mathematics
Computer Science

Wilson Chango

,

Ana Salguero

,

Tatiana Landivar

,

Roberto Vásconez

,

Geovanny Silva

,

Pedro Peñafiel-Arcos

,

Homero Velasteguí-Izurieta

Abstract: This study aimed to evaluate the comparative predictive efficacy of the SARIMA statistical model and the Prophet machine learning model for forecasting monthly traffic accidents across the 24 provinces of Ecuador, addressing a critical research gap in model selection for geographically and socioeconomically heterogeneous regions. By integrating classical time series modeling with algorithmic decomposition techniques, the research sought to determine whether a universally superior model exists or if predictive performance is inherently context-dependent. Monthly accident data from January 2013 to June 2025 were analyzed using a rolling-window evaluation framework. Model accuracy was assessed through Mean Absolute Percentage Error (MAPE) and Root Mean Square Error (RMSE) metrics to ensure consistency and comparability across provinces. Results revealed a global tie, with 12 provinces favoring SARIMA and 12 favoring Prophet, indicating the absence of a single dominant model. However, regional patterns of superiority emerged: Prophet achieved exceptional precision in coastal and urban provinces with stationary and high-volume time series—such as Guayas, which recorded the lowest MAPE (4.91%)—while SARIMA outperformed Prophet in the Andean highlands, particularly in non-stationary, medium-to-high-volume provinces such as Tungurahua (MAPE 6.07%) and Pichincha (MAPE 13.38%). Computational instability in MAPE was noted for provinces with extremely low accident counts (e.g., Galápagos, Carchi), though RMSE values remained low, indicating a metric rather than model limitation. Overall, the findings invalidate the notion of a universally optimal model and underscore the necessity of adopting adaptive, region-specific modeling frameworks that account for local geographic, demographic, and structural factors in predictive road safety analytics.
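As a hedged sketch of the rolling-window comparison the abstract describes, the Python example below refits SARIMA and Prophet at each origin of a monthly series, produces one-step forecasts, and reports mean MAPE for each model; the model orders, window length, and frequency string are assumptions rather than the study's exact configuration.

import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX
from prophet import Prophet

def mape(actual, forecast):
    # MAPE is undefined when an actual count is zero, the instability noted for low-count provinces.
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100

def rolling_eval(series: pd.Series, window=96, horizon=1):
    # series: monthly accident counts with a DatetimeIndex (assumed input format).
    sarima_err, prophet_err = [], []
    for end in range(window, len(series) - horizon + 1):
        train, actual = series.iloc[:end], series.iloc[end:end + horizon]
        sarima = SARIMAX(train, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
        sarima_err.append(mape(actual, sarima.forecast(horizon)))
        m = Prophet(yearly_seasonality=True).fit(pd.DataFrame({"ds": train.index, "y": train.values}))
        future = m.make_future_dataframe(periods=horizon, freq="MS")
        prophet_err.append(mape(actual, m.predict(future)["yhat"].tail(horizon)))
    return np.mean(sarima_err), np.mean(prophet_err)
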
Article
Computer Science and Mathematics
Computer Science

Selvaprasanth P

Abstract: Secure Access Service Edge (SASE) is an innovative architectural model that combines wide area networking and comprehensive security services into a single cloud-delivered platform. Its deployment in distributed enterprise environments ensures robust protection of cloud applications by enforcing consistent, identity-driven access policies regardless of user location or device. SASE integrates essential capabilities such as Software-Defined Wide Area Networking (SD-WAN), secure web gateways, Cloud Access Security Brokers (CASB), firewall-as-a-service (FWaaS), and Zero Trust Network Access (ZTNA). This convergence enhances network optimization by reducing latency and backhauling traffic, thus improving user experience. The unified framework streamlines security operations, reduces complexity, and mitigates risks associated with disparate point solutions. Through dynamic policy enforcement, real-time threat detection, and end-to-end encryption, SASE addresses the complexity of securing modern distributed enterprises, enabling agile, scalable, and highly secure cloud connectivity.
Article
Computer Science and Mathematics
Computer Science

Argen Azanov

Abstract: The Java Persistence API (JPA) paired with Hibernate remains the backbone of most Java back-end systems, but its performance depends a great deal on how developers design entity mappings, queries, and caching strategies. We present four controlled experiments that measure the effect of the most common ORM configuration choices. The experiments test: (1) the relative performance of FetchType.LAZY and FetchType.EAGER for relational traversal; (2) the performance of JPQL queries, native SQL, and the Criteria API; (3) the efficiency of JDBC batching during bulk inserts; and (4) the efficacy of first-level (L1) and second-level (L2) caches in repeated-read tests. Findings indicate that well-tuned fetch plans can reduce query load by up to 80%, batching can accelerate inserts by an order of magnitude, and L2 caching dramatically reduces database load across transactions. Beyond the performance numbers, this paper presents practical recommendations for engineers selecting Hibernate settings. The objective is to link experimental insights with day-to-day backend engineering practice, producing metrics and specific recommendations for action.
Article
Computer Science and Mathematics
Computer Science

Nazmunisha N

Abstract: The integration of secure coding practices into agile development frameworks is a comprehensive strategy aimed at reducing the incidence of injection vulnerabilities and logic flaws that often compromise software security. Agile development, with its rapid iteration and continuous delivery models, demands that security be woven seamlessly into every phase of the lifecycle rather than isolated to specific testing stages. This integration ensures that potential security threats are addressed proactively, aligning with agile principles of constant feedback and improvement. Continuous developer training plays a pivotal role by equipping developers with up-to-date knowledge and skills to identify and mitigate security risks in real time, which directly complements the use of real-time code scanning tools embedded within development pipelines. These automated tools provide immediate feedback by detecting insecure coding patterns and vulnerabilities at the earliest stages, supporting swift remediation without disrupting the agile workflow. Together, these practices foster a secure coding culture that safeguards applications while preserving the flexibility and speed that agile methodologies offer, ultimately enhancing overall software quality and trustworthiness.
