Computer Science and Mathematics


Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Asadbek Yussupov,

Ruslan Isaev

Abstract: Conditional Value-at-Risk (CVaR) is one of the most widely used risk measures in finance, applied in risk management as a complement to Value-at-Risk (VaR). VaR estimates the potential loss at a given confidence level, such as 95% or 99%, but does not account for tail risks. CVaR addresses this gap by calculating the expected loss beyond the VaR threshold, providing a more comprehensive assessment of extreme events. This research explores the application of Denoising Diffusion Probabilistic Models (DDPMs) to enhance CVaR calculations. Traditional CVaR methods often fail to capture tail events accurately, whereas DDPMs generate a wider range of market scenarios, improving the estimation of extreme risks. However, these models require significant computational resources and may present interpretability challenges.
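For intuition, here is a minimal numpy sketch of the VaR/CVaR relationship on simulated loss scenarios; the Gaussian sampler is a stand-in for the DDPM-generated scenarios the paper studies, not anything from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in scenario generator: in the paper's setting these draws would
# come from a trained DDPM sampler rather than a Gaussian.
losses = rng.normal(loc=0.0, scale=1.0, size=100_000)

alpha = 0.99  # confidence level
var = np.quantile(losses, alpha)      # VaR: loss threshold at the 99th percentile
cvar = losses[losses >= var].mean()   # CVaR: expected loss beyond that threshold

print(f"VaR(99%)  = {var:.3f}")
print(f"CVaR(99%) = {cvar:.3f}")
```

Because CVaR averages the tail beyond VaR, any generator that reproduces tail behavior more faithfully (the role the paper assigns to DDPMs) directly sharpens the second line.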
Article
Computer Science and Mathematics
Computer Vision and Graphics

Shahzad Ahmad Qureshi,

Muhammad Owais,

Ayesha Hakim,

Qasim Ijaz,

Syed Taimoor Hussain Shah,

Syed Adil Hussain Shah,

Adil Aslam Mir

Abstract: Training tools based on virtual reality (VR) are a promising solution in high-risk environments where practical and cognitive skills are vitally important and, at the same time, expensive to develop. VR now supports advanced simulation and comprehensive performance tracking, effectively supporting the practical training of personnel for complex and sensitive operations. The present paper aims to design and develop a VR training tool for the assembly of a prototype infrared computed tomography (CT) machine, providing users with high-precision, modular, and interactive components in a controlled virtual environment. The implementation drew on a wide range of tools, including Unity (runtime development), Visual Studio (integrated development), Blender (3D model creation), Substance Painter (texturing), Meta SDK (VR interaction), and AutoCAD (reference models). The combined use of these tools produced a VR Training and Maintenance System (VRTMS) that delivers highly optimized performance and overcomes the drawbacks of traditional training systems by providing an adaptive and immersive user experience. The tool enables a rich and user-friendly experience, crucial for training employees in assembly and maintenance procedures and for precise skill development.
Review
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Janaka Ishan Senarathna

Abstract: The National Institute of Standards and Technology (NIST) has recently concluded the third round of its Post-Quantum Cryptography Standardization Process, selecting four finalist algorithms for standardization: CRYSTALS-Kyber, CRYSTALS-Dilithium, FALCON, and SPHINCS+. These algorithms are designed to withstand attacks from both classical and quantum computers, ensuring the long-term security of digital communications. This paper presents a comprehensive comparative analysis of the security margins and practical deployment readiness of these finalist algorithms. CRYSTALS-Kyber, a key encapsulation mechanism based on the hardness of the Module Learning With Errors problem, offers strong security and efficient performance. CRYSTALS-Dilithium, a digital signature algorithm based on module lattices, provides robust security guarantees and relatively straightforward implementation. FALCON, a lattice-based digital signature algorithm utilizing the Fast Fourier Transform, offers compact signatures and fast verification but faces implementation challenges due to its reliance on floating-point arithmetic. SPHINCS+, a hash-based signature scheme, stands out as a conservative choice with security based solely on the well-established security of hash functions. The analysis reveals that while each algorithm has its strengths, they also face unique challenges in terms of side-channel vulnerabilities, formal security proofs, and performance trade-offs. The practical deployment of these algorithms requires careful consideration of specific security requirements, performance needs, and resource constraints. Ongoing research efforts aim to enhance the algorithms' resistance against advanced attacks, optimize their performance across diverse platforms, and develop standardized and secure hybrid cryptographic systems. The transition to post-quantum cryptography will involve challenges such as interoperability with legacy systems, the need for clear standards and regulatory guidance, and the costs associated with software and hardware updates. Continued engagement with the cryptographic community and monitoring of the evolving security landscape will be crucial for ensuring a secure and effective migration to post-quantum cryptography.
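For readers assessing deployment readiness, the sketch below exercises one Kyber encapsulation round trip. It assumes the open-source liboqs-python bindings (the `oqs` module) and that the installed liboqs build exposes the `Kyber512` identifier; none of this comes from the paper itself.

```python
import oqs  # liboqs-python bindings (assumed installed; algorithm names depend on the build)

kem_alg = "Kyber512"  # NIST security level 1 parameter set

with oqs.KeyEncapsulation(kem_alg) as receiver, oqs.KeyEncapsulation(kem_alg) as sender:
    public_key = receiver.generate_keypair()                   # receiver publishes pk
    ciphertext, secret_snd = sender.encap_secret(public_key)   # sender encapsulates
    secret_rcv = receiver.decap_secret(ciphertext)             # receiver decapsulates

    assert secret_snd == secret_rcv  # both sides now hold the same shared secret
```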
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Mohammad Imtiyaz Gulbarga,

Khaled M.M. Alrantisi

Abstract: Urban resilience has become a critical paradigm for cities facing escalating threats from climate change, rapid urbanization, and infrastructure vulnerabilities. This paper presents a rigorous systematic review of 56 peer-reviewed studies (2015–2023) examining machine learning (ML) applications in urban resilience planning and disaster management. Our analysis reveals three dominant themes: (1) ML's growing role in predictive modeling of disasters through techniques like LSTM networks and CNNs, (2) emerging applications in infrastructure interdependency analysis using graph neural networks, and (3) innovative approaches to resource allocation through reinforcement learning. The review identifies significant gaps in geographic representation, with 78% of studies focused on developed nations, while vulnerable regions in the Global South remain understudied. We also highlight critical challenges in model interpretability, with only 15% of studies incorporating explainability tools like SHAP or LIME. The paper contributes a novel taxonomy classifying 12 major ML techniques by their urban resilience applications, computational requirements, and ethical considerations. Furthermore, we propose a framework for integrating ML into urban governance that emphasizes transfer learning for data-scarce regions and federated learning for privacy preservation. This work provides both researchers and policymakers with actionable insights for developing more equitable, robust, and transparent ML solutions for urban resilience challenges.
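Since the review flags how rarely explainability tooling appears, here is a minimal SHAP sketch on a synthetic tabular task. It assumes the `shap` and `scikit-learn` packages; the flood-risk feature names are hypothetical placeholders, not data from any reviewed study.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Hypothetical toy data: rows are city districts; columns stand in for
# rainfall, impervious-surface share, and drainage capacity (all synthetic).
rng = np.random.default_rng(1)
X = rng.random((200, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] - X[:, 2] + rng.normal(0, 0.1, 200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(shap_values.shape)  # (200, 3): one attribution per district per feature
```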
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Nan Jiang,

Wenxuan Zhu,

Xu Han,

Weiqiang Huang,

Yumeng Sun

Abstract: This study focuses on the challenge of predicting network traffic within complex topological environments. It introduces a spatiotemporal modeling approach that integrates Graph Convolutional Networks (GCN) with Gated Recurrent Units (GRU). The GCN component captures spatial dependencies among network nodes, while the GRU component models the temporal evolution of traffic data. This combination allows for precise forecasting of future traffic patterns. The effectiveness of the proposed model is validated through comprehensive experiments on the real-world Abilene network traffic dataset. The model is benchmarked against several popular deep learning methods. Furthermore, a set of ablation experiments is conducted to examine the influence of various components on performance, including changes in the number of graph convolution layers, different temporal modeling strategies, and methods for constructing the adjacency matrix. Results indicate that the proposed approach achieves superior performance across multiple metrics, demonstrating robust stability and strong generalization capabilities in complex network traffic forecasting scenarios.
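A minimal PyTorch sketch of the GCN-plus-GRU pattern the abstract describes: a graph convolution mixes node features spatially at each time step, and a GRU models their evolution. Layer sizes and the single GCN layer are illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GCNGRUForecaster(nn.Module):
    """One graph-convolution layer per time step, then a GRU over time.
    A sketch of the GCN+GRU idea only; dimensions are illustrative."""

    def __init__(self, num_nodes, in_dim, hid_dim):
        super().__init__()
        self.gcn_weight = nn.Linear(in_dim, hid_dim)
        self.gru = nn.GRU(num_nodes * hid_dim, 64, batch_first=True)
        self.head = nn.Linear(64, num_nodes)  # next-step traffic per node

    def forward(self, x, adj_norm):
        # x: (batch, time, nodes, in_dim); adj_norm: (nodes, nodes) normalized adjacency
        b, t, n, f = x.shape
        h = torch.relu(adj_norm @ self.gcn_weight(x))  # spatial mixing per step
        h = h.reshape(b, t, -1)                        # flatten node features
        out, _ = self.gru(h)                           # temporal evolution
        return self.head(out[:, -1])                   # forecast for the next step

model = GCNGRUForecaster(num_nodes=12, in_dim=1, hid_dim=16)
x = torch.randn(8, 24, 12, 1)   # 8 samples, 24 time steps, 12 network nodes
adj = torch.eye(12)             # placeholder for the dataset's normalized adjacency
print(model(x, adj).shape)      # torch.Size([8, 12])
```

The ablations the abstract mentions (graph depth, temporal module, adjacency construction) correspond to swapping the three components marked above.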
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Menghan Cheng,

Changsheng Wang

Abstract: This study employed a hybrid research design combining quantitative expert assessment and qualitative in-depth interviews to systematically examine the performance of AI-generated questionnaires based on single and integrated theoretical models, comparing them with manually created questionnaires. The research found that, under a single model, AI questionnaires showed significant differences from manual questionnaires in three dimensions: accuracy, comprehensiveness, and clarity. AI demonstrated limitations when handling complex and abstract variables, lacked the flexibility of human-designed questions, and tended to satisfy model requirements rather than research topic requirements. Under the integrated models, significant differences existed between the AI and manual questionnaires in four dimensions: accuracy, comprehensiveness, clarity, and redundancy. As the model complexity increased, the probability of problems in the AI questionnaires correspondingly increased. However, regardless of whether single or integrated models were used, AI questionnaires exhibited advantages in language fluency and generation efficiency and surpassed manual questionnaires in objectivity. Researchers may benefit from using AI to generate initial questionnaire drafts followed by manual corrections. This study not only reveals the advantages and limitations of AI-generated questionnaires but also provides theoretical and practical guidance for researchers. Future research should improve AI's contextual understanding capabilities and generation logic to further enhance the effectiveness of AI-assisted questionnaire design.
Article
Computer Science and Mathematics
Computer Networks and Communications

Owen Graham,

Dave Wright

Abstract: This study investigates the application of the Vosk speech recognition toolkit for speech-to-text transcription in interactive gaming applications. As the gaming industry increasingly integrates voice commands and conversational interfaces, effective speech recognition technology becomes essential for enhancing player experience and engagement. This research aims to evaluate the performance of Vosk in real-time transcription within gaming contexts, focusing on its accuracy, latency, and user feedback. A comprehensive methodology was employed, including the selection of diverse gaming scenarios, participant recruitment, and implementation of the Vosk toolkit. Key evaluation metrics, such as Word Error Rate (WER) and real-time performance measures, were utilized to assess the effectiveness of Vosk in recognizing gameplay-related commands and dialogue. The results indicate that Vosk achieved a significant reduction in WER, showcasing its adaptability to gaming-specific vocabulary and accents. Furthermore, latency measurements revealed that Vosk can process voice commands with minimal delay, making it suitable for dynamic gaming environments. User feedback highlighted the positive impact of speech recognition on gameplay immersion, with participants expressing satisfaction regarding the accuracy and responsiveness of the system. Overall, the findings demonstrate the potential of Vosk as a viable solution for integrating speech recognition into interactive gaming applications. This research contributes to the growing body of knowledge on speech technology in gaming and offers insights for developers seeking to enhance user experience through voice interaction. Future directions include exploring multilingual capabilities and further customization options to optimize performance in diverse gaming contexts.
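For orientation, standard Vosk usage looks like the sketch below; the model and audio paths are placeholders, and the `Model`/`KaldiRecognizer` calls shown are Vosk's public Python interface rather than anything specific to this study.

```python
import json
import wave
from vosk import Model, KaldiRecognizer

model = Model("model")                 # path to a downloaded Vosk model directory
wf = wave.open("command.wav", "rb")    # 16 kHz mono PCM expected
rec = KaldiRecognizer(model, wf.getframerate())

while True:
    data = wf.readframes(4000)
    if len(data) == 0:
        break
    if rec.AcceptWaveform(data):
        print(json.loads(rec.Result())["text"])       # finalized segment
    else:
        json.loads(rec.PartialResult())["partial"]    # low-latency partial hypothesis

print(json.loads(rec.FinalResult())["text"])
```

The partial results are what make the toolkit usable for in-game commands: a game loop can act on the low-latency partial hypothesis and reconcile against the finalized text.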
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Arjay Alba,

Jocelyn Villaverde

Abstract: This paper introduces the Knowledge Retention Score (KRS) as a novel performance metric to evaluate the effectiveness of Knowledge Distillation (KD) across diverse tasks including image classification, object detection, and image segmentation. Unlike conventional metrics that solely rely on accuracy, mean Average Precision (mAP), or Intersection over Union (IoU), KRS captures both feature similarity and output agreement between the teacher and student networks, offering a more nuanced measure of knowledge transfer. A total of 36 experiments were conducted using various KD methods—Vanilla KD, SKD, FitNet, ART, UET, GKD, GLD, and CRCD—across multiple datasets and architectures. The results showed that KRS is strongly correlated with conventional metrics, validating its reliability. Moreover, ablation studies confirmed KRS's sensitivity in reflecting the knowledge gained post-distillation. The paper also ranked KD methods by performance gain, revealing that techniques like UET and GLD consistently outperform others. Lastly, the architectural generalization analysis demonstrated KRS's robustness across different teacher–student pairs. These findings establish KRS as a comprehensive and interpretable metric, capable of guiding KD method selection and providing deeper insight into knowledge transfer beyond standard performance measures.
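The abstract names KRS's two ingredients, feature similarity and output agreement, but not its formula; the sketch below is a hypothetical instantiation for intuition only, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def knowledge_retention_score(feat_t, feat_s, logits_t, logits_s, w=0.5):
    """Illustrative stand-in for KRS: combines (a) cosine similarity of
    teacher/student features and (b) agreement of their softmax outputs.
    The weighting and both terms are assumptions, not the paper's formula."""
    feat_sim = F.cosine_similarity(feat_t.flatten(1), feat_s.flatten(1)).mean()
    # 1 - total variation distance between output distributions, in [0, 1]
    out_agree = 1.0 - 0.5 * (
        F.softmax(logits_t, -1) - F.softmax(logits_s, -1)
    ).abs().sum(-1).mean()
    return w * feat_sim + (1 - w) * out_agree
```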
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Michael Reynolds

Abstract: As data-driven decision-making becomes increasingly important, the development and utilization of public data play a critical role in social governance and economic growth. However, public data often suffer from issues such as large scale, structural complexity, and inconsistent quality, which traditional data processing methods struggle to address effectively. In recent years, breakthroughs in large language models (LLMs) within the field of natural language processing have introduced new opportunities for processing and analyzing public data. This paper explores key aspects of public data development and utilization, focusing on typical application scenarios of LLMs in data cleaning, insight extraction, privacy protection, and compliance review. Additionally, it analyzes the technical limitations and ethical challenges associated with these models. Based on this analysis, the paper proposes suggestions for optimizing model capabilities, reducing resource costs, and establishing standardized application frameworks. Lastly, it anticipates the development prospects of LLMs in multimodal data processing and cross-domain collaborative analysis, aiming to provide theoretical support and practical guidance for applying LLMs in public data development.
Article
Computer Science and Mathematics
Computer Vision and Graphics

Michael Reynolds

Abstract: Image processing and computer vision are rapidly advancing technological fields that have been widely applied in medical imaging, autonomous driving, intelligent manufacturing, and other industries. With the rise of deep learning, traditional image processing methods have been gradually replaced by deep learning algorithms, achieving remarkable results in tasks such as object detection, image classification, and segmentation. This paper aims to explore the core algorithms and applications of image processing and computer vision, reviewing classical techniques and algorithms while analyzing advanced methods based on deep learning. By discussing applications across various fields, this paper not only demonstrates the current state of the technology but also highlights its challenges and developmental directions. Finally, it forecasts future research trends in image processing and computer vision, particularly the potential developments under the influence of artificial intelligence and big data.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Jiayang Zhuo,

Yuchen Han,

Hairu Wen,

Kejian Tong

Abstract: With the rapid growth of financial data, extracting accurate and contextually relevant information remains a challenge. Existing financial question-answering (QA) models struggle with domain-specific terminology, long-document processing, and answer consistency. To address these issues, this paper proposes the Intelligent-Aware Transformer (IAT), a financial QA system based on GLM4-9B-Chat, integrating a multi-level information aggregation framework. The system employs a Financial-Specific Attention Mechanism (FSAM) to enhance focus on key financial terms, a Dynamic Context Embedding Layer (DCEL) to improve long-document processing, and a Hierarchical Answer Aggregator (HAA) to ensure response coherence. Additionally, Knowledge-Augmented Textual Entailment (KATE) strengthens the model's generalization by inferring implicit financial knowledge. Experimental results demonstrate that IAT surpasses existing models in financial QA tasks, exhibiting superior adaptability in long-text comprehension and domain-specific reasoning. Future work will explore computational optimizations, advanced knowledge integration, and broader financial applications.
Brief Report
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Logan Brooks

Abstract: AI can play a prominent role in the medical field, specifically in physical therapy, by detecting injuries, or areas of concern that may lead to injury. Many athletes are more susceptible to injury than the average person, and AI has the potential to detect many of these injuries, possibly before they occur. Supervised machine learning can play a significant role in identifying injuries: models can be trained on data such as X-rays and MRIs showing healthy body scans in juxtaposition to unhealthy or injury-prone ones.
Article
Computer Science and Mathematics
Mathematics

Faruk Alpay

Abstract: The Information Bottleneck (IB) framework formalizes the trade-off between compression and prediction in representation learning. A crucial parameter is the Lagrange multiplier β, which controls the balance between preserving information relevant to a target variable Y and compressing the representation Z of an input X. Selecting an optimal β (denoted β∗) is challenging and typically done via empirical tuning. In this paper, I present a rigorous theoretical analysis of β∗-optimization in both the Variational IB (VIB) and Neural IB (NIB) settings. I define β∗ as the critical value of β that marks the boundary between non-trivial (informative) and trivial (uninformative) representations, ensuring maximal compression before the representation collapses. I derive formal conditions for its existence and uniqueness. I prove several key results: (1) the IB trade-off curve (relevance–compression frontier) is concave under mild conditions, implying that β, as the slope of this curve, uniquely characterizes optimal operating points in regular cases; (2) there exists a critical β threshold, β∗ = F′(0+) (the slope of the IB curve at zero compression), beyond which the IB solution collapses to a trivial representation; (3) for practical IB implementations (VIB and NIB), I discuss how β∗ can be computed algorithmically, including complexity analysis of naive β-sweeping versus adaptive methods like binary search, for which pseudo-code is provided. I provide formal theorems and proofs for concavity properties of the IB Lagrangian, continuity of the IB curve, and boundedness of mutual information quantities. Furthermore, I compare standard IB, VIB, and NIB formulations in terms of the optimal β, showing that while standard IB provides a theoretical target for β∗, variational and neural approximations may deviate from this optimum. My analysis is complemented by a discussion on the implications for deep neural network representations. The results establish a principled foundation for β selection in IB, guiding practitioners to achieve maximal meaningful compression without exhaustive trial-and-error.
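The abstract mentions pseudo-code for an adaptive search over β; a minimal sketch of such a binary search might look as follows, using the VIB convention in which larger β means heavier compression. Here `train_ib` and `is_trivial` are placeholders for the practitioner's own training loop and collapse test (e.g., checking whether the estimated I(Z;X) is near zero).

```python
import numpy as np

def critical_beta(train_ib, is_trivial, beta_lo=1e-3, beta_hi=1e3, tol=1e-2):
    """Binary search for the critical beta* separating informative from
    collapsed IB representations. `train_ib(beta)` fits a (V)IB model at
    that beta; `is_trivial(model)` tests for collapse. Both are assumed,
    user-supplied callables."""
    assert not is_trivial(train_ib(beta_lo))  # weak compression: informative
    assert is_trivial(train_ib(beta_hi))      # strong compression: collapsed
    while beta_hi / beta_lo > 1 + tol:
        beta_mid = np.sqrt(beta_lo * beta_hi)  # geometric midpoint (log scale)
        if is_trivial(train_ib(beta_mid)):
            beta_hi = beta_mid
        else:
            beta_lo = beta_mid
    return 0.5 * (beta_lo + beta_hi)
```

Each bisection step costs one full training run, which is the trade-off against naive β-sweeping that the paper's complexity analysis addresses.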
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Zhang Shengnan,

Cui Xinming

Abstract: Background: Accurate medical image segmentation is essential for clinical diagnosis and treatment planning, yet precise boundary delineation remains a significant challenge, particularly in cases with highly ambiguous boundaries such as gastrointestinal polyps, skin lesions, and pathological tissues. Despite the considerable progress achieved by existing methods, their reliance on manual parameter tuning and limited capability in detecting highly obscure boundaries hinder their practical application in clinical settings. Purpose: To overcome these limitations, we propose FEA-Net, a novel medical image segmentation model that, for the first time, integrates Camouflaged Object Detection (COD) technology with frequency-domain feature enhancement to improve boundary delineation in medical images. By leveraging COD’s capability to detect hidden objects and combining it with frequency-domain processing, our method effectively addresses the challenges posed by boundary ambiguity. Methods: We propose FEA-Net, a segmentation framework leveraging FTEM to enhance high-frequency boundary signals while suppressing noise. It incorporates a Semantic Edge Enhancement Module (SEEM) for initial edge extraction, followed by FTEM for frequency-domain refinement. The Adaptive Edge-grained Feature Module (AEFM) and Context Attention Aggregation Module (CAAM) further refine edge granularity and enhance contextual aggregation. Results: Experiments on ISIC2018, Kvasir-SEG, and KPIs2024 show that FEA-Net outperforms state-of-the-art models, improving mIoU by 2.67%, mDice by 2.20%, and F1-score by 2.47%. Notably, it enhances boundary detection in ambiguous regions, reducing false positives and improving segmentation precision. Conclusion: By pioneering the fusion of COD technology with frequency-domain feature enhancement, FEA-Net establishes a new paradigm in medical image segmentation. It provides an automated and highly effective solution for tackling ambiguous boundaries, paving the way for more precise and clinically viable segmentation methods.
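To make the frequency-domain idea concrete, here is a toy PyTorch sketch that amplifies high-frequency feature content, which is where boundary detail lives. The radial mask and gain are illustrative assumptions; the paper's FTEM is more elaborate than this.

```python
import torch

def highfreq_boost(feat, radius=0.25, gain=2.0):
    """Toy frequency-domain edge enhancement: amplify high-frequency
    components of a feature map and leave the low-frequency body intact.
    `radius` and `gain` are illustrative, not the paper's parameters."""
    f = torch.fft.fftshift(torch.fft.fft2(feat), dim=(-2, -1))
    h, w = feat.shape[-2:]
    yy, xx = torch.meshgrid(
        torch.linspace(-0.5, 0.5, h), torch.linspace(-0.5, 0.5, w), indexing="ij"
    )
    highpass = ((yy**2 + xx**2).sqrt() > radius).to(feat.dtype)  # radial mask
    f = f * (1 + (gain - 1) * highpass)                          # boost high bands only
    return torch.fft.ifft2(torch.fft.ifftshift(f, dim=(-2, -1))).real
```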
Article
Computer Science and Mathematics
Probability and Statistics

Moritz Sohns

Abstract: This paper develops a unified framework for mathematical finance under general semimartingale models that allow for dividend payments, negative asset prices, and unbounded jumps. We present a rigorous approach to the mathematical modeling of financial markets with dividend-paying assets by defining appropriate concepts of numéraires, discounted processes, and self-financing trading strategies. While most of the mathematical results are not new, this unified framework has been missing in the literature. We carefully examine the transition between nominal and discounted price processes and define appropriate notions of admissible strategies that work naturally in both settings. By establishing the equivalence between these models and providing clear conditions for their applicability, we create a mathematical foundation that encompasses a wide range of realistic market scenarios and can serve as a basis for future work on mathematical finance and derivative pricing.
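For notation, one common convention for the discounting step the paper formalizes is sketched below; the paper itself pins down the precise regularity conditions and the equivalence between nominal and discounted formulations.

```latex
% Discounted gains of a dividend-paying asset S with cumulative dividend
% process D under numéraire N (one standard convention, stated here only
% as an illustration of the objects the paper defines):
\[
  \widehat{S}_t \;=\; \frac{S_t}{N_t} \;+\; \int_0^t \frac{\mathrm{d}D_s}{N_{s-}},
  \qquad
  V_t(\varphi) \;=\; V_0 \;+\; \int_0^t \varphi_s \,\mathrm{d}\widehat{S}_s,
\]
% so a strategy is self-financing in nominal terms precisely when its
% discounted value is the stochastic integral of the holdings against
% the discounted gains process.
```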
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Aivar Sakhipov,

Zhanbai Uzdenbayev,

Diar Begisbayev,

Aruzhan Mektepbayeva,

Ramazan Seiitbek,

Didar Yedilkhan

Abstract: Transit operators need accurate and privacy-preserving passenger-flow forecasts to enable dynamic headway control and crowd management. We introduce FedST-GNN, a federated spatio-temporal graph neural network that fuses encrypted federated averaging (FedAvg) with a frequency-domain Transformer and an Adaptive Windowing (ADWIN)-triggered meta-learning loop for fast concept-drift recovery. Experiments on the public Copenhagen-Flow dataset (18.7 M events, 312 stops, 2022–2024) show that FedST-GNN cuts mean absolute error by 5% and root-mean-square error by 7% relative to the strongest deep baseline (Temporal Fusion Transformer), while sustaining a median inference latency of 38 ms on a GTX 1660 SUPER. During a city half-marathon, the ADWIN trigger and two inner meta-updates lowered peak error by 41% without exceeding a 5 MB communication budget per 15-minute federated round. These results demonstrate that privacy-compliant, drift-resilient graph learning can deliver real-time accuracy on commodity hardware, offering a practical blueprint for intelligent transport analytics.
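The averaging step at the core of the system is standard FedAvg; a minimal sketch of that aggregation alone follows (the encryption layer and the ADWIN-triggered meta-updates the paper adds are omitted here).

```python
import torch

def fedavg(client_states, client_sizes):
    """Plain FedAvg: average client model weights, weighted by local
    dataset size. A sketch of the aggregation step only."""
    total = float(sum(client_sizes))
    global_state = {}
    for key in client_states[0]:
        global_state[key] = sum(
            (n / total) * state[key].float()
            for state, n in zip(client_states, client_sizes)
        )
    return global_state

# usage (hypothetical names): server model loads the aggregate each round
# server.load_state_dict(fedavg([c.state_dict() for c in clients], sizes))
```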
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Trinh Quoc Nguyen,

Oky Dicky Ardiansyah Prima,

Syahid Al Irfan,

Hindriyanto Dwi Purnomo,

Radius Tanone

Abstract: This study presents CORE-ReID V2, an enhanced framework building upon CORE-ReID. The new framework extends its predecessor by addressing Unsupervised Domain Adaptation (UDA) challenges in Person ReID and Vehicle ReID, with further applicability to Object ReID. During pre-training, CycleGAN is employed to synthesize diverse data, bridging image characteristic gaps across different domains. In the fine-tuning stage, an advanced ensemble fusion mechanism, consisting of the Simplified Efficient Channel Attention Block (SECAB) and Efficient Channel Attention Block (ECAB), enhances both local and global feature representations while reducing ambiguity in pseudo-labels for target samples. Experimental results on widely used UDA Person ReID and Vehicle ReID datasets demonstrate that the proposed framework outperforms state-of-the-art methods, achieving top performance in Mean Average Precision (mAP) and Rank-k Accuracy (Top-1, Top-5, Top-10). Moreover, the framework supports lightweight backbones such as ResNet18 and ResNet34, ensuring both scalability and efficiency. Our work not only pushes the boundaries of UDA-based Object ReID but also provides a solid foundation for further research and advancements in this domain.
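ECAB and SECAB build on Efficient Channel Attention; for reference, the standard ECA pattern is sketched below in PyTorch. The abstract does not specify the variants' exact modifications, so this shows only the base technique.

```python
import torch
import torch.nn as nn

class ECABlock(nn.Module):
    """Standard Efficient Channel Attention (ECA-Net style): channel
    weights come from a 1-D convolution over globally pooled channel
    descriptors, avoiding any fully connected bottleneck."""

    def __init__(self, kernel_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):                                   # x: (B, C, H, W)
        y = self.pool(x).squeeze(-1).transpose(1, 2)        # (B, 1, C)
        y = torch.sigmoid(self.conv(y)).transpose(1, 2).unsqueeze(-1)  # (B, C, 1, 1)
        return x * y                                        # reweight channels
```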
Article
Computer Science and Mathematics
Computer Science

Azem Kakitaeva,

Mekia Shigute Gaso

Abstract: Microservice architecture has become a prevalent paradigm in modern software development, emphasizing modularity, scalability, and efficient service communication. Secure, high-performance communication between microservices is of paramount importance. This study evaluates two commonly used authentication methods, API key authentication and basic authentication with bcrypt hashing, within a trusted intra-network environment. The primary objective is to compare their performance characteristics, specifically response time and throughput, in order to determine which is more suitable for high-load scenarios. Two Spring Boot microservices were developed and tested using Apache JMeter under varying load conditions. The results indicate that API key authentication outperforms basic authentication, exhibiting reduced latency and increased throughput. However, due to the limitations of the testing setup, the findings should be considered approximate. This study was conducted as part of the “Basics of Scientific Research Methods” course, demonstrating foundational research skills rather than offering definitive conclusions.
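The performance gap has a simple mechanical explanation: bcrypt verification is an intentionally expensive key-derivation step, while an API key check reduces to a constant-time byte comparison. The Python stand-in below (not the study's Spring Boot services) isolates that per-request cost; the key and password values are placeholders.

```python
import hmac
import time
import bcrypt  # assumed installed: pip install bcrypt

API_KEY = b"s3cret-api-key"                                     # placeholder
PW_HASH = bcrypt.hashpw(b"service-password", bcrypt.gensalt())  # computed once, offline

def check_api_key(presented):
    return hmac.compare_digest(presented, API_KEY)  # constant-time compare

def check_basic_auth(password):
    return bcrypt.checkpw(password, PW_HASH)        # deliberately slow hash check

for name, fn, arg in [("api-key", check_api_key, API_KEY),
                      ("bcrypt ", check_basic_auth, b"service-password")]:
    t0 = time.perf_counter()
    for _ in range(20):
        assert fn(arg)
    print(f"{name}: {(time.perf_counter() - t0) / 20 * 1000:.2f} ms/request")
```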
Article
Computer Science and Mathematics
Algebra and Number Theory

Mehdi Rahbar Matak

Abstract: We present a rigorous proof of the Riemann Hypothesis, asserting that all non-trivial zeros of the Riemann zeta function have real part 1/2. By assuming a non-trivial zero off the critical line, we derive three independent contradictions using the Hadamard product, functional equation, and oscillations in the Chebyshev function ψ(x). This revised version strengthens zero-density estimates, clarifies bounds on the zeta function, and provides a comprehensive Hardy space analysis, addressing potential concerns from prior approaches.
Review
Computer Science and Mathematics
Other

Mingchu Li,

Jiangyuan Gan,

Runfa Zhang

Abstract: Nonlinear partial differential equations (PDEs) form the mathematical backbone for modeling phenomena across diverse fields such as physics, biology, engineering, and finance. Traditional numerical methods have limitations, particularly for high-dimensional or parameterized problems, due to the "curse of dimensionality" and computational expense. Approaches driven by artificial intelligence (AI) offer a promising alternative, leveraging machine learning techniques to efficiently approximate solutions, especially for high-dimensional or complex problems. This paper surveys state-of-the-art AI techniques for solving nonlinear PDEs, including Physics-Informed Neural Networks (PINNs), Deep Galerkin Methods (DGM), Neural Operators, symbolic computation methods, Hirota bilinear methods, and bilinear neural network methods. We explore their theoretical foundations, architectures, advantages, limitations, and applications. Finally, we discuss open challenges and future directions in the field.
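As a concrete instance of the PINN idea, here is a minimal PyTorch sketch of the physics-residual loss for the 1-D viscous Burgers equation, a canonical nonlinear test case in the PINN literature; data and boundary-condition losses, and all tuning, are omitted.

```python
import torch
import torch.nn as nn

# Network maps (x, t) to u(x, t); sizes are illustrative.
net = nn.Sequential(
    nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1)
)
nu = 0.01 / torch.pi  # viscosity used in the standard Burgers benchmark

def pde_residual(xt):
    """Residual of u_t + u u_x - nu u_xx, obtained via autograd."""
    xt = xt.requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = grads[:, :1], grads[:, 1:]
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, :1]
    return u_t + u * u_x - nu * u_xx  # vanishes where the PDE holds

# Random collocation points: x in [-1, 1], t in [0, 1]
xt = torch.rand(256, 2) * torch.tensor([2.0, 1.0]) - torch.tensor([1.0, 0.0])
loss = pde_residual(xt).pow(2).mean()  # physics loss; add data/BC terms in practice
loss.backward()
```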
