Computer Science and Mathematics

Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Yafeng Zhou,

Fadilla ’Atyka Nor Rashid,

Marizuana Mat Daud,

Mohammad Kamrul Hasan,

Wangmei Chen

Abstract: Background: Machine learning-based computer vision techniques using depth cameras have shown potential in physiotherapy movement assessment. However, a comprehensive understanding of their implementation, effectiveness, and limitations is still lacking. Methods: We conducted a systematic review following PRISMA guidelines, searching the Web of Science, Scopus, PubMed, and Astrophysics Data System databases (2020-2024). Of 371 initially identified publications, 18 met the inclusion criteria for detailed analysis. Results: The analysis revealed three primary implementation scenarios: local (50%), clinical (33.4%), and remote (22.3%). Depth cameras, particularly the Kinect series (65.4%), dominated data collection. Data processing approaches primarily utilized RGB-D (55.6%) and skeletal data (27.8%), with algorithms split between traditional machine learning (44.4%) and deep learning (41.7%). Key challenges included limited real-world validation, insufficient dataset diversity, and poor algorithm generalization. Conclusions: While machine learning-based computer vision systems demonstrated effectiveness in movement assessment tasks, further research is needed to validate them in clinical settings and improve algorithm generalization. This review provides a foundation for enhancing computer vision-based assessment tools in physiotherapy practice.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Salma Ali,

Arthit Wongsawat

Abstract:

News summarization is a critical task in natural language processing (NLP) due to the increasing volume of information available online. Traditional extractive summarization methods often fail to capture the nuanced and contextual nature of news content, leading to a growing interest in using large language models (LLMs) like GPT-4 for more sophisticated, abstractive summarization. However, LLMs face challenges in maintaining factual consistency and accurately reflecting the core content of news articles. This research addresses these challenges by proposing a novel prompt engineering method designed to guide LLMs, specifically GPT-4, in generating high-quality news summaries. Our approach utilizes a multi-stage prompt framework that ensures comprehensive coverage of essential details and incorporates an iterative refinement process to improve summary coherence and relevance. To enhance factual accuracy, we include built-in validation mechanisms using entailment-based metrics and question-answering techniques. Experiments conducted on a newly collected dataset of diverse news articles demonstrate the effectiveness of our approach, showing significant improvements in summary quality, coherence, and factual accuracy.
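As a rough illustration of the kind of pipeline the abstract describes, the sketch below chains a two-stage prompt with an entailment-based factuality check; the prompts, model names, and indexing conventions are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a two-stage summarize-then-verify loop.
# Prompts, models, and thresholds are illustrative assumptions.
import torch
from openai import OpenAI
from transformers import AutoModelForSequenceClassification, AutoTokenizer

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
nli_tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
nli_model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def summarize(article: str) -> str:
    # Stage 1: extract key facts; Stage 2: compose an abstractive summary from them.
    facts = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": f"List the key facts (who, what, when, where) in this article:\n{article}"}],
    ).choices[0].message.content
    summary = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": f"Write a three-sentence summary using only these facts:\n{facts}"}],
    ).choices[0].message.content
    return summary

def entailment_score(premise: str, hypothesis: str) -> float:
    # Probability that the article (premise) entails a summary sentence (hypothesis).
    inputs = nli_tok(premise, hypothesis, return_tensors="pt", truncation=True)
    probs = torch.softmax(nli_model(**inputs).logits, dim=-1)
    return probs[0, 2].item()  # index 2 = ENTAILMENT for roberta-large-mnli
```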

Review
Computer Science and Mathematics
Applied Mathematics

Sourangshu Ghosh

Abstract: In this article, we rigorously derive the expressions for the del operator ∇, the divergence ∇·v, the curl ∇×v, and the vector gradient ∇v of a vector field v, the Laplacian ∇²f ≡ Δf of a scalar field f, and the divergence ∇·T of a second-order tensor field T in both cylindrical and spherical coordinates. We also derive the directional derivative (A·∇)v and the vector Laplacian ∇²v ≡ Δv of a vector field v using metric coefficients in rectangular, cylindrical, and spherical coordinates. We then generalize the concepts of gradient, divergence, and curl to tensor fields in arbitrary curvilinear coordinates. Finally, we rigorously discuss Christoffel symbols, parallel transport in Riemann space, the covariant derivative of tensor fields, and various applications of tensor derivatives in curvilinear coordinates (the geodesic equation, the Riemann curvature tensor, the Ricci tensor, and the Ricci scalar).
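For orientation, two standard expressions of the kind the article derives (stated here with θ the polar angle in the spherical case):

```latex
% Gradient of a scalar field f in cylindrical coordinates (r, \theta, z):
\nabla f = \frac{\partial f}{\partial r}\,\hat{e}_r
         + \frac{1}{r}\frac{\partial f}{\partial \theta}\,\hat{e}_\theta
         + \frac{\partial f}{\partial z}\,\hat{e}_z

% Divergence of a vector field v in spherical coordinates (r, \theta, \varphi):
\nabla \cdot \vec{v} = \frac{1}{r^{2}}\frac{\partial}{\partial r}\!\left(r^{2} v_r\right)
 + \frac{1}{r\sin\theta}\frac{\partial}{\partial \theta}\!\left(\sin\theta\, v_\theta\right)
 + \frac{1}{r\sin\theta}\frac{\partial v_\varphi}{\partial \varphi}
```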
Article
Computer Science and Mathematics
Analysis

Saeed Hashemi Sababe,

Nader Biranvand

Abstract:

Weighted Reproducing Kernel Banach Spaces (WRKBS) extend kernel theory by incorporating weights to enhance modeling flexibility. This paper defines WRKBS, explores their theoretical foundations, and demonstrates their effectiveness in regression, classification, and clustering. Numerical experiments validate their advantages in structured data modeling and symmetry-aware learning. Applications span computer vision, physics-based modeling, and graph-based learning, with future directions in scalable algorithms and deep learning integration.
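As a loose illustration of how weights can enter a kernel method, the sketch below uses a per-feature weighted Gaussian kernel inside ordinary kernel ridge regression; this is an RKHS-style special case for intuition only, not the Banach-space construction developed in the paper.

```python
# Illustrative sketch: a per-feature weighted Gaussian (RBF) kernel in ridge regression.
import numpy as np

def weighted_rbf_kernel(X, Y, weights, gamma=1.0):
    # Pairwise squared distances computed in a coordinate-wise reweighted metric.
    Xw = X * np.sqrt(weights)
    Yw = Y * np.sqrt(weights)
    d2 = ((Xw[:, None, :] - Yw[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, weights, lam=1e-2, gamma=1.0):
    K = weighted_rbf_kernel(X, X, weights, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)   # dual coefficients

def kernel_ridge_predict(X_train, X_new, alpha, weights, gamma=1.0):
    return weighted_rbf_kernel(X_new, X_train, weights, gamma) @ alpha
```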

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Daniel Oluwatise Owolabi,

Desmond Moru

Abstract: Worker safety is notably improved through the use of personal protective equipment (PPE), which effectively reduces the severity of injuries and fatal incidents in environments such as construction sites, chemical facilities, and other hazardous areas. PPE is extensively mandated to ensure an acceptable level of safety, addressing not just accidents at such sites but also the risks posed by chemical hazards. Due to various factors or oversights, workers may intermittently fail to adhere to safety regulations regarding the wearing of protective equipment. Traditional manual monitoring is both labor-intensive and prone to errors. Thus, there is a pressing need for intelligent monitoring systems capable of automated and precise detection of such safety equipment. As a solution, we present a deep learning approach for the real-time detection of PPE components, including helmets, safety boots, vests, and gloves. The proposed deep learning model exhibited a remarkable mean average precision of 97.1%, indicating the model’s proficiency in object localization and recognition. These results not only underline the effectiveness of the deep learning-based PPE detection system but also emphasize its practicality in diverse industrial and occupational settings. By surpassing established benchmarks reported in the literature, this research contributes significantly to enhancing safety standards and reducing the risk of workplace accidents.
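The abstract does not name the detector architecture; as a hedged sketch of how such a real-time PPE check might be run with an off-the-shelf object detector, the snippet below assumes an Ultralytics YOLO model with hypothetical fine-tuned weights and class labels.

```python
# Hypothetical inference sketch; "ppe_detector.pt" and the class list are assumptions,
# not the paper's model or dataset.
from ultralytics import YOLO

PPE_CLASSES = {"helmet", "safety_boot", "vest", "glove"}  # illustrative labels

model = YOLO("ppe_detector.pt")                 # fine-tuned weights (hypothetical path)
results = model("site_camera_frame.jpg", conf=0.5)

for box in results[0].boxes:
    label = model.names[int(box.cls)]
    if label in PPE_CLASSES:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        print(f"{label}: {box.conf.item():.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```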
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Christine Bukola Asaju,

Pius Olawale Owolawi,

Chuling Tu,

Etienne Van Wyk

Abstract: Cloud-based License Plate Recognition (LPR) systems have emerged as essential tools in modern traffic management and security applications. Determining the best approach remains paramount in the field of computer vision. This study presents a comparative analysis of several versions of the YOLO (You Only Look Once) object detection model, namely YOLOv5, YOLOv7, YOLOv8, and YOLOv9, applied to LPR tasks in a cloud computing environment. Using live video, we performed experiments on the YOLOv5, YOLOv7, YOLOv8, and YOLOv9 models to detect number plates in real time. According to the results, YOLOv8 emerged as the most effective model for real-world deployment due to its strong cloud performance, achieving an accuracy of 78% during cloud testing, while YOLOv5 showed consistent performance with 71%. YOLOv7 performed poorly in cloud testing (52%), indicating potential issues, while YOLOv9 reported 70% accuracy. This alignment of results shows consistent, although modest, performance across scenarios. The findings highlight the evolution of the YOLO architecture and its impact on enhancing LPR accuracy and processing efficiency. The results provide valuable insights into selecting the most appropriate YOLO model for cloud-based LPR systems, balancing the trade-offs between real-time performance and detection precision. This research contributes to advancing the field of intelligent transportation systems by offering a detailed comparison that can guide future implementations and optimisations of LPR systems in cloud environments.
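A comparison like this ultimately reduces to running each detector over the same labelled frames and recording accuracy and throughput; the generic harness below is an illustrative sketch only (the detector wrappers and data are assumptions, not the authors' cloud setup).

```python
# Illustrative benchmarking harness for comparing detector versions on the same data.
import time

def benchmark(detector, frames, ground_truth):
    # detector: callable that takes a frame and returns the recognised plate string (or None).
    correct, start = 0, time.perf_counter()
    for frame, expected_plate in zip(frames, ground_truth):
        correct += int(detector(frame) == expected_plate)
    elapsed = time.perf_counter() - start
    return correct / len(frames), len(frames) / elapsed   # accuracy, frames per second
```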
Article
Computer Science and Mathematics
Applied Mathematics

Eugene Kagan,

Alexander Novoselsky

Abstract: Based on observations of a dynamical system, we define a partition ξ_ε that represents the dynamical system as well as possible in the sense of its entropy. The suggested method utilizes ε-entropy, ε-capacity, and the introduced ε-information. The resulting algorithm is also useful for choosing the bin widths of histograms, especially for multimodal distributions.
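The abstract does not spell out the selection rule, so the sketch below only illustrates the kind of quantities involved: the empirical entropy of the partition induced by a candidate bin width ε and a crude occupied-cell proxy for ε-capacity, with a stand-in stopping criterion that is not the authors' algorithm.

```python
# Illustrative only; the 0.95 retention rule is a stand-in, not the proposed criterion.
import numpy as np

def partition_entropy(samples, eps):
    bins = np.arange(samples.min(), samples.max() + eps, eps)
    counts, _ = np.histogram(samples, bins=bins)
    p = counts[counts > 0] / counts.sum()
    entropy = -(p * np.log2(p)).sum()           # empirical entropy of the partition xi_eps
    capacity = np.log2((counts > 0).sum())      # log of occupied cells, a crude capacity proxy
    return entropy, capacity

def choose_bin_width(samples, candidates):
    # Pick the widest eps whose partition still retains most of the finest-scale entropy.
    h_fine, _ = partition_entropy(samples, min(candidates))
    for eps in sorted(candidates, reverse=True):
        h, _ = partition_entropy(samples, eps)
        if h >= 0.95 * h_fine:
            return eps
    return min(candidates)
```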
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Jun Li,

Yanwei Xu,

Yaocun Hu,

Yongyong Ma,

Xin Yin

Abstract: Adversarial attacks expose the latent vulnerabilities within artificial intelligence systems, necessitating a reassessment and enhancement of model robustness to ensure the reliability and security of deep learning models against malicious attacks. We propose a fast method designed to efficiently find sample points close to the decision boundary. By computing the gradient information of each class for the input samples and comparing these gradient differences with the true class, we can identify the target class most sensitive to the decision boundary and thus generate adversarial examples. This technique is referred to as the "You Only Attack Once" (YOAO) algorithm. Compared to the DeepFool algorithm, this method requires only a single iteration to achieve effective attack results. The experimental results demonstrate that the proposed algorithm outperforms the original approach in various scenarios, especially in resource-constrained environments. Under a single iteration, it achieves an attack success rate 70.6% higher than that of the DeepFool algorithm. Our proposed method shows promise for widespread application in both offensive and defensive strategies for diverse deep learning models. We investigated the relationship between classifier accuracy and adversarial attack success rate, comparing the algorithm with others. Our experiments validated that the proposed algorithm exhibits higher attack success rates and efficiency. Furthermore, we performed data visualization on the ImageNet dataset, demonstrating that the proposed algorithm focuses more on attacking important features. Finally, we discussed the existing issues with the algorithm and outlined future research directions. Our code has been made public and can be found at https://github.com/dawei7777/YOAO.
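The described single-iteration, boundary-seeking construction closely resembles one DeepFool-style step; a minimal PyTorch sketch of such a step is given below, with the overshoot factor and tensor shapes as assumptions rather than the authors' exact YOAO implementation.

```python
# Minimal single-step sketch in the spirit of the described approach
# (essentially one DeepFool-style iteration); hyperparameters are assumptions.
import torch

def single_step_boundary_attack(model, x, y_true, overshoot=0.02):
    # x: input of shape (1, C, H, W); model(x): logits of shape (1, num_classes).
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    num_classes = logits.shape[1]

    # Gradient of the true-class logit with respect to the input.
    grad_true = torch.autograd.grad(logits[0, y_true], x, retain_graph=True)[0]

    best_dist, best_pert = float("inf"), None
    for k in range(num_classes):
        if k == y_true:
            continue
        grad_k = torch.autograd.grad(logits[0, k], x, retain_graph=True)[0]
        w = grad_k - grad_true                      # direction toward the k-vs-true boundary
        f = (logits[0, k] - logits[0, y_true]).item()
        dist = abs(f) / (w.norm() + 1e-12)          # estimated distance to that boundary
        if dist < best_dist:
            best_dist = dist
            best_pert = (abs(f) / (w.norm() ** 2 + 1e-12)) * w

    return (x + (1 + overshoot) * best_pert).detach()
```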
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Daniel Oluwatise Owolabi,

Pius Onobhayedo

Abstract:

Kaggle is an online platform where data scientists, machine learning engineers, and researchers can access datasets, compete in machine learning competitions, collaborate, and develop and showcase their data science skills. Bot accounts can cause a variety of issues, including artificially inflating the popularity of certain content, simulating user activity to affect rankings or ratings, spreading spam, stealing data, or carrying out cyberattacks. Despite Kaggle's prominent focus on data science and its robust community of data scientists, the pervasive issue of bot activity on the platform has been notably neglected. Recognizing this gap, this study embarks on a comparative investigation of supervised machine learning algorithms tailored for effectively detecting bot accounts within the Kaggle ecosystem. The dataset consists of 799 users, of which 400 were labeled as bots and 399 as real users. The study found that the Random Forest classification algorithm achieved the best evaluation metrics among the algorithms used for detecting bots. Feature importance analysis was also conducted to identify the most relevant features for differentiating between bot and real accounts. Overall, the study provides a useful framework for identifying bot accounts on Kaggle, which can be applied to other similar platforms to improve their user verification and security systems.
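A minimal sketch of the Random Forest workflow the abstract describes, using scikit-learn; the CSV path, feature schema, and split parameters are assumptions, not the study's actual setup.

```python
# Hypothetical reproduction sketch; assumes a numeric feature table with an 'is_bot' label.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("kaggle_users.csv")                 # 799 rows; column names are assumptions
X = df.drop(columns=["is_bot"])
y = df["is_bot"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

# Feature importance analysis, as described in the abstract.
importances = pd.Series(clf.feature_importances_, index=X.columns).sort_values(ascending=False)
print(importances.head(10))
```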

Article
Computer Science and Mathematics
Computer Vision and Graphics

Zhaodi Wang,

Shuqiang Yang,

Huafeng Qin,

Yike Liu,

Junqiang Wang

Abstract:

Finger vein recognition has gained significant attention for its importance in enhancing security, safeguarding privacy, and ensuring reliable liveness detection. As a foundation of vein recognition systems, vein detection faces challenges including low feature extraction efficiency, limited robustness, and a heavy reliance on real-world data. Additionally, environmental variability and advancements in spoofing technologies further exacerbate data privacy and security concerns. To address these challenges, this paper proposes MixCFormer, a hybrid CNN-Transformer architecture that incorporates Mixup data augmentation to improve the accuracy of finger vein liveness detection and reduce dependency on large-scale real datasets. First, the MixCFormer model applies baseline drift elimination, morphological filtering, and Butterworth filtering to minimize the impact of background noise and illumination variations, thereby enhancing the clarity and recognizability of vein features. Next, finger vein video data is transformed into feature sequences, optimizing feature extraction and matching efficiency, effectively capturing dynamic time-series information, and improving discrimination between live and forged samples. Furthermore, Mixup data augmentation is used to expand sample diversity and decrease dependency on extensive real datasets, thereby enhancing the model’s ability to recognize forged samples across diverse attack scenarios. Finally, the hybrid CNN-Transformer architecture leverages both local and global feature extraction capabilities to capture vein feature correlations and dependencies, with residual connections improving feature propagation and enhancing the stability of feature representations in liveness detection. Rigorous experimental evaluations demonstrate that MixCFormer achieves a detection accuracy of 99.51% on finger vein datasets, significantly outperforming existing methods.
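Of the pipeline's ingredients, Mixup is the most standard and can be sketched directly; the snippet below shows conventional Mixup for a classification batch (the α value and batch shapes are assumptions), not the full MixCFormer preprocessing or architecture.

```python
# Standard Mixup augmentation, one ingredient of the described pipeline; alpha is an assumption.
import torch
import torch.nn.functional as F

def mixup_batch(x, y, alpha=0.2):
    # Convexly combine each sample with a randomly permuted partner from the same batch.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mixed = lam * x + (1 - lam) * x[perm]
    return x_mixed, y, y[perm], lam

def mixup_loss(logits, y_a, y_b, lam):
    # Loss is the same convex combination of the two targets' cross-entropies.
    return lam * F.cross_entropy(logits, y_a) + (1 - lam) * F.cross_entropy(logits, y_b)
```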

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.

© 2024 MDPI (Basel, Switzerland) unless otherwise stated