Computer Science and Mathematics

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Jesús Manuel Soledad Terrazas

Abstract: Background: Semi-automated offside technology (SAOT) deployed across European professional football leagues represents a critical case study illustrating the urgent necessity for explainable artificial intelligence (XAI) and deterministic algorithmic systems in high-stakes decision-making contexts. Methods: This study employs a mixed-methods approach combining technical system analysis of SAOT specifications, quantitative examination of publicly available VAR decision statistics from La Liga's 2024-25 season, content analysis of media-documented technical failures, and governance framework analysis against established algorithmic accountability principles. Results: Empirical evidence reveals that in La Liga's 2024-25 season, Barcelona gained approximately 7 points from net favorable VAR decisions while Real Madrid lost 7 points from adverse calls—the worst balance in the league. Documented technical failures include wrong defender selection in Celta Vigo matches, a power outage eliminating VAR oversight during a disputed penalty, and system misinterpretation of goalkeeper touches. Mathematical quantification of measurement uncertainties (34 cm total error from temporal, spatial, and calibration sources) reveals that precision claims exceed physical capabilities. Conclusions: The legitimacy crisis stems not from whether systematic bias exists but from the structural impossibility of detection under current opacity. When Spanish authorities leaked full VAR audio recordings, the federation responded not with transparency reforms but by dispatching police to investigate the leak—exemplifying governance structures that prioritize control over accountability. This research proposes mandatory open-source algorithms, real-time audit logs accessible to affected parties, independent calibration verification, and genuine appeal mechanisms with remedial authority.
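The error-budget argument can be illustrated with a short sketch. The component magnitudes and the decomposition below are hypothetical illustrations, not figures from the study; the point is the contrast between a worst-case linear sum and a root-sum-square combination of independent error sources:

```python
import math

# Hypothetical SAOT error budget; every component value below is an
# illustrative assumption, not a number taken from the paper.
frame_rate_hz = 50.0          # assumed tracking-camera frame rate
player_speed_ms = 7.0         # assumed sprinting attacker
temporal_cm = 100 * player_speed_ms / frame_rate_hz  # motion between frames
spatial_cm = 10.0             # assumed limb-detection resolution
calibration_cm = 5.0          # assumed camera-calibration residual

worst_case_cm = temporal_cm + spatial_cm + calibration_cm        # linear sum
independent_cm = math.sqrt(temporal_cm**2 + spatial_cm**2
                           + calibration_cm**2)                  # quadrature
print(worst_case_cm, round(independent_cm, 1))
```

Under either combination rule the budget is on the order of decimeters, which is the abstract's point about centimeter-level precision claims.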
Article
Computer Science and Mathematics
Algebra and Number Theory

Felipe Oliveira Souto

Abstract: This paper presents a novel multi-faceted approach to the Riemann Hypothesis (RH) through the synthesis of quantum operator theory, conformal geometry, and spectral analysis. We construct a quantum helical system whose Hamiltonian spectrum, when transformed by a conformal map Φ(z) = α·arcsinh(βz) + γ, shows remarkable numerical correspondence with the imaginary parts of non-trivial zeros of ζ(s) to precision 10⁻¹² for the first 2000 zeros. We further develop an analytical framework consisting of six interconnected theorems that establish constraints on possible zero locations based on conformal symmetry and functional equation properties. While these results provide substantial evidence and new insights, we present them as a significant step toward rather than a final resolution of RH. The work opens new connections between spectral theory, quantum physics, and analytic number theory.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Jineng Ren

Abstract: Since the beginning of modern computer history, the Turing machine has been the dominant architecture for most computational devices. It consists of three essential components: an infinite tape for input, a read/write head, and a finite control. In this structure, what the head can read (i.e., bits) is the same as what it has written/outputted. This is actually different from the ways in which humans think or perform thought/tool experiments. More precisely, what humans imagine or write on paper are images or texts, not the abstract concepts they represent in the human brain. This difference is neglected by the Turing machine, yet it plays an important role in abstraction, analogy, and generalization, which are crucial in artificial intelligence. Compared with this architecture, the proposed architecture uses two different types of heads and tapes, one for traditional abstract bit inputs/outputs and the other for specific visual ones. The mapping rules between the abstract bits and the specific images/texts can be realized by neural networks with a high accuracy rate. Logical reasoning is thus performed through the transfer of mapping rules. The statistical decidability of the Halting Problem with an imperceptibly small error rate in reasoning steps is established for this type of machine. As an example, this paper presents how the new computer architecture (what we call the "Ren machine" for simplicity here) autonomously learns a distributive property/rule of multiplication in the specific domain and further uses the rule to generate a general method (mixed in both the abstract domain and the specific domain) to compute the multiplication of any positive integers based on images/texts. The machine's strong reasoning ability is also corroborated by proving a theorem in Plane Geometry.
Moreover, a robotic architecture based on the Ren machine is proposed to address the challenges faced by Vision-Language-Action (VLA) models, namely unsound reasoning and high computational cost.
Article
Computer Science and Mathematics
Computer Science

Nektarios Deligiannakis, Vassilis Papataxiarhis, Michalis Loukeris, Stathes Hadjiefthymiades, Marios Touloupou, Syed Mafooq Ul Hassan, Herodotos Herodotou, Athanasios Moustakas, Emmanouil Bampis, Konstantinos Ioannidis, +8 authors

Abstract: Recently, the need for unified orchestration frameworks that can manage extremely heterogeneous, distributed, and resource-constrained environments has arisen due to the rapid development of cloud, edge, and IoT computing. Kubernetes and other traditional cloud-native orchestration systems are not built to facilitate autonomous, decentralized decision-making across the computing continuum or to seamlessly integrate non-container-native devices. This paper presents the Distributed Adaptive Cloud Continuum Architecture (DACCA), a Kubernetes-native architecture that extends orchestration beyond the data center to encompass edge and Internet of Things infrastructures. Decentralized self-awareness and swarm formation are supported for adaptive and resilient operation, a resource and application abstraction layer is established for uniform resource representation, and a Distributed and Adaptive Resource Optimization (DARO) framework based on multi-agent reinforcement learning is integrated for intelligent scheduling in the proposed architecture. Verifiable identity, access control, and tamper-proof data exchange across heterogeneous domains are further guaranteed by a distributed-ledger-technology-based zero-trust security framework. When combined, these elements enable completely autonomous workload orchestration with enhanced interoperability, scalability, and trust. Thus, the proposed architecture enables self-managing and context-aware orchestration systems that support next-generation AI-driven distributed applications across the entire computing continuum.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Gregor Wegener

Abstract: Large language models and related generative AI systems increasingly operate in safety-critical and high-impact settings, where reliability, alignment, and robustness under distribution shift are central concerns. While retrieval-augmented generation (RAG) has emerged as a practical mechanism for grounding model outputs in external knowledge, it does not by itself provide guarantees against system-level failure modes such as hallucination, mis-grounding, or deceptively stable unsafe behavior. This work introduces SORT-AI, a structural safety and reliability framework that models advanced AI systems as chains of operators acting on representational states under global consistency constraints. Rather than proposing new architectures or empirical benchmarks, SORT-AI provides a theoretical and diagnostic perspective for analyzing alignment-relevant failure modes, structural misgeneralization, and stability breakdowns that arise from the interaction of retrieval, augmentation, and generation components. Retrieval-augmented generation is treated as a representative and practically relevant testbed, not as the primary contribution. By analyzing RAG systems through operator geometry, non-local coupling kernels, and global projection operators, the framework exposes failure modes that persist across dense retrieval, long-context prompting, graph-constrained retrieval, and agentic interaction loops. The resulting diagnostics are architecture-agnostic and remain meaningful across datasets, implementations, and deployment contexts. SORT-AI connects reliability assessment, explainability, and AI safety by shifting evaluation from local token-level behavior to global structural properties such as fixed points, drift trajectories, and deceptive stability. While illustrated using RAG, the framework generalizes to embodied agents and quantum-inspired operator systems, offering a unifying foundation for safety-oriented analysis of advanced AI systems.
Article
Computer Science and Mathematics
Computer Science

Muntaqim Ahmed Raju, Priyanka Siddappa, Md Shifat Haider Al Amin, Ruizhe Ma

Abstract: Integration of deep learning in healthcare has revolutionized the analysis of complex, high-dimensional, and heterogeneous data. However, traditional single-modal approaches often fail to grasp the multi-faceted nature of human health, in which genetic, environmental, lifestyle, and physiological factors interact in complex ways. The rapid development of multimodal machine learning (MML) has been a transformational paradigm that allows seamless integration of these heterogeneous data sources toward a better understanding of health and disease. This review examines MML methodologies in depth, with special emphasis on the main fusion strategies and advanced techniques. We also discuss the wide applications of MML in different health domains, such as brain disorders, cancer prediction, chest-related conditions, skin diseases, and other medical challenges. We illustrate, through detailed case studies, how MML provides better diagnostic accuracy and personalized treatment strategies. While it has seen huge progress, MML is confronted with major challenges around data heterogeneity, alignment complexities, and the subtleties of effective fusion strategies. The review concludes with a discussion of future directions, calling for robust data integration techniques, efficient and scalable architectures, and fairness and bias mitigation. MML is still an evolving field, and it has the potential to revolutionize healthcare delivery and drive innovations in the direction of more personalized, equitable, and effective patient care globally.
Article
Computer Science and Mathematics
Computer Vision and Graphics

Lorenzo Alejandro Matadamas-Torres, Juan Regino Maldonado, Idarh Matadamas, Luis Alberto Alonso-Hernandez, Manuel de Jesus Melo-Monterrey, Lorena Juith Ramírez-López, Luis Enrique Rodríguez-Antonio

Abstract: The article analyzes scientific information concerning the effects of computer science on the creative industries, an approach that has been consolidated as a driver of the global economy fundamentally based on knowledge, innovation, and creativity. A bibliometric review of articles in the Scopus database (1983–November 2025) was applied to evaluate the conceptual evolution, fundamental themes, and most influential authors. The research was developed in three phases: (1) search criteria within the research field, (2) performance analysis, and (3) results analysis. The results showed a steady increase in the production of studies, particularly since 2019 due to the COVID-19 pandemic, focusing primarily on the areas of digitalization, innovation, and artificial intelligence. The authors with the highest number of publications originate from China, Indonesia, and Malaysia. The research determines the convergence between computing and creativity, which constitutes a strategic opportunity for the global economy. However, it acknowledges restrictions linked to the period of analysis and the dependence on a single database, thus suggesting future studies be expanded to include other sources and temporal contexts.
Article
Computer Science and Mathematics
Computer Vision and Graphics

Pengju Liu, Hongzhi Zhang, Chuanhao Zhang, Feng Jiang

Abstract: In clinical CT imaging, high-density metallic implants often induce severe metal artifacts that obscure critical anatomical structures and degrade image quality, thereby hindering accurate diagnosis. Although deep learning has advanced CT metal artifact reduction (CT-MAR), many methods do not effectively use frequency information, which can limit the recovery of both fine details and overall image structure. To address this limitation, we propose a Hybrid-Frequency-Aware Mixture-of-Experts (HFMoE) network for CT-MAR. The proposed method synergizes the spatial-frequency localization of the wavelet transform with the global spectral representation of the Fourier transform to achieve precise multi-scale modeling of artifact characteristics. Specifically, we design a Hybrid-Frequency Interaction Encoder with three specialized branches, incorporating wavelet-domain, Fourier-domain, and cascaded wavelet–Fourier modulation, to distinctively refine local details, global structures, and complex cross-domain features. The three branch outputs are then fused via channel attention to yield a comprehensive representation. Furthermore, a frequency-aware Mixture-of-Experts (MoE) mechanism is introduced to dynamically route features to specific frequency experts based on the degradation severity, thereby adaptively assigning appropriate receptive fields to handle varying metal artifacts. Evaluations on synthetic (DeepLesion) and clinical (SpineWeb, CLINIC-metal) datasets show that HFMoE outperforms existing methods in both quantitative metrics and visual quality. Our method demonstrates the value of explicit frequency-domain adaptation for CT-MAR and could inform the design of models for other image restoration tasks.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Bijaya Pariyar

Abstract: Customer churn prediction is a critical task in the telecommunications industry, where retaining customers directly impacts revenue and operational efficiency. This study proposes a two-iteration machine learning pipeline that integrates SHAP (SHapley Additive exPlanations) for explainable feature selection and Optuna-based hyperparameter tuning to enhance model performance and interpretability. In the first iteration, baseline models are trained on the full feature set of the Telco Customer Churn dataset (7043 samples, 25 features after preprocessing). The top-performing models—Gradient Boosting, Random Forest, and AdaBoost—are tuned and evaluated. SHAP is then applied to the best model (Gradient Boosting) to identify the top 20 features. In the second iteration, models are retrained on the reduced feature set, achieving comparable or improved performance: validation AUC of 0.999 (vs. 0.999 for full features) and test AUC of 0.998 (vs. 0.997). Results demonstrate that SHAP-driven feature reduction maintains high predictive accuracy (test F1-score: 0.977) while improving interpretability and reducing model complexity. This workflow highlights the value of explainable AI in churn prediction, enabling stakeholders to understand key drivers like "Churn Reason" and "Dependents." What is the research problem? Accurate prediction of customer churn using machine learning models with a focus on explainable features to support business decisions. Why use SHAP? SHAP provides additive feature importance scores, enabling global and local interpretability, feature ranking for dimensionality reduction, and transparency in model predictions. What is the novelty? The iterative pipeline combines baseline training, SHAP-based feature selection, reduced-feature retraining, and hyperparameter retuning, offering a reproducible workflow for explainable churn modeling.
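A property the pipeline relies on is SHAP's additivity: attributions plus the base value reconstruct the prediction exactly. For a linear model the Shapley values have the closed form φᵢ = wᵢ(xᵢ − E[xᵢ]), which a few lines of plain Python can verify (the weights and data below are hypothetical, not the Telco model):

```python
# Exact SHAP values for a linear model: phi_i = w_i * (x_i - mean_i).
# Coefficients and background rows are hypothetical illustrations.
weights = [0.8, -1.5, 2.0]
background = [[1.0, 0.0, 2.0],
              [3.0, 2.0, 0.0]]
means = [sum(col) / len(col) for col in zip(*background)]

def predict(x):
    return sum(w * xi for w, xi in zip(weights, x))

def shap_values(x):
    # Shapley attribution of each feature relative to the mean input
    return [w * (xi - m) for w, xi, m in zip(weights, x, means)]

x = [2.0, 2.0, 1.0]
base = predict(means)                 # expected value of the model
phis = shap_values(x)
assert abs(base + sum(phis) - predict(x)) < 1e-9   # additivity holds
```

Ranking features by the mean absolute φᵢ over a dataset is the kind of global importance score used to pick the top-20 features in the second iteration.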
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Qingmiao Gan, Rodrigo Ying, Di Li, Yuliang Wang, Qianxi Liu, Jingjing Li

Abstract: This study proposes a prediction model based on a dynamic spatiotemporal causal graph neural network to address the challenges of complex dynamic dependencies, strong structural correlations, and ambiguous causal relationships in corporate revenue forecasting. The model constructs a time-varying enterprise association graph, where enterprises are represented as nodes and industry or supply chain relationships as edges. A graph convolutional network is used to extract structural dependency features, while a gated recurrent unit captures temporal evolution patterns, achieving joint modeling of structural and temporal features. On this basis, a causal reasoning mechanism is introduced to model and adjust potential influence paths among enterprises. A learnable causal weight matrix is used to describe the strength of economic transmission, suppress spurious correlations, and strengthen key causal paths. The model also employs multi-scale temporal aggregation and attention fusion mechanisms to dynamically integrate multidimensional information, enhancing adaptability to both long-term trends and short-term fluctuations. Experimental results show that the proposed model outperforms mainstream methods in multiple metrics, including MSE, MAE, MAPE, and RMAE, verifying its effectiveness in capturing corporate revenue dynamics, modeling economic causal dependencies, and improving prediction accuracy. This study establishes a unified framework that integrates spatiotemporal dependency modeling with causal structure reasoning, providing new insights and methodological foundations for intelligent forecasting in complex economic systems.
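The structural half of the model, graph convolution over the enterprise association graph, reduces to the standard propagation rule H′ = ReLU(D̂⁻¹ᐟ²(A+I)D̂⁻¹ᐟ² H W). A toy NumPy sketch on a three-enterprise chain (the adjacency and weights are illustrative, not taken from the paper):

```python
import numpy as np

# One graph-convolution step on a toy 3-enterprise chain graph.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
A_self = A + np.eye(3)                      # add self-loops
d = A_self.sum(axis=1)
A_hat = A_self / np.sqrt(np.outer(d, d))    # D^{-1/2} (A+I) D^{-1/2}

H = np.array([[1.0, 0.0],                   # per-enterprise feature vectors
              [0.0, 1.0],
              [1.0, 1.0]])
W = np.array([[0.5, -0.2],                  # illustrative layer weights
              [0.3,  0.4]])
H_next = np.maximum(A_hat @ H @ W, 0.0)     # ReLU(A_hat H W)
```

In the full model each enterprise's structurally smoothed features would then feed a GRU over time; this sketch shows only the spatial aggregation step.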
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Jakub Kowalik, Paweł Kapusta

Abstract: Video games have evolved into sophisticated media capable of eliciting complex affective states, yet traditional Dynamic Difficulty Adjustment (DDA) systems rely primarily on performance metrics rather than emotional feedback. This research proposes a novel closed-loop architecture for Affective Game Computing on mobile platforms, designed to infer player emotions directly from gameplay inputs and actively steer emotional transitions. A complete experimental platform, including a custom mobile game, was developed to collect gameplay telemetry and device sensor data. The proposed framework utilizes a sequence-to-sequence Transformer-based neural network to predict future game states and emotional responses without the need for continuous camera monitoring, utilizing facial expression analysis only as a ground-truth proxy during training. Crucially, to address the "cold-start" problem inherent in optimization systems—where historical data is unavailable at the session’s onset—a secondary neural network is introduced. This component directly predicts optimal initial game parameters to elicit a specific target emotion, enabling immediate affective steering before sufficient gameplay history is established. Experimental evaluation demonstrates that the model effectively interprets sparse emotional signals as discrete micro-affective events and that the optimization routine can manipulate game parameters to shift the predicted emotional distribution toward a desired profile. While the study identifies challenges regarding computational latency on consumer hardware and the reliance on proxy emotional labels, this work establishes a transparent, reproducible proof-of-concept. It provides a scalable, non-intrusive baseline for future research into emotion-aware adaptation for entertainment and therapeutic serious games.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Zhanyi Ding, Zijing Wei, Chao Yang, Hailiang Wang, Shuo Xu, Yixiang Li, Xuanjie Chen

Abstract: Detecting toxic language in user-generated text remains a critical challenge due to linguistic nuance, evolving expressions, and severe class imbalance. While Transformer-based models have established state-of-the-art performance, their significant computational costs pose scalability barriers for real-time moderation. We investigate whether integrating social and contextual metadata—such as user reactions and platform ratings—can bridge the performance gap between computationally efficient classical models and modern deep learning architectures. Using a 40,000-comment subset of the Jigsaw Toxic Comment Classification Challenge, we conduct a controlled, two-phase comparison. We evaluate a Baseline configuration (TF-IDF for classical ensembles vs. raw text for ALBERT) against an Enhanced configuration that fuses text representations with explicit social signals. Our investigation analyzes whether these high-fidelity metadata features allow lightweight models (e.g., LightGBM) to rival the discriminative power of deep Transformers. The findings challenge the prevailing assumption that deep semantic understanding is strictly necessary for high-performance toxicity detection, offering significant implications for the design of scalable, "Green AI" moderation systems.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Cancan Hua, Ning Lyu, Chen Wang, Tingzhou Yuan

Abstract: This study proposes a Transformer-based change-point detection method for modeling and anomaly detection of multidimensional time-series metrics in Kubernetes nodes. The research first analyzes the complexity and dynamics of node operating states in cloud-native environments and points out the limitations of traditional single-threshold or statistical methods when dealing with high-dimensional and non-stationary data. To address this, an input representation mechanism combining linear embedding and positional encoding is designed to preserve both multidimensional metric features and temporal order information. In the modeling stage, a multi-head self-attention mechanism is introduced to effectively capture global dependencies and cross-dimensional interactions. This enhances the model's sensitivity to complex patterns and potential change points. In the output stage, a differentiated scoring function and a normalized smoothing method are applied to evaluate the time series step by step. A change-point decision function based on intensity scores is then constructed, which significantly improves the ability to identify abnormal state transitions. Through validation on large-scale distributed system metric data, the proposed method outperforms existing approaches in AUC, ACC, F1-Score, and Recall. It demonstrates higher accuracy, robustness, and stability. Overall, the framework not only extends attention-based time-series modeling at the theoretical level but also provides strong support for intelligent monitoring and resource optimization in cloud-native environments at the practical level.
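The output-stage recipe of step-wise scoring, normalized smoothing, and an intensity-threshold decision can be sketched independently of the Transformer encoder. The score below (shift between adjacent window means) is a simple stand-in for the paper's learned differentiated scores, which the abstract does not specify:

```python
import numpy as np

def change_points(series, window=5, smooth=3, thresh=2.0):
    """Score each step by the shift between adjacent window means,
    apply normalized smoothing, then threshold the intensity score."""
    x = np.asarray(series, dtype=float)
    scores = np.zeros(len(x))
    for t in range(window, len(x) - window):
        scores[t] = abs(x[t:t + window].mean() - x[t - window:t].mean())
    kernel = np.ones(smooth) / smooth              # moving-average smoothing
    smoothed = np.convolve(scores, kernel, mode="same")
    z = (smoothed - smoothed.mean()) / (smoothed.std() + 1e-9)
    return np.flatnonzero(z > thresh)              # flagged change points

# Synthetic two-regime metric with a level shift at t = 50.
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0, 0.1, 50), rng.normal(3, 0.1, 50)])
cps = change_points(series)
```

On a multidimensional node metric, the same decision function would simply consume per-step scores produced by the attention model instead of window-mean differences.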

Article
Computer Science and Mathematics
Probability and Statistics

Indika Dewage, Austin Webber

Abstract: This work presents a unified mathematical framework for understanding how monotone nonlinear transformations reshape data and generate structural forms of distortion, even when order is preserved. We model perception and algorithmic processing as the action of a monotone mapping h(x) applied to an underlying truth variable, showing that curvature alone can alter scale, emphasis, and information content. Using synthetic data drawn from uniform, normal, and bimodal distributions, we evaluate power, root, logarithmic, and logistic transformations and quantify their effects through four complementary measures: Truth Drift for positional change, Differential Entropy Difference for information content, Confidence Distortion Index for confidence shifts, and Kullback–Leibler Divergence for structural variation. Across all experiments, power functions with large exponents and steep logistic curves produced the strongest distortions, particularly for bimodal inputs. Even moderate transformations resulted in measurable changes in entropy, confidence, and positional truth, with strong correlations among the four metrics. The findings provide a geometric interpretation of bias, demonstrating that distortion arises naturally whenever a system curves the input space—whether in human perception or algorithmic pipelines. This framework offers a principled foundation for evaluating the hidden effects of scaling, compression, and saturation, and highlights how the appearance of neutrality can conceal systematic informational shifts.
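The Differential Entropy Difference metric has a clean closed form for monotone maps: H(h(X)) − H(X) = E[log|h′(X)|]. A NumPy sketch for the power transform h(x) = x³ on uniform data (the transform and sample size are chosen for illustration, not taken from the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 200_000)    # synthetic "truth" variable
y = x ** 3                            # monotone power transform h(x) = x^3
assert np.all((y < 0.5 ** 3) == (x < 0.5))   # order is preserved

# Change-of-variables identity for monotone h:
#   H(h(X)) - H(X) = E[log |h'(X)|],  here h'(x) = 3 x^2.
delta_H = np.mean(np.log(3.0 * x ** 2))

# Closed form for X ~ U(0,1): log(3) - 2 ≈ -0.901 nats.
print(delta_H)
```

The negative value shows the cubic map concentrating mass near zero: information content shifts even though every pairwise ordering survives, which is exactly the structural distortion the framework measures.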
Article
Computer Science and Mathematics
Computer Vision and Graphics

Doo-Ho Choi, Youn-Lee Oh, Minji Oh, Eun-Ji Lee, Sung-I Woo, Minseek Kim, Ji-Hoon Im

Abstract: Mushrooms have long been economically and nutritionally important crops, and recent advances in digital agriculture have increased interest in automating phenotypic evaluation. Due to the limitations of traditional phenotype assessment, various artificial intelligence (AI) models, including YOLOv8, have been introduced to evaluate mushroom phenotypes non-destructively and efficiently. However, few studies have applied YOLOv11 to mushroom phenotype assessment. In this study, mushroom phenotype analysis with YOLOv8 and YOLOv11 was compared using Pleurotus ostreatus and Flammulina velutipes. All images were captured under controlled conditions and preprocessed for model evaluation. The results demonstrated that YOLOv11 achieved segmentation accuracy comparable to YOLOv8 (ΔmAP50–95 < 0.01) while substantially improving computational efficiency, with a reduction of approximately 15–20%. In validation against physical measurements of mushroom phenotypes, both models showed biologically meaningful and moderate correlations across phenotypic traits (r ≈ 0.2–0.44; R² ≈ 0.72–0.83), confirming that YOLO-derived measurements captured essential dimensional variation. Inter-model comparisons revealed strong consistency (r ≥ 0.94, R² ≥ 0.96, MAE ≤ 0.40), indicating that YOLOv11 maintained the predictive reliability of YOLOv8 while operating with superior computational efficiency. This study establishes YOLOv11 as a robust foundation for AI-assisted digital breeding and automated quality monitoring systems in fungal research and precision agriculture.
Article
Computer Science and Mathematics
Mathematics

Raoul Bianchetti

Abstract: This work introduces a novel reformulation of mathematical divergence, inspired by Erdős’ classical harmonic conjecture, through the lens of the Viscous Time Theory (VTT). We propose that the traditional view of divergence as an unbounded arithmetic summation can be reinterpreted as an emergent property of informational coherence fields governed by the IRSVT (Informational Residue in Suspended Viscous Time) framework. At the core of this formulation lies the concept that divergence is not only the accumulation of magnitude, but the persistence of logical connectivity within an informational structure. In this approach, the discrete series ∑ 1/aᵢ is associated with a coherent density field ρᵢ(x), defined over a discrete topological structure on the integers, and divergence occurs when informational coherence paths Φα between adjacent nodes maintain non-vanishing probability. In other words, the growth of partial sums corresponds to sustained coherence rather than unbounded addition. The paper establishes the IRSVT Divergence Theorem, which defines divergence in terms of the continuity of Φα tunnel probabilities and a minimal coherence-gradient separation ΔC. This yields a general principle: if no irreversible collapse in informational flow occurs across the series, the system diverges not in quantity alone, but through the extension of topological coherence. This reframing of divergence leads to potential consequences in both mathematics and engineering. Mathematically, it suggests a class of divergence results linked to Φα connectivity and coherence-field geometry, with potential implications for questions such as prime-gap distribution and density fluctuation patterns. In engineering and artificial intelligence, the approach provides a predictive framework for coherence stability, informing the design of adaptive metamaterials, load distribution in AI-pilot systems, and resilience architecture in complex sensor networks.
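For reference, the classical behavior being reinterpreted: partial sums of the harmonic series ∑ 1/n diverge, growing like ln n + γ, where γ ≈ 0.5772 is the Euler–Mascheroni constant. A short numerical check:

```python
import math

def harmonic(n):
    # n-th partial sum of the harmonic series
    return sum(1.0 / k for k in range(1, n + 1))

n = 100_000
print(harmonic(n), math.log(n) + 0.5772156649)  # agree to about 1/(2n)
```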
Article
Computer Science and Mathematics
Analysis

Mohsen Soltanifar

Abstract: Classical real analysis rigorously defines convergence via ε–N criteria, yet it frequently regards the specific entry index N as a mere artifact of proof rather than an intrinsic property. This paper fills this quantitative void by developing a radius of convergence framework for the sequence space Seq(R). We define an index-based radius ρa(ε) alongside a rescaled geometric counterpart; the latter maps the unbounded index domain to a finite interval, establishing a structural analogy with spatial radii familiar in analytic function theory. We systematically analyze these radii within a seven-block partition of the sequence space, linking them to liminf-limsup profiles and establishing their stability under algebraic operations like sums, products, and finite modifications. The framework’s practical power is illustrated through explicit asymptotic inversions for sequences such as Fibonacci ratios, prime number distributions, and factorial growth. By transforming the speed of convergence into a geometric descriptor, this approach bridges the gap between asymptotic limit theory and constructive analysis, offering a unified, fine-grained measure for both convergent and divergent behaviors.
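The entry index underlying ρa(ε) can be computed directly. Assuming the standard definition Nₐ(ε) = min{N : |aₙ − L| < ε for all n ≥ N} (an inference from the abstract, not its exact formula), a brute-force scan illustrates the ε ↦ N inversion for aₙ = 1/n:

```python
def entry_index(a, limit, eps, horizon=10_000):
    """Smallest N such that |a(n) - limit| < eps for all n >= N,
    verified by scanning indices up to `horizon`."""
    N = 1
    for n in range(1, horizon + 1):
        if abs(a(n) - limit) >= eps:
            N = n + 1           # last violation pushes the entry index forward
    return N

# For a_n = 1/n -> 0: |1/n| < eps exactly when n > 1/eps.
assert entry_index(lambda n: 1.0 / n, 0.0, 0.01) == 101
```

The index-based radius then packages this N(ε) profile; the rescaled variant maps the unbounded index axis onto a finite interval, which is what gives the "radius" its geometric reading.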
Article
Computer Science and Mathematics
Computational Mathematics

Valery Y. Glizer, Vladimir Turetsky

Abstract: An infinite-horizon H∞ linear-quadratic control problem is considered. This problem has the following features: (i) the quadratic form of the control in the integrand of the cost functional has a positive small multiplier (small parameter), meaning that the control cost is much smaller than the state cost; (ii) the current cost of the fast state variable in the cost functional is a positive semi-definite (but non-zero) quadratic form. These features require developing a significantly novel approach to the asymptotic solution of the matrix Riccati algebraic equation associated, via solvability conditions, with the considered H∞ problem. Using this solution, an asymptotic analysis of the H∞ problem is carried out. This analysis yields parameter-free solvability conditions for this problem and a simplified controller solving it. An illustrative example is presented.
Article
Computer Science and Mathematics
Applied Mathematics

Sergey Lychev, Alexander Digilov

Abstract: Accurate displacement field measurement by holographic interferometry requires robust analysis of high-density fringe patterns, which is hindered by speckle noise inherent in any interferogram, no matter how perfect. Conventional skeletonization methods, such as edge detection algorithms and active contour models, often fail under these conditions, producing fragmented and unreliable fringe contours. This paper presents a novel skeletonization procedure that overcomes these limitations through a threefold approach: (1) representation of the entire fringe family within a physics-informed, finite-dimensional parametric subspace (e.g., a collection of Fourier-based contours), ensuring global smoothness and connectivity of each fringe; (2) introduction of a robust strip-integration functional, which replaces noisy point sampling with a Gaussian-weighted intensity integral across a narrow strip, yielding a smooth objective function that is convenient to optimize with standard gradient-based techniques; and (3) a recursive quasi-optimization algorithm that takes fringe similarity into account for efficient and stable identification. The method's efficiency is quantitatively validated on synthetic interferograms with controlled noise, demonstrating significantly lower error compared to baseline techniques. Its practical utility is confirmed by successful processing of a real interferogram of a bent plate containing over 100 fringes, enabling precise reconstruction of the displacement field that closely matches the results of independent theoretical modelling. The proposed procedure provides a reliable tool for processing challenging interferograms where traditional methods may fail to obtain a satisfactory result.
Article
Computer Science and Mathematics
Analysis

Cristian Octav Olteanu

Abstract: The first aim of this study is to point out new aspects of approximation theory applied to a few classes of holomorphic functions, via Vitali’s theorem. The approximation is made with the aid of the complex moments of the involved functions, which are defined similarly to the moments of a real-valued continuous function. Applying uniform approximation of continuous functions on compact intervals via Korovkin’s theorem, the hard part concerning uniform approximation on compact subsets of the complex plane follows according to Vitali’s theorem. The theorem on the set of zeros of a holomorphic function is also applied. In the end, the existence and uniqueness of the solution for a multidimensional moment problem is characterized in terms of limits of sums of quadratic expressions; this is the application referred to at the end of the title. Consequences resulting from the first part of the paper are pointed out with the aid of functional calculus for self-adjoint operators.



Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2025 MDPI (Basel, Switzerland) unless otherwise stated