Computer Science and Mathematics


Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Jolien Van Bossche,

Thibault Clercq,

Callum Hensley,

Rune Peeters

Abstract: Visual question answering (VQA) fundamentally requires a model to interpret heterogeneous semantic cues in an image and align them with a natural-language query. Traditional approaches benefit from scene graph representations, yet they often suffer from severe imbalances when handling rich semantic structures, especially when reasoning demands simultaneous consideration of objects, relations, and fine-grained attributes. Existing models frequently overlook the subtle interactions among these three information streams, leading to faulty attribute inference or overlooked relational cues. Addressing these long-standing limitations calls for a more principled integration of all semantic constituents within a unified and expressive reasoning space. In this paper, we introduce TriUnity-GNN, a tri-modal fusion framework that redefines scene graph reasoning by jointly enhancing object-centric, relation-centric, and attribute-centric representations under a unified graph neural paradigm. Instead of treating scene graphs as monolithic structures, our approach restructures the given graph into two complementary modalities, an object-dominant perspective and a relation-dominant perspective, thereby enabling the model to capture multi-granular semantics that are typically under-explored. To further strengthen the expressivity of these representations, TriUnity-GNN integrates attribute cues through an explicit fusion design, significantly enlarging the impact of attribute signals that are otherwise marginalized in classic architectures. Moreover, we design a novel message-passing enhancement module that substantially increases cross-type semantic exchange among objects, relations, and attributes, ensuring that all three modalities collectively shape the final reasoning embedding. We perform comprehensive evaluations on benchmark datasets including GQA, VG, and motif-VG. Across all benchmarks, TriUnity-GNN consistently surpasses prior graph-based VQA systems by a clear margin, demonstrating robustness in handling both straightforward and semantically composite queries. The results verify that a tri-modal, explicitly balanced graph reasoning mechanism is crucial for improving interpretability and accuracy in challenging visual question answering scenarios.
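
The message-passing enhancement module itself is not specified in the abstract; as a purely illustrative sketch of cross-type exchange among object, relation, and attribute nodes (all dimensions, incidence matrices, and weight names below are assumptions, not the authors' implementation):

    import numpy as np

    rng = np.random.default_rng(0)
    d = 64                          # shared embedding width (assumed)
    n_obj, n_rel, n_att = 5, 4, 6

    # Node features for the three semantic types (illustrative random values).
    H_obj = rng.normal(size=(n_obj, d))
    H_rel = rng.normal(size=(n_rel, d))
    H_att = rng.normal(size=(n_att, d))

    # Binary incidence matrices: which relations/attributes touch which objects.
    A_ro = rng.integers(0, 2, size=(n_rel, n_obj))   # relation -> object
    A_ao = rng.integers(0, 2, size=(n_att, n_obj))   # attribute -> object

    # Type-specific projections (these would be learned in a real model).
    W_ro, W_ao, W_self = (rng.normal(scale=0.1, size=(d, d)) for _ in range(3))

    def normalize(A):
        """Row-normalize an incidence matrix so incoming messages are averaged."""
        deg = A.sum(axis=1, keepdims=True)
        return A / np.maximum(deg, 1)

    # One round of cross-type message passing into the object nodes:
    # each object aggregates messages from its incident relations and attributes.
    msg_from_rel = normalize(A_ro.T) @ (H_rel @ W_ro)
    msg_from_att = normalize(A_ao.T) @ (H_att @ W_ao)
    H_obj_new = np.tanh(H_obj @ W_self + msg_from_rel + msg_from_att)

    print(H_obj_new.shape)  # (5, 64): object embeddings now carry relation/attribute cues
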
Review
Computer Science and Mathematics
Computer Vision and Graphics

Md Iqbal Hossain,

Neeresh Kumar Perla,

Afia Sajeeda,

Siyu Xia,

Ming Shao

Abstract: In the rapidly advancing domain of artificial intelligence, Vision-Language Models (VLMs) have emerged as critical tools by synergizing visual and textual data processing to facilitate a multitude of applications including automated image captioning, accessibility enhancements, and intelligent responses to multimodal queries. This survey explores the evolving paradigm of Pre-training, Fine-tuning, and Inference that has notably enhanced the capabilities of VLMs, allowing them to perform effectively across various downstream tasks and even enable zero-shot predictions. Despite their advancements, VLMs are vulnerable to adversarial attacks, largely because of their reliance on large-scale, internet-sourced pre-training datasets. These attacks can significantly undermine the models' integrity by manipulating their input interpretations, posing severe security risks and eroding user trust. Our survey delves into the complexities of these adversarial threats, which range from single-modal to sophisticated multimodal strategies, highlighting the urgent need for robust defense mechanisms. We discuss innovative defense strategies that adapt model architectures, integrate adversarially robust training objectives, and employ fine-tuning techniques to counteract these vulnerabilities. This paper aims to provide a comprehensive overview of current challenges and future directions in the adversarial landscape of VLMs, emphasizing the importance of securing these models to ensure their safe integration into various real-world applications.
Article
Computer Science and Mathematics
Probability and Statistics

Zdeněk Kala

Abstract: A Hermite-based framework for reliability assessment within the limit state method is developed in this paper. Closed-form design quantiles under a four-moment Hermite density are derived by inserting the Gaussian design quantile into a calibrated cubic translation. Admissibility and implementation criteria are established, including a monotonicity bound, a positivity condition for the platykurtic branch, and a balanced Jacobian for the leptokurtic branch. Material data for the yield strength and ductility of structural steel are fitted using moment-matched Hermite models and validated through goodness-of-fit tests. A truss structure is then analysed to quantify how non-Gaussian input geometry influences structural resistance and its corresponding design value. Variance-based Sobol sensitivity analysis demonstrates that departures of the radius distribution towards negative skewness and higher kurtosis increase the first-order contribution of geometric variables and thicken the lower tail of the resistance distribution. Closed-form Hermite design resistances are shown to agree with numerical integration results and reveal systematic deviations from FORM estimates, which rely solely on the mean and standard deviation. Monte Carlo simulation studies confirm these trends and highlight the slow convergence of tail quantiles and higher-order moments. The proposed approach reduces to the classical Gaussian formulation in the Gaussian limit and offers a practical complement to EN 1990 verification procedures when skewness and kurtosis have a significant influence on design quantiles.
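
For orientation, a widely used form of such a calibrated cubic (Hermite) translation, which may or may not match the paper's exact parameterization, gives the design quantile as

    x_p = \mu_X + \sigma_X \,\kappa \left[ z_p + h_3\,(z_p^{2}-1) + h_4\,(z_p^{3}-3 z_p) \right], \qquad z_p = \Phi^{-1}(p),

where h_3 and h_4 are calibrated from the skewness and kurtosis of the variable, \kappa rescales the variance, and \Phi^{-1} is the standard normal quantile function; setting h_3 = h_4 = 0 recovers the Gaussian quantile.
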
Article
Computer Science and Mathematics
Computational Mathematics

Jean Chien,

Lily Chuang,

Nail Tang,

Eric Lee

Abstract: Nanoimprint lithography (NIL) master fidelity is governed by coupled variations beginning with resist spin-coating, proceeding through electron-beam exposure, and culminating in anisotropic etch transfer. We present an integrated, physics-based simulation chain. First, it includes a spin-coating thickness model that combines Emslie–Meyerhofer scaling with a Bornside edge correction. Second, it couples an e-beam lithography (EBL) module in which column electrostatics and trajectory-derived spot size feed a hybrid Gaussian–Lorentzian proximity kernel, followed by a development step whose thresholds are modulated by local thickness. Finally, it passes the exposure results to a level-set reactive ion etching (RIE) model with angular anisotropy and aspect-ratio-dependent etching (ARDE). With isolated and dense design layouts as bounding conditions, pattern fidelity is quantified by NMSE, ΔCD, and LER. The coupled analysis indicates that a low single-nanometer spot-size window trades dimensional accuracy for edge continuity; that over-widening generates proximity-dominated bias and feature coalescence; and that ARDE-informed evolution reproduces inward critical dimension (CD) drift in narrow openings, consistent with transport limitation. Collectively, the simulation chain accounts for stage-to-stage propagation from spin-coating thickness variation and EBL proximity to ARDE-informed etch profiles, and provides OPC-aligned metrics as outputs. In practice, mask process correction (MPC) is necessary rather than optional: the simulator serves as the predictive model, metrology supplies updates, and constrained optimization sets dose, focus, and etch set-points under CD/LER constraints.
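
As one illustrative component, a hybrid Gaussian–Lorentzian proximity kernel can be sketched as a pseudo-Voigt-style mixture; the widths, mixing weight, and missing normalization constants below are placeholder assumptions, not the paper's calibrated values:

    import numpy as np

    def hybrid_proximity_kernel(r, sigma=3.0, gamma=15.0, eta=0.2):
        """Illustrative hybrid Gaussian-Lorentzian proximity kernel.
        r     : radial distance from the exposed point (nm)
        sigma : short-range Gaussian width (placeholder value)
        gamma : long-range Lorentzian width (placeholder value)
        eta   : weight of the Lorentzian tail (placeholder value)
        Normalization is omitted; only the kernel shape is illustrated.
        """
        gauss = np.exp(-r**2 / (2.0 * sigma**2))
        lorentz = 1.0 / (1.0 + (r / gamma) ** 2)
        return (1.0 - eta) * gauss + eta * lorentz

    # Deposited-dose weight at increasing distances from a single exposed pixel.
    r = np.array([0.0, 5.0, 20.0, 100.0])
    print(hybrid_proximity_kernel(r))
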
Concept Paper
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Feng Chen

Abstract: Large language models (LLMs) are usually developed and evaluated as solitary agents: a single, monolithic network trained on static corpora and queried one prompt at a time. This single-agent paradigm has produced impressive capabilities, yet it fundamentally mismatches the structure of many real-world problems in science, engineering, and governance, which are inherently multi-actor, iterative, and argumentative. In this Perspective, we argue that the next scaling frontier for LLMs is not simply “bigger models with more data”, but societies of models and tools designed as structured collective intelligences. We first outline why classical scaling laws, which relate performance primarily to parameter counts, token volume, and compute, are insufficient for tasks that require debate, division of labor, and long-horizon coordination. We then introduce a conceptual framework based on three interaction regimes—competition, collaboration, and coordination—and show how different task families naturally demand different regime designs, incentives, and communication protocols. Building on emerging multi-agent LLM systems in reasoning, code generation, and autonomous science, we sketch a research programme for “multi-agent pretraining”, in which agents jointly learn not only language and world models, but also norms of discourse, peer review, and self-correction. We further discuss how multi-agent architectures reshape scaling laws, evaluation methodology, and safety: performance becomes a function not only of model size and data, but also of team composition, interaction topology, and institutional memory. Finally, we argue that carefully engineered artificial communities may approximate the epistemic dynamics of real scientific communities more faithfully than any single, static model, opening a path toward more robust, transparent, and controllable AI systems.
Article
Computer Science and Mathematics
Mathematical and Computational Biology

Natalya Maxutova,

Akmaral Kassymova,

Kuanysh Kadirkulov,

Aisulu Ismailova,

Gulkiz Zhidekulova,

Zhanar Azhibekova,

Jamalbek Tussupov,

Quvvatali Rakhimov,

Zhanat Kenzhebayeva

Abstract: This paper proposes an intelligent and explainable ensemble system for predicting aspartate aminotransferase (AST) levels based on routine biochemical and demographic data from the NHANES dataset. The framework integrates robust preprocessing, adaptive feature encoding, and multi-level ensemble learning within a nested cross-validation (5×3) structure to ensure reproducibility and prevent data leakage. Several regression models—including Random Forest, XGBoost, CatBoost, and stacking ensembles—were systematically compared using R², RMSE, MAE, and MAPE metrics. The results show that the Stacking v2 architecture, combining CatBoost, LightGBM, and Ridge meta-regression, achieves the highest predictive accuracy and stability. Explainable AI analysis using SHAP revealed key biochemical and lifestyle factors influencing AST variability. The proposed system provides a modular, interpretable, and reproducible foundation for decision-support applications in intelligent healthcare analytics, aligning with the goals of applied system innovation.
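
The stacking-within-nested-cross-validation pattern described above can be sketched with scikit-learn alone; here GradientBoosting and RandomForest stand in for CatBoost/LightGBM, synthetic data stands in for NHANES, and the hyperparameter grid is illustrative:

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor, StackingRegressor
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

    # Synthetic stand-in for the NHANES biochemical/demographic features.
    X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)

    # Stacking ensemble: two tree-based base learners + Ridge meta-regressor,
    # mirroring the base-learner/meta-regressor design described in the abstract.
    stack = StackingRegressor(
        estimators=[
            ("gbr", GradientBoostingRegressor(random_state=0)),
            ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ],
        final_estimator=Ridge(),
    )

    # Nested cross-validation: inner 3-fold grid search tunes the meta-regressor's
    # regularization, outer 5-fold gives an unbiased performance estimate.
    inner = GridSearchCV(
        stack,
        param_grid={"final_estimator__alpha": [0.1, 1.0, 10.0]},
        cv=KFold(n_splits=3, shuffle=True, random_state=0),
        scoring="r2",
    )
    outer = KFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(inner, X, y, cv=outer, scoring="r2")
    print("Outer-fold R^2:", np.round(scores, 3))
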
Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Aiperi Zhenishova

Abstract: This review presents a thorough consideration of the nascent area of Human-Centered Explainable Artificial Intelligence (XAI), concentrating on the crucial task of ensuring that AI decisions are understandable and credible to human users. With the spread of AI across sensitive domains such as healthcare, finance, and online retail, the need for clear and understandable explanations increases. The review considers different explanation formats, such as visual aids (e.g., saliency maps) and textual summaries, and the challenges faced in evaluating them. Main finding: research interest has shifted radically since 2021, from purely technical approaches toward human perception, interaction, and trust. We synthesize the results of 73 empirical papers published through 2024 and show that local post-hoc explanation (particularly feature-importance methods such as LIME and SHAP) is the current focus of much of the literature, while inherently interpretable models receive relatively little attention. Despite the large pool of explanation techniques, there is a dearth of standardized metrics for evaluating interpretability, user confidence, and impact on decision making. This gap restricts the comparability of evidence between studies and hampers efforts to deliver efficient and user-friendly AI explanations. The paper calls for structured frameworks and a harmonized protocol for analyzing explainability, specifying how explanations should lead to greater user trust, understanding, and decision support. Ultimately, a human-centered, rigorous approach to evaluating AI systems is necessary to make them not only transparent but also genuinely understandable to their users. The goal of this work, in turn, is to drive further exploration of more trustworthy, human-centered modes of explanation that bridge the chasm between algorithmic complexity and human understanding.
Article
Computer Science and Mathematics
Computer Networks and Communications

Lijuan Wang,

Mee Loong Yang,

Krassie Petrova

Abstract: Wireless sensor networks (WSNs), including software-defined wireless sensor networks (SDWSNs), are particularly vulnerable to Denial-of-Service (DoS) attacks. Trust models are widely acknowledged as an effective strategy to mitigate the threat of successful DoS attacks in WSNs. However, existing trust models commonly rely on threshold configurations based on the network administrator’s experience and leave the challenging task of allocating weights to the various trust metrics to network users. This limits the widespread application of trust models as a WSN defense mechanism. To address this issue, this study proposes and analyses theoretically an Adaptive, Threshold-Free, and Automatically Weighted Trust Model (ATAW-TM) for SDWSNs. The model architecture is aligned with the layered, centralized management architecture of SDWSNs, which makes it flexible and enhances its responsiveness. The proposed model does not require manual threshold configuration or weight allocation, and allows for rapid trust-system recovery. It has significant advantages compared to existing trust models and is potentially more feasible to implement on a large scale.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Irina Radeva,

Ivan Popchev,

Lyubka Doukovska,

Miroslava Dimitrova

Abstract: This paper evaluates multi-agent coordination strategies against retrieval-augmented generation for 7-8B open-source models. Four coordination strategies (collaborative, sequential, competitive and hierarchical) were evaluated across three open-source models: Mistral 7B, Llama 3.1 8B and Granite 3.2 8B. The study determined whether multi-agent reasoning enhances retrieval-augmented generation performance. The evaluation employed 100 question-answer pairs. In total, 2,000 model-question evaluations were conducted. Performance was assessed using Composite Performance Score (CPS) and Threshold-aware Composite Performance Score (T-CPS), two metrics developed to aggregate nine dimensions spanning lexical overlap, semantic similarity, and linguistic quality. Results revealed that 87.5% of multi-agent configurations underperformed baseline systems, with coordination overhead identified as the primary limiting factor. Llama 3.1 8B tolerated Sequential and Hierarchical coordination with minimal degradation, while Granite 3.2 8B and Mistral 7B showed severe degradation across all strategies. Collaborative coordination failed universally despite the highest output consistency. These findings suggest that single-agent baselines may be preferable for most deployment scenarios under similar conditions. Future research should explore the following directions: evaluation of role-specific prompts, investigation of advanced consensus methods, exploration of adaptive systems for strategy selection, and joint tuning of retrieval thresholds and coordination strategies.
Article
Computer Science and Mathematics
Computer Networks and Communications

Yu Chen,

Lucy Chen,

Erik Blasch

Abstract: Digital twins (DT) have emerged as transformative tools for smart city management, enabling urban planners and administrators to monitor infrastructure, simulate scenarios, and optimize operations through virtual representations of physical systems. As urban populations approach five billion people by 2030 and cities confront escalating challenges around congestion, housing, environmental quality, and resource management, a monitoring-only paradigm proves increasingly inadequate for enabling the proactive, adaptive, and participatory governance that future urban systems require. This paper articulates a vision for metaverse-enabled DTs that fundamentally reconceives urban management by transforming passive observation into immersive collaboration and automated action. We present a four-layer Metaverse-Enabled DIGital Twins Enterprise (MEDIGATE) architectural framework that creates capabilities no isolated technology can achieve, including real-time information fusion across urban domains, immersive interfaces supporting collaborative decision-making among distributed stakeholders, anticipatory response to emerging challenges before they escalate, and continuous learning that improves system performance over time. Recognizing that comprehensive urban-scale implementation presents significant complexity and risk, we introduce the microverse concept as a practical pathway for incremental deployment through domain-specific immersive environments that generate immediate value while building toward eventual integration. We examine healthcare as an illustrative domain demonstrating how immersive DTs transform reactive service delivery into proactive wellness management through multi-modal information fusion and automated intervention. The paper addresses technical challenges around privacy, security, data reliability, computational requirements, and interoperability. We conclude by articulating a research agenda spanning technical development, social innovation, and policy frameworks necessary to realize genuinely proactive smart cities that anticipate needs, engage citizens meaningfully, and deliver equitable outcomes for all urban residents.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Adeel Amanat,

Manzoor Hussain

Abstract: Artificial intelligence and machine learning are increasingly used to support the diagnosis of liver diseases, making diagnosis more comfortable for patients and offering alternatives to invasive procedures such as liver biopsy. This research paper reviews 15 recent studies that discuss how AI and ML improve the accuracy of liver disease diagnosis. These studies also highlight improvements in feature selection and data handling. Liver disease data are imbalanced, with some conditions having far less data than others, and comparison between studies is difficult because of their varying methods. Additionally, dynamic datasets that change over time are rarely used, limiting deeper analysis of disease progression. This study highlights how AI and ML can improve liver disease diagnosis by addressing data imbalance, enhancing accuracy, and reducing reliance on invasive procedures.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Gabriela Fernandes

Abstract: Peri-implantitis (PI) is a highly destructive inflammatory disease characterized by progressive bone loss surrounding dental implants. Although it shares clinical features with chronic periodontitis (CP), its aggressive and often refractory nature suggests a distinct pathophysiology. This study employs an AI-driven transcriptomic approach using publicly available human gingival gene expression data (GEO accession: GSE106090) to delineate the molecular mechanisms that distinguish PI from CP. Differential expression analysis identified approximately 240 significantly deregulated genes (|log₂FC| > 1, p < 0.05) between PI and CP. The PI transcriptome was dominated by genes associated with inflammation and tissue destruction — notably IL1B, TNF, MMP9, CXCL8, and SPP1 — while structural and reparative genes such as COL1A1 and BMP2 were markedly downregulated. Pathway enrichment analysis revealed strong activation of Cytokine–cytokine receptor interaction, NF-κB signaling, and Osteoclast differentiation pathways, indicating a hyper-inflammatory, bone-resorptive phenotype. Unsupervised Uniform Manifold Approximation and Projection (UMAP) analysis confirmed two distinct molecular clusters separating PI from CP, validating their transcriptomic divergence. Furthermore, an L1-regularized logistic regression (LASSO) model identified a compact seven-gene biomarker panel that classified PI and CP with high accuracy (AUC ≈ 0.90). Collectively, these findings provide the first AI-based molecular evidence that peri-implantitis represents a foreign-body–driven inflammatory disorder distinct from classic periodontitis. This work establishes a reproducible molecular map that may serve as a foundation for precision diagnostics and targeted therapeutics in implant dentistry.
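
The LASSO-based panel selection can be sketched as follows; synthetic data stands in for the GSE106090 expression matrix, and the regularization strength is an illustrative choice rather than the study's tuned value:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in for log-expression values (samples x genes); in the study
    # this would be the GSE106090 matrix with PI vs. CP labels.
    X, y = make_classification(n_samples=60, n_features=240, n_informative=10, random_state=0)

    # L1-penalized (LASSO) logistic regression drives most gene coefficients to
    # exactly zero, leaving a compact biomarker panel.
    model = make_pipeline(
        StandardScaler(),
        LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=5000),
    )
    model.fit(X, y)

    coefs = model.named_steps["logisticregression"].coef_.ravel()
    panel = np.flatnonzero(coefs)            # indices of genes retained in the panel
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"panel size: {panel.size}, cross-validated AUC: {auc:.2f}")
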
Technical Note
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Sujit Bhattacharya,

Naveen Ashish

Abstract: Artificial intelligence is increasingly deployed in high-stakes domains such as healthcare, public welfare, and autonomous transportation, where errors can cost lives or infringe on human rights. However, current AI approaches dominated by neural networks and generative models (e.g., large language models) have well-documented shortcomings: they can hallucinate false information, exhibit bias, and lack explainability. This paper argues that these limitations make purely neural AI insufficient for safety-critical applications like medical diagnostics (where misdiagnosis or unsafe advice can be deadly), public welfare decision-making (where biased algorithms have unfairly denied benefits or targeted vulnerable groups), and autonomous systems (where failures can result in fatal accidents). We then introduce neurosymbolic AI – a hybrid paradigm combining data-driven neural networks with rule-based symbolic reasoning – as a viable path toward trustworthy AI. By integrating neural perception with symbolic knowledge and logic, neurosymbolic systems can provide built-in safety guardrails, robust reasoning abilities, and transparent decision traces. We survey evidence that neurosymbolic architectures can mitigate hallucinations and bias by enforcing domain constraints (e.g. medical guidelines or legal rules), while also enhancing explainability and accountability through explicit reasoning steps. Through examples and literature (including the IEEE's “Neurosymbolic Artificial Intelligence: Why, What, and How”), we illustrate how neurosymbolic AI can bridge the gap between the accuracy of neural methods and the reliability required in life-critical environments. Diagrams comparing architectures and error mitigation strategies are provided to visualize how the neurosymbolic approach improves safety.
Article
Computer Science and Mathematics
Computer Networks and Communications

Arul Selvan M

Abstract: The utilization of secure bootloaders in embedded systems represents a fundamental security mechanism to ensure device integrity and prevent unauthorized firmware tampering. Secure bootloaders leverage cryptographic validation techniques to establish a chain of trust from the hardware root of trust through the boot process, allowing only authenticated and unmodified firmware to execute. This approach mitigates risks such as persistent attacks, intellectual property theft, and system compromise by verifying firmware authenticity using digital signatures and cryptographic hashes. Implementing secure bootloaders effectively combines hardware trust anchors with public-key cryptography to protect embedded devices in diverse applications such as automotive, industrial, IoT, and medical sectors. This paper details the principles, architecture, and cryptographic mechanisms behind secure bootloaders, highlighting their role in preventing rollback attacks and ensuring firmware integrity over the device lifecycle.
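
A minimal sketch of the boot-time verification step described above (a signature check plus an anti-rollback version check), using Ed25519 from the Python cryptography package; the image layout and version encoding are illustrative assumptions, not a specific bootloader's format:

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # --- provisioning side (normally done at build/signing time) ---------------
    signing_key = Ed25519PrivateKey.generate()
    root_public_key = signing_key.public_key()      # in practice burned into ROM/OTP fuses

    firmware = b"\x7fFIRMWARE image contents..."
    fw_version = 7
    signed_blob = fw_version.to_bytes(4, "big") + hashlib.sha256(firmware).digest()
    signature = signing_key.sign(signed_blob)

    # --- boot-time verification (the bootloader's job) --------------------------
    def verify_firmware(firmware, fw_version, signature, last_seen_version):
        """Return True only if the image is authentic, unmodified, and not a rollback."""
        blob = fw_version.to_bytes(4, "big") + hashlib.sha256(firmware).digest()
        try:
            root_public_key.verify(signature, blob)   # authenticity + integrity
        except InvalidSignature:
            return False
        return fw_version >= last_seen_version        # anti-rollback check

    print(verify_firmware(firmware, fw_version, signature, last_seen_version=5))
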
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Yiwen Huang,

Huikang Zhang,

Junchan Liao,

Ruhong Zhuang,

Honggou Yang,

Xianming Liu

Abstract: Spatio-temporal planning has emerged as a robust methodology for solving trajectory planning challenges in complex autonomous driving scenarios. By integrating both spatial and temporal variables, this approach facilitates the generation of highly accurate, human-like, and interpretable trajectory decisions. This paper presents a novel model-based spatio-temporal behavior decider, engineered to produce optimal and explainable driving trajectories with enhanced efficiency and passenger comfort. The proposed decider systematically evaluates the action space of the ego vehicle, selecting the trajectory that optimizes overall driving performance. This method is particularly significant for autonomous driving systems, as it ensures the generation of human-like trajectories while maintaining high driving efficiency. The efficacy of the proposed framework has been comprehensively validated through rigorous simulations and real-world experimental trials on a commercial passenger vehicle platform, demonstrating its practical utility and performance advantages.
Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Frank Vega

Abstract: The Minimum Vertex Cover (MVC) problem is a fundamental NP-complete problem in graph theory that seeks the smallest set of vertices covering all edges in an undirected graph G = (V, E). This paper presents the find_vertex_cover algorithm, an innovative approximation method that transforms the problem to maximum degree-1 instances via auxiliary vertices. The algorithm computes solutions using weighted dominating sets and vertex covers on reduced graphs, enhanced by ensemble heuristics including maximum-degree greedy and minimum-to-minimum strategies. Our approach guarantees an approximation ratio strictly less than √2 ≈ 1.414, which would contradict known hardness results unless P = NP. This theoretical implication represents a significant advancement beyond classical approximation bounds. The algorithm operates in O(m log n) time for n vertices and m edges, employing component-wise processing and linear-space reductions for efficiency. Implemented in Python as the Hvala package, it demonstrates excellent performance on sparse and scale-free networks, with profound implications for complexity theory. The achievement of a sub-√2 approximation ratio, if validated, would resolve the P versus NP problem in the affirmative. This work enables near-optimal solutions for applications in network design, scheduling, and bioinformatics while challenging fundamental assumptions in computational complexity.
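
For context, the maximum-degree greedy strategy mentioned among the ensemble heuristics can be sketched in a few lines; this is the classical heuristic only, not the paper's find_vertex_cover algorithm or its degree-1 reduction:

    def greedy_vertex_cover(edges):
        """Maximum-degree greedy heuristic: repeatedly pick the vertex that
        covers the most still-uncovered edges."""
        uncovered = {frozenset(e) for e in edges}
        cover = set()
        while uncovered:
            # Count how many uncovered edges each vertex would cover.
            degree = {}
            for e in uncovered:
                for v in e:
                    degree[v] = degree.get(v, 0) + 1
            best = max(degree, key=degree.get)
            cover.add(best)
            uncovered = {e for e in uncovered if best not in e}
        return cover

    edges = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
    print(greedy_vertex_cover(edges))   # a cover such as {3, 1, 4}
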
Article
Computer Science and Mathematics
Applied Mathematics

Fabio Botelho

Abstract: This short communication develops a formal proof of Castigliano's theorem in an elasticity context. The results are based on standard tools of applied functional analysis and the calculus of variations. It is worth mentioning that the results presented here may easily be extended to a non-linear elasticity context. Finally, in the last section we present a numerical example to illustrate the applicability of the results.
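
For reference, the classical statement being formalized (Castigliano's second theorem) is: for a linearly elastic body with total strain energy U expressed as a function of independent applied loads P_1, ..., P_n, the displacement at the point of application of P_i, measured in its direction, is

    \delta_i = \frac{\partial U}{\partial P_i}, \qquad i = 1, \dots, n.
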
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Shalawati,

Arbi Haza Nasution,

Winda Monika,

Tatum Derin,

Aytug Onan,

Yohei Murakami

Abstract: Recent progress in large language models (LLMs) has rekindled the promise of high-quality machine translation (MT), yet evaluation remains a bottleneck. Traditional automatic metrics (e.g., BLEU) are fast but fail to capture semantic and pragmatic nuances reflected in human judgments. We present a multidimensional framework—inspired by MQM—that augments standard metrics (Adequacy, Fluency) with three linguistic dimensions: Morphosyntactic, Semantic, and Pragmatic. We compare three Small Language Models for English→Indonesian: Qwen 3 (0.6B), LLaMA 3.2 (3B), and Gemma 3 (1B). Two controlled experiments are conducted: (i) Preliminary (1,000 translations, GPT-5-only scoring of Adequacy/Fluency + BLEU), and (ii) Final (100 translations, three human experts + GPT-5) on all five metrics. We compute inter-annotator reliability (Krippendorff’s α, weighted κ) and annotator competence (MACE). Results show consistent model ranking (Gemma 3 (1B) > LLaMA 3.2 (3B) > Qwen 3 (0.6B)) and strong GPT-5–human correlation (r = 0.822). To validate practical applicability, a classroom study with 26 translation students tested the metrics in real learning settings. Using the same multidimensional rubric, students rated MT outputs across pre-, post-, and final-test phases. Their mean absolute error (MAE) decreased from 0.97 to 0.83, while Exact Match Rate increased from 0.30 to 0.50 after rubric calibration, demonstrating that the proposed framework and GPT-5 evaluation can be effectively transferred to educational contexts for evaluator training and feedback alignment.
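
The agreement and classroom metrics named above (quadratic-weighted κ, MAE, exact match rate) can be computed as in the following sketch; the ratings shown are illustrative placeholders, not the study's data:

    import numpy as np
    from sklearn.metrics import cohen_kappa_score, mean_absolute_error

    # Illustrative 1-5 rubric scores: one human expert vs. GPT-5 on ten items.
    human = np.array([5, 4, 4, 3, 5, 2, 4, 3, 5, 4])
    gpt5  = np.array([5, 4, 3, 3, 5, 2, 4, 4, 5, 4])

    # Quadratic-weighted kappa penalizes large disagreements more than small ones.
    kappa = cohen_kappa_score(human, gpt5, weights="quadratic")

    # Classroom-study style metrics: mean absolute error and exact match rate.
    mae = mean_absolute_error(human, gpt5)
    exact_match = np.mean(human == gpt5)

    print(f"weighted kappa={kappa:.2f}  MAE={mae:.2f}  exact match={exact_match:.2f}")
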
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Haoyu Cen,

Yutian Gai

Abstract: Recent advances in language models have greatly improved their ability to understand and generate natural language. Yet, when applied to specialized fields such as financial decision support or complex system diagnosis, they often struggle with limited domain expertise, weak logical reasoning, and unreliable performance under uncertainty. Fine-tuning these large models is typically constrained by cost, privacy, and proprietary limitations. To overcome these issues, this study introduces CRONUS: Contextual Reasoning Orchestration for Navigating Uncertain Scenarios, a framework designed to enhance general-purpose models in domain-specific and decision-intensive tasks. CRONUS employs a lightweight, trainable agent named CARA (Context-Aware Reasoning Agent) to guide the reasoning process of black-box models through structured contextual instructions. CARA is developed via a three-stage training strategy that builds domain understanding, refines reasoning path generation, and optimizes dynamic decision prompts. Experiments in financial analysis tasks show that CRONUS markedly improves reasoning depth, consistency, and robustness compared with direct model use, retrieval-augmented methods, and specialized domain models, demonstrating its effectiveness for high-stakes decision-making in complex environments.
Article
Computer Science and Mathematics
Probability and Statistics

Anna V. Aleshina,

Andrey L. Bulgakov,

Yanliang Xin,

Larisa S. Skrebkova

Abstract: A mathematical model of sustainable resource allocation in a competitive economy is developed and studied, taking into account transaction costs and technological constraints. The model describes the interaction of producers and consumers, introduces a technological set and price dynamics through demand–supply imbalance. Using the theory of covering mappings and variational methods, the existence of equilibrium prices is proven. Issues of stability, numerical algorithms, and macroeconomic interpretation of the obtained results are considered.
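
Price dynamics driven by demand–supply imbalance are commonly written as a tâtonnement-type adjustment; whether the paper uses exactly this form is an assumption:

    \dot{p}_j(t) = \alpha_j \big( D_j(p(t)) - S_j(p(t)) \big), \qquad \alpha_j > 0,

so the price of good j rises under positive excess demand and falls otherwise, and equilibrium prices p^* are characterized by D_j(p^*) = S_j(p^*) for all j.
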
