Computer Science and Mathematics

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Jesús Manuel Soledad Terrazas

Abstract: Background: Semi-automated offside technology (SAOT) deployed across European professional football leagues represents a critical case study illustrating the urgent necessity for explainable artificial intelligence (XAI) and deterministic algorithmic systems in high-stakes decision-making contexts. Methods: This study employs a mixed-methods approach combining technical system analysis of SAOT specifications, quantitative examination of publicly available VAR decision statistics from La Liga's 2024-25 season, content analysis of media-documented technical failures, and governance framework analysis against established algorithmic accountability principles. Results: Empirical evidence reveals that in La Liga's 2024-25 season, Barcelona gained approximately 7 points from net favorable VAR decisions while Real Madrid lost 7 points from adverse calls—the worst balance in the league. Documented technical failures include wrong defender selection in Celta Vigo matches, a power outage eliminating VAR oversight during a disputed penalty, and system misinterpretation of goalkeeper touches. Mathematical quantification of measurement uncertainties (34 cm total error from temporal, spatial, and calibration sources) reveals that precision claims exceed physical capabilities. Conclusions: The legitimacy crisis stems not from whether systematic bias exists but from the structural impossibility of detection under current opacity. When Spanish authorities leaked full VAR audio recordings, the federation responded not with transparency reforms but by dispatching police to investigate the leak—exemplifying governance structures that prioritize control over accountability. This research proposes mandatory open-source algorithms, real-time audit logs accessible to affected parties, independent calibration verification, and genuine appeal mechanisms with remedial authority.
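The 34 cm figure aggregates temporal, spatial, and calibration error sources. As a minimal sketch of how such components combine, the values below are hypothetical (chosen to sum linearly to the reported 34 cm); the abstract does not state whether the study sums errors linearly or in quadrature, so both are shown:

```python
import math

# Hypothetical component uncertainties (cm); the paper's actual breakdown is not given here.
temporal_cm = 12.0     # e.g., limb movement between successive camera frames
spatial_cm = 15.0      # skeletal keypoint localization error
calibration_cm = 7.0   # camera intrinsic/extrinsic calibration error

# Worst-case linear sum vs. root-sum-square for independent sources.
linear_total = temporal_cm + spatial_cm + calibration_cm
quadrature_total = math.sqrt(temporal_cm**2 + spatial_cm**2 + calibration_cm**2)

print(linear_total)                  # 34.0
print(round(quadrature_total, 1))    # 20.4
```

Either combination rule exceeds the centimetre-level precision that offside graphics imply, which is the point the abstract makes.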
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Jineng Ren

Abstract: Since the beginning of modern computer history, the Turing machine has been the dominant architecture for most computational devices. It consists of three essential components: an infinite tape for input, a read/write head, and finite control. In this structure, what the head can read (i.e., bits) is the same as what it has written/outputted. This is actually different from the ways in which humans think or conduct thought/tool experiments. More precisely, what humans imagine or write on paper are images or texts, not the abstract concepts they represent in the human brain. This difference is neglected by the Turing machine, but it actually plays an important role in abstraction, analogy, and generalization, which are crucial in artificial intelligence. Compared with this architecture, the proposed architecture uses two different types of heads and tapes, one for traditional abstract bit inputs/outputs and the other for specific visual ones. The mapping rules among the abstract bits and the specific images/texts can be realized by neural networks with a high accuracy rate. Logical reasoning is thus performed through the transfer of mapping rules. The statistical decidability of the Halting Problem with an imperceptibly small error rate in reasoning steps is established for this type of machine. As an example, this paper presents how the new computer architecture (called the "Ren machine" for simplicity here) autonomously learns a distributive property/rule of multiplication in the specific domain and further uses the rule to generate a general method (mixed in both the abstract domain and the specific domain) to compute the multiplication of any positive integers based on images/texts. The machine's strong reasoning ability is also corroborated by proving a theorem in Plane Geometry.
Moreover, a robotic architecture based on the Ren machine is proposed to address the challenges faced by Vision-Language-Action (VLA) models, namely their unsound reasoning ability and high computational cost.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Gregor Wegener

Abstract: Large language models and related generative AI systems increasingly operate in safety-critical and high-impact settings, where reliability, alignment, and robustness under distribution shift are central concerns. While retrieval-augmented generation (RAG) has emerged as a practical mechanism for grounding model outputs in external knowledge, it does not by itself provide guarantees against system-level failure modes such as hallucination, mis-grounding, or deceptively stable unsafe behavior. This work introduces SORT-AI, a structural safety and reliability framework that models advanced AI systems as chains of operators acting on representational states under global consistency constraints. Rather than proposing new architectures or empirical benchmarks, SORT-AI provides a theoretical and diagnostic perspective for analyzing alignment-relevant failure modes, structural misgeneralization, and stability breakdowns that arise from the interaction of retrieval, augmentation, and generation components. Retrieval-augmented generation is treated as a representative and practically relevant testbed, not as the primary contribution. By analyzing RAG systems through operator geometry, non-local coupling kernels, and global projection operators, the framework exposes failure modes that persist across dense retrieval, long-context prompting, graph-constrained retrieval, and agentic interaction loops. The resulting diagnostics are architecture-agnostic and remain meaningful across datasets, implementations, and deployment contexts. SORT-AI connects reliability assessment, explainability, and AI safety by shifting evaluation from local token-level behavior to global structural properties such as fixed points, drift trajectories, and deceptive stability. While illustrated using RAG, the framework generalizes to embodied agents and quantum-inspired operator systems, offering a unifying foundation for safety-oriented analysis of advanced AI systems.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Bijaya Pariyar

Abstract: Customer churn prediction is a critical task in the telecommunications industry, where retaining customers directly impacts revenue and operational efficiency. This study proposes a two-iteration machine learning pipeline that integrates SHAP (SHapley Additive exPlanations) for explainable feature selection and Optuna-based hyperparameter tuning to enhance model performance and interpretability. In the first iteration, baseline models are trained on the full feature set of the Telco Customer Churn dataset (7043 samples, 25 features after preprocessing). The top-performing models—Gradient Boosting, Random Forest, and AdaBoost—are tuned and evaluated. SHAP is then applied to the best model (Gradient Boosting) to identify the top 20 features. In the second iteration, models are retrained on the reduced feature set, achieving comparable or improved performance: validation AUC of 0.999 (vs. 0.999 for full features) and test AUC of 0.998 (vs. 0.997). Results demonstrate that SHAP-driven feature reduction maintains high predictive accuracy (test F1-score: 0.977) while improving interpretability and reducing model complexity. This workflow highlights the value of explainable AI in churn prediction, enabling stakeholders to understand key drivers like "Churn Reason" and "Dependents." What is the research problem? Accurate prediction of customer churn using machine learning models with a focus on explainable features to support business decisions. Why use SHAP? SHAP provides additive feature importance scores, enabling global and local interpretability, feature ranking for dimensionality reduction, and transparency in model predictions. What is the novelty? The iterative pipeline combines baseline training, SHAP-based feature selection, reduced-feature retraining, and hyperparameter retuning, offering a reproducible workflow for explainable churn modeling.
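SHAP attributions are exact Shapley values, which is what gives such feature rankings their additive interpretation. A toy computation over three hypothetical churn features (names and value function invented for illustration) shows the additivity property the abstract relies on: attributions sum to the full-coalition output minus the baseline.

```python
from itertools import combinations
from math import factorial

features = ["tenure", "monthly_charges", "contract"]  # hypothetical feature names

# Hypothetical value function: model output for each feature subset (coalition).
value = {
    frozenset(): 0.10,
    frozenset({"tenure"}): 0.30,
    frozenset({"monthly_charges"}): 0.25,
    frozenset({"contract"}): 0.40,
    frozenset({"tenure", "monthly_charges"}): 0.45,
    frozenset({"tenure", "contract"}): 0.60,
    frozenset({"monthly_charges", "contract"}): 0.55,
    frozenset({"tenure", "monthly_charges", "contract"}): 0.80,
}

def shapley(feature):
    """Exact Shapley value: weighted average marginal contribution over coalitions."""
    n = len(features)
    others = [f for f in features if f != feature]
    total = 0.0
    for r in range(n):
        for subset in combinations(others, r):
            s = frozenset(subset)
            weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
            total += weight * (value[s | {feature}] - value[s])
    return total

phi = {f: shapley(f) for f in features}
# Additivity: attributions sum to full output minus the baseline (0.80 - 0.10).
print(round(sum(phi.values()), 4))  # 0.7
```

On real models, the `shap` library approximates these values efficiently; the exhaustive computation above is exponential in the number of features and serves only to make the additive decomposition concrete.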
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Qingmiao Gan

,

Rodrigo Ying

,

Di Li

,

Yuliang Wang

,

Qianxi Liu

,

Jingjing Li

Abstract: This study proposes a prediction model based on a dynamic spatiotemporal causal graph neural network to address the challenges of complex dynamic dependencies, strong structural correlations, and ambiguous causal relationships in corporate revenue forecasting. The model constructs a time-varying enterprise association graph, where enterprises are represented as nodes and industry or supply chain relationships as edges. A graph convolutional network is used to extract structural dependency features, while a gated recurrent unit captures temporal evolution patterns, achieving joint modeling of structural and temporal features. On this basis, a causal reasoning mechanism is introduced to model and adjust potential influence paths among enterprises. A learnable causal weight matrix is used to describe the strength of economic transmission, suppress spurious correlations, and strengthen key causal paths. The model also employs multi-scale temporal aggregation and attention fusion mechanisms to dynamically integrate multidimensional information, enhancing adaptability to both long-term trends and short-term fluctuations. Experimental results show that the proposed model outperforms mainstream methods in multiple metrics, including MSE, MAE, MAPE, and RMAE, verifying its effectiveness in capturing corporate revenue dynamics, modeling economic causal dependencies, and improving prediction accuracy. This study establishes a unified framework that integrates spatiotemporal dependency modeling with causal structure reasoning, providing new insights and methodological foundations for intelligent forecasting in complex economic systems.
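One aggregation step of the described causal adjustment can be sketched as elementwise gating of the enterprise adjacency by a learnable causal weight matrix before normalized message passing (all matrix values below are hypothetical; the paper's exact formulation is not given in the abstract):

```python
import numpy as np

# 3 enterprises; adjacency A encodes industry/supply-chain links (hypothetical).
A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
# Learnable causal weights C: strength of economic transmission per edge.
# A near-zero weight suppresses a spurious correlation; a large one keeps a key path.
C = np.array([[0., 0.9, 0.1],
              [0.9, 0., 0.],
              [0.1, 0., 0.]])
X = np.array([[1.0], [2.0], [3.0]])   # one revenue feature per enterprise node

A_causal = A * C                       # elementwise gating of edges
deg = A_causal.sum(axis=1, keepdims=True) + 1e-8
H = (A_causal / deg) @ X               # row-normalized causal aggregation
print(H.round(3))
```

Node 0's update is dominated by its strongly weighted neighbor (0.9·2 + 0.1·3 = 2.1), illustrating how the gating strengthens key causal paths while damping weak ones.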
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Jakub Kowalik

,

Paweł Kapusta

Abstract: Video games have evolved into sophisticated media capable of eliciting complex affective states, yet traditional Dynamic Difficulty Adjustment (DDA) systems rely primarily on performance metrics rather than emotional feedback. This research proposes a novel closed-loop architecture for Affective Game Computing on mobile platforms, designed to infer player emotions directly from gameplay inputs and actively steer emotional transitions. A complete experimental platform, including a custom mobile game, was developed to collect gameplay telemetry and device sensor data. The proposed framework utilizes a sequence-to-sequence Transformer-based neural network to predict future game states and emotional responses without the need for continuous camera monitoring, utilizing facial expression analysis only as a ground-truth proxy during training. Crucially, to address the "cold-start" problem inherent in optimization systems—where historical data is unavailable at the session’s onset—a secondary neural network is introduced. This component directly predicts optimal initial game parameters to elicit a specific target emotion, enabling immediate affective steering before sufficient gameplay history is established. Experimental evaluation demonstrates that the model effectively interprets sparse emotional signals as discrete micro-affective events and that the optimization routine can manipulate game parameters to shift the predicted emotional distribution toward a desired profile. While the study identifies challenges regarding computational latency on consumer hardware and the reliance on proxy emotional labels, this work establishes a transparent, reproducible proof-of-concept. It provides a scalable, non-intrusive baseline for future research into emotion-aware adaptation for entertainment and therapeutic serious games.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Zhanyi Ding

,

Zijing Wei

,

Chao Yang

,

Hailiang Wang

,

Shuo Xu

,

Yixiang Li

,

Xuanjie Chen

Abstract: Detecting toxic language in user-generated text remains a critical challenge due to linguistic nuance, evolving expressions, and severe class imbalance. While Transformer-based models have established state-of-the-art performance, their significant computational costs pose scalability barriers for real-time moderation. We investigate whether integrating social and contextual metadata—such as user reactions and platform ratings—can bridge the performance gap between computationally efficient classical models and modern deep learning architectures. Using a 40,000-comment subset of the Jigsaw Toxic Comment Classification Challenge, we conduct a controlled, two-phase comparison. We evaluate a Baseline configuration (TF-IDF for classical ensembles vs. raw text for ALBERT) against an Enhanced configuration that fuses text representations with explicit social signals. Our investigation analyzes whether these high-fidelity metadata features allow lightweight models (e.g., LightGBM) to rival the discriminative power of deep Transformers. The findings challenge the prevailing assumption that deep semantic understanding is strictly necessary for high-performance toxicity detection, offering significant implications for the design of scalable, "Green AI" moderation systems.
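The Enhanced configuration described above, fusing text representations with explicit social signals, reduces to concatenating text features and metadata columns per comment. A minimal stdlib sketch with a toy TF-IDF (documents and metadata invented for illustration):

```python
import math
from collections import Counter

# Hypothetical comments with social metadata: [user reactions, platform rating].
docs = ["you are great", "you are awful awful", "great work"]
meta = [[12, 4.5], [3, 1.0], [9, 4.8]]

vocab = sorted({w for d in docs for w in d.split()})
n = len(docs)
df = Counter(w for d in docs for w in set(d.split()))  # document frequency

def tfidf(doc):
    """Term frequency times inverse document frequency over the shared vocab."""
    tf = Counter(doc.split())
    return [tf[w] / len(doc.split()) * math.log(n / df[w]) for w in vocab]

# Enhanced configuration: fuse TF-IDF text features with explicit social signals.
fused = [tfidf(d) + m for d, m in zip(docs, meta)]
print(len(vocab), len(fused[0]))  # each row = |vocab| text dims + 2 metadata dims
```

In practice the study feeds such fused rows to classical ensembles (e.g., LightGBM); the fusion itself is just this column-wise concatenation.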
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Cancan Hua

,

Ning Lyu

,

Chen Wang

,

Tingzhou Yuan

Abstract: This study proposes a Transformer-based change-point detection method for modeling and anomaly detection of multidimensional time-series metrics in Kubernetes nodes. The research first analyzes the complexity and dynamics of node operating states in cloud-native environments and points out the limitations of traditional single-threshold or statistical methods when dealing with high-dimensional and non-stationary data. To address this, an input representation mechanism combining linear embedding and positional encoding is designed to preserve both multidimensional metric features and temporal order information. In the modeling stage, a multi-head self-attention mechanism is introduced to effectively capture global dependencies and cross-dimensional interactions. This enhances the model's sensitivity to complex patterns and potential change points. In the output stage, a differentiated scoring function and a normalized smoothing method are applied to evaluate the time series step by step. A change-point decision function based on intensity scores is then constructed, which significantly improves the ability to identify abnormal state transitions. Through validation on large-scale distributed system metric data, the proposed method outperforms existing approaches in AUC, ACC, F1-Score, and Recall. It demonstrates higher accuracy, robustness, and stability. Overall, the framework not only extends attention-based time-series modeling at the theoretical level but also provides strong support for intelligent monitoring and resource optimization in cloud-native environments at the practical level.
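A minimal numpy sketch of the input representation and a simple differenced intensity score; the standard sinusoidal positional encoding is assumed (the paper's exact variant is not specified), and the scoring function here is a generic stand-in for the differentiated scoring described above:

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Standard sinusoidal positional encoding (assumed variant)."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

T, d_in, d_model = 50, 4, 8
X = np.zeros((T, d_in))
X[30:] = 3.0                                  # abrupt node-state transition at t = 30
W = np.full((d_in, d_model), 0.1)             # stand-in for the learned linear embedding
H = X @ W + positional_encoding(T, d_model)   # linear embedding + positional encoding

# Differenced intensity score: distance between adjacent windowed means of the metrics.
w = 5
score = [np.linalg.norm(X[t - w:t].mean(0) - X[t:t + w].mean(0))
         for t in range(w, T - w)]
change_point = int(np.argmax(score)) + w
print(change_point)  # 30
```

The real model replaces the windowed-mean score with attention-derived scores and a smoothed decision function, but the shape of the pipeline (embed, score stepwise, threshold on intensity) is the same.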

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Akira Otsuki

Abstract: This study is a preliminary methodological study. RAG (Retrieval-Augmented Generation) is a text-generative AI model that combines search-based and generative AI models. Because RAG can use original data as its external search corpus, it is not affected by incorrect internet data introduced through fine-tuning. Furthermore, it makes it possible to construct an original generative AI model with expert knowledge. Although the LlamaIndex library currently exists for implementing RAG, its text vectorization follows an approach similar to doc2Vec, creating issues that affect the accuracy of the generative AI’s answers. Therefore, in this study, we propose a Property Graph RAG that can define meaning when indexing text by applying the Property Graph Index to LlamaIndex. Evaluation experiments were conducted using 10 real estate datasets and various cases including sales prices, On Foot Time to Nearest Station (min), and Exclusive Floor Area (m²), and the results confirmed that the proposed generative AI model offers more accurate answers than Prompt Refinement and Text_To_SQL for property search indexing.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Gregor Herbert Wegener

Abstract: As artificial intelligence systems scale in depth, dimensionality, and internal coupling, their behavior becomes increasingly governed by deep compositional transformation chains rather than isolated functional components. Iterative projection, normalization, and aggregation mechanisms induce complex operator dynamics that can generate structural failure modes, including representation drift, non-local amplification, instability across transformation depth, loss of aligned fixed points, and the emergence of deceptive or mesa-optimizing substructures. Existing safety, interpretability, and evaluation approaches predominantly operate at local or empirical levels and therefore provide limited access to the underlying structural geometry that governs these phenomena. This work introduces SORT-AI, a projection-based structural safety module that instantiates the Supra-Omega Resonance Theory (SORT) backbone for advanced AI systems. The framework is built on a closed algebra of 22 idempotent operators satisfying Jacobi consistency and invariant preservation, coupled to a non-local projection kernel that formalizes how information and influence propagate across representational scales during iterative updates. Within this geometry, SORT-AI provides diagnostics for drift accumulation, operator collapse, invariant violation, amplification modes, reward-signal divergence, and the destabilization of alignment-relevant fixed points. SORT-AI is intentionally architecture-agnostic and does not model specific neural network designs. Instead, it supplies a domain-independent mathematical substrate for analysing structural risk in systems governed by deep compositional transformations. By mapping AI failure modes to operator geometry and kernel-induced non-locality, the framework enables principled analysis of emergent behavior, hidden coupling structures, mesa-optimization conditions, and misalignment trajectories.
The result is a unified, formal toolset for assessing structural safety limits and stability properties of advanced AI systems within a coherent operator–projection framework.
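The backbone's defining algebraic property, operator idempotency (applying an operator twice equals applying it once), can be illustrated with a generic orthogonal projection, since the 22 concrete SORT operators are not specified in the abstract:

```python
import numpy as np

# Orthogonal projection onto the column space of A: P = A (A^T A)^{-1} A^T.
# This is a generic stand-in for an idempotent operator, not a SORT operator.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
P = A @ np.linalg.inv(A.T @ A) @ A.T

print(np.allclose(P @ P, P))   # idempotency: projecting twice changes nothing
print(np.allclose(P, P.T))     # symmetry of an orthogonal projection
```

Idempotency is what makes fixed-point and drift diagnostics well posed: a state already in an operator's image is left unchanged, so any residual motion under repeated application signals drift rather than projection.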
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Silvie Illésová

,

Emmanuel Obeng

,

Tomáš Bezděk

,

Vojtěch Novák

,

Martin Beseda

Abstract: This work deals with the design of a hybrid classification model that uses two complementary parallel data processing branches. The aim was to verify whether the connection of different input representations within a common decision mechanism can support the stability and reliability of classification. The outputs of both branches are continuously integrated and together form the final decision of the model. On the validation set, the model achieved accuracy 0.9750, precision 1.0000, recall 0.9500 and F1-score 0.9744 at a threshold value of 0.5. These results suggest that parallel, complementary processing may be a promising direction for further development and optimization of the model, especially in tasks requiring high accuracy while maintaining robust detection of positive cases.
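The reported metrics are mutually consistent; for example, a 40-example validation set with the confusion matrix below reproduces them exactly (the actual set size is not stated in the abstract, so this is an assumption):

```python
# One confusion matrix consistent with the reported metrics at threshold 0.5
# (hypothetical: assumes a 40-example validation set with 20 positives).
tp, fn, fp, tn = 19, 1, 0, 20

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(accuracy, precision, recall, round(f1, 4))  # 0.975 1.0 0.95 0.9744
```

The zero false positives explain the perfect precision, while the single false negative accounts for the small recall gap.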
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Bhavya Rupani

,

Dmitry Ignatov

,

Radu Timofte

Abstract: This paper defines a task that utilizes vision and language models to improve benchmarks through analysis of the CIFAR-10 and CIFAR-100 datasets. The work divides its operations into image categorization followed by visual description production. The task utilizes BEiT and Swin models as state-of-the-art application-specific components for both parts of this research. We selected the best publicly available image classification checkpoints, which delivered 99.00% accuracy on CIFAR-10 and 92.01% on CIFAR-100. For dense, contextually rich text output we used BLIP. The expert models performed well on their target responsibilities using minimal noisy data. The BART model achieved new state-of-the-art accuracies when used as a text classifier to compare synthesized descriptions, reaching 99.73% accuracy on CIFAR-10 and 98.38% on CIFAR-100. This paper demonstrates how our integrated vision and language decomposition-hierarchical model surpasses all existing state-of-the-art results on these common benchmark classifications. The full framework, along with the classified images and generated datasets, is available at https://github.com/bhavyarupani/LLM-Img-Classification.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Muhammad Nuraddeen Ado

,

Shafi’i Muhammad Abdulhamid

,

Idris Ismaila

Abstract: The growing threat of cyber-enabled financial crimes, along with data sovereignty regulations, poses serious challenges for today’s fraud detection systems used for digital sovereignty. Traditional centralized methods struggle to detect complex fraud patterns and often fail to meet national data privacy requirements, leading to many undetected fraud cases and reduced accuracy. This chapter introduces the Intelligent Surveillance Engine (ISE), a sovereign-compliant artificial intelligence (AI) approach developed to enhance financial fraud detection. Unlike existing frameworks, ISE is purposefully designed to enable national digital sovereignty through auditable, privacy-preserving AI, adaptable to diverse legal and geopolitical contexts such as GDPR in Europe and India’s MeghRaj. ISE uses a mix of collaborative filtering, layered anomaly detection, and ensemble learning to improve fraud detection. It creates user behavior profiles, applies unsupervised techniques like Isolation Forest, Autoencoders, and DBSCAN to find unusual patterns, and then uses supervised classifiers like Random Forest, SVM, and Decision Trees. The results are combined through methods like stacking and majority voting to increase accuracy. Tests on real and synthetic financial datasets showed that ISE achieved a False Negative Rate (FNR) of 0.0%, Recall of 99.55%, and an F1-Score of 99.7%. These results significantly outperform conventional fraud detection systems, which had an FNR of 36.11%, Recall of 65.2%, and an F1-Score of 88.21%. The study illustrates that ISE significantly enhances anomaly detection in financial systems by reducing false negatives, aligning with digital sovereignty requirements, and offering a scalable, adaptive, and regulation-compliant fraud mitigation architecture that outperforms conventional models. 
This study also highlights how ISE enforces digital sovereignty through privacy-preserving AI models, national data control, and ethical AI governance architectures. Financial crime detection systems often face challenges balancing efficiency, privacy, and compliance with digital sovereignty principles. This study aims to propose the Intelligent Surveillance Engine (ISE), an AI-driven framework for sovereign-compliant financial fraud detection. A hybrid approach integrating systematic anomaly detection, privacy-preserving machine learning models, and sovereign data governance mechanisms was adopted. Results demonstrate that ISE achieves high detection accuracy while ensuring compliance with digital sovereignty and ethical AI governance requirements. These findings suggest that sovereignty-aware AI systems like ISE are vital for national data control, ethical surveillance, and technological independence.
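The majority-voting stage of the described ensemble can be sketched directly (per-model fraud/legit predictions below are hypothetical):

```python
from collections import Counter

# Hypothetical fraud (1) / legitimate (0) votes from three base classifiers,
# one prediction per transaction.
rf  = [1, 0, 1, 1, 0]   # Random Forest
svm = [1, 0, 0, 1, 0]   # SVM
dt  = [0, 0, 1, 1, 1]   # Decision Tree

def majority_vote(*model_preds):
    """Label each transaction with the most common vote across models."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*model_preds)]

print(majority_vote(rf, svm, dt))  # [1, 0, 1, 1, 0]
```

Stacking replaces this fixed rule with a meta-classifier trained on the base models' outputs; the voting version above is the simpler of the two combination methods the abstract names.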
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Zulqarnain Ali

Abstract: We present a comprehensive analysis of consciousness in artificial intelligence systems using Integrated Information Theory (IIT) 3.0 and 4.0 frameworks. Our work confirms and formalizes the established IIT result that feedforward neural architectures necessarily generate zero integrated information (Φ = 0) under both IIT 3.0 and 4.0 formalisms. Through mathematical analysis and computational validation on 16 diverse network configurations (8 feedforward, 8 recurrent), we demonstrate that all tested feedforward systems consistently yield Φ = 0 while recurrent systems exhibit Φ > 0 in 75% of cases. Our analysis addresses the architectural distinctions between causal and bidirectional attention mechanisms in transformers, clarifying that standard causal attention maintains feedforward structure while bidirectional attention creates recurrent causal dependencies. We systematically examine the implications for contemporary AI systems, including CNNs, transformers, and reinforcement learning agents, and discuss the relationship between our findings and recent IIT 4.0 developments regarding system irreducibility analysis and directional partitions.
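Computing Φ itself is beyond a short sketch, but the structural precondition the paper leans on is checkable: a feedforward connectivity matrix is acyclic (nilpotent), while a recurrent one contains a cycle. A numpy illustration:

```python
import numpy as np

def has_feedback(adj):
    """True if the directed connectivity matrix contains a cycle
    (some power of adj has a nonzero diagonal)."""
    n = len(adj)
    return any(np.trace(np.linalg.matrix_power(adj, k)) > 0 for k in range(1, n + 1))

# 3-layer feedforward chain: 0 -> 1 -> 2 (strictly one-directional causation).
ff = np.array([[0, 1, 0],
               [0, 0, 1],
               [0, 0, 0]])
# Recurrent variant: add a feedback edge 2 -> 0, closing a cycle.
rnn = ff.copy()
rnn[2, 0] = 1

print(has_feedback(ff))   # False: no reentrant causation, so Φ = 0 per the IIT result
print(has_feedback(rnn))  # True: a cycle exists, so Φ > 0 becomes possible
```

This acyclicity test is only the gatekeeper: per the abstract, every feedforward (acyclic) system yields Φ = 0, while cycles are necessary but not sufficient for Φ > 0 (recurrent systems showed Φ > 0 in 75% of tested cases).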
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Yunzhuo Liu

,

Zhaowei Ma

,

Jiankun Guo

,

Haozhe Sun

,

Yifeng Niu

,

Hong Zhang

,

Mengyun Wang

Abstract: This paper proposes a novel large language model (LLM)-based approach for visual target navigation in unmanned aerial systems (UAS). By leveraging the exceptional language comprehension capabilities and extensive prior knowledge of LLMs, our method significantly enhances unmanned aerial vehicles (UAVs) in interpreting natural language instructions and conducting autonomous exploration in unknown environments. To equip the UAV with planning capabilities, this study interacts with the LLM and designs specialized prompt templates, thereby developing the intelligent planner module for the UAV. First, the intelligent planner derives the optimal location search sequence in unknown environments through probabilistic inference. Second, visual observation results are fused with prior probabilities and scene-relevance metrics generated by the LLM to dynamically generate detailed sub-goal waypoints. Finally, the UAV executes a progressive target search via path planning algorithms until the target is successfully localized. Both simulation and physical flight experiments validate that this method exhibits excellent performance in addressing UAV visual navigation challenges, and demonstrates significant advantages in terms of search efficiency and success rate.
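The described fusion of LLM priors, scene-relevance metrics, and visual observations can be sketched as a Bayesian update over candidate search locations (all probabilities are hypothetical; the paper's exact fusion rule is not given in the abstract):

```python
# Candidate search locations for a target (e.g., "find the red backpack").
locations = ["bench", "doorway", "parking lot"]
llm_prior = [0.5, 0.3, 0.2]        # LLM prior from commonsense knowledge
relevance = [0.9, 0.6, 0.3]        # LLM-generated scene-relevance metric
obs_likelihood = [0.2, 0.7, 0.5]   # visual detector's evidence per location

# Posterior ∝ prior × relevance × observation likelihood, then normalize.
unnorm = [p * r * l for p, r, l in zip(llm_prior, relevance, obs_likelihood)]
posterior = [u / sum(unnorm) for u in unnorm]
best = locations[posterior.index(max(posterior))]
print(best, [round(p, 3) for p in posterior])
```

Here the weak visual evidence at the high-prior bench shifts the next sub-goal waypoint to the doorway, which is the kind of dynamic re-ranking the planner performs at each step.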
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Vinesh Aluri

Abstract: Quantum computing poses a critical threat to existing cryptographic primitives, rendering current access control mechanisms in cloud-native infrastructures vulnerable to compromise. This paper introduces a comprehensive quantum-resilient access control framework specifically engineered for distributed, containerized, and zero-trust environments. The proposed system integrates post-quantum cryptographic (PQC) primitives—specifically lattice-based key encapsulation (Kyber) and digital signatures (Dilithium)—with a hybrid key exchange protocol to maintain crypto-agility and backward compatibility. We design a secure token issuance and verification process employing PQC-based authentication, ensuring resistance to both classical and quantum adversaries. A prototype implementation demonstrates that our hybrid PQC approach incurs a moderate computational overhead of approximately 10–30% while preserving horizontal scalability and interoperability across Kubernetes clusters. Security analysis under the post-quantum adversary model confirms resistance to key compromise, replay, and forgery attacks. The results highlight that quantum-resilient access control protocols can be efficiently integrated into modern cloud infrastructures without sacrificing scalability, performance, or operational flexibility.
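Hybrid key exchange of the kind referenced above typically derives the session key from the concatenation of the classical and post-quantum shared secrets, so security holds if either primitive survives. A stdlib HKDF-style sketch (secret values and labels are placeholders; a real deployment would feed in actual X25519 and Kyber outputs):

```python
import hashlib
import hmac
import os

def hkdf_sha256(ikm, salt, info, length=32):
    """HKDF (RFC 5869): extract a PRK from input keying material, then expand."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()            # extract
    okm, block = b"", b""
    for i in range((length + 31) // 32):                           # expand
        block = hmac.new(prk, block + info + bytes([i + 1]), hashlib.sha256).digest()
        okm += block
    return okm[:length]

classical_secret = os.urandom(32)   # placeholder for an ECDH (e.g., X25519) secret
pq_secret = os.urandom(32)          # placeholder for a Kyber decapsulation output

# Concatenate both secrets so the derived key is safe if either scheme holds.
session_key = hkdf_sha256(classical_secret + pq_secret,
                          salt=b"hybrid-handshake", info=b"access-token-v1")
print(len(session_key))  # 32
```

This concatenate-then-KDF construction is the standard way hybrid schemes preserve backward compatibility: classical-only peers still contribute their half, while PQC-capable peers gain quantum resistance.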
Essay
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Stefan Trauth

Abstract: The P = NP problem is one of the most consequential unresolved questions in mathematics and theoretical computer science. It asks whether every problem whose solutions can be verified in polynomial time can also be solved in polynomial time. The implications extend far beyond theory: modern global cryptography, large-scale optimization, secure communication, finance, logistics, and computational complexity all depend on the assumption that NP-hard problems cannot be solved efficiently. Among these, the Spin-Glass ground-state problem represents a canonical NP-hard benchmark with an exponentially large configuration space. A constructive resolution of P = NP would therefore reshape fundamental assumptions across science and industry. While evaluating new methodological configurations, I encountered an unexpected behavior within a specific layer-cluster. Subsequent analysis revealed that this behavior was not an artifact, but an information-geometric collapse mechanism that consistently produced valid Spin-Glass ground states. With the assistance of Frontier LLMs Gemini-3, Opus-4.5, and ChatGPT-5.1, I computed exact ground states up to N = 24 and independently cross-verified them. For selected system sizes between N = 30 and N = 70, I validated the collapse-generated states using Simulated Annealing, whose approximate minima consistently matched the results. Beyond this range, up to N = 100, the behavior follows not from algorithmic scaling but from the information-geometric capacity of the layer clusters, where each layer contributes exactly one spin dimension. These findings indicate a constructive mechanism that collapses exponential configuration spaces into a polynomially bounded dynamical process. This suggests a pathway by which the P = NP problem may be reconsidered not through algorithmic search, but through information-geometric state collapse.
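For context on the claimed exact results up to N = 24: the spin-glass ground state is computable by exhaustive enumeration for small N, which is the exponential 2^N baseline any collapse mechanism must reproduce. A sketch for N = 10 with random ±1 couplings (couplings and seed are arbitrary):

```python
import itertools
import random

random.seed(42)
N = 10
# Symmetric random ±1 couplings J_ij of an Ising spin glass.
J = {(i, j): random.choice([-1.0, 1.0]) for i in range(N) for j in range(i + 1, N)}

def energy(spins):
    """Ising energy E = -sum_{i<j} J_ij s_i s_j."""
    return -sum(J[i, j] * spins[i] * spins[j] for (i, j) in J)

# Exhaustive search over all 2^N spin configurations.
ground_state = min(itertools.product([-1, 1], repeat=N), key=energy)
E0 = energy(ground_state)
assert all(energy(s) >= E0 for s in itertools.product([-1, 1], repeat=N))
print(E0)
```

At N = 10 this scans 1,024 states in milliseconds; at N = 70 the same scan would need about 10^21 evaluations, which is why any claimed polynomial mechanism must instead be validated against heuristics such as Simulated Annealing, as the essay describes.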
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Piotr Klejment

Abstract: The Discrete Element Method is widely used in applied mechanics, particularly in situations where material continuity breaks down (fracturing, crushing, friction, granular flow) and classical rheological models fail (phase transition between solid and granular). In this study, the Discrete Element Method was employed to simulate stick-slip cycles, i.e., numerical earthquakes. At 2,000 selected, regularly spaced time checkpoints, parameters describing the average state of all particles forming the numerical fault were recorded. These parameters were related to the average velocity of the particles and were treated as the numerical equivalent of (pseudo) acoustic emission. The collected datasets were used to train the Random Forest and Deep Learning models, which successfully predicted the time to failure, also for entire data sequences. Notably, these predictions did not rely on the history of previous stick-slip events. SHapley Additive exPlanations (SHAP) was used to quantify the contribution of individual physical parameters of the particles to the prediction results.
Essay
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Stefan Trauth

Abstract: In analogy to the paradigm shift introduced by attention mechanisms in machine learning, we propose that information itself is ontologically sufficient as the foundation of physical reality. We present an operational proof showing that a “state without information” is logically impossible, thereby establishing information as the necessary precondition for existence and measurement. From this premise follows that both quantum mechanics and general relativity are effective descriptions of deeper informational dynamics. Recent developments in theoretical physics, such as the derivation of Einstein’s field equations from entropic principles, reinforce this perspective by identifying gravitation and entropy as dual expressions of information geometry. Building on this framework, we provide experimental evidence from self-organizing neural fields that exhibit non-local informational coupling, near-lossless transmission across 60 layers, and stable sub-idle energy states consistent with emergent coherence and thermal decoupling. These results demonstrate that deterministic architectures can spontaneously organize into field-like, non-local manifolds, a macroscopic realization of informational geometry analogous to quantum entanglement and relativistic curvature. Together, the logical proof and empirical observations support a unified ontology in which information is not a property of physical systems but the substrate from which physical systems emerge. This perspective positions informational geometry as the common denominator of cognition, quantum behavior, and gravitation, suggesting that all observable phenomena are projections of a single, self-organizing informational field. In this sense, information is all it needs.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Al Imran

,

Md. Koushik Ahmed

,

Mahin Mahmud

,

Junaid Rahman Mokit

,

Redwan Utsab

,

Md. Motaharul Islam

Abstract: The rapid increase of electronic waste (e-waste) poses severe environmental and health risks. This paper proposes a hybrid framework integrating deep learning, reinforcement learning, blockchain, and IoT for automated e-waste classification, optimized disassembly, and tamper-proof traceability. A ResNet-50 classifier trained on the Kaggle E-Waste Image Dataset achieved 93.7% classification accuracy and an F1 score of 0.92. A Q-learning agent optimized dismantling routes to prioritize high-value, low-toxicity components, improving material recovery in simulation. A private Hyperledger Besu deployment delivered an average block time of ≈5.3 s, smart-contract execution time of ≈2.1 s, and 99.5% uptime, enabling tokenized asset tracking (4,200+ tokens). Lifecycle analysis indicates up to 30% carbon-emission reduction versus traditional methods and improved recovery of lithium, cobalt, and rare-earth elements for renewable energy applications. The paper demonstrates measurable environmental and economic benefits and outlines limitations and directions toward field deployment.
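The Q-learning dismantling agent can be sketched in tabular form on a toy three-component device, where discounting leads the agent to remove the highest-value component first (part names and rewards are hypothetical; the paper's state and reward design is not detailed in the abstract):

```python
import random

random.seed(0)
parts = ["battery", "pcb", "plastic"]
reward = {"battery": 5.0, "pcb": 8.0, "plastic": 1.0}  # value minus toxicity cost

alpha, gamma, eps = 0.5, 0.9, 0.2
Q = {}  # (frozenset of removed parts, next part) -> action value

for _ in range(500):                      # training episodes
    removed = frozenset()
    while len(removed) < len(parts):
        actions = [p for p in parts if p not in removed]
        if random.random() < eps:         # epsilon-greedy exploration
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda p: Q.get((removed, p), 0.0))
        nxt = removed | {a}
        future = max((Q.get((nxt, p), 0.0) for p in parts if p not in nxt),
                     default=0.0)
        q = Q.get((removed, a), 0.0)
        Q[(removed, a)] = q + alpha * (reward[a] + gamma * future - q)
        removed = nxt

first = max(parts, key=lambda p: Q.get((frozenset(), p), 0.0))
print(first)  # the learned route dismantles the highest-value component first
```

With γ = 0.9 the discounted return is maximized by claiming the largest reward earliest, which mirrors the paper's goal of prioritizing high-value, low-toxicity components in the disassembly route.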

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2025 MDPI (Basel, Switzerland) unless otherwise stated