Differential Entangled Topology: A Mathematical Model for Simulating the Dynamic Nature of Consciousness
Richard Murdoch Montgomery
This article presents the Differential Entangled Topology (DET) model, a sophisticated mathematical framework designed to simulate the complex, dynamic, and evolving nature of consciousness. By conceptualizing consciousness as an intricate network of entangled points distributed on the surface of a unit sphere, the model captures the fluid and unpredictable interactions that characterize conscious experience. Each point represents a neural region or conscious element, with its position undergoing differential, random shifts that simulate the process of entanglement. These shifts, governed by a specified entanglement factor, reflect the ongoing reconfiguration of consciousness over time. The system's progression is quantitatively analyzed through topological entropy, providing a measure of the system's complexity and the degree of unpredictability inherent in the entanglement process. The model's evolution is visualized in a 3D space, where bright points are projected onto the sphere's surface, and the background is shaded in a soft rosaceous hue, symbolizing the fundus of the system. This framework offers a novel mathematical perspective on the interconnected, non-linear nature of consciousness, providing insight into its continuous transformation and the underlying dynamics that drive cognitive states.
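Illustrative sketch (not the authors' code): a minimal Python simulation in the spirit of the abstract, assuming only NumPy; the point count, the entanglement_factor value, and the histogram-based entropy surrogate for topological entropy are all hypothetical choices.

import numpy as np

def unit_sphere_points(n, rng):
    # Sample n points uniformly on the unit sphere by normalizing Gaussian vectors.
    p = rng.normal(size=(n, 3))
    return p / np.linalg.norm(p, axis=1, keepdims=True)

def entangle_step(points, entanglement_factor, rng):
    # Random differential shift scaled by the entanglement factor, then re-projection onto the sphere.
    shifted = points + entanglement_factor * rng.normal(size=points.shape)
    return shifted / np.linalg.norm(shifted, axis=1, keepdims=True)

def histogram_entropy(points, bins=8):
    # Shannon entropy of a 3-D spatial histogram, used here as a crude surrogate for topological entropy.
    hist, _ = np.histogramdd(points, bins=bins, range=[(-1, 1)] * 3)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
pts = unit_sphere_points(200, rng)
for t in range(100):
    pts = entangle_step(pts, entanglement_factor=0.05, rng=rng)
print("entropy after 100 steps:", round(histogram_entropy(pts), 3))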
Posted: 13 February 2025
Dynamics of Membrane Tension Propagation in Eukaryotic Cells
Shahid Mubasshar
Posted: 13 February 2025
Employing Blockchain, NFTs, and Digital Certificates for Unparalleled Authenticity and Data Protection of Source Code
Leonardo Juan Ramirez Lopez,
Genesis Gabriela Morillo Ledezma
Posted: 13 February 2025
A Federated Machine Learning Approach to Predicting Traffic Flow for Virtual Traffic Lights
Samuel Chukwuemeka Egere
Posted: 13 February 2025
A Proof of the Riemann Hypothesis Based on a New Expression of the Completed Zeta Function
Weicun Zhang
The Riemann Hypothesis (RH) is proved based on a new expression of the completed zeta function ξ(s), which was obtained through pairing the conjugate zeros ρ_i and ρ̄_i in the Hadamard product, with consideration of the multiplicity of zeros. That is, ξ(s) = ξ(0) ∏_i [(1 − s/ρ_i)(1 − s/ρ̄_i)]^{m_i}, where ρ_i = α_i + jβ_i and ρ̄_i = α_i − jβ_i, with α_i and β_i as real numbers; m_i is the multiplicity of ρ_i, and ρ̄_i has the same multiplicity. Then, according to the functional equation ξ(s) = ξ(1 − s), we have ξ(0) ∏_i [(1 − s/ρ_i)(1 − s/ρ̄_i)]^{m_i} = ξ(0) ∏_i [(1 − (1 − s)/ρ_i)(1 − (1 − s)/ρ̄_i)]^{m_i}. Owing to the divisibility contained in the above equation and the uniqueness of the factorization, each polynomial factor can only divide (and thereby equal) the corresponding factor on the opposite side of the equation. Thus, we obtain (1 − s/ρ_i)(1 − s/ρ̄_i) = (1 − (1 − s)/ρ_i)(1 − (1 − s)/ρ̄_i) for every i. This is further equivalent to α_i = 1/2 for all i. Thus, we conclude that the Riemann Hypothesis is true.
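Illustrative note (not part of the abstract): assuming the notation ρ_i = α_i + jβ_i and the factorwise equality asserted above, the final equivalence can be spelled out in LaTeX as follows.

% Paired Hadamard-product factor, with \rho_i = \alpha_i + j\beta_i (reconstructed notation):
\[
\Bigl(1-\tfrac{s}{\rho_i}\Bigr)\Bigl(1-\tfrac{s}{\bar\rho_i}\Bigr)
  = \frac{s^{2}-2\alpha_i s+\alpha_i^{2}+\beta_i^{2}}{\alpha_i^{2}+\beta_i^{2}} .
\]
% Equating this factor with its counterpart at 1-s, as asserted in the abstract, gives
\[
s^{2}-2\alpha_i s \;=\; (1-s)^{2}-2\alpha_i(1-s)
\;\Longleftrightarrow\;
(1-2\alpha_i)(1-2s) \;=\; 0 ,
\]
% which, holding identically in s, forces \alpha_i = \tfrac12, i.e., the zero lies on the critical line.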
Posted: 13 February 2025
Text Mining Approaches for Exploring Research Trends in the Security Applications of Generative Artificial Intelligence
Jinsick Kim,
Byeongsoo Koo,
Moonju Nam,
Kukjin Jang,
Jooyeoun Lee,
Myoungsug Chung,
Yungseo Song
This study examines the security implications of generative artificial intelligence (GAI), focusing on models such as ChatGPT. As GAI technologies are increasingly integrated into industries like healthcare, education, and media, concerns are growing regarding security vulnerabilities, ethical challenges, and the potential for misuse. To address these concerns, this research analyzes 1,047 peer-reviewed academic articles from the SCOPUS database using scientometric methods, including term frequency-inverse document frequency (TF-IDF) analysis, keyword centrality analysis, and latent Dirichlet allocation (LDA) topic modeling. The results highlight significant contributions from countries such as the United States, China, and India, with leading institutions like the Chinese Academy of Sciences and the National University of Singapore driving research on GAI security. In the keyword centrality analysis, "ChatGPT" emerged as a highly central term, reflecting its prominence in the research discourse. However, despite its frequent mention, "ChatGPT" showed lower proximity centrality than terms like "model" and "AI." This suggests that while ChatGPT is broadly associated with other key themes, it has a less direct connection to specific research subfields. Topic modeling identified six major themes, including AI and security in education, language models, data processing, and risk management. The analysis emphasizes the need for robust security frameworks to address technical vulnerabilities, ensure ethical responsibility, and manage risks in the safe deployment of AI systems. These frameworks must incorporate not only technical solutions but also ethical accountability, regulatory compliance, and continuous risk management. This study underscores the importance of interdisciplinary research that integrates technical, legal, and ethical perspectives to ensure the responsible and secure deployment of GAI technologies.
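Illustrative sketch (not the study's pipeline): a minimal scikit-learn example of TF-IDF weighting and LDA topic modeling on a two-document placeholder corpus standing in for the 1,047 SCOPUS abstracts; the vectorizer settings and topic count are hypothetical.

from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder corpus; in practice the 1,047 SCOPUS abstracts would be loaded here.
docs = ["generative AI security risks in education",
        "large language model data processing and risk management"]

# TF-IDF term weighting.
tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
print("TF-IDF matrix shape:", tfidf.shape)

# LDA topic modeling (LDA works on raw term counts rather than TF-IDF weights).
counts = CountVectorizer(stop_words="english")
X = counts.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = counts.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-3:][::-1]]
    print(f"topic {k}:", top)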
Posted: 13 February 2025
Explainable Supervised Learning Models for Aviation Predictions in Australia
Aziida Nanyonga,
Hassan Wasswa,
Keith Joiner,
Ugur Turhan,
Graham Wild
Despite its recent success in various industries, artificial intelligence (AI) has not received full acceptance and hence has not been fully deployed in the aviation industry. This is partly attributed to, among other factors, the fact that AI models work as black boxes, with no clear explanation of how outputs are generated from the input samples. Aviation is an extremely sensitive application field, and this opaqueness makes it hard for human users in the aviation industry to trust such models. The work in this study examines the classification performance of various AI algorithms. It then applies the SHAP (SHapley Additive exPlanations) framework to generate and visualize global model explanations, in order to understand which features each model learns for its decision boundary and how much each feature contributes to the final model output. We also deployed a variational autoencoder to handle the imbalanced class distribution of the ATSB (Australian Transport Safety Bureau) dataset. We recorded competitive classification performance in accuracy, precision, recall, and F1-score for a three-class supervised learning classification problem.
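Illustrative sketch (not the study's code; the ATSB data is replaced by a synthetic stand-in): one way to obtain global SHAP feature importances for a three-class classifier with the shap package. The model choice and parameters are hypothetical, and the shap API differs somewhat across versions.

import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic three-class dataset standing in for the ATSB occurrence data (hypothetical).
X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Model-agnostic SHAP explanation of the predicted class probabilities.
explainer = shap.Explainer(model.predict_proba, X)
sv = explainer(X[:100])                      # Explanation with values shaped (samples, features, classes)
global_importance = np.abs(sv.values).mean(axis=(0, 2))
print("mean |SHAP| per feature:", np.round(global_importance, 3))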
Posted: 13 February 2025
Penalty Strategies in Semiparametric Regression Models
Ayuba Jack Alhassan,
S. Ejaz Ahmed,
Dursun Aydin,
Ersin Yilmaz
Posted: 13 February 2025
A p-Value Paradox in Proportion Tests and Its Resolution
Hening Huang
This technical note investigates a p-value paradox that emerges in the conventional proportion test. The paradox is defined as the phenomenon where "decisions made on the same effect size from data of different sample sizes may be inconsistent." It is illustrated with two examples from clinical trial research. We argue that this p-value paradox stems from the use (or misuse) of p-values to compare two proportions and make decisions. We propose replacing the conventional proportion test and its p-value with estimation statistics that include both the observed effect size and a reliability measure known as the signal content index (SCI).
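Illustrative sketch (not from the note): a plain two-proportion z-test showing the phenomenon described, i.e. the same observed effect size leading to different decisions at different sample sizes; the proportions and sample sizes are hypothetical, and the proposed signal content index (SCI) is not implemented here.

from math import sqrt
from scipy.stats import norm

def two_proportion_p_value(p1, p2, n1, n2):
    # Conventional two-sample z-test for proportions with a pooled standard error.
    x1, x2 = p1 * n1, p2 * n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return 2 * norm.sf(abs((p1 - p2) / se))

# Same observed effect size (0.60 vs 0.50) at two different sample sizes:
print(two_proportion_p_value(0.60, 0.50, 100, 100))     # ~0.16, not significant at 0.05
print(two_proportion_p_value(0.60, 0.50, 1000, 1000))   # ~7e-6, highly significant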
Posted: 13 February 2025
Research on Ginger Price Prediction Model Based on Deep Learning
Fengyu Li,
Xianyong Meng,
Ke Zhu,
Jun Yan,
Lining Liu,
Pingzeng Liu
Posted: 13 February 2025
Optimizing Planning Functions in Nigeria’s Telecom Companies with Information Systems and Technology
Barnty William,
Clement Adebayo
Posted: 13 February 2025
Fixed Point Results and the Ekeland Variational Principle in Vector B-Metric Spaces
Radu Precup,
Andrei Stan
In this paper, we extend the concept of b-metric spaces to the vectorial case, where the distance is vector-valued, and the constant in the triangle inequality axiom is replaced by a matrix. For such spaces, we establish results analogous to those in the b-metric setting: fixed-point theorems, stability results, and a variant of Ekeland’s variational principle. As a consequence, we also derive a variant of Caristi’s fixed-point theorem.
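Illustrative note (not quoted from the paper): one natural way to write, in LaTeX, the generalized triangle inequality the abstract describes; the precise conditions imposed on the matrix are assumptions here.

% A vector-valued distance d : X \times X \to \mathbb{R}^m_{\ge 0}, with the scalar b-metric
% constant s \ge 1 replaced by a matrix C (conditions on C assumed, not quoted from the paper):
\[
d(x,z) \;\preceq\; C\bigl(d(x,y)+d(y,z)\bigr) \qquad \text{for all } x,y,z\in X,
\]
% where \preceq denotes the componentwise order on \mathbb{R}^m.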
Posted: 13 February 2025
Australian Supermarket Object Set (ASOS): A Benchmark Dataset of Physical Objects and 3D Models for Robotics and Computer Vision
Lachlan Chumbley,
Benjamin Meyer,
Akansel Cosgun
Posted: 13 February 2025
Design of the New Foot Psychomotor Vigilance Test (PVT) for Screening Driving Ability
Yutaka Yoshida,
Emi Yuda,
Kiyoko Yokoyama
Posted: 13 February 2025
A Hybrid MOO-MCDM-PSOM Approach for Mapping Urban Carbon Trade Policies
Muhammad Faisal,
Ery Muchyar Hasiri,
Darniati Darniati,
Titik Khawa Abd Rahman,
Billy Eden William Asrul,
Hamdan Gani,
Respaty Namruddin,
Najirah Umar,
Nurul Aini,
Sri Wahyuni
Posted: 12 February 2025
An Introduction to the Semantic Information G Theory and Applications
Chenguang Lu
Posted: 12 February 2025
A Topological Approach to Protein-Protein Interaction Networks: Persistent Homology and Algebraic Connectivity
José Alberto Rodrigues
Posted: 12 February 2025
Detecting Financial Fraud in Listed Companies via a CNN-Transformer Framework
Qian Yu,
Yuchen Yin,
Shicheng Zhou,
Huailing Mu,
Zhuohuan Hu
Posted: 12 February 2025
Smart Grid IoT Framework Integrating Peer-to-Peer Federated Learning with Homomorphic Encryption
Filip Jerkovic,
Nurul I. Sarkar,
Jahan Ali
Homomorphic encryption (HE) introduces new dimensions of security and privacy within federated learning (FL) and Internet of Things (IoT) frameworks, allowing user privacy to be preserved when handling data for FL in Smart Grid (SG) technologies. In this paper, we propose a novel SG IoT framework for predicting energy consumption while preserving user privacy in a smart grid system. The proposed framework integrates FL, edge computing, and HE principles to conduct machine learning workloads securely end-to-end. In the proposed framework, edge devices are connected to each other using P2P networking, and the data exchanged between peers is encrypted using CKKS fully homomorphic encryption. The results obtained show that the system can predict energy consumption as well as preserve user privacy in SG scenarios. The findings provide insight into the SG IoT framework and can help network researchers and engineers contribute further towards developing next-generation SG IoT systems.
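Illustrative sketch (not the proposed framework's code): CKKS-encrypted aggregation of two peers' model weights using the TenSEAL library, which the abstract does not name; the encryption parameters and weight values are hypothetical.

import tenseal as ts

# CKKS context; these parameters are illustrative, and real deployments need careful parameter selection.
ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()

# Two peers encrypt their locally trained model weights before exchanging them over P2P links.
peer_a = ts.ckks_vector(ctx, [0.12, -0.40, 0.98])
peer_b = ts.ckks_vector(ctx, [0.10, -0.35, 1.02])

# Federated averaging performed entirely on ciphertexts.
encrypted_avg = (peer_a + peer_b) * 0.5
print([round(w, 3) for w in encrypted_avg.decrypt()])   # approximately [0.11, -0.375, 1.0]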
Posted: 12 February 2025
Neural Information Organizing and Processing Principles
Iosif Iulian Petrila
Posted: 12 February 2025