REVIEW | doi:10.20944/preprints202303.0246.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: artificial intelligence; artificial intelligence and its application on human health; AI and its application on human health; AI and human health
Online: 14 March 2023 (04:18:22 CET)
Background: Artificial intelligence can help improve the quality of healthcare by analyzing vast amounts of data and providing more effective and personalized treatment plans. Researchers are working on developing AI-powered solutions that can help improve the outcomes of patients. Objective: To explore the potential of AI in improving healthcare outcomes and patient experience. Results: The study suggests that AI can improve healthcare efficiency and patient outcomes but cannot fully replace human healthcare professionals. AI can assist healthcare professionals in their work, leading to better resource utilization and improved patient care. However, there is still a need for human healthcare professionals to oversee AI systems and provide empathy and personalized care to patients. Conclusion: While there is immense potential for AI in healthcare, it is not yet feasible to replace human healthcare workers. Instead, it should be viewed as a tool that can help improve the efficiency and effectiveness of human healthcare.
ARTICLE | doi:10.20944/preprints202305.0195.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: llm; impact; society; ai; large-language-model; transformer; natural language processing; nlp
Online: 4 May 2023 (05:15:50 CEST)
Large Language Models (LLMs) have revolutionized the field of Artificial Intelligence with their ability to understand and generate natural language discourse. This has led to the use of LLMs in various creative fields such as music, art, and storytelling, which has sparked a debate on the impact of LLMs on human creativity. This article explores the cyclic effect of LLMs on human creativity, where LLMs can generate an ever-evolving cycle of creative output from both humans and machines. The article also highlights the importance of responsible and ethical use of LLMs, including privacy and copyright issues with the training data used by LLMs. The article emphasizes the need for transparency, data security measures, and AI governance policies to protect user data and ensure the safe use of LLMs. It also calls for the development of educational materials and resources to increase public understanding of LLMs and their ethical implications. The article concludes by highlighting the potential of LLMs as powerful tools for communication and education, but also emphasizes the need for ethical considerations to ensure the responsible and beneficial use of LLMs in society.
ARTICLE | doi:10.20944/preprints202304.0429.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Artificial intelligence; Super Artificial intelligence; ChatGPT; ChatGPT-4; AI harm; Negative impacts of AI; AI use in Weapons; AI-driven unemployment; AI use in research
Online: 17 April 2023 (09:06:16 CEST)
AI is an extraordinarily useful technology, but it is becoming increasingly powerful, with rapidly growing capabilities to disrupt and harm human society. We call on international and national organizations, and on individuals, to join forces in banning the development of superintelligent AI and in introducing regulations to prevent and mitigate AI-caused harm.
ARTICLE | doi:10.20944/preprints202202.0233.v1
Subject: Computer Science And Mathematics, Computer Science Keywords: AI; disease surveillance; pandemics; global public health; ethics
Online: 18 February 2022 (10:36:04 CET)
Infectious diseases, as COVID-19 is proving, pose a global health threat in an interconnected world. In the last 20 years, infectious diseases such as SARS, MERS, H1N1, Ebola, Zika and now COVID-19 have tested global health defences, flourishing aggressively amid the rise of global travel, urbanization, climate change and ecological degradation. In parallel, this extraordinary episode in global human health highlights the potential for artificial intelligence (AI)-enabled disease surveillance to collect and analyse vast amounts of unstructured and real-time data to inform epidemiological and public health emergency responses. The uses of AI in these dynamic environments are increasingly complex, challenging the potential for autonomous human decisions. In this context, our study of qualitative perspectives considers a responsible AI framework and explores its potential application to disease surveillance in a global health context. Thus far, there is a gap in the literature in considering these multiple and interconnected levels of disease surveillance and emergency health management through the lens of a responsible AI framework.
ARTICLE | doi:10.20944/preprints202303.0375.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: ChatGPT; machine learning; NLP; problem solving; AI; artificial intelligence
Online: 21 March 2023 (09:28:14 CET)
Background: The field of Artificial Intelligence (AI) has seen a major shift in recent years due to the development of new Machine Learning (ML) models such as the Generative Pre-trained Transformer (GPT), which, together with its chat-based variants, is progressively becoming part of our everyday lives and has achieved previously unheard-of accuracy in most computerized language processing tasks. Aim: The aim of this study was to investigate the problem-solving abilities of ChatGPT using two sets of verbal insight problems with a known performance level established by a sample of human participants. Material and Methods: A total of 30 problems, labelled as "practice problems" and "transfer problems" as listed by Ansburg and Dominowski (2000), were administered to ChatGPT. Each answer received a score of "0" if incorrect and "1" if correct, as per the solutions specified in Ansburg and Dominowski's (2000) study; the highest attainable score was 15 out of 15 for each of the practice and transfer sets. To compare ChatGPT's performance to that of human subjects, the solution rate for each problem (based on a sample of 20 subjects) reported by Ansburg and Dominowski (2000) was used. Results: The study showed that ChatGPT is capable of out-of-the-box thinking and demonstrated potential in solving verbal insight problems. ChatGPT's overall performance equalled the most probable outcome for the human sample on the practice problems, on the transfer problems, and on their combination. Additionally, ChatGPT's answer combinations were among the 5% most probable outcomes for the human sample both on the practice problems and on the pooled problem set. These findings demonstrate that ChatGPT's performance on both sets of problems was in line with the mean success rate of human subjects, indicating that it performed reasonably well. Conclusions: The transformer architecture and self-attention used in ChatGPT may have helped to prioritize inputs while predicting, contributing to its potential in verbal insight problem-solving. ChatGPT has shown potential in solving insight problems, highlighting the value of incorporating AI in psychological research. However, open challenges remain, and further research is needed to fully understand the capabilities and limitations of AI in insight problem-solving.
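The comparison described here, ChatGPT's 0/1 scores set against per-problem human solution rates, can be reproduced with a small Monte Carlo sketch; the rates and the ChatGPT total below are placeholders (the real values come from Ansburg and Dominowski, 2000):

```python
import numpy as np

# Hypothetical per-problem human solution rates (one entry per problem);
# the actual rates are reported by Ansburg & Dominowski (2000).
human_rates = np.array([0.55, 0.40, 0.70, 0.35, 0.60, 0.45, 0.50,
                        0.65, 0.30, 0.75, 0.40, 0.55, 0.50, 0.60, 0.45])

chatgpt_score = 8  # placeholder: number of problems ChatGPT solved

# Simulate total scores for many virtual human participants: each problem
# is solved independently with its empirical solution rate.
rng = np.random.default_rng(0)
sim = (rng.random((100_000, human_rates.size)) < human_rates).sum(axis=1)

# Where does ChatGPT's total fall relative to the human distribution?
mode = np.bincount(sim).argmax()
pct = (sim <= chatgpt_score).mean()
print(f"most probable human score: {mode}, ChatGPT percentile: {pct:.2%}")
```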
ARTICLE | doi:10.20944/preprints202301.0521.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: ChatGPT; GPT-3; OpenAI; chatbots; digital health; artificial intelligence; automation; technological advancement; human-AI interaction; collaboration; open science
Online: 28 January 2023 (07:56:35 CET)
Artificial intelligence (AI) has the potential to revolutionize research by automating data analysis, generating new insights, and supporting the discovery of new knowledge. In this feasibility study, we gathered the top 10 contribution areas of AI towards public health. We utilized the "text-davinci-003" model of GPT-3 with the OpenAI playground default parameters. The model was trained on the largest training dataset of any AI to date, limited to a cut-off date in 2021. This study aimed to test the ability of GPT-3 to advance public health and to explore the feasibility of using AI as a scientific co-author. The AI was asked for input, including scientific quotations, and the human authors reviewed the responses for plausibility. We found that GPT-3 was able to assemble, summarize, and generate plausible text blocks relevant to public health concerns, elucidating valuable areas of application for itself. However, most quotations were invented by GPT-3 and thus invalid. According to today's rules, we conclude that AI can contribute to public health research as a team member. Nevertheless, good scientific practice also needs to be followed for AI contributions, and a broad scientific discourse on AI contributions is needed. Policies for good scientific practice should be updated in a timely manner following this discourse.
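For reference, queries like those described could be reproduced with the legacy OpenAI completions endpoint; a minimal sketch in which the prompt, API key, and parameter values are placeholders (the study states only that playground defaults were used):

```python
import openai  # legacy SDK (openai<1.0); text-davinci-003 has since been retired

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="List the top contribution areas of AI towards public health, "
           "with scientific quotations.",  # illustrative prompt
    temperature=0.7,   # assumed playground defaults at the time
    max_tokens=256,
    top_p=1.0,
)
print(response["choices"][0]["text"])
```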
ARTICLE | doi:10.20944/preprints202301.0406.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: AI; Sustainability; Energy efficiency; Deep learning; Neural networks
Online: 23 January 2023 (09:38:57 CET)
As AI models become more and more common in business and even in our daily lives, it is important to understand their carbon impact. Recent papers have shown that this impact can be substantial: the training of a single high-end model can result in emissions of more than 500 t of CO2eq. In this paper we discuss the factors that influence the carbon footprint of AI models, explore the impact of different decisions, and show how the footprint can be reduced. We also examine the footprints of different models to offer guidance on how urgently different organizations need to act.
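The factors discussed combine into the commonly used first-order estimate: emissions ≈ hardware energy (kWh) × data-centre overhead (PUE) × grid carbon intensity. A minimal sketch with illustrative numbers only:

```python
def training_co2e_kg(gpu_count: int, gpu_power_kw: float, hours: float,
                     pue: float = 1.5, grid_kg_co2e_per_kwh: float = 0.4) -> float:
    """First-order training-emissions estimate:
    energy (kWh) x datacentre overhead (PUE) x grid carbon intensity."""
    energy_kwh = gpu_count * gpu_power_kw * hours
    return energy_kwh * pue * grid_kg_co2e_per_kwh

# Illustrative values only: 512 GPUs at 0.3 kW each for 30 days.
print(f"{training_co2e_kg(512, 0.3, 30 * 24) / 1000:.1f} t CO2eq")
```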
ARTICLE | doi:10.20944/preprints202208.0197.v1
Subject: Computer Science And Mathematics, Computer Science Keywords: Deep neural networks; Adversarial Attacks; Poisoning; Backdoors; Trojans; Taxonomy; Ontology; Knowledge Base; Explainable AI; Green AI
Online: 10 August 2022 (09:39:07 CEST)
Deep neural networks (DNN) have successfully delivered cutting-edge performance in several fields. With the broader deployment of DNN models in critical applications, the security of DNNs has become an active yet nascent research area. According to recent studies, attacks against DNNs can have catastrophic results. Poisoning attacks, including backdoor and Trojan attacks, are one of the growing threats against DNNs. Having a wide-angle view of these evolving threats is essential to better understand the security issues. In this regard, creating a semantic model and a knowledge graph for poisoning attacks can reveal the relationships between attacks across intricate data and enhance the security knowledge landscape. In this paper, we propose a DNN Poisoning Attacks Ontology (DNNPAO) to enhance knowledge sharing and enable further advancements in the field. To do so, we performed a systematic review of the relevant literature to identify the current state of the art. We collected 28,469 papers from the IEEE, ScienceDirect, Web of Science, and Scopus databases; from these, 712 research papers were screened in a rigorous process, and 55 poisoning attacks on DNNs were identified and classified. We extracted a taxonomy of the poisoning attacks as a scheme to develop DNNPAO. Subsequently, we used DNNPAO as a framework to create a knowledge base. Our findings open new lines of research within the field of AI security.
Subject: Engineering, Automotive Engineering Keywords: AI; Machine Learning; ISE; Analog Signal Processing; Horticulture; Aqua Culture
Online: 15 October 2020 (16:51:23 CEST)
We propose a deep-learning-based sensor signal processing method to remove chemical, kinetic and electrical artifacts from the measured values of ion-selective electrodes (ISEs). An ISE measures the concentration of a specific ion in an aqueous solution via the Nernst potential across its glass membrane. However, applying ISEs to a mixture of multiple ions raises several problems. The first is a chemical artifact known as the ion interference effect: electrically charged particles interact with each other and pass through the glass membranes of different ISEs. The second is a kinetic artifact caused by the movement of the liquid: water molecules collide with the glass membrane, causing abnormal voltage peaks. The last is the mutual interference of ISEs: when multiple ISEs are dipped into the same solution, one electrode's signal emission interferes with the voltage measurements of the other electrodes. An ISE is therefore usually recommended for single-ion solutions, without any other sensors applied at the same time. A deep learning approach can remove all three artifacts at the same time. The proposed method uses five layers of artificial neural networks to regress the correct signal, removing the complex artifacts in a single one-shot calculation. Its MAPE was less than 1.8% and the R2 of the regression was 0.997. A randomly chosen value of the AI-processed data had a MAPE of less than 5% (p-value 0.016).
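A minimal sketch of the five-layer regression network described, assuming Keras; the layer widths, electrode count, and training arrays are placeholders, since the abstract does not specify them:

```python
import numpy as np
from tensorflow import keras

# Placeholder data: raw voltages from several ISEs -> artifact-free targets.
X = np.random.rand(1000, 4).astype("float32")   # e.g. 4 electrodes' readings
y = np.random.rand(1000, 4).astype("float32")   # corrected single-ion signals

# Five dense layers, regressing the corrected signal in one forward pass.
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(4,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(4),  # linear output for regression
])
model.compile(optimizer="adam", loss="mse",
              metrics=[keras.metrics.MeanAbsolutePercentageError()])
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
```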
ARTICLE | doi:10.20944/preprints202202.0109.v1
Subject: Computer Science And Mathematics, Information Systems Keywords: Industry 4.0; Industry 5.0; interoperability; Machine Learning; AI; HR; Attrition
Online: 8 February 2022 (12:31:01 CET)
This paper aims to raise awareness of certain interoperability issues as we shape Industry 5.0 to enable a human-centric, resilient society. We advocate that the need to share small and specific datasets will intensify as AI-based solutions become more pervasive; consequently, dataspaces should be carefully designed to address this need. We advance the conversation by presenting a case study from HR demonstrating how to predict the possibility of an employee experiencing attrition. Our experimental results show that more than 500 samples are needed for a machine learning model to generalize the problem sufficiently well, demonstrating the feasibility of the idea. However, small and medium-sized companies cannot implement this approach on their own due to their limited number of samples. We argue that this obstacle may be overcome if multiple companies join a shared dataspace, which in turn raises interoperability issues.
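The sample-size threshold reported above can be probed with a learning curve; a sketch assuming a tabular attrition dataset with a binary Attrition column (the filename and column names are hypothetical):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import learning_curve

df = pd.read_csv("hr_attrition.csv")                 # hypothetical file
X = pd.get_dummies(df.drop(columns=["Attrition"]))
y = df["Attrition"].map({"Yes": 1, "No": 0})

# Validation score as a function of training-set size: if the curve only
# plateaus beyond ~500 samples, smaller companies cannot train alone.
sizes, _, val_scores = learning_curve(
    RandomForestClassifier(random_state=0), X, y,
    train_sizes=np.linspace(0.1, 1.0, 8), cv=5, scoring="roc_auc")
for n, s in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n:4d} samples -> AUC {s:.3f}")
```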
ARTICLE | doi:10.20944/preprints202110.0085.v1
Subject: Social Sciences, Law Keywords: Artificial intelligence; Legal research; Disruption; Legal AI tools
Online: 5 October 2021 (13:04:06 CEST)
Legal research is an indispensable skill for lawyers, and engaging in it is a constant necessity in the course of addressing various legal problems. Although the purpose and methodology of research may vary from lawyer to lawyer, doing research is a common activity. As a result, assessing the impacts of artificial intelligence (hereinafter 'AI') on legal research allows one to measure the influence of AI on the legal profession in general. Moreover, with the advent of Legal AI, it is now evident that the legal profession is not immune to disruption. Against this backdrop, this article discusses the impacts of AI on legal research as different legal professionals accomplish various lawyerly tasks.
ARTICLE | doi:10.20944/preprints201906.0149.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: artificial general intelligence; AGI; blockchain; distributed ledger; AI containment; AI safety; AI value alignment; ASILOMAR
Online: 16 June 2019 (11:19:23 CEST)
Artificial general intelligence (AGI) progression metrics indicate AGI will occur within decades. No proof exists that AGI will benefit humans and not harm or eliminate humans. I propose a set of logically distinct conceptual components that are necessary and sufficient to 1) ensure various AGI scenarios will not harm humanity and 2) robustly align AGI and human values and goals. By systematically addressing pathways to malevolent AI we can induce the methods/axioms required to redress them. Distributed ledger technology (DLT, ‘blockchain’) is integral to this proposal, e.g. ‘smart contracts’ are necessary to address evolution of AI that will be too fast for human monitoring and intervention. The proposed axioms: 1) Access to technology by market license. 2) Transparent ethics embodied in DLT. 3) Morality encrypted via DLT. 4) Behavior control structure with values at roots. 5) Individual bar-code identification of critical components. 6) Configuration Item (from business continuity/disaster recovery planning). 7) Identity verification secured via DLT. 8) ‘Smart’ automated contracts based on DLT. 9) Decentralized applications - AI software modules encrypted via DLT. 10) Audit trail of component usage stored via DLT. 11) Social ostracism (denial of resources) augmented by DLT petitions. 12) Game theory and mechanism design.
ARTICLE | doi:10.20944/preprints202001.0163.v1
Subject: Computer Science And Mathematics, Computer Science Keywords: symbolic; neurosymbolic; explainable AI; trustworthy
Online: 16 January 2020 (10:49:10 CET)
AI research and implementations are growing, and so are the risks associated with AI (Artificial Intelligence) developments, especially when it is difficult to understand exactly what they do and how they work, both at a localized level and at deployment, particularly when distributed and at a large scale. Governments are pouring massive funding into AI research and education. Yet research results and claims, as well as the effectiveness of educational programmes, can be difficult to evaluate given the limited reproducibility of computations based on ML (machine learning) and their poor explainability, which in turn limits the accountability of the systems and can cause cascading systemic problems, including poor reliability and an overall lack of trustworthiness. This paper addresses some of the issues in Knowledge Representation for AI at the system level, identifies a number of knowledge gaps and epistemological challenges as root causes of risks and challenges for AI, and proposes that neurosymbolic and hybrid KR approaches can serve as mechanisms to address some of these challenges. The paper concludes with a postulate and points to related and future research.
ARTICLE | doi:10.20944/preprints202109.0358.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Artificial general intelligence; AGI; AI safety; AI value alignment; AI containment; interactive proof systems; multiple-prover systems
Online: 21 September 2021 (11:35:34 CEST)
Methods are currently lacking to prove artificial general intelligence (AGI) safety. An AGI 'hard takeoff' is possible, in which a first-generation AGI_1 rapidly triggers a succession of more powerful AGI_n that differ dramatically in their computational capabilities (AGI_n ≪ AGI_{n+1}). No proof exists that AGI will benefit humans, nor of a sound value-alignment method. Numerous paths toward human extinction or subjugation have been identified. We suggest that probabilistic proof methods are the fundamental paradigm for proving safety and value-alignment between disparately powerful autonomous agents. Interactive proof systems (IPS) describe mathematical communication protocols wherein a Verifier queries a computationally more powerful Prover and reduces the probability of the Prover deceiving the Verifier to any specified low probability (e.g., 2^-100). IPS procedures can test AGI behavior control systems that incorporate hard-coded ethics or value-learning methods. Mapping the axioms and transformation rules of a behavior control system to a finite set of prime numbers allows validation of 'safe' behavior via IPS number-theoretic methods. Many other representations are needed for proving various AGI properties. Multi-prover IPS, program-checking IPS, and probabilistically checkable proofs further extend the paradigm. In toto, IPS provides a way to reduce AGI_n ↔ AGI_{n+1} interaction hazards to an acceptably low level.
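The soundness-amplification idea behind IPS can be illustrated numerically: if each independent verification round catches a deceptive Prover with probability 1/2, then k rounds drive the deception probability to 2^-k. A toy Monte Carlo sketch of that amplification, not the paper's number-theoretic protocol:

```python
import random

def deception_survives(rounds: int, catch_prob: float = 0.5) -> bool:
    """Simulate a deceptive Prover against an IPS-style Verifier: the Prover
    must evade detection in every independent round."""
    return all(random.random() > catch_prob for _ in range(rounds))

trials = 1_000_000
for k in (1, 5, 10, 20):
    hits = sum(deception_survives(k) for _ in range(trials))
    print(f"k={k:2d}: empirical {hits/trials:.2e}  theoretical {2.0**-k:.2e}")
```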
REVIEW | doi:10.20944/preprints202107.0375.v1
Subject: Engineering, Automotive Engineering Keywords: edge AI accelerator; CGRA; CNN
Online: 16 July 2021 (14:31:17 CEST)
Edge AI accelerators have been emerging as a solution for applications close to end users, in areas such as unmanned aerial vehicles (UAVs), image recognition sensors, wearable devices, robotics, and remote sensing satellites. These applications must not only meet performance targets but also satisfy strict reliability and resilience constraints due to operation in harsh and hostile environments. Numerous research articles have been published, but not all of them include full specifications; most compare their architecture only with existing CPUs, GPUs, or other reference research, so the reported performance results are not comprehensive. Thus, this work lists three key features of the specifications (computation ability, power consumption, and area size) of prior-art edge AI accelerators and CGRA accelerators from the past few years to define and evaluate low-power, ultra-small edge AI accelerators. We present actual evaluation results showing the trend in edge AI accelerator design across key performance metrics, to guide designers on the capability of existing edge AI accelerators and to provide future design directions and trends for other applications with challenging constraints.
ARTICLE | doi:10.20944/preprints202004.0029.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Mobility; infrastructure; flexible pavement; pavement condition index (PCI); international roughness index (IRI); artificial intelligence (AI); predictive models; ensemble learning; structural health monitoring; machine learning
Online: 3 April 2020 (09:35:44 CEST)
The construction of roads, such as freeways, highways, major roads and minor roads, must be accompanied by constant monitoring and evaluation of service delivery. Pavements are generally assessed by engineers in terms of smoothness, surface condition, structural condition and surface safety. Pavement assessment is often conducted using indices such as the international roughness index (IRI), pavement condition index (PCI), structural condition index (SCI) and skid resistance value (SRV), which are used for smoothness, surface condition, structural condition, and surface safety assessment, respectively. In this paper, the Tehran-Qom Freeway in Iran is selected as the case study, and its smoothness and pavement surface conditions are assessed. At 2-km intervals, a 100-meter sample unit is selected in the slow lane (118 sample units in total). In these sample units, the PCI is calculated after a visual inspection of the pavement and the recording of distresses; the average IRI is then computed for each sample unit. The purpose of this study is to provide a method for estimating PCI based on IRI. The proposed model was developed using Random Forest (RF) and Random Forest optimized by a Genetic Algorithm (RF-GA), and was validated using the correlation coefficient (CC), scatter index (SI), and Willmott's index of agreement (WI) criteria. The proposed method reduces costs, saves time and eliminates safety risks.
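A minimal sketch of the PCI-from-IRI regression, assuming a two-column table of measurements per sample unit (filename and column names are hypothetical); the paper tunes the forest with a genetic algorithm, for which a plain grid search stands in here:

```python
import pandas as pd
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, cross_val_predict

df = pd.read_csv("sample_units.csv")      # hypothetical: columns IRI, PCI
X, y = df[["IRI"]], df["PCI"]

# Stand-in for the paper's GA optimization: tune RF hyperparameters.
search = GridSearchCV(RandomForestRegressor(random_state=0),
                      {"n_estimators": [100, 300], "max_depth": [3, 5, None]},
                      cv=5, scoring="neg_mean_squared_error").fit(X, y)

pred = cross_val_predict(search.best_estimator_, X, y, cv=5)
cc, _ = pearsonr(y, pred)                 # correlation coefficient (CC)
print(f"best params: {search.best_params_}, CC: {cc:.3f}")
```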
ARTICLE | doi:10.20944/preprints201810.0595.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: machine learning; AI; BST; diameter; algorithms; d-dimensional datasets; decision tree
Online: 25 October 2018 (06:01:21 CEST)
Big Data classification has recently received a great deal of attention due to the main properties of Big Data: volume, variety, and velocity. The furthest-pair-based binary search tree (FPBST) shows great potential for Big Data classification. This work attempts to improve the performance of the FPBST in terms of computation time, space consumed and accuracy. The major enhancement converts the resultant BST into a decision tree, removing the need for the slow K-nearest neighbors (KNN) search and yielding a smaller tree, which reduces memory usage, speeds up both the training and testing phases and increases the classification accuracy. The proposed decision trees are based on calculating the probabilities of each class at each node using various methods; these probabilities are then used in the testing phase to classify an unseen example. The experimental results on several (small, intermediate and big) machine learning datasets show the efficiency of the proposed methods in terms of space, speed and accuracy compared to the FPBST, and suggest great potential for further enhancements so the methods can be used in practice.
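A compact sketch of the core idea: store class probabilities at each furthest-pair node so classification needs only a tree traversal, with no KNN at the leaves. The furthest-pair search and splitting rule are simplified relative to the original FPBST:

```python
import numpy as np

class Node:
    def __init__(self, X, y, min_size=5):
        # Class probabilities at this node replace the KNN step at test time.
        self.proba = np.bincount(y, minlength=y.max() + 1) / len(y)
        self.left = self.right = None
        if len(X) <= min_size or len(set(y)) == 1:
            return
        # Approximate furthest pair: farthest point from the centroid,
        # then the point farthest from it; these anchor the binary split.
        a = X[np.linalg.norm(X - X.mean(axis=0), axis=1).argmax()]
        b = X[np.linalg.norm(X - a, axis=1).argmax()]
        self.a, self.b = a, b
        mask = np.linalg.norm(X - a, axis=1) <= np.linalg.norm(X - b, axis=1)
        if mask.all() or (~mask).all():
            return
        self.left, self.right = Node(X[mask], y[mask]), Node(X[~mask], y[~mask])

    def predict(self, x):
        if self.left is None:
            return self.proba.argmax()        # node class probabilities decide
        go_left = np.linalg.norm(x - self.a) <= np.linalg.norm(x - self.b)
        return (self.left if go_left else self.right).predict(x)

X = np.random.rand(200, 3); y = (X[:, 0] > 0.5).astype(int)
tree = Node(X, y)
print(tree.predict(np.array([0.9, 0.2, 0.4])))
```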
ARTICLE | doi:10.20944/preprints202101.0621.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: Speech Command; MFCC; Tsetlin Machine; Learning Automata; Pervasive AI; Machine Learning; Artificial Neural Network; Keyword Spotting
Online: 29 January 2021 (13:01:47 CET)
The emergence of Artificial Intelligence (AI) driven Keyword Spotting (KWS) technologies has revolutionized human-to-machine interaction. Yet the challenges of end-to-end energy efficiency, memory footprint and system complexity in current Neural Network (NN) powered AI-KWS pipelines have remained ever present. This paper evaluates KWS using a learning-automata-powered machine learning algorithm called the Tsetlin Machine (TM). Through a significant reduction in parameter requirements and by choosing logic over arithmetic-based processing, the TM offers new opportunities for low-power KWS while maintaining high learning efficacy. We explore a TM-based KWS pipeline to demonstrate low complexity and a faster rate of convergence compared to NNs. Further, we investigate scalability with an increasing number of keywords and explore the potential for enabling low-power on-chip KWS.
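A sketch of the logic-over-arithmetic pipeline described, assuming the pyTsetlinMachine package and librosa for MFCC extraction; the binarization threshold, hyperparameters, and placeholder data are illustrative, not the paper's configuration:

```python
import numpy as np
import librosa
from pyTsetlinMachine.tm import MultiClassTsetlinMachine  # assumed package

def binarize_mfcc(path: str, n_mfcc: int = 13) -> np.ndarray:
    """Audio -> MFCCs -> flat Boolean feature vector (TMs consume bits)."""
    y, sr = librosa.load(path, sr=16000, duration=1.0)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return (mfcc > np.median(mfcc)).astype(np.uint32).flatten()

# Placeholder training data: stacked Boolean vectors and keyword indices.
X = np.random.randint(0, 2, size=(500, 13 * 32)).astype(np.uint32)
labels = np.random.randint(0, 5, size=500).astype(np.uint32)

# Clauses vote via propositional logic rather than arithmetic.
tm = MultiClassTsetlinMachine(200, 15, 3.9)  # (clauses, T, s), illustrative
tm.fit(X, labels, epochs=30)
print(tm.predict(X[:5]), labels[:5])
```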
ARTICLE | doi:10.20944/preprints202005.0106.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Machine Learning; Training Data; alawa; AI
Online: 7 May 2020 (06:20:09 CEST)
Accurate lithium battery diagnosis and prognosis are critical to increasing the penetration of electric vehicles and grid-tied storage systems. Both are complex tasks due to the intricate, nonlinear, and path-dependent nature of battery degradation. Data-driven models are anticipated to play a significant role in the behavioral prediction of dynamical systems such as batteries, but they are often limited by the amount of training data available. In this work, we generated the first comprehensive big-data synthetic datasets for training diagnosis and prognosis algorithms. The proof-of-concept datasets are over three orders of magnitude larger than what is currently available in the literature. With benchmark datasets, results from different studies can be easily equated, and the performance of different algorithms can be compared, enhanced, and analyzed extensively. This will expand the critical capabilities of current AI algorithms, tools, and techniques to predict scientific data.
ARTICLE | doi:10.20944/preprints202305.1488.v1
Subject: Computer Science And Mathematics, Computer Science Keywords: Explainability; Explainable AI; XAI; Recommendation; Bugs
Online: 22 May 2023 (09:42:14 CEST)
Software engineering is a comprehensive process that requires developers and team members to collaborate across multiple tasks. In software testing, bug triaging is a tedious and time-consuming process. Assigning bugs to the appropriate developers saves time and maintains their motivation, but without knowledge of a bug's class, triaging is difficult. Motivated by this challenge, this paper focuses on assigning a suitable developer to a new bug by analyzing the developers' profiles and the bug history of all developers using machine-learning-based recommender systems. Explainable AI (XAI) is AI whose outputs humans can understand, in contrast to "black box" AI, which even its designers cannot explain. By providing appropriate explanations for results, users can better comprehend the insight behind the outcomes, boosting the recommender system's effectiveness, transparency, and trustworthiness. In this paper, we propose two explainable recommendation models. The first is an explainable recommender model for personalized developers, generated from bug history to learn each developer's preferred type of bug. The second is an explainable recommender model based on bugs, which generates the best developer for each bug from the bug history.
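A minimal sketch in the direction of the second model, recommending a developer for a new bug from bug history, using a transparent linear model whose weights double as the explanation; the data layout, filename, and column names are hypothetical, and this is not the paper's exact method:

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

bugs = pd.read_csv("bug_history.csv")        # hypothetical: summary, developer
vec = TfidfVectorizer(max_features=5000, stop_words="english")
X = vec.fit_transform(bugs["summary"])
clf = LogisticRegression(max_iter=1000).fit(X, bugs["developer"])

new_bug = vec.transform(["Null pointer exception when saving user preferences"])
best = clf.predict_proba(new_bug)[0].argmax()
print("recommended developer:", clf.classes_[best])

# Explanation: the terms that pushed the recommendation to this developer.
row = clf.coef_[best] if clf.coef_.shape[0] > 1 else clf.coef_[0]
weights = row * new_bug.toarray()[0]
top = weights.argsort()[-5:][::-1]
print("because of terms:", vec.get_feature_names_out()[top])
```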
ARTICLE | doi:10.20944/preprints202210.0027.v1
Subject: Computer Science And Mathematics, Computer Science Keywords: cross-disciplinary; AI; blockchain; investment; protection
Online: 5 October 2022 (04:04:40 CEST)
This article presents the results of a cross-disciplinary applied study exploring investors’ protections in the context of distributed ledger technology (DLT) smart contracts. Fusing legal, business, and technical perspectives, we developed a framework for protection from non-commercial risks for stablecoins, taking advantage of DLT and AI. A key concept we propose is the monitoring of disinformation and fake news to prevent malicious parties from abusing our solution. Based on the similarities between central bank digital currencies (CBDCs) and stablecoins, we propose scaling up our results to all future internet investments performed without face-to-face contact between the investor and the company.
ARTICLE | doi:10.20944/preprints202210.0157.v1
Subject: Business, Economics And Management, Finance Keywords: AI; insurance; adversarial attacks
Online: 11 October 2022 (15:44:36 CEST)
Artificial intelligence (AI) is a tool that financial intermediaries and insurance companies already use, or are willing to use, in almost all their activities. AI can have a positive impact on nearly every aspect of the insurance value chain: pricing, underwriting, marketing, claims management, and after-sales services. While very important and useful, AI is not free of risks, including its robustness against cyber-attacks and so-called adversarial attacks, which are conducted by external entities to misguide and defraud AI algorithms. The paper provides a review of adversarial AI and discusses its implications for the insurance sector. The study starts with a taxonomy of adversarial attacks and presents a fully fledged example of claims falsification in health insurance. Some remedies, consistent with the current regulatory framework, are presented.
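Adversarial attacks of the kind discussed typically follow one pattern: perturb an input just enough to flip a model's decision. As a generic illustration (not the paper's claims-falsification example), a fast gradient sign method sketch on a toy, untrained claims classifier:

```python
import torch
import torch.nn as nn

# Toy claims classifier: 10 numeric claim features -> legitimate/fraud logits.
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 10, requires_grad=True)   # an honest-looking claim
y = torch.tensor([0])                        # its current 'legitimate' label

# FGSM: one small step in the input direction that increases the loss.
loss = loss_fn(model(x), y)
loss.backward()
epsilon = 0.1
x_adv = x + epsilon * x.grad.sign()

print("original:", model(x).argmax().item(),
      "adversarial:", model(x_adv).argmax().item())
```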
REVIEW | doi:10.20944/preprints202004.0383.v1
Subject: Medicine And Pharmacology, Other Keywords: COVID-19; coronavirus pandemic; big data; epidemic outbreak; artificial intelligence (AI); deep learning
Online: 21 April 2020 (09:01:45 CEST)
The very first case of the novel coronavirus (COVID-19) was found in Hubei, China in December 2019. The COVID-19 pandemic has since spread over 215 countries and areas in the world and has significantly affected every aspect of our daily lives. At the time of writing, the numbers of infected cases and deaths were still increasing significantly, with no sign of a well-controlled situation; as of 14 April 2020, a cumulative total of 1,853,265 infected and 118,854 dead COVID-19 cases had been reported worldwide. Motivated by recent advances and applications of artificial intelligence (AI) and big data in various areas, this paper emphasizes their importance in responding to the COVID-19 outbreak and preventing the severe effects of the pandemic. We first present an overview of AI and big data, then identify their applications in fighting COVID-19, next highlight the challenges and issues associated with state-of-the-art solutions, and finally offer recommendations for the community to effectively control the COVID-19 situation. We expect this paper to provide researchers and communities with new insights into the ways AI and big data can improve the COVID-19 situation, and to drive further studies in stopping the COVID-19 outbreak.
ARTICLE | doi:10.20944/preprints202305.0808.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: artificial intelligence; AI; education; international students; personalized learning; adaptive learning; predictive analytics; chatbots
Online: 11 May 2023 (05:57:02 CEST)
The use of artificial intelligence (AI) applications in education has the potential to revolutionize the learning experience for international students, who face unique challenges when studying in a foreign country. This paper explores various examples of AI applications in education and their potential impact on international students. AI applications such as personalized learning experiences, adaptive testing, predictive analytics, and chatbots for learning and research are examined for their potential to improve learning efficiency and provide customized education support. The paper also explores the significant risks and limitations associated with AI technologies, such as privacy, cultural differences, language proficiency, and ethical implications. To maximize the potential benefits of AI applications in higher education, it is crucial to implement appropriate safeguards and regulations. This paper provides a starting point for research on the potential impact of artificial intelligence on international students' educational experiences and on how AI may be integrated into educational administration and learning processes.
ARTICLE | doi:10.20944/preprints202304.0647.v1
Subject: Medicine And Pharmacology, Other Keywords: COVID-19; literature processing; AI
Online: 20 April 2023 (10:11:03 CEST)
The COVID-19 pandemic has resulted in an unprecedented acceleration in scientific production across multiple disciplines. The vast number of publications available makes it challenging for healthcare professionals and researchers to keep up with the current state of knowledge regarding COVID-19. This article presents covid19-help.org, a free expert-curated database designed to increase the availability of relevant original data related to COVID-19 treatment and prevention via immunization. To accelerate the process of identifying relevant original scientific publications and to simplify annotation of their content, the database uses our artificial intelligence in medical literature (AIM.lit) tool. The article provides an overview of the covid19-help.org database design, the criteria used to select publications, and the use of the AIM.lit tool. The database allows users to easily search and filter records, provides concise information on individual substances and their mechanisms of action, lists relevant original scientific publications with annotations, and offers links to external resources. The AIM.lit tool increases the speed of publication selection and extraction of basic relevant information, without compromising the validity of the data. The technology and experience gained from creating the covid19-help.org database and its tools could also be useful in other areas where scientific information organization is a challenge.
ARTICLE | doi:10.20944/preprints201704.0138.v1
Subject: Computer Science And Mathematics, Computer Science Keywords: AI Agents; emotive layer; judgement; behaviour; probability; decision-making; gameplay; backpropagation.
Online: 21 April 2017 (12:07:53 CEST)
AI is often regarded as a logical, rational way to develop a game agent that methodically evaluates options and delivers rational solutions. This paper is based on developing an AI agent that plays a game with an emotive content similar to a human's. The purpose of the study was to see whether the incorporation of this emotive content would influence the outcomes within the game Love Letter. To do this, an AI agent with an emotive layer was developed to play the game over a million times. A lower win/loss ratio demonstrates that to some extent this methodology was vindicated, and a 100 per cent win rate for the AI agent did not occur. Machine learning techniques were modelled purposely to match extreme models of behavioural change. The results showed a win/loss ratio of 0.67 for the AI agent and in many ways reflected the frustration that a normal player would exhibit during gameplay. As hypothesised, the final agent investment value was, on average, lower after matchplay than its initial value.
REVIEW | doi:10.20944/preprints202105.0567.v1
Subject: Chemistry And Materials Science, Analytical Chemistry Keywords: Artificial intelligence (AI); machine learning; algorithms; QSAR; drug discovery
Online: 24 May 2021 (13:02:21 CEST)
Artificial intelligence and machine learning are playing a pivotal role in society, especially in the fields of medicinal chemistry and drug discovery, driven in particular by their algorithms, neural networks and other recurrent networks. In this review, we consider the diverse uses of AI across the pharmaceutical industry, including the discovery of drugs, drug repurposing, pharmaceutical development and clinical trials. We also discuss the efficiency of these artificial or machine learning programs in reaching target drugs in a short time period, with accurate dosage and cost-effectiveness. Numerous applications of AI in property prediction, such as ADMET, demonstrate the strength of this technology in QSAR. In de-novo synthesis, AI enables the generation of novel drug molecules with unique designs, making this a promising approach to drug design. Moreover, its involvement in synthetic planning, ease of synthesis and much more will contribute to automated drug discovery in the near future.
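A minimal sketch of the QSAR-style property prediction mentioned above, assuming RDKit descriptors and a random-forest regressor; the SMILES strings and activity values are made up for illustration:

```python
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestRegressor

def featurize(smiles: str):
    """Molecule -> a few classic physicochemical descriptors."""
    m = Chem.MolFromSmiles(smiles)
    return [Descriptors.MolWt(m), Descriptors.MolLogP(m),
            Descriptors.TPSA(m), Descriptors.NumHDonors(m)]

# Toy training set: SMILES strings with made-up activity values.
smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC"]
activity = [0.2, 0.5, 0.9, 0.4]

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit([featurize(s) for s in smiles], activity)
print(model.predict([featurize("CC(C)Cc1ccc(cc1)C(C)C(=O)O")]))  # ibuprofen
```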
ARTICLE | doi:10.20944/preprints201908.0318.v1
Subject: Computer Science And Mathematics, Computer Science Keywords: AI; ethics; safety; autonomy; free energy principle; reductionism; symbiosis
Online: 30 August 2019 (07:42:42 CEST)
The free energy principle states that self-organization occurs through minimization of free energy, which is a measure of potential thermodynamic work. By minimizing free energy, the organism happens to also minimize surprise over its boundary, promoting chances of survival. We discuss the ethical implications of the cognitive goal in detail from an empirical point of view, highlighting the principle of least action as a physical basis of Occam's razor, the universality of the free energy principle, and its explanation of natural selection. We explain that the free energy principle extends to groups of organisms and helps us understand group-scale adaptations and selection in biology. The free energy principle applies to all scales of organization in the organism, from single cells to the entire nervous system. When this principle is taken to its logical extremes of modeling groups, populations and ecosystems, we uncover a new, evolutionarily sensible path to explaining puzzling aspects of human motivation and judgement, including ethical decisions. To minimize free energy, populations have to act to maximize the gathering of information while building effective models to mitigate changes to their dynamic structure. The free energy principle thus provides a naturalistic explanation of some of our deepest ethical intuitions and valuable principles of social behavior. We interpret the cognitive goal that corresponds to the principle as seeking a dynamic, fruitful, yet peaceful activity that sustains the organism. This state of mind is interestingly similar to the Buddhist intuition of mental equanimity; the organism's final goal is to be at peace and in harmony with the environment. Another immediately relevant aspect is that assemblies must form to promote symbiotic, synergistic, positive feedback loops, which coincides with the findings of ecologists. Therefore, ethics naturally emerges in self-organizing systems. Assemblies of organisms must ultimately unite in macro-minds to achieve the greatest reduction in free energy, as well as build technological extensions of themselves to improve their capacity to do so; the principle therefore also predicts a post-singularity world-mind composed mostly of artificial intelligence.
ARTICLE | doi:10.20944/preprints202210.0191.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: SEPIC; control; renewable energy; ANFIS; AI
Online: 13 October 2022 (09:38:24 CEST)
This research presents maximum power point tracking (MPPT) for the traditional single-ended primary-inductor converter (SEPIC) with the aid of a fuzzy logic controller (FLC) for regulating the voltage and a perturb and observe (P&O) algorithm for regulating the reference voltage and reaching the maximum power point. This is a promising technique for the conventional SEPIC converter to achieve maximum power point tracking with much less error. Moreover, the method is simple, reliable, and understandable for photovoltaic systems.
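The P&O loop described reduces to a few lines: perturb the reference voltage, observe the power change, and keep the perturbation direction only while power keeps rising. A generic sketch with placeholder hardware hooks (the measurement and actuation functions are assumptions):

```python
def perturb_and_observe(read_voltage, read_current, set_vref,
                        v_ref=30.0, step=0.5, iterations=1000):
    """Classic P&O MPPT: climb the P-V curve by trial perturbations.
    read_voltage / read_current / set_vref are hardware hooks (assumed)."""
    prev_power = read_voltage() * read_current()
    direction = +1
    for _ in range(iterations):
        v_ref += direction * step
        set_vref(v_ref)                    # the FLC regulates to this reference
        power = read_voltage() * read_current()
        if power < prev_power:             # overshot the peak: reverse direction
            direction = -direction
        prev_power = power
    return v_ref
```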
ARTICLE | doi:10.20944/preprints202202.0024.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Smishing; Deep learning; NLP; AI; Cybersecurity
Online: 2 February 2022 (09:29:22 CET)
Smartphones are prone to SMS phishing due to the rapid growth in the availability of smart mobile technologies driven by Internet connectivity. Detecting phishing SMS is also a challenging task due to the unstructured nature of SMS text data and its non-linear, complex correlations. Considering recent advancements in the domain of cybersecurity, we propose a hybrid deep learning framework that extracts robust features from SMS texts and then automatically detects phishing SMS. By combining the capabilities of individual models into one hybrid framework, it outperforms various individual machine learning and deep learning models. The proposed phishing detection framework is an effective hybrid combination of a pretrained transformer model, MPNet (Masked and Permuted Language Modeling), with supervised ConvNets (CNN) and Bi-directional Gated Recurrent Units (GRU). It is intended to successfully detect unstructured short phishing text messages that contain complex patterns.
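A condensed sketch of the hybrid described, with MPNet token embeddings feeding a CNN and then a bidirectional GRU, assuming Hugging Face's microsoft/mpnet-base checkpoint; the layer sizes and the example message are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class SmishingDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("microsoft/mpnet-base")
        self.conv = nn.Conv1d(768, 128, kernel_size=3, padding=1)
        self.gru = nn.GRU(128, 64, bidirectional=True, batch_first=True)
        self.head = nn.Linear(128, 1)     # 2 x 64 from the BiGRU

    def forward(self, input_ids, attention_mask):
        # Token embeddings -> convolutional features -> recurrent summary.
        h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        h = torch.relu(self.conv(h.transpose(1, 2))).transpose(1, 2)
        _, h_n = self.gru(h)              # final hidden states, both directions
        h_n = torch.cat([h_n[0], h_n[1]], dim=-1)
        return torch.sigmoid(self.head(h_n)).squeeze(-1)

tok = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
batch = tok(["Your parcel is held, pay the fee at hxxp://bit.ly/x"],
            return_tensors="pt", padding=True, truncation=True)
model = SmishingDetector()
print(model(batch["input_ids"], batch["attention_mask"]))  # phishing probability
```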
ARTICLE | doi:10.20944/preprints202209.0276.v1
Subject: Computer Science And Mathematics, Computer Science Keywords: Sensor fusion; Camera and LiDAR fusion; Odometry; Explainable AI
Online: 19 September 2022 (10:27:42 CEST)
Recent deep learning frameworks have drawn strong research interest for ego-motion estimation, as they demonstrate superior results compared to geometric approaches. However, due to the lack of multimodal datasets, most of these studies have focused primarily on single-sensor estimation. To overcome this challenge, we collected a unique multimodal dataset named LboroAV2 using multiple sensors, including a camera, Light Detection And Ranging (LiDAR), ultrasound, an e-compass and a rotary encoder. We also propose an end-to-end deep learning architecture that fuses RGB images and LiDAR laser scan data for odometry. The proposed method consists of a convolutional encoder, a compressed representation and a recurrent neural network. Besides feature extraction and outlier rejection, the convolutional encoder produces a compressed representation, which is used to visualise the network's learning process and to pass useful sequential information. The recurrent neural network uses this compressed sequential data to learn the relation between consecutive time steps. We use the LboroAV2 and KITTI VO datasets for experiments and evaluation. In addition to visualising the network's learning process, our approach gives superior results compared to other similar methods. The code for the proposed architecture is released on GitHub and is publicly accessible.
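A skeletal version of the fusion architecture described, with per-modality convolutional encoders, a compressed joint representation, and a recurrent network over time; all shapes and layer sizes are illustrative, since the abstract does not give the actual LboroAV2 dimensions:

```python
import torch
import torch.nn as nn

class FusionOdometry(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional encoders for RGB images and projected LiDAR scans.
        self.img_enc = nn.Sequential(nn.Conv2d(3, 16, 5, 2), nn.ReLU(),
                                     nn.Conv2d(16, 32, 5, 2), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.lidar_enc = nn.Sequential(nn.Conv2d(1, 16, 5, 2), nn.ReLU(),
                                       nn.Conv2d(16, 32, 5, 2), nn.ReLU(),
                                       nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.compress = nn.Linear(2 * 32 * 16, 128)  # compressed representation
        self.rnn = nn.LSTM(128, 128, batch_first=True)
        self.pose = nn.Linear(128, 6)                # translation + rotation

    def forward(self, images, scans):                # (B, T, C, H, W) each
        B, T = images.shape[:2]
        feats = [torch.cat([self.img_enc(images[:, t]),
                            self.lidar_enc(scans[:, t])], dim=1)
                 for t in range(T)]
        z = torch.relu(self.compress(torch.stack(feats, dim=1)))
        out, _ = self.rnn(z)                         # relation across time steps
        return self.pose(out)                        # per-step 6-DoF pose

model = FusionOdometry()
poses = model(torch.randn(2, 5, 3, 64, 64), torch.randn(2, 5, 1, 64, 64))
print(poses.shape)  # torch.Size([2, 5, 6])
```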
ARTICLE | doi:10.20944/preprints202109.0390.v1
Subject: Biology And Life Sciences, Biophysics Keywords: COVID-19; vibraimage; behavioral parameters; diagnosis accuracy; ANN; AI
Online: 22 September 2021 (16:28:12 CEST)
The COVID-19 pandemic has spread in waves for a year and a half, despite significant worldwide efforts, the development of biochemical diagnostic methods and population vaccination. One reason the infection spreads is the impossibility of early disease detection through biochemical diagnostics, since biochemical processes develop slowly in the body. At the same time, it is well known that behavioral characteristics of a person, measured from reflex movements, allow an inertia-free assessment of psychophysiological parameters. Vibraimage technology processes video of head micromovements by accumulating inter-frame differences and converting the spatial and temporal characteristics of the inter-frame difference into behavioral and psychophysiological parameters. Here we show that behavioral parameters measured by vibraimage change during COVID-19 infection. Signs of change in the behavioral parameters were identified by an AI trained on patients and controls. The best diagnostic accuracy (above 94%) was obtained using instantaneous values of behavioral parameters measured with the following vibraimage settings: a 10 Hz frequency of basic measurements, 25 inter-frame difference accumulations, and averaging of the diagnostic results over a period of at least 5 seconds. COVID-19 diagnosis by behavioral parameters detected the disease earlier (by 5-7 days) than symptoms and positive biochemical RT-PCR tests. The proposed method indicates infected persons within 5 seconds of video processing using standard television cameras (web, IP) and computers, allows mass testing and self-testing, and will help stop the pandemic spread. We assume that head micromovement analysis for the diagnosis of various diseases is possible beyond vibraimage technology alone. Further research on human head micromovement analysis will help stop the COVID-19 pandemic and contribute to the development of new contactless and environmentally friendly methods for the early diagnosis of diseases.
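The core accumulation step can be sketched with OpenCV: successive inter-frame differences are summed into a floating-point buffer whose spatial and temporal statistics become behavioral parameters. The 25 accumulated differences follow the settings quoted above; the input file and the parameter extraction at the end are simplified placeholders, not the vibraimage parameter set:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("face.mp4")        # placeholder input video
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
acc = np.zeros(prev.shape, dtype=np.float32)

for _ in range(25):                        # 25 inter-frame difference accumulations
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    acc += cv2.absdiff(gray, prev).astype(np.float32)
    prev = gray

# Simplified 'behavioral parameters': spatial energy and its distribution.
energy = acc.sum()
centroid_y = (acc * np.arange(acc.shape[0])[:, None]).sum() / max(energy, 1e-9)
print(f"micromovement energy: {energy:.0f}, vertical centroid: {centroid_y:.1f}")
```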
ARTICLE | doi:10.20944/preprints202302.0250.v1
Subject: Medicine And Pharmacology, Other Keywords: AI drug discovery; mTOR; rapalog; C. elegans; cancer; longevity
Online: 15 February 2023 (02:42:57 CET)
The mechanistic target of rapamycin (mTOR) kinase is one of the top drug targets for promoting health and lifespan extension. Besides rapamycin, only a few other mTOR inhibitors have been developed and shown to slow aging. We used machine learning to predict novel small molecules targeting mTOR. We selected one small molecule, TKA001, based on in-silico predictions of a high on-target probability, low toxicity, favorable physicochemical properties, and a preferable ADMET profile. We confirmed TKA001 binding in silico by molecular docking. TKA001 potently inhibits both TOR complex 1 and 2 downstream signaling in vitro. Furthermore, TKA001 inhibits human cancer cell proliferation in vitro and extends the lifespan of C. elegans, suggesting that it can slow aging in vivo.
ARTICLE | doi:10.20944/preprints202301.0115.v1
Subject: Computer Science And Mathematics, Security Systems Keywords: Artificial Intelligence (AI); Machine Learning (ML); Cyber security; Uses of Cyber security; Future in Cyber security; AI and ML in Cyber security
Online: 6 January 2023 (04:39:20 CET)
Artificial intelligence (AI) helps experts with crime analysis, research, and understanding, and it has a favourable influence on cyber security: it strengthens the tools that businesses use to safeguard their networks, clients, and workers against dangerous online behaviour. However, AI is infamous for requiring a lot of resources, may not always be relevant, and can also give hackers a new tool to advance their abilities. The VPN industry benefits from AI in the same way. The threat that machine learning in AI poses to user data privacy may be lessened by using a VPN on all of your devices, and because they use machine learning algorithms, VPNs are better equipped to shield their users from internet-based threats. According to Smart Data Collective, AI has reportedly been investigated as a means of enhancing internet security for a considerable amount of time. Around two years ago, we anticipated that AI and machine learning would have a substantial impact on the future of cyber security.
ARTICLE | doi:10.20944/preprints202303.0025.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: OpenAI; ChatGPT; GPT-3; text-davinci-003; chatbots; artificial intelligence; human-AI interface; collaboration; sustainability; social development; human development
Online: 1 March 2023 (14:17:24 CET)
Artificial Intelligence (AI) has experienced significant advancements in recent years, and its impact is already recognized across various industries. Yet the rise of AI has led to growing concern about its impact on meeting the Sustainable Development Goals (SDGs). The aim of this paper was to evaluate the contributions and potential impact of AI on sustainable development in the society domain. Furthermore, the study descriptively analyzed the responses of GPT-3, one of the largest language models developed by OpenAI. We conducted a set of queries on the SDGs to gather information on GPT-3's perception of AI's impact on sustainable development. Analysis of GPT-3's contribution potential towards the SDGs showcased its broad range of capabilities for contributing to the SDGs in areas such as education, health, and communication. The study findings provide valuable insights into the contributions of AI to sustainable development in the society domain and highlight the importance of proper regulations to promote the responsible use of AI for sustainable development. We plan subsequent research on the effects on the ecological and economic domains of the SDGs. We also highlight the potential for improving the natural language processing skills of GPT-3 by avoiding the imitation of weak human writing styles, which show more mistakes in longer texts.
DATA DESCRIPTOR | doi:10.20944/preprints202303.0379.v1
Subject: Computer Science And Mathematics, Mathematics Keywords: Jaundice; Hyperbilirubinemia; skin color analysis; NICU; artificial intelligence (AI) techniques
Online: 21 March 2023 (14:01:10 CET)
Jaundice is a common condition in newborns, and its complications can be severe and cause permanent damage to the patient's brain if no action is taken at its early stages. Current methods for jaundice detection are invasive: they require collecting blood samples from the patient, which can be painful and stressful and may cause complications. Alternatively, a non-invasive approach can diagnose jaundice through image processing and artificial intelligence (AI) techniques, which requires a database of infant images to achieve a high-accuracy diagnosis. This data article provides a collection of newborn images, called NJN, covering various birthweights and skin tones, with ages ranging from 2 to 8 days, together with a spreadsheet in CSV format containing the values of the RGB and YCrCb channels and the status for each row, freely accessible at https://sites.google.com/view/neonataljaundice. It also provides Python code for data testing using different AI techniques. Thus, this article offers a unique resource for AI researchers to train their systems and develop algorithms that help neonatal intensive care unit (NICU) healthcare specialists monitor neonates and provide fast, real-time, non-invasive, and accurate jaundice diagnosis.
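A short sketch of the kind of testing code the article ships, assuming the CSV exposes one row per image with R, G, B, Y, Cr, Cb channel values and a jaundice status label; the filename and column names here are guesses, not the dataset's actual schema:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("njn_channels.csv")               # hypothetical filename
X = df[["R", "G", "B", "Y", "Cr", "Cb"]]           # assumed column names
y = df["status"]                                   # jaundiced vs. healthy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```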
Subject: Medicine And Pharmacology, Immunology And Allergy Keywords: reproductive health; infertility; big data; Machine Learning; AI; Systems Biology
Online: 18 November 2020 (13:51:46 CET)
Advances in machine learning (ML) and artificial intelligence (AI) are transforming the way we treat patients in ways not even imagined a few years ago, with cancer research at the forefront of this movement. Infertility, though not a life-threatening condition, affects around 15% of couples trying for a pregnancy. The increasing availability of large datasets from various sources creates an opportunity to introduce ML and AI into infertility prevention and treatment. At present, the field of assisted reproduction does very little to prevent infertility from arising; the main focus is put on treatment, when often advanced maternal age and low ovarian reserve make it very difficult to conceive. A shift from this disease-centric model to a health-centric model of infertility is already taking place, with more emphasis on the patient as an active participant in the process. Poor-quality and incomplete data as well as biological variability remain the main limitations to the widespread and reliable implementation of AI in the field of reproductive medicine. That said, one of the areas where this technology has managed to find a foothold is the identification of developmentally competent embryos. More work is required, however, to learn about ways to improve natural conception, the detection and diagnosis of infertility, and assisted reproduction treatments (ART), and ultimately to develop clinically useful algorithms able to adjust treatment regimens to assure a successful outcome of either fertility preservation or infertility treatment. Progress in genomics and digital technologies and advances in integrative biology have had a tremendous impact on research and clinical medicine. With the rise of 'big data', artificial intelligence, and advances in molecular profiling, there is an enormous potential to transform not only scientific research but also clinical decision-making towards predictive, preventive, and personalized medicine. In the field of reproductive health, there is now an exciting opportunity to leverage these technologies and develop more sophisticated approaches to diagnosing and treating infertility disorders. In this review, we present a comprehensive analysis and interpretation of the different innovation forces that are driving the emergence of a systems approach to the infertility sector. We discuss recent influential work and explore the limitations of the use of machine learning models in this rapidly developing area.
ARTICLE | doi:10.20944/preprints202210.0081.v1
Subject: Computer Science And Mathematics, Data Structures, Algorithms And Complexity Keywords: CNN; AI; Causality; Understandability; Object Features; Excitation Weight; Multi-model Neural Network; Model Selection
Online: 7 October 2022 (14:58:09 CEST)
Object recognition is an essential element of machine intelligence tasks. However, one model cannot practically be trained to identify all the possible objects it encounters; an ensemble of models may be needed to cater to a broader range of objects. Building a mathematical understanding of the relationship between objects that share comparable outlined features is envisaged as an effective method of improving the model ensemble through a pre-processing stage, where these objects' features are grouped under a broader classification umbrella. This paper proposes a mechanism to train an ensemble of recognition models, coupled with a model selection scheme, to scale up object recognition in a multi-model system. An algorithmic relationship between the learnt parameters of a trained classification model and the features of input images is presented so the system can learn the model selection scheme. The multiple models are built with a CNN structure, whereas the image features are extracted using a CNN/VGG16 architecture. Based on the models' excitation weights, we develop a neural network model selection algorithm that links a new object with the models, decides how close the object's features are to each trained model, and selects a particular model for object recognition; it is tested on a five-model neural network platform. The experiment results show that the proposed model selection scheme is highly effective and accurate in selecting an appropriate model from a network of multiple models.
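A condensed sketch of the selection step: VGG16 features of a new image are compared against one prototype vector per trained model (e.g., the mean features of that model's training data), and the most similar model is chosen. The prototype construction and similarity measure are simplifications, not the paper's excitation-weight algorithm:

```python
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image

extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

def features(path: str) -> np.ndarray:
    """Image file -> 512-dim VGG16 feature vector."""
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), 0))
    return extractor.predict(x, verbose=0)[0]

# One prototype per trained model in a five-model ensemble
# (random placeholders here; in practice, mean training features).
prototypes = {f"model_{i}": np.random.rand(512) for i in range(5)}

def select_model(path: str) -> str:
    f = features(path)
    sims = {name: np.dot(f, p) / (np.linalg.norm(f) * np.linalg.norm(p))
            for name, p in prototypes.items()}
    return max(sims, key=sims.get)       # route to the most similar model

print(select_model("new_object.jpg"))    # hypothetical image path
```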
REVIEW | doi:10.20944/preprints202201.0016.v2
Subject: Engineering, Control And Systems Engineering Keywords: AI; deep learning; video editing; image editing
Online: 4 February 2022 (13:40:05 CET)
Video editing is a highly demanding job, requiring skilled artists or workers with plenty of physical stamina and multidisciplinary knowledge, such as cinematography and aesthetics. Thus, more and more research focuses on proposing semi-automatic and even fully automatic solutions to reduce workloads. Since conventional methods are usually designed to follow a few simple guidelines, they lack the flexibility and capability to learn complex ones. Fortunately, advances in computer vision and machine learning make up for the shortcomings of traditional approaches and make AI editing feasible. No survey has yet consolidated this emerging research. This paper summarizes the development history of automatic video editing, and especially the applications of AI in partial and full workflows. We emphasize video editing and discuss related works from multiple aspects: modality, type of input videos, methodology, optimization, dataset, and evaluation metric. Besides, we also summarize progress in the image editing domain, i.e., style transfer, retargeting, and colorization, and explore the possibility of transferring those techniques to the video domain. Finally, we give a brief conclusion of this survey and explore some open problems.
ARTICLE | doi:10.20944/preprints202111.0186.v1
Subject: Computer Science And Mathematics, Computer Science Keywords: Explainable AI; Convolutional Neural Network; Network Compression
Online: 9 November 2021 (15:03:27 CET)
Model understanding is critical in many domains, particularly those involved in high-stakes decisions, e.g., medicine, criminal justice, and autonomous driving. Explainable AI (XAI) methods are essential for working with black-box models such as Convolutional Neural Networks. This paper evaluates the explainability of the traffic sign classifier, a Deep Neural Network (DNN) from the Programmable Systems for Intelligence in Automobiles (PRYSTINE) project. The explanation results were then used to compress the vague kernels of the PRYSTINE CNN classifier, and the precision of the classifier was evaluated under different pruning scenarios. The proposed methodology was realised by creating original traffic sign and traffic light classification and explanation code. First, the kernels of the network were evaluated for explainability: a post-hoc, local, meaningful-perturbation-based forward explanation method was integrated into the model to assess the status of each kernel. This method made it possible to distinguish high- and low-impact kernels in the CNN. Second, the vague kernels of the last layer before the fully connected layer were excluded by withdrawing them from the network. Third, the network's precision was evaluated at different kernel compression levels. It is shown that, using this XAI approach to network kernel compression, pruning 5% of the kernels leads to only a 1% loss in traffic sign and traffic light classification precision. The proposed methodology is crucial where execution time and processing capacity constraints prevail.
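A minimal PyTorch sketch of the underlying idea, perturbation-based kernel scoring followed by pruning of the lowest-impact kernels, is given below; the model, data loader, and the decision to leave biases untouched are assumptions, not the PRYSTINE code.

    import torch

    def accuracy(model, loader, device="cpu"):
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for x, y in loader:
                pred = model(x.to(device)).argmax(dim=1)
                correct += (pred == y.to(device)).sum().item()
                total += y.numel()
        return correct / total

    def kernel_impacts(model, conv_layer, loader):
        # Score each output kernel by the accuracy drop when it is zeroed out.
        base = accuracy(model, loader)
        impacts = []
        for k in range(conv_layer.out_channels):
            saved = conv_layer.weight.data[k].clone()
            conv_layer.weight.data[k].zero_()          # perturb: silence kernel k
            impacts.append(base - accuracy(model, loader))
            conv_layer.weight.data[k] = saved          # restore the kernel
        return impacts

    def prune_vague_kernels(model, conv_layer, loader, fraction=0.05):
        # Permanently zero the lowest-impact ("vague") kernels, e.g. 5% of them.
        impacts = kernel_impacts(model, conv_layer, loader)
        n = max(1, int(fraction * len(impacts)))
        for k in sorted(range(len(impacts)), key=impacts.__getitem__)[:n]:
            conv_layer.weight.data[k].zero_()          # biases left untouched for simplicity
        return model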
ARTICLE | doi:10.20944/preprints201811.0270.v1
Subject: Computer Science And Mathematics, Computer Science Keywords: cloud systems; AI; cognitive; programming languages; algorithms
Online: 12 November 2018 (08:49:16 CET)
The web ecosystem is rapidly evolving with changing business and functional models. Cloud platforms are available in SaaS, PaaS, and IaaS models designed around commoditized Linux-based servers. Ten billion users are expected to be online, accessing the web and its various content, and the industry has seen a convergence around IP-based technology. Additionally, Linux-based designs allow for system-wide profiling of application characteristics. The customer in this work is an OEM who provides Linux-based servers for telecom solutions; the end customer will develop business applications on the server. Customers are interested in a latency profiling mechanism that helps them understand how the application behaves at run time. The latency profiler must find the code paths that make an application block on I/O and other synchronization primitives. This allows the customer to understand performance bottlenecks and tune system and application parameters.
ARTICLE | doi:10.20944/preprints202305.1662.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Artificial intelligence (AI); Knowledge representation; Ontology; Epistemology; Conceptualization; Modal logic
Online: 23 May 2023 (14:20:17 CEST)
The deployment of Artificial Intelligence (AI) systems in decentralized environments is on the rise, yet representation of shared conceptualization in such scenarios remains a challenging issue. The absence of a shared understanding can lead to suboptimal performance of AI systems and hinders the ability to comprehend the knowledge and beliefs of agents in the domain. This paper proposes a formal model for modeling conceptualization in AI systems that integrates ontology, epistemology, and epistemic logic. The model aims to address the gap in representing shared conceptualization in decentralized environments and enhance the performance of AI systems operating in such environments. The proposed model is a hybrid structure that blends extensional and intensional structures and leverages logic-based languages for modeling purposes. A case study in the healthcare sector is presented to illustrate the application of the proposed model. The study contributes to the existing literature by providing a formal model for representing shared conceptualization in decentralized environments, which can be utilized to optimize the performance of AI systems in these environments.
ARTICLE | doi:10.20944/preprints201907.0110.v1
Subject: Arts And Humanities, Philosophy Keywords: causality; deep learning; machine learning; counterfactual; explainable AI; blended cognition; mechanisms; system
Online: 8 July 2019 (08:10:29 CEST)
Causality is the most important topic in the history of Western science, and since the beginning of the statistical paradigm its meaning has been reconceptualized many times. Causality entered the realm of multi-causal and statistical scenarios some centuries ago. Despite widespread criticism, today's Deep Learning and Machine Learning advances are not weakening causality but creating a new way of finding correlations among indirect factors. This process makes it possible to talk about approximate causality, as well as about a situated causality.
ARTICLE | doi:10.20944/preprints202305.1460.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: 6G; IoV; AI; edge computing; QoS; CNN; LSTM
Online: 22 May 2023 (04:41:28 CEST)
A fully connected world is expected to emerge in the sixth-generation mobile network (6G). As a typical fully connected scenario, the Internet of Vehicles (IoV) enables intelligent vehicle operations via artificial intelligence (AI) and edge computing technologies. In future vehicular networks, a wide variety of services will need powerful computing resources and higher quality of service (QoS), and existing resources are insufficient to match these requirements. To address this problem, an intelligent service offloading framework is proposed. Within this framework, an Algorithm of Improved Gradient Descent (AIGD) is created to accelerate iteration, which in turn speeds up the convergence of the convolutional neural network (CNN) trained with it. Then, an Algorithm of Convolutional Long Short-Term Memory (CN_LSTM) Based Traffic Prediction (ACLBTP) is designed to predict the number of vehicles belonging to each edge node. Finally, an Algorithm of Service Offloading Based on CN_LSTM (ASOBCL) offloads services to the vehicles belonging to the edge node; in ASOBCL, a sorting technique is adopted to speed up the offloading work. Simulation results demonstrate that the proposed prediction strategy has high accuracy, and that running ASOBCL achieves low offloading time while maintaining a stable load balance. Low offloading time means short response time, so QoS is guaranteed and the strategies designed in this paper are effective and valuable.
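A generic Keras CNN-LSTM of the kind the prediction step builds on might look as follows; the window length, feature count, and layer sizes are placeholders, and the paper's AIGD and ACLBTP specifics are not reproduced.

    import tensorflow as tf

    TIMESTEPS, FEATURES = 24, 8    # e.g. 24 past intervals, 8 per-node features (assumed)

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(TIMESTEPS, FEATURES)),
        tf.keras.layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1),  # predicted vehicle count at the edge node
    ])
    model.compile(optimizer="adam", loss="mse")
    # model.fit(X_train, y_train, epochs=50, batch_size=32)  # training data assumed available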
ARTICLE | doi:10.20944/preprints202304.1162.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Reinforcement learning; Decision tree; Explainable AI; Rule learning
Online: 28 April 2023 (10:14:59 CEST)
The demand for explainable and transparent models increases with the continued success of reinforcement learning. In this article, we explore the potential of generating shallow decision trees (DT) as simple and transparent surrogate models for opaque deep reinforcement learning (DRL) agents. We investigate three algorithms for generating training data for axis-parallel and oblique DTs with the help of DRL agents ("oracles") and evaluate these methods on classic control problems from OpenAI Gym. The results show that one of our newly developed algorithms, the iterative training, outperforms traditional sampling algorithms, resulting in well-performing DTs that often even surpass the oracle from which they were trained. Even higher dimensional problems can be solved with surprisingly shallow DTs. We discuss the advantages and disadvantages of different sampling methods and insights into the decision-making process made possible by the transparent nature of DTs. Our work contributes to the development of not only powerful but also explainable RL agents and highlights the potential of DTs as a simple and effective alternative to complex DRL models.
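The basic distillation loop can be sketched in a few lines, assuming a trained oracle exposing a predict(state) -> action method; the paper's iterative variant refines which states are collected, which this sketch omits.

    import gymnasium as gym      # maintained successor of OpenAI Gym
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    env = gym.make("CartPole-v1")

    def collect(oracle, episodes=50):
        # Roll out the oracle and record (state, action) pairs as DT training data.
        X, y = [], []
        for _ in range(episodes):
            state, _ = env.reset()
            done = False
            while not done:
                action = oracle.predict(state)     # oracle is a trained DRL agent (assumed)
                X.append(state)
                y.append(action)
                state, _, terminated, truncated, _ = env.step(action)
                done = terminated or truncated
        return np.array(X), np.array(y)

    # X, y = collect(oracle)
    # surrogate = DecisionTreeClassifier(max_depth=4).fit(X, y)   # shallow, transparent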
REVIEW | doi:10.20944/preprints202211.0302.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: AI; UAV; IoT; mobile edge computing; reinforcement learning
Online: 16 November 2022 (09:06:37 CET)
With the rapid development of the Internet of Things (IoT), the number of devices in the network is increasing dramatically, and infrastructure such as base stations alone cannot provide high-quality service to all devices. Therefore, thanks to their flexibility and economy, unmanned aerial vehicles (UAVs) are widely used to increase the performance of IoT networks. UAVs can not only provide communication services for IoT devices in the absence of a network, but can also perform video surveillance, cargo transportation, pesticide spraying, and other specialized tasks. However, due to the complexity of the scenarios and the need for real-time decision making, it is challenging to schedule UAVs in the network using traditional optimization methods, and growing attention has focused on using AI to optimize UAVs in the network. In this paper, we focus on AI-enabled UAV optimization methods in IoT networks and give a comprehensive overview of what AI-enabled methods can do and how to use them to increase the performance of UAV-assisted IoT networks. Moreover, we briefly analyze the challenges of using AI methods in IoT networks and outline some potential research directions.
ARTICLE | doi:10.20944/preprints201810.0030.v2
Subject: Computer Science And Mathematics, Computer Science Keywords: artificial general intelligence, AI policy, self-regulatory organization
Online: 8 November 2018 (10:52:39 CET)
The first group to build artificial general intelligence or AGI stands to gain a significant strategic and market advantage over competitors, so companies, universities, militaries, and other actors have strong incentives to race to build AGI first. An AGI race would be dangerous, though, because it would prioritize capabilities over safety and increase the risk of existential catastrophe. A self-regulatory organization (SRO) for AGI may be able to change incentives to favor safety over capabilities and encourage cooperation rather than racing.
ARTICLE | doi:10.20944/preprints201709.0062.v3
Subject: Computer Science And Mathematics, Information Systems Keywords: cognitive computing; cognition; AI; cognitive symbiosis; language; HCI
Online: 2 November 2017 (03:35:19 CET)
Cognitive Computing has become somewhat of a rallying call in the technology world, with the promise of new smart services offered by industry giants like IBM and Microsoft. The recent technological advances in Artificial Intelligence (AI) have thrown into the public sphere some old questions about the relationship between machine computation and human intelligence. Much of the industry and media hype suggests that many traditional challenges have been overcome. On the contrary, our simple examples from language processing demonstrate that present day Cognitive Computing still struggles with fundamental, long-standing problems in AI. An alternative interpretation of cognitive computing is presented, following Licklider's lead in adopting man-computer symbiosis as a metaphor for designing software systems that enhance human cognitive performance. A survey of existing proposals on this view suggests a distinction between weak and strong versions of symbiosis. We propose a Strong Cognitive Symbiosis which dictates an interdependence rather than simply cooperation between human and machine functioning, and introduce new software systems which were designed for cognitive symbiosis. We conclude that strong symbiosis presents a viable new perspective for the design of cognitive computing systems.
ARTICLE | doi:10.20944/preprints202109.0009.v1
Subject: Biology And Life Sciences, Biology And Biotechnology Keywords: Bacteria; Fungus; Boar semen; Progressivity; Fluconazole; Biosecurity of AI
Online: 1 September 2021 (11:52:42 CEST)
The aim of the study is to establish the complete microbiological profile of boar semen and to choose the most effective antiseptic measures in order to control and optimize AI (artificial insemination) reproduction in pigs. More than one hundred semen samples were collected and analyzed from several pig farms. The microbiological profile of the ejaculates was established by determining the degree of contamination of fresh semen and of semen diluted with specific extenders. The bacterial and fungal load of fresh boar semen averaged 82.41/0.149 x 10^3 CFU/mL, while after dilution the contamination value was 0.354/0.140 x 10^3 CFU/mL. Twenty-three bacterial and fungal species were isolated, the most common being Candida parapsilosis/sake (92%) and Escherichia coli (81.2%). Modification of the sperm collection protocol (HPBC) reduced contamination in raw semen (by 49.85% for bacteria and by 9.67% for fungi). The load of bacteria and filamentous fungi can be controlled, but not that of yeast-like fungi. Adding Fluconazole (12.5 mg%) to some extenders solves this problem and even increases sperm progressivity (by 8.39%) for at least a 12-hour shelf life after dilution. The experiment was validated by the sow fertility rate obtained after AI.
ARTICLE | doi:10.20944/preprints202304.0923.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Artificial Intelligence with respect to Cyber security; Artificial Intelligence and Cyber security; AI and Cybersecurity; Importance of AI with respect to Cyber security
Online: 25 April 2023 (10:35:26 CEST)
Artificial Intelligence has transformed the cyber security industry by enabling organizations to systematize and extend traditional security procedures. AI can provide more effective threat detection and response capabilities, enhance vulnerability management, and improve compliance and governance. AI technologies such as machine learning, natural language processing, behavioral analytics, and deep learning can strengthen cyber security defenses and protect against a wide range of cyber threats, including malware, phishing attacks, and insider threats. The theoretical underpinnings of these AI technologies in cyber security are discussed, as are the advantages of using AI in cyber security, including speed and accuracy, continuous learning and adaptation, and efficiency and scalability. It is important to note that AI is not a silver bullet for cyber security and should be used in conjunction with other security measures as part of a comprehensive defense strategy. AI has transformed the way cyber security operates in today's digital age: by analyzing vast amounts of data quickly and accurately, it has become a valuable tool for organizations looking to protect their assets from cyber threats.
ARTICLE | doi:10.20944/preprints202111.0112.v1
Subject: Medicine And Pharmacology, Other Keywords: forensic medicine; forensic dentistry; forensic anthropology; 3D CNN; AI; deep learning; biological age determination; sex determination; 3D cephalometric; AI face estimation; growth prediction
Online: 5 November 2021 (10:00:56 CET)
Three-dimensional convolutional neural networks (3D CNNs), a type of artificial intelligence (AI), are powerful tools in image processing and recognition, using deep learning to perform generative and descriptive tasks. The advantage of CNNs compared to their predecessors is that they automatically detect important features without human supervision. 3D CNNs extract features in three dimensions, where the input is a 3D volume or a sequence of 2D pictures, e.g., slices in a cone-beam computed tomography (CBCT) scan. The main aim of this article is to foster interdisciplinary cooperation between forensic medical experts and deep learning engineers, with an emphasis on engaging clinical forensic experts who may have only basic knowledge of advanced artificial intelligence techniques but an interest in applying them to advance forensic research. This paper introduces a novel workflow for 3D CNN analysis of full-head CBCT scans. The authors explore and present the 3D CNN method for forensic research design from five perspectives: (1) sex determination, (2) biological age estimation, (3) 3D cephalometric landmark annotation, (4) growth vector prediction, and (5) facial soft-tissue estimation from the skull and vice versa. In conclusion, the application of 3D CNNs can be a watershed moment in forensic medicine, leading to unprecedented improvement of forensic analysis workflows based on 3D neural networks.
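For readers new to the technique, a toy PyTorch 3D CNN over volumetric input (e.g., a CBCT volume) looks like the following; the architecture and input size are illustrative assumptions, not the authors' model.

    import torch
    import torch.nn as nn

    class Tiny3DCNN(nn.Module):
        def __init__(self, n_classes=2):            # e.g. two classes for sex determination
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            )
            self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, n_classes))

        def forward(self, x):                        # x: (batch, 1, depth, height, width)
            return self.head(self.features(x))

    model = Tiny3DCNN()
    out = model(torch.randn(2, 1, 64, 64, 64))       # two dummy 64x64x64 volumes
    print(out.shape)                                 # torch.Size([2, 2])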
ARTICLE | doi:10.20944/preprints202003.0299.v1
Subject: Computer Science And Mathematics, Information Systems Keywords: Data Mining; Alzheimer’s Dementia; Composite Hybrid Feature Selection; Machine learning; stack Hybrid Classification; AI; MRI; Neuroimaging; MPEG7 edge histogram feature extraction; CNN
Online: 19 March 2020 (11:25:01 CET)
Alzheimer's disease (AD) detection plays an essential role in global health care, because the disease is frequently misdiagnosed, shares many clinical features with other types of dementia, and is costly to monitor over time by magnetic resonance imaging (MRI), with the added consideration of human error in manual reading. This paper presents a comparative study of the performance of data mining techniques on two AD datasets, one clinical and one neuroimaging. In the first stage of the proposed model, the clinical dataset is passed through a composite hybrid feature selection (CHFS) step, which extracts new features and selects the best ones while eliminating obscure features. In parallel, a novel hybrid feature extraction combining three batch edge detection algorithms and texture features is applied to the MRI image dataset and optimized with a fuzzy 64-bin histogram. In the second stage, the clinical dataset is fed to a stacked hybrid classification (SHC) model that combines JRip and random forest classifiers, with six models evaluated individually as meta-classifiers, to improve the prediction of clinical diagnosis. At the same stage, the classification accuracy on the neuroimaging (MRI) dataset is improved by applying a convolutional neural network (CNN), compared with traditional classifiers run on the features extracted from the images. The authors collected a clinical dataset of 426 subjects (1,229 potential patient samples) from oasis.org and an MRI dataset from a kaggle.com benchmark with a total of around 5,000 images, each labeled by Alzheimer's severity. The datasets were evaluated using the Weka data mining software explorer for analysis. The experiments show that the proposed CHFS feature extraction effectively reduces the false-negative rate and yields a relatively high overall accuracy of 96.50% with the stacked hybrid classification using a support vector machine (SVM) as meta-classifier, compared to 68.83% in previous results on the clinical dataset, and compared to 80.21% for the CNN classification model on the MRI image dataset. The results show the superiority of the CHFS model in predicting Alzheimer's disease more accurately and at lower cost from the clinical dataset at an early stage than the MRI-CNN image model, while the proposed SHC model is also a good indicator of a high classification rate for MRI images.
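The stacking idea can be approximated in scikit-learn as below; JRip is a Weka-specific rule learner, so a decision tree stands in for it here, and all hyperparameters are placeholders.

    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    stack = StackingClassifier(
        estimators=[
            ("rules", DecisionTreeClassifier(max_depth=5)),    # stand-in for Weka's JRip
            ("forest", RandomForestClassifier(n_estimators=200)),
        ],
        final_estimator=SVC(),     # SVM meta-classifier, as in the best reported result
        cv=5,
    )
    # stack.fit(X_clinical, y_diagnosis)     # clinical features and labels assumed available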
REVIEW | doi:10.20944/preprints202009.0103.v1
Subject: Medicine And Pharmacology, Epidemiology And Infectious Diseases Keywords: climate change; vector-borne disease; artificial intelligence; explainable AI; geospatial modeling; infectious disease; arbovirus
Online: 4 September 2020 (12:21:32 CEST)
As recent history has shown, a changing climate threatens not only to increase the spread of known disease but also the emergence of new and dangerous phenotypes. This occurred most recently with West Nile virus: a virus previously known for mild febrile illness rapidly emerged to become a major cause of mortality and long-term disability throughout the world. As we move forward into increasingly uncertain times, public health research must begin to incorporate a broader understanding of the determinants of disease emergence – what, how, why, and when. The increasing mainstream availability of high-quality open data and high-powered analytical methods presents promising new opportunities. Up to now, quantitative models of disease outbreak risk have been largely based on just a few key drivers, namely climate and large-scale climatic effects. Such limited assessments, however, often overlook key interacting processes and downstream determinants more likely to drive the local manifestation of disease. Such pivotal determinants may include local host abundance, human behavioral variability, and population susceptibility dynamics. The results of such analyses can therefore be misleading in cases where necessary downstream requirements are not fulfilled. It is therefore important to develop models that include climate and higher-level climatic effects alongside the downstream non-climatic factors that ultimately determine individual disease manifestation. Today, few models attempt to comprehensively address such dynamics: up until very recently, the technology simply has not been available. Herein, we present an updated overview of current perspectives on the varying drivers and levels of interactions that drive disease spread. We review the predominant analytical paradigms, discuss their strengths and weaknesses, and highlight promising new analytical solutions. Our focus is on the prediction of arboviruses, particularly West Nile virus, as these diseases represent the pinnacle of epidemiological complexity – a solution to which would serve as an effective “gatekeeper”. We present the current state-of-the-art with respect to known drivers of arbovirus outbreak risk and severity, differentially highlighting the impact of climatic and non-climatic drivers. The reality of multiple classes of drivers interacting at different geospatial and temporal scales requires advanced new methodologies. We therefore close by presenting and discussing some promising new applications of AI. Given the reality of accelerating disease risks due to climate change, public health and other related fields must begin the process of updating their research programs to incorporate these much-needed new capabilities.
ARTICLE | doi:10.20944/preprints202205.0095.v1
Subject: Computer Science And Mathematics, Information Systems Keywords: Regression; AI based Tornado Analysis; Decision Support System; Mobile Application
Online: 9 May 2022 (03:15:11 CEST)
Tropical cyclones devastate large areas, take numerous lives, and damage extensive property in Bangladesh. Research on landfalling tropical cyclones affecting Bangladesh has primarily focused on events occurring since AD 1960, with limited work examining earlier historical records. We rectify this gap by developing a new tornado catalogue that includes present and past records of tornadoes across Bangladesh, maximizing the use of available sources. This new tornado database captures 119 records from 1838 to 2020, covering 8,735 deaths and 97,868 injuries, with more than 102,776 people affected in total. Moreover, using this new tornado data, we developed an end-to-end system that allows a user to explore and analyze the full range of tornado data under multiple scenarios. The user can select a date range or search a particular location, and all the tornado information, along with Artificial Intelligence (AI) based insights within the selected scope, is then dynamically presented on a range of devices, including iOS, Android, and Windows. Through a set of interactive maps, charts, graphs, and visualizations, the user gains a comprehensive understanding of the historical records of tornadoes and cyclones and associated landfalls, with detailed data distributions and statistics.
ARTICLE | doi:10.20944/preprints202105.0028.v1
Subject: Computer Science And Mathematics, Algebra And Number Theory Keywords: Blockchain Paradox; DISC Hypothesis; Smart Contract; DeFi; Oracle; AI; Sustainable
Online: 5 May 2021 (09:02:15 CEST)
While immutability is Blockchain's much-celebrated covenant, change is the rule of life. This paradox has seriously limited the real-world deployability of Smart Contracts (SC), faltering their mainstream adoption and sustainability. Once implemented, an SC remains unstoppable even if its execution makes losses, as evident in the recently exploded $50+B DeFi industry. How do we reconcile the two and make DeFi/Blockchain profitable and sustainable? A DISC (Dynamic Immutable Smart Contract) hypothesis was proposed to resolve the paradox. Using an existing decentralized IoT device framework, we test the DISC hypothesis by designing and implementing a DISC protocol that delivers an algorithmically controlled, dynamic off-chain data feed into a self-executing SC. The experiment successfully introduced limited dynamism into the SC without compromising its immutability or undermining user control over their SC terms. If consistently reproduced in diverse settings, the DISC protocol could mark an important milestone in the evolution of Blockchain's decentralized economy of the future.
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Internet of Things (IoT); Quality Assurance; Testing; Artificial Intelligence (AI)
Online: 9 December 2019 (07:39:47 CET)
IoT is a fast-growing technology with promising potential for shaping our future. In this fast-growing world of IoT, systems are often released without proper testing, which affects their quality and does not guarantee user satisfaction. Different testing methodologies are carried out to ensure the quality of IoT systems before they are released to the market. In this paper, we review different AI-based testing techniques and tools for assuring the quality of IoT, as well as the challenges IoT faces in relation to its quality.
ARTICLE | doi:10.20944/preprints202107.0265.v1
Subject: Computer Science And Mathematics, Algebra And Number Theory Keywords: Green AI; Sparse-views tomography; Learned Post Processing; UNet; Tomographic reconstruction
Online: 12 July 2021 (13:49:15 CEST)
Deep Learning is producing tools of great interest for inverse imaging applications. In this work, we consider a medical imaging reconstruction task from subsampled measurements, an active research field where Convolutional Neural Networks have already revealed their great potential. However, the commonly used architectures are very deep and hence prone to overfitting and unfeasible for clinical use. Inspired by the ideas of the green-AI literature, we propose a shallow neural network to perform efficient learned post-processing on images roughly reconstructed by the filtered backprojection algorithm. The results obtained on images from the training set and on unseen images, using both the inexpensive network and the widely used, very deep ResUNet, show that the proposed network computes images of comparable or higher quality in about one quarter of the time.
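A sketch of the pipeline follows: a sparse-view filtered backprojection followed by a shallow residual post-processing network, assuming scikit-image and PyTorch, with an invented layer count and view count.

    import numpy as np
    import torch
    import torch.nn as nn
    from skimage.transform import iradon, radon

    class ShallowPostNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1),
            )

        def forward(self, x):
            return x + self.net(x)     # residual correction of the coarse FBP image

    angles = np.linspace(0.0, 180.0, 30, endpoint=False)       # sparse views (assumed count)
    # sino = radon(phantom, theta=angles)                      # phantom image assumed available
    # coarse = iradon(sino, theta=angles, filter_name="ramp")  # rough FBP reconstruction
    # refined = ShallowPostNet()(torch.from_numpy(coarse).float()[None, None])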
REVIEW | doi:10.20944/preprints202004.0325.v1
Subject: Medicine And Pharmacology, Epidemiology And Infectious Diseases Keywords: COVID-19; artificial intelligence; AI; blockchain; epidemic; pandemic; machine learning; deep learning; coronavirus; SARS-CoV-2
Online: 19 April 2020 (04:44:37 CEST)
The beginning of 2020 has seen the emergence of coronavirus outbreak caused by a novel virus called SARS-CoV-2. The sudden explosion and uncontrolled worldwide spread of COVID-19 show the limitations of existing healthcare systems to timely handle public health emergencies. In such contexts, innovative technologies such as blockchain and Artificial Intelligence (AI) have emerged as promising solutions for fighting coronavirus epidemic. On the one hand, blockchain can combat pandemics by enabling early detection of outbreaks, protecting user privacy, and ensuring reliable medical supply chain during the outbreak tracking. On the other hand, AI provides intelligent solutions for identifying symptoms caused by coronavirus for treatments and supporting drug manufacturing. Motivated by these, in this paper we present an extensive survey on the use of blockchain and AI for combating coronavirus (COVID-19) epidemics based on the rapidly emerging literature. First, we introduce a new conceptual architecture which integrates blockchain and AI specific for COVID-19 fighting. Particularly, we highlight the key solutions that blockchain and AI can provide to combat the COVID-19 outbreak. Then, we survey the latest research efforts on the use of blockchain and AI for COVID-19 fighting in a wide range of applications. The newly emerging projects and use cases enabled by these technologies to deal with coronavirus pandemic are also presented. Finally, we point out challenges and future directions that motivate more research efforts to deal with future coronavirus-like epidemics.
REVIEW | doi:10.20944/preprints202304.0158.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: ChatGPT; OpenAI; Artificial Intelligence (AI); Machine Learning (ML); Large Language Models (LLM)
Online: 10 April 2023 (08:40:13 CEST)
Modern language models are created to produce writing that can be mistaken for sentences authored by humans. Moreover, these models can converse with humans in a way that seems natural and logical. The most technologically advanced chatbot to date is ChatGPT, a version of OpenAI's Generative Pre-trained Transformer (GPT) language model. It can generate high-quality content in mere seconds, surpassing the capabilities of other chatbots, and as a result it has generated a great deal of attention, enthusiasm, and interest across sectors and topics. This study provides an overview of the current research on ChatGPT, including its technological framework, support mechanisms, and implementation studies. Through this review, we explore the advantages and limitations of ChatGPT and propose future research directions.
REVIEW | doi:10.20944/preprints202204.0219.v1
Subject: Computer Science And Mathematics, Computer Vision And Graphics Keywords: Artificial intelligence (AI), machine learning; deep learning; medical imaging; tomography; image reconstruction
Online: 25 April 2022 (05:35:17 CEST)
Over recent years, the importance of patent literature has become more recognized in the academic setting. In the context of artificial intelligence, deep learning, and data science, patents are relevant not only to industry but also to academia and other communities. In this article, we focus on deep tomographic imaging and perform a preliminary landscape analysis of the related patent literature. Our search tool is PatSeer. Our patent bibliometric data are summarized in various figures and tables. In particular, we qualitatively analyze the key deep tomographic patent literature.
ARTICLE | doi:10.20944/preprints202012.0313.v2
Subject: Computer Science And Mathematics, Algebra And Number Theory Keywords: artificial intelligence; human-AI interaction; human factors; safety challenges; black-box challenge
Online: 8 January 2021 (13:50:32 CET)
In response to the need to address the safety challenges in the use of artificial intelligence (AI), this research aimed to develop a framework for a safety controlling system (SCS) to address the AI black-box mystery in the healthcare industry. The main objective was to propose safety guidelines for implementing AI black-box models to reduce the risk of potential healthcare-related incidents and accidents. The system was developed by adopting the multi-attribute value theory (MAVT) approach, which comprises four symmetrical parts: extracting attributes, generating weights for the attributes, developing a rating scale, and finalizing the system. On the basis of the MAVT approach, three layers of attributes were created: the first level contains 6 key dimensions, the second level 14 attributes, and the third level 78 attributes. The key first-level dimensions of the SCS are safety policies, incentives for clinicians, clinician and patient training, communication and interaction, planning of actions, and control of such actions. The proposed system may provide a basis for detecting AI utilization risks, preventing incidents from occurring, and developing emergency plans for AI-related risks. This approach could also guide and control the implementation of AI systems in the healthcare industry.
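A toy multi-attribute value calculation in the spirit of MAVT is shown below; the dimension weights and ratings are invented placeholders, not the study's elicited values.

    # Hypothetical first-level dimensions with assumed weights that sum to 1.
    weights = {"safety_policies": 0.25, "clinician_incentives": 0.10, "training": 0.15,
               "communication": 0.15, "action_planning": 0.20, "action_control": 0.15}

    ratings = {k: 3 for k in weights}    # each dimension rated on, e.g., a 1-5 scale

    # Weighted-sum aggregation of the attribute ratings into one overall score.
    score = sum(weights[k] * ratings[k] for k in weights)
    print(f"overall safety score: {score:.2f} / 5")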
ARTICLE | doi:10.20944/preprints202012.0092.v1
Subject: Engineering, Mechanical Engineering Keywords: anomaly detection; machine Learning; large stand off magnetometry; multimodal data; RAPIDS-AI
Online: 4 December 2020 (07:13:40 CET)
Pipeline integrity is an important area of concern for the oil and gas, refining, chemical, hydrogen, carbon sequestration, and electric-power industries, due to the safety risks associated with pipeline failures. Regular monitoring, inspection, and maintenance of these facilities are therefore required for safe operation. Large standoff magnetometry (LSM) is a non-intrusive, passive magnetometer-based measurement technology that has shown promise in detecting defects (anomalies) in regions of elevated mechanical stress. However, analyzing the noisy multi-sensor LSM data to clearly identify regions of anomalies is a significant challenge, mainly due to the high frequency of the data collection, misalignment between consecutive inspections and sensors, and the number of sensor measurements recorded. In this paper we present an LSM defect identification approach based on machine learning (ML). We show that this ML approach can successfully detect anomalous readings using a series of methods of increasing model complexity and capacity. The methods start from unsupervised learning with "point" methods and eventually increase in complexity to supervised learning with sequence methods and multi-output predictions. We observe data leakage issues for some methods with randomized train/test splitting and resolve them by specific non-randomized splitting of training and validation data. We also achieve a 200x acceleration of the support-vector classifier (SVC) method by porting computations from CPU to GPU leveraging the cuML RAPIDS AI library. For sequence methods, we develop a customized Convolutional Neural Network (CNN) architecture based on 1D convolutional filters to identify and characterize multiple properties of these defects. Finally, we report the scalability of the best-performing methods and compare them for viability in field trials.
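The reported GPU acceleration comes from cuML's scikit-learn-compatible API, so the swap is essentially an import change; the sequential split shown here mirrors the paper's non-randomized train/validation strategy, with data loading assumed.

    # from sklearn.svm import SVC    # CPU baseline
    from cuml.svm import SVC         # GPU version from the RAPIDS cuML library

    clf = SVC(kernel="rbf", C=1.0)
    # Non-randomized, sequential split to avoid leakage between overlapping
    # sensor windows, echoing the paper's observation (arrays X, y assumed).
    # cut = int(0.8 * len(X))
    # clf.fit(X[:cut], y[:cut])
    # y_pred = clf.predict(X[cut:])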
REVIEW | doi:10.20944/preprints202005.0350.v1
Subject: Medicine And Pharmacology, Other Keywords: Medical documentation; medicine; public health; computer networks; artificial intelligence; AI; smart city
Online: 22 May 2020 (10:43:06 CEST)
This work addresses the problem of applying artificial intelligence to the creation and maintenance of medical documentation and the use of big data in medicine to support efficient patient diagnosis and treatment. This study covers the latest advances in AI and big data, based on literature reviews and interviews with leading experts in these fields. The following conclusions were obtained: (1) Based on the needs of patients and providers of medical services, and given the latest technological advances, all medical documentation should be digital, and the processes of its creation, access, sharing, and consistency checking should be supported by suitably designed AI systems. (2) The knowledge contained in medical documentation constitutes a resource of strategic importance for humanity, with almost unlimited potential. (3) All medical documentation should be anonymised and made widely available, just like data and research results in the field of experimental physics. (4) This will accelerate the development of new treatments and best practice, and help to identify new medical emergencies, such as COVID-19. In practice today, unfortunately, the design of medical record systems is fragmented between institutions and countries, often focusing discussions on narrow technical details and forcing clinicians to waste time filling in multiple pages of illness history. This leads to many inefficiencies and lost opportunities and necessitates a fundamentally new approach.
Subject: Engineering, Bioengineering Keywords: AI; deep-learning; neural-networks; graph neural-networks; cheminformatics; molecular property; machine-learning; computational chemistry; lipophilicity; solubility
Online: 1 October 2021 (14:29:01 CEST)
The accurate prediction of molecular properties such as lipophilicity and aqueous solubility is of great importance and poses challenges at several stages of the drug discovery pipeline. Machine learning methods like graph-based neural networks (GNNs) have shown exceptionally good performance in predicting these properties. In this work, we introduce a novel GNN architecture, called the directed edge graph isomorphism network (D-GIN). It is composed of two distinct sub-architectures (D-MPNN, GIN) and achieves an improvement in accuracy over its sub-architectures employing various learning and featurization strategies. We argue that combining models with different key aspects helps make graph neural networks deeper and simultaneously increases their predictive power. Furthermore, we address current limitations in the assessment of deep-learning models, namely the comparison of single-training-run performance metrics, and offer a more robust solution.
ARTICLE | doi:10.20944/preprints202303.0326.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: document image processing; deskew; Hough Line Transform; image rectification; machine learning; OCR; document orientation; image preprocessing; computer vision; AI
Online: 17 March 2023 (13:25:06 CET)
Document deskewing is a fundamental problem in document image processing. Existing methods have limitations: Hough Line Transformation can leave images deskewed but upside down; deep learning models require huge amounts of human labour and computational resources and still fail to handle orientation while deskewing; and OCR-based methods struggle to read text when it is tilted. In this paper, we propose a novel, simple, cost-effective deep learning method for fixing both the skew and the orientation of documents. Our approach reduces the search space for the machine learning model to predicting whether an image is upside down or not, avoiding the huge search space of predicting an angle between 0 and 360 degrees. We fine-tuned a MobileNetV2 model pre-trained on ImageNet using only 200 images and achieved good results. This method is useful for automation tasks such as data extraction with OCR technology and can greatly reduce manual labour.
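A hedged Keras sketch of the described recipe, fine-tuning an ImageNet MobileNetV2 as a binary upside-down classifier, with hyperparameters and directory layout as assumptions:

    import tensorflow as tf

    base = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                             input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False           # train only the new head first

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(1, activation="sigmoid"),   # P(document is upside down)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # ~200 labeled page images, as in the paper; the directory name is hypothetical.
    # train = tf.keras.utils.image_dataset_from_directory("pages/", image_size=(224, 224),
    #                                                     label_mode="binary")
    # model.fit(train, epochs=10)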
ARTICLE | doi:10.20944/preprints202301.0474.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: ChatGPT; GPT-3; global trends; OpenAI; chatbots; digital health; artificial intelligence; automation; technological advancement; human-AI interaction; collaboration
Online: 26 January 2023 (08:37:22 CET)
This paper examines the potential of Artificial Intelligence (AI) to address societal megatrends, with a specific focus on OpenAI’s Generative Pre-Trained Transformer 3 (GPT-3). To do this, we conducted an analysis using GPT-3 in order to explore the benefits of AI for digitalization, urbanization, globalization, climate change, automation and mobility, global health issues, and the aging population; we also looked at emerging markets as well as sustainability. Interaction with GPT-3 was done solely through prompt questions, and the generated responses were analyzed. Our results indicate that AI can significantly improve our understanding of these trends by providing insights into how they develop over time and what solutions could be implemented. Further research is needed to determine how effective AI will be in addressing them successfully, but initial findings are encouraging. Our discussion focuses on the implications of our findings for society going forward and suggests that further investigation should be done into how best to utilize new technologies like GPT-3 when tackling these challenges. Lastly, we conclude that while there is still much work left to do before any tangible effects can be seen from utilizing AI tools such as GPT-3 on societal megatrends, early indications suggest they may have a positive impact if used correctly.
Subject: Engineering, Automotive Engineering Keywords: Industry 4.0; Supply Chain Design; Transformational Design Roadmap; IIoT Supply Chain Model; Decision Support for Information Management; Artificial Intelligence and Machine Learning (AI/ML); dynamic self-adapting system; cognition engine; predictive cyber risk analytics
Online: 23 December 2020 (17:20:35 CET)
Digital technologies have changed the way supply chain operations are structured. In this article, we conduct systematic syntheses of the literature on the impact of new technologies on supply chains and the related cyber risks. A taxonomic/cladistic approach is used to evaluate progress in the area of supply chain integration in the Industrial Internet of Things and Industry 4.0, with a specific focus on the mitigation of cyber risks. An analytical framework is presented, based on a critical assessment of issues related to new types of cyber risk and the integration of supply chains with new technologies. This paper identifies a dynamic and self-adapting supply chain system supported by Artificial Intelligence and Machine Learning (AI/ML) and real-time intelligence for predictive cyber risk analytics. The system is integrated into a cognition engine that enables predictive cyber risk analytics with real-time intelligence from IoT networks at the edge. This enhances capacities and assists in creating a comprehensive understanding of the opportunities and threats that arise when edge computing nodes are deployed and when AI/ML technologies are migrated to the periphery of IoT networks.
ARTICLE | doi:10.20944/preprints202206.0359.v1
Subject: Medicine And Pharmacology, Pathology And Pathobiology Keywords: Kaposi’s sarcoma-associated virus (KSHV); Viral pre-initiation complex (vPIC); Bimolecular fluorescence complementation (BiFC); Artificial intelligence (AI) structure prediction; AlphaFold2
Online: 27 June 2022 (10:15:34 CEST)
Kaposi’s sarcoma-associated herpesvirus (KSHV) is the etiologic agent of Kaposi’s sarcoma, primary effusion lymphoma (PEL), and multicentric Castleman’s disease. During KSHV lytic infection, lytic-related genes, categorized as immediate-early, early, and late genes, are expressed in a temporal manner. The transcription of late genes requires the virus-specific pre-initiation complex (vPIC), which consists of viral transcription factors. However, the protein-protein interactions of the vPIC factors have not been completely elucidated. KSHV ORF18 is one of the vPIC factors, and its interaction with other viral factors has not been sufficiently characterized. Here, we analyzed the interaction between ORF18 and another vPIC factor, ORF30, in living cells using the bimolecular fluorescence complementation (BiFC) assay. We identified four amino-acid residues (Leu29, Gln36, His41, and Trp170) of ORF18 that were responsible for its interaction with ORF30. The artificial intelligence (AI) system AlphaFold2 predicted that the four identified residues are exposed on the surface of ORF18 and located in close proximity to one another. Thus, the AI-predicted model supports the importance of these four residues for the binding of ORF18 to ORF30. Our results indicate that wet experiments in combination with AI may enhance the structural characterization of vPIC protein-protein interactions.
REVIEW | doi:10.20944/preprints202303.0464.v1
Subject: Medicine And Pharmacology, Pharmacology And Toxicology Keywords: Therapeutic proteins; Recombinant proteins; Repurposing; Reinventing; Drug-antibody combinations; AI/ML; Efficacy improvement; Immunogenicity; Artificial Intelligence; mRNA; High throughput analysis
Online: 27 March 2023 (14:02:03 CEST)
Reinventing approved therapeutic proteins for a new dose, a new formulation, a new route of administration, an improved safety profile, a new indication, or a new conjugate with a drug or a radioactive source is a creative approach to benefit from the billions spent on developing new therapeutic proteins. These new opportunities were created only recently with the arrival of AI/ML tools and high throughput screening technologies. Furthermore, the complex nature of proteins offers mining opportunities that are not possible with chemical drugs; bringing in newer therapies without spending billions makes this path highly lucrative financially while serving the dire needs of humanity. This paper analyses several practical reinventing approaches and suggests regulatory strategies to reduce development costs significantly. This should enable the entry of hundreds of new therapies at affordable costs.
ARTICLE | doi:10.20944/preprints202008.0645.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Speech Emotion Recognition; Emotion AI; Self-Supervised Learning; Transfer Learning; Low Resource Training; wav2vec
Online: 28 August 2020 (15:05:37 CEST)
We propose a novel transfer learning method for speech emotion recognition, allowing us to obtain promising results when only little training data is available. With as few as 125 examples per emotion class, we were able to reach a higher accuracy than a strong baseline trained on eight times more data. Our method leverages knowledge contained in pre-trained speech representations extracted from models trained on a more general self-supervised task which does not require human annotations, such as the wav2vec model. We provide detailed insights into the benefits of our approach by varying the training data size, which can help labeling teams work more efficiently. We compare performance with other popular methods on the IEMOCAP dataset, a well-benchmarked dataset in the Speech Emotion Recognition (SER) research community. Furthermore, we demonstrate that results can be greatly improved by combining acoustic and linguistic knowledge from transfer learning: we align acoustic pre-trained representations with semantic representations from the BERT model through an attention-based recurrent neural network. Performance improves significantly when combining both modalities and scales with the amount of data. When trained on the full IEMOCAP dataset, we reach a new state-of-the-art of 73.9% unweighted accuracy (UA).
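The feature-extraction half of the recipe can be sketched with torchaudio's pre-trained wav2vec 2.0 bundle; the mean-pooling and the small linear head below are simplifications of the paper's attention-based fusion, not its implementation.

    import torch
    import torchaudio

    bundle = torchaudio.pipelines.WAV2VEC2_BASE      # self-supervised pre-training, no labels
    encoder = bundle.get_model().eval()

    def utterance_embedding(waveform, sample_rate):
        # waveform: (batch, samples) float tensor.
        if sample_rate != bundle.sample_rate:
            waveform = torchaudio.functional.resample(waveform, sample_rate, bundle.sample_rate)
        with torch.no_grad():
            feats, _ = encoder.extract_features(waveform)
        return feats[-1].mean(dim=1)                 # mean-pool last layer: (batch, 768)

    head = torch.nn.Linear(768, 4)   # tiny classifier head; 4 emotion classes assumed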
REVIEW | doi:10.20944/preprints202212.0305.v1
Subject: Computer Science And Mathematics, Robotics Keywords: robotics; cloth-like deformable objects; deep reinforcement learning; deep imitation learning; human-robot interaction; knot theory; general embodied AI
Online: 16 December 2022 (12:51:23 CET)
Manipulating cloth-like deformable objects (CDOs) is a long-standing problem in the robotics community. CDOs are flexible (non-rigid) objects that show no detectable level of compression strength when two points on the article are pushed towards each other, and include objects such as ropes (1D), fabrics (2D), and bags (3D). In general, CDOs' many degrees of freedom (DoF) introduce severe self-occlusion and complex state-action dynamics, which are significant obstacles for perception and manipulation systems. These challenges exacerbate existing issues of modern robotic control methods such as imitation learning (IL) and reinforcement learning (RL). This review focuses on the application details of data-driven control methods in four major task families in this domain: cloth shaping, rope manipulation, dressing, and bag manipulation. Furthermore, we identify specific inductive biases in these four domains that present challenges for more general IL and RL algorithms, and summarise future directions for the development of the field.
ARTICLE | doi:10.20944/preprints202009.0647.v1
Subject: Computer Science And Mathematics, Algebra And Number Theory Keywords: Lung condition; COVID-19; Machine learning; Custom Vision; Core ML; Auto ML; AI; Pneumonia; Smartphone application; Real-time diagnosis
Online: 26 September 2020 (16:14:39 CEST)
AI is making inroads into all aspects of life, and medical services are no exception, especially in the field of medical image processing and diagnosis. Big IT and biotechnology companies are investing millions of dollars in medical AI research. The recent outbreak of SARS-CoV-2 gave us a unique opportunity to study a non-interventional and sustainable AI solution. Lung disease remains a major healthcare challenge with high morbidity and mortality worldwide; the predominant lung disease has been lung cancer, but recently the world witnessed the global COVID-19 pandemic caused by the novel coronavirus, and we have experienced how viral infection of the lungs and heart claimed thousands of lives worldwide. With the unprecedented advancement of Artificial Intelligence in recent years, machine learning can be used to easily detect and classify medical imagery; it is much faster and often more accurate than human radiologists, and once implemented it is more cost-effective and time-saving. In our study, we evaluated the efficacy of Microsoft Cognitive Services in detecting and classifying COVID-19-induced pneumonia versus other viral/bacterial pneumonia based on X-ray and CT images. We wanted to assess the implications and accuracy of an automated ML-based Rapid Application Development (RAD) environment in the field of medical image diagnosis, to better equip us to respond with an ML-based diagnostic Decision Support System (DSS) in a pandemic situation like COVID-19. After optimization, the trained network achieved 96.8% average precision and was deployed as a web application for consumption. However, the same trained network did not perform as well when ported to a smartphone for real-time inference, which was our main interest of study; the authors believe there is scope for further study on this issue. One of the main goals of this study was to develop and evaluate the performance of AI-powered, smartphone-based, real-time applications that can facilitate primary diagnostic services in less equipped and understaffed rural healthcare centers with unreliable internet service.
ARTICLE | doi:10.20944/preprints202302.0166.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: speed sport climbing; video analysis; KLT algorithm; Convolutional Neural Network (CNN); OpenPose; AI; artificial intelligence
Online: 9 February 2023 (11:23:49 CET)
This continuously developing project aims to build an informatics system enabling the analysis of spatial and temporal parameters of movement in the sport of speed climbing. The monitoring system (climbing information speed system, CISS) is intended for comprehensive scientific research in the field of speed climbing and enables the evaluation of the training process of climbers at various levels of competition. The analysis was based on video recorded with a camera positioned a short distance (10 m) from the wall, with a marker placed closest to the body's centre of mass (BMC). Results: we developed a system for data collection and analysis of the climbing run based on video recording, applying the Kanade-Lucas-Tomasi (KLT) algorithm. Our results showed that the devices used can measure a wide range of specific internal and external variables during speed climbing, and that some of the analyzed parameters were significantly correlated with speed climbing time. These results can serve as a theoretical basis for future research and for the preparation of training programs.
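The core tracking step is the standard OpenCV KLT loop sketched below; the video path, corner initialisation, and marker handling are assumptions, not the CISS implementation.

    import cv2

    cap = cv2.VideoCapture("climb.mp4")              # hypothetical recording
    ok, prev = cap.read()
    assert ok, "video not found"
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    # Pick strong corners to follow; a real system would initialise on the marker
    # nearest the climber's body mass centre.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=20, qualityLevel=0.3, minDistance=7)

    trajectory = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good = nxt[status.flatten() == 1].reshape(-1, 2)
        trajectory.append(good.mean(axis=0))         # mean tracked position per frame
        prev_gray, pts = gray, good.reshape(-1, 1, 2)
    cap.release()
    # Differentiating the trajectory over frame timestamps yields vertical speed.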
Subject: Computer Science And Mathematics, Algebra And Number Theory Keywords: intent-based networking; network management; 6G; industry 4.0; supply chain; ICT; AI; ML; Access Control
Online: 13 May 2021 (14:01:35 CEST)
The evolution towards Industry 4.0 is driving the need for innovative solutions in the area of network management, considering the complex, dynamic, and heterogeneous nature of ICT supply chains. To this end, Intent-Based Networking (IBN), which has already proven to transform how network management is carried out today, can be implemented as a solution to facilitate the management of large ICT supply chains. In this paper, we first present a comparison of the main architectural components of typical IBN systems and then study the key engineering requirements for integrating IBN with ICT supply chain network systems while considering AI methods. We also propose a general architecture design that enables the translation of ICT supply chain specifications into lower-level policies, and finally show an example of how access control is performed in a modelled ICT supply chain system.
ARTICLE | doi:10.20944/preprints202011.0451.v1
Subject: Computer Science And Mathematics, Algebra And Number Theory Keywords: Explainable AI; Cluster Analysis; Swarm Intelligence; Machine Learning System; High-Dimensional Data Visualization; Decision Trees
Online: 17 November 2020 (14:01:33 CET)
Understanding water quality and its underlying processes is important for the protection of aquatic environments, and this study benefited from the rare opportunity of access to a domain expert. Hence, an explainable AI (XAI) framework is proposed that is applicable to multivariate time series and produces explanations interpretable by a domain expert. The XAI combines, in three steps, a data-driven choice of a distance measure with explainable cluster analysis through supervised decision trees. The multivariate time series consists of water quality measurements, including nitrate, electrical conductivity, and twelve other environmental parameters. The relationships between water quality and the environmental parameters are investigated by identifying similar days within a cluster and dissimilar days between clusters. The XAI does not depend on prior knowledge about data structure, and its explanations tend to be contrastive. The relationships in the data can be visualized by a topographic map representing high-dimensional structures. Two comparable decision-based XAIs were unable to provide meaningful and relevant explanations from the multivariate time series data. Open-source code in R for the three steps of the XAI framework is provided.
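The paper ships R code; a compact Python analogue of the three steps (distance choice, cluster analysis, decision-tree explanation) could look like this, with the data, cluster count, and tree depth as placeholders.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import pdist
    from sklearn.tree import DecisionTreeClassifier, export_text

    X = np.random.rand(365, 14)      # placeholder: one row per day, 14 parameters

    D = pdist(X, metric="euclidean")                                        # step 1: distance choice
    labels = fcluster(linkage(D, method="ward"), t=3, criterion="maxclust") # step 2: cluster days
    tree = DecisionTreeClassifier(max_depth=3).fit(X, labels)               # step 3: explain clusters
    print(export_text(tree))         # human-readable rules separating the clusters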
ARTICLE | doi:10.20944/preprints201910.0360.v1
Subject: Computer Science And Mathematics, Mathematical And Computational Biology Keywords: Random Forest; Iterative Random Forest; gene expression networks; high performance computing; X-AI-based eQTL
Online: 31 October 2019 (02:33:17 CET)
As time progresses and technology improves, biological data sets are continuously increasing in size. New methods and new implementations of existing methods are needed to keep pace with this increase. In this paper, we present a high performance computing (HPC)-capable implementation of Iterative Random Forest (iRF). This new implementation enables explainable-AI eQTL analysis of SNP sets with over a million SNPs. Using this implementation, we also present a new method, iRF Leave One Out Prediction (iRF-LOOP), for the creation of Predictive Expression Networks on the order of 40,000 genes or more. We compare the new implementation of iRF with the previous R version and analyze its time to completion on two of the world's fastest supercomputers, Summit and Titan. We also show iRF-LOOP's ability to capture biologically significant results when creating Predictive Expression Networks. This new implementation of iRF will enable the analysis of biological data sets at scales that were previously not possible.
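A toy, non-HPC sketch of the iRF-LOOP idea: each gene is predicted from all others, and the resulting feature importances become directed edge weights of a predictive expression network. A plain random forest stands in for iRF here, and the scale is deliberately tiny; the HPC implementation distributes these independent fits.

```python
# iRF-LOOP sketch: one regression per gene, importances become network edges.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))          # 200 samples x 10 genes (toy scale)

edges = np.zeros((10, 10))
for g in range(10):
    y = X[:, g]
    others = np.delete(X, g, axis=1)
    rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(others, y)
    w = np.insert(rf.feature_importances_, g, 0.0)   # no self-edge
    edges[:, g] = w                      # importance of each gene for gene g

print("strongest predictive edge:", np.unravel_index(edges.argmax(), edges.shape))
```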
ARTICLE | doi:10.20944/preprints202305.1047.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Artificial Intelligence; Large Language Models; GPT-3.5; GPT-4; Google-BARD; Historical Fact-Checking; Distance to Reality; History; AI in Education; Gap Bridging
Online: 15 May 2023 (12:53:11 CEST)
The rapid proliferation of information in the digital era underscores the importance of accurate historical representation and interpretation. While artificial intelligence (AI) has shown promise in various fields, its potential for historical fact-checking and gap-filling remains largely untapped. This study evaluates the performance of three large language models (LLMs)—GPT-3.5, GPT-4, and Google-BARD—in the context of predicting and verifying historical events based on given data. A novel metric, "Distance to Reality" (DTR), is introduced to assess the models' outputs against established historical facts. The results reveal a substantial potential for AI in historical studies, with GPT-4 demonstrating superior performance. This paper underscores the need for further research into AI's role in enriching our understanding of the past and bridging historical knowledge gaps.
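The abstract does not spell out the DTR formula; one plausible minimal reading, used purely for illustration below, is a normalized string distance between a model's answer and the established fact (0 = exact agreement, 1 = maximal disagreement).

```python
# Hypothetical "Distance to Reality" scoring sketch; NOT the paper's formula.
from difflib import SequenceMatcher

def dtr(model_answer: str, established_fact: str) -> float:
    """Illustrative DTR: 1 minus the similarity ratio of the two statements."""
    return 1.0 - SequenceMatcher(None, model_answer.lower(),
                                 established_fact.lower()).ratio()

print(round(dtr("The Berlin Wall fell in 1989",
                "The Berlin Wall fell on 9 November 1989"), 2))
```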
ARTICLE | doi:10.20944/preprints202304.0350.v2
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: ChatGPT; Generative AI; Fake Publications; Human-Generated Publications; Supervised Learning; ML Algorithm; Fake Science; NeoNet Algorithm
Online: 17 April 2023 (08:12:18 CEST)
Background: ChatGPT is becoming a new reality. Where do we go from here? Objective: to show how we can distinguish ChatGPT-generated publications from counterparts produced by scientists. Methods: By means of a new algorithm, called xFakeBibs, we show the significant difference between ChatGPT-generated fake publications and real publications. Specifically, we triggered ChatGPT to generate 100 publications related to Alzheimer's disease and comorbidity. Using TF-IDF on the real publications, we constructed a training network of bigrams comprising 100 publications. Using 10 folds of 100 publications each, we also constructed 10 calibrating networks to derive lower/upper bounds for classifying articles as real or fake. The final step was to test xFakeBibs against each of the ChatGPT-generated articles and predict its class, assigning the POSITIVE label to real publications and NEGATIVE to fake ones. Results: When comparing the bigrams of the training set against the 10 calibrating folds, we found that the similarities fluctuated between 19% and 21%, whereas the bigram similarity of the ChatGPT articles was only 8%. Additionally, when testing how the bigrams generated from the 10 calibrating folds compared with those from ChatGPT, we found that each calibrating fold contributed 51%-70% new bigrams, while ChatGPT contributed only 23%, less than half of any calibrating fold. The final classification using xFakeBibs set a lower/upper bound of 21.96-24.93 new edges added to the training model without contributing new nodes. Using this calibration range, the algorithm predicted 98 of the 100 ChatGPT publications as fake, while 2 articles failed the test and were classified as real. Conclusions: This work provides clear evidence of how to distinguish, in bulk, ChatGPT-generated fake publications from real publications, and introduces an algorithmic approach that detects fake articles with a high degree of accuracy. However, it remains challenging to detect all fake records. ChatGPT may seem a useful tool, but it presents a threat to authentic knowledge and real science. This work is a step in the right direction to counter fake science and misinformation.
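A schematic reconstruction of the calibration logic described above: count how many new bigram edges a test article would add to the training network and compare the count with the calibrated lower/upper band. Tokenization is simplified, and since the abstract leaves the direction of the decision rule implicit, the rule below (inside the band = real) is an assumption.

```python
# Bigram-network calibration sketch in the spirit of xFakeBibs.
from itertools import pairwise   # Python 3.10+

def bigrams(text: str) -> set[tuple[str, str]]:
    toks = text.lower().split()
    return set(pairwise(toks))

def new_edge_count(article: str, training_edges: set) -> int:
    """New bigram edges this article would add to the training network."""
    return len(bigrams(article) - training_edges)

# Tiny stand-in for the real training network of bigrams.
training_edges = bigrams("amyloid plaques accumulate in alzheimer disease brains")
LOWER, UPPER = 21.96, 24.93      # calibrated band reported in the abstract

def classify(article: str) -> str:
    n = new_edge_count(article, training_edges)
    # Assumed rule: counts consistent with the calibrated band look "real".
    return "real (POSITIVE)" if LOWER <= n <= UPPER else "fake (NEGATIVE)"

print(classify("generated text with many novel bigram combinations here"))
```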
REVIEW | doi:10.20944/preprints202303.0116.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: XAI; AI; artificial intelligence; explainable; explainability; machine learning; deep learning; data science; big data; healthcare; medicine
Online: 7 March 2023 (01:43:13 CET)
Artificial Intelligence (AI) describes computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. Examples of AI techniques are machine learning, neural networks and deep learning. AI can be applied in many different areas, such as econometrics, biometry, e-commerce and the automotive industry. In recent years, AI has found its way into healthcare as well, helping doctors to make better decisions (‘clinical decision support’), localizing tumors in magnetic resonance images, reading and analyzing reports written by radiologists and pathologists, and much more. However, AI has one big risk: it can be perceived as a ‘black box’, limiting trust in its reliability, which is a very big issue in an area in which a decision can mean life or death. As a result, the term Explainable Artificial Intelligence (XAI) has been gaining momentum. XAI tries to ensure that AI algorithms (and the resulting decisions) can be understood by humans. In this narrative review, we will have a look at the current status of XAI in healthcare, describe several issues around XAI, and discuss whether it can really help healthcare to advance, for example by increasing understanding and trust. Finally, alternatives to increase trust in AI are discussed, as well as future research possibilities in the area of XAI.
ARTICLE | doi:10.20944/preprints202106.0372.v1
Subject: Engineering, Automotive Engineering Keywords: distributed engine control system (DECS), NGVA, Data Distribution Service (DDS), Artificial Intelligence (AI), Augmented Reality (AR)
Online: 14 June 2021 (15:02:28 CEST)
This report considers different aspects of the concept of a networked distributed engine control system (DECS) for future air vehicles. These aspects include the following: a structure of multiple networks similar to the NATO Generic Vehicle Architecture (NGVA), the role of Artificial Intelligence (AI) in DECS, and the use of Augmented Reality (AR) as a Human-Machine Interface between the AI and pilots. AI solutions for monitoring equipment in the on-board infrastructure can be deployed on physical or virtual servers or in the cloud. In this case, various methods of alerting the pilot and ground personnel on the basis of AR can be used. The use of AI makes it possible to cover an unlimited set of scenarios, assess the likelihood of equipment failure, classify alarms as normal or abnormal, and recognize developing defects. To collect Big Data from sensors and pre-process it before a machine learning (ML) procedure, it is proposed to form data sets with the help of the face-splitting matrix product. To decrease the reaction time of neural networks, the implementation of advanced tensor-matrix theory based on the penetrating face product of matrices is suggested. Other important results of the report are a possible version of the AR data format for DECS and a proposal to use non-orthogonal frequency discrete multiplexing (N-OFDM) signals for data transfer via fibre optics.
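The face-splitting matrix product mentioned for forming sensor data sets is well defined: row i of the result is the Kronecker product of row i of A with row i of B. A small NumPy illustration of that operation (the example matrices are arbitrary):

```python
# Face-splitting (transposed Khatri-Rao) product: row-wise Kronecker products.
import numpy as np

def face_splitting(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    assert A.shape[0] == B.shape[0], "A and B need the same number of rows"
    # result[i] = kron(A[i], B[i]); shape (rows, A_cols * B_cols)
    return np.einsum("ij,ik->ijk", A, B).reshape(A.shape[0], -1)

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1, 2], [5, 6, 7]])
print(face_splitting(A, B))   # row 0 -> [0 1 2 0 2 4]
```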
ARTICLE | doi:10.20944/preprints201810.0024.v1
Subject: Social Sciences, Political Science Keywords: Artificial General Intelligence; superintelligence; decisive strategic advantage; human goals; AI race; strategic stability; nuclear weapons; regulation
Online: 2 October 2018 (13:50:53 CEST)
AGI could arise within the next decades, promising a decisive strategic advantage. This paper discusses risks associated with the development of AGI: destabilizing effects on the strategic balance, underestimation of risks in the interest of victory in the race, and egoistic exploitation of the huge benefits by a tiny minority. Further, developed AGI could be beyond human control: human goals might not be implemented, and an intelligence explosion to superintelligence could take place, leading to a total loss of control and power. If competition for AGI is non-transparent, secret, uncontrolled and unregulated, it is possible that the risks could not be managed and would lead to catastrophic consequences. The danger corresponds to that of nuclear weapons. It is crucial that the key actors of a possible AI race agree at an early stage on the prevention and transparent regulation of such a race, similar to the measures taken to secure strategic stability: arms control, disarmament, and prevention of the proliferation of nuclear weapons. The realization that an uncontrolled AI race can lead to the extinction of humanity, this time even independent of human will, requires analogous measures to contain, prevent, regulate and secure an AI race within the framework of AGI development.
CONCEPT PAPER | doi:10.20944/preprints202302.0069.v1
Subject: Business, Economics And Management, Human Resources And Organizations Keywords: Skills Shortage; AI; Work 4.0; Digital Transformation; Adaptive Learning; Skill Development; HCI; Artificial Intelligence; Recruitment and Selection; Human Resources Information Systems; Augmentation; Substitution of Workforce; Augmentation Strategies
Online: 3 February 2023 (10:09:41 CET)
In order to counter the impending shortage of skilled professionals in the aging societies of many western countries, such as Germany, solutions for business and society are urgently needed. Here, artificial intelligence (AI) can play an important role in mitigating the problem through diverse applications. At the same time, it is important to consider the needs of both the respective employee and the company to ensure that the use of AI has a positive impact on the organization and finds social acceptance. In this article, we describe the newly developed OSQE model (Optimize, Secure, Qualify, Expand), which for the first time outlines an AI cycle against the shortage of skilled professionals in a holistic approach that focuses equally on people and companies. It can serve organizations as a guide for strategy development and for decision-making on and implementation of AI-supported measures across the entire cycle of an employee's affiliation with a company. The model takes three driving forces into account: companies, professionals, and AI applications. In the model, the measures to be implemented are prioritized with ascending numbering based on what would be most urgent for a company to implement. All measures relate to areas of action that place people at the center and can be assigned to the classic cycle of an employee's affiliation with the company. In this regard, the opportunities that AI offers to professionals and companies are highlighted.
ARTICLE | doi:10.20944/preprints202304.0593.v1
Subject: Arts And Humanities, Art Keywords: AI-based painting systems (AIBPS); Technology Acceptance Model (TAM); Behavioral intentions; User experience; Structural Equation Modelling (SEM)
Online: 19 April 2023 (13:27:16 CEST)
Artificial intelligence (AI) applications in different fields are developing rapidly; among them, AI painting technology, as an emerging technology, has received wide attention from users for its creativity and efficiency. This study investigated the factors that influence user acceptance of AI-based painting systems (AIBPS) by proposing an extended model that combines the Extended Technology Acceptance Model (ETAM) with AIBPS. A questionnaire was administered to 528 Chinese participants; the factor structure of the data was validated, and Structural Equation Modeling (SEM) was used to test the hypotheses. The findings showed that hedonic motivation (HM) and perceived trust (PT) had a positive effect (+) on users' perceived usefulness (PU) and perceived ease of use (PEOU), while previous experience (PEX) and technical features (TF) had no effect (-) on users' perceived usefulness (PU). This study makes an important contribution to the literature on AIBPS and the evaluation of systems of the same type, helping to promote the sustainable development of AI in different domains and offering a possible space for further extension of TAM, thus helping to improve the user experience of AIBPS. The results provide insights for system developers and enterprises seeking to better motivate users to use AIBPS.
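The abstract does not name the SEM tooling; as an illustration of how such TAM-style paths can be tested in code, the sketch below uses the Python semopy package on synthetic questionnaire scores, with construct names (HM, PT, PEX, TF, PU, PEOU, BI) standing in for the study's measured variables.

```python
# SEM path-model sketch with semopy on synthetic TAM-style survey scores.
import numpy as np
import pandas as pd
from semopy import Model

rng = np.random.default_rng(5)
n = 528  # matches the reported sample size; the scores below are synthetic
data = pd.DataFrame({
    "HM": rng.normal(size=n), "PT": rng.normal(size=n),
    "PEX": rng.normal(size=n), "TF": rng.normal(size=n),
})
data["PU"] = 0.5 * data["HM"] + 0.4 * data["PT"] + rng.normal(0, 0.5, n)
data["PEOU"] = 0.6 * data["HM"] + rng.normal(0, 0.5, n)
data["BI"] = 0.5 * data["PU"] + 0.3 * data["PEOU"] + rng.normal(0, 0.5, n)

# Hypothesized structural paths in lavaan-like syntax.
desc = """
PU ~ HM + PT + PEX + TF
PEOU ~ HM + PT
BI ~ PU + PEOU
"""
model = Model(desc)
model.fit(data)
print(model.inspect())   # path estimates and p-values
```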
ARTICLE | doi:10.20944/preprints202301.0162.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Content-based image classification; Data curation and preparation; Convolutional neural networks (CNN); Deep learning; Artificial intelligence (AI)
Online: 9 January 2023 (10:59:31 CET)
Background: MR image classification in datasets collected from multiple sources is complicated by inconsistent and missing DICOM metadata. Therefore, we aimed to establish a method for the efficient automatic classification of MR brain sequences. Methods: Deep convolutional neural networks (DCNN) were trained as one-vs-all classifiers to differentiate between six classes: T1-weighted (w), contrast-enhanced T1w, T2w, T2w-FLAIR, ADC, and SWI. Each classifier yields a probability, allowing threshold-based and relative probability assignment while excluding images with low probability (label: unknown; an open-set recognition problem). Data from three high-grade glioma (HGG) cohorts were assessed: C1 (320 patients, 20101 MRI images) was used for training, while C2 (197, 11333) and C3 (256, 3522) were used for testing. Two raters manually checked images through an interactive labeling tool. Finally, the added value of the resulting classifier (MR-Class) was evaluated via the performance of radiomics models for progression-free survival (PFS) prediction in C2, using the concordance index (C-I). Results: Approximately 10% annotation errors were observed in each cohort between the DICOM series descriptions and the derived labels. MR-Class accuracy was 96.7% [95%-CI: 95.8, 97.3] for C2 and 94.4% [93.6, 96.1] for C3. 620 images were misclassified; manual assessment of these frequently showed motion artifacts or anatomy altered by large tumors. Implementation of MR-Class increased the PFS model C-I by 14.6% on average compared to a model trained without MR-Class. Conclusions: We provide a DCNN-based method for sequence classification of brain MR images and demonstrate its usability in two independent HGG datasets.
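The open-set assignment rule described above can be made concrete in a few lines: each one-vs-all network emits a probability, the most probable class wins, and low-confidence images fall back to "unknown". The threshold value here is an assumption for illustration, not the paper's tuned setting.

```python
# Open-set class assignment from one-vs-all classifier probabilities.
import numpy as np

CLASSES = ["T1w", "T1w-CE", "T2w", "T2w-FLAIR", "ADC", "SWI"]
THRESHOLD = 0.5                      # assumed rejection threshold

def assign(probs: np.ndarray) -> str:
    """probs: one sigmoid output per one-vs-all classifier."""
    best = int(np.argmax(probs))
    return CLASSES[best] if probs[best] >= THRESHOLD else "unknown"

print(assign(np.array([0.05, 0.92, 0.10, 0.03, 0.01, 0.02])))  # -> T1w-CE
print(assign(np.array([0.20, 0.25, 0.22, 0.18, 0.10, 0.15])))  # -> unknown
```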
ARTICLE | doi:10.20944/preprints201810.0736.v2
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Artificial Intelligence (AI); artificial general intelligence (AGI); artificial social intelligence (ASI); social sciences; singularity; complexity; embodied cognition; value alignment
Online: 18 December 2018 (11:38:15 CET)
In this paper we present a review of recent developments in AI towards the possibility of an artificial intelligence that equals human intelligence. AI technology has always shown a stepwise increase in its capacity and complexity; the last step took place several years ago, with the rapid progress in deep neural network technology. Each such step goes hand in hand with our understanding of ourselves and of human cognition. Indeed, AI was always about the question of understanding human nature. AI percolates into our lives, changing our environment. We believe that the next few steps in AI technology, and in our understanding of human behavior, will bring about much more powerful machines, flexible enough to resemble human behavior. In this context, two research fields stand out: Artificial Social Intelligence (ASI) and Artificial General Intelligence (AGI). The authors also allude to one of the main challenges for AI, embodied cognition, and explain how it can be viewed as an opportunity for further progress in AI research.
ARTICLE | doi:10.20944/preprints202110.0064.v1
Subject: Business, Economics And Management, Economics Keywords: digital transit; sustainable development; labor market; professional employment; economic sustainability; pace of development; artificial intelligence (AI); corporate social responsibility.
Online: 4 October 2021 (15:36:30 CEST)
This article explores the question of the rate of digital progress in the context of the labor market. Specific features of the current situation are indicated: temporality of socio-technological transformations, which is becoming less and less compatible with the harmonious development of man and society; the pace at which machines acquire intelligence; total devaluation of mental labor; unresolved issue of the role of man in the world of intelligent machines; the criticality of the problem of the labor market, due to its global nature, social significance and the rate of socio-technological changes. It is emphasized that these circumstances in the short term threaten the sustainable development of the global society, whose reactions to the transformation of technological and socio-economic infrastructure are significantly lagging behind. It is concluded that there is an urgent need to strengthen social responsibility, determined by the new ethics of relations between humans and machines with AI, supplemented by the primacy of the dignity of the social role of humans. The authors point out the urgent need to revise ideas about work as the main purpose of a person and about realization in the profession as the main factor that determines the self-esteem of an individual and his social status.
ARTICLE | doi:10.20944/preprints201903.0131.v1
Subject: Engineering, Energy And Fuel Technology Keywords: energy consumption; prediction; machine learning models; deep learning models; artificial intelligence (AI); computational intelligence (CI); forecasting; soft computing (SC)
Online: 11 March 2019 (10:09:33 CET)
Machine learning (ML) methods have recently contributed substantially to the advancement of prediction models for energy consumption. Such models greatly improve the accuracy, robustness, precision, and generalization ability of conventional time series forecasting tools. This article reviews the state of the art of machine learning models used in general energy consumption applications. Through a novel search and taxonomy, the most relevant literature in the field is classified according to the ML modeling technique, energy type, prediction type, and application area. A comprehensive review of the literature identifies the major ML methods and their applications, and discusses the evaluation of their effectiveness in energy consumption prediction. The paper further draws conclusions on the trend and effectiveness of the ML models. As a result, this research reports an outstanding rise in accuracy and an ever-increasing performance of prediction technologies using novel hybrid and ensemble prediction models.
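As a compact illustration of the hybrid/ensemble trend the review reports, the sketch below averages heterogeneous regressors for a synthetic energy-consumption forecasting task; the features and data are placeholders, not drawn from any reviewed study.

```python
# Ensemble forecasting sketch: average heterogeneous regressors.
import numpy as np
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              VotingRegressor)
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
X = rng.uniform(size=(500, 4))   # e.g. temperature, hour, weekday, lagged load
y = 10 * X[:, 0] + 5 * np.sin(6 * X[:, 1]) + rng.normal(0, 0.5, 500)

ensemble = VotingRegressor([
    ("gbr", GradientBoostingRegressor()),
    ("rf", RandomForestRegressor(n_estimators=200)),
    ("ridge", Ridge()),
]).fit(X[:400], y[:400])

print("holdout R^2:", round(ensemble.score(X[400:], y[400:]), 3))
```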
ARTICLE | doi:10.20944/preprints202302.0065.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: OpenAI; ChatGPT; GPT-3; text-davinci-003; chatbots; prediction; scenario planning; scenario generation; future research; artificial intelligence; human-AI interaction; collaboration
Online: 3 February 2023 (09:34:38 CET)
Artificial Intelligence (AI) has the power to generate scenarios and make predictions through the use of advanced algorithms and machine learning techniques. OpenAI's GPT-3 is a state-of-the-art language model that has been trained on a large dataset of text, which allows it to generate human-like text and scenarios for different fields. However, GPT-3 was trained on data available up until June 2021, had no access to more recent data, and was not connected to the internet. In this study, we investigated the capability of OpenAI's GPT-3 to predict the Ukrainian war escalation, which started in 2022 and had massive geopolitical effects. We used GPT-3's capability to generate future scenarios, to check those scenarios for internal consistency, and to create a probability estimate. The results showed that although GPT-3 described an open war as one of the low-probability scenarios, its capability of predicting the future was limited. Furthermore, it became evident that the checking of internal consistency of the generated scenarios could be improved. We found GPT-3 very useful and powerful for generating future scenarios, but also concluded that its prediction capabilities for real-world events are limited, should be used with caution, and require further development.
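For context, scenario generation with text-davinci-003 at the time of the study went through the legacy OpenAI completions API (openai-python < 1.0); the sketch below shows the general shape of such a call, with a paraphrased prompt rather than the authors' actual wording or parameters.

```python
# Legacy OpenAI completions call for scenario generation (openai-python < 1.0).
import openai

openai.api_key = "sk-..."   # placeholder key

prompt = ("Generate three distinct future scenarios for the conflict in "
          "Ukraine over the next 12 months, then rate each scenario's "
          "probability and check the set for internal consistency.")

resp = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=600,
    temperature=0.7,
)
print(resp["choices"][0]["text"])
```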
REVIEW | doi:10.20944/preprints202103.0262.v1
Subject: Biology And Life Sciences, Immunology And Microbiology Keywords: immune checkpoint inhibitors; immune checkpoint radiolabeled antibodies; PD-1; PD-L1; immune PET; immunotherapy; AI; Radiomics; Deep learning; CAR-T cells
Online: 9 March 2021 (11:12:55 CET)
Immunotherapy is an effective therapeutic option for several cancers. In recent years, the introduction of immune checkpoint inhibitors (ICIs) has shifted the therapeutic landscape in oncology and improved patient prognosis in a variety of neoplastic diseases. However, to date, the selection of the patients best suited for these therapies, as well as the response assessment, is still challenging. Patients are mainly stratified using immunohistochemical analysis of the expression of antigens on biopsy specimens, such as PD-L1 and PD-1, on tumor cells, on peritumoral immune cells, and/or in the tumor microenvironment (TME). Recently, the use and development of imaging biomarkers able to assess cancer-related processes in vivo have become more important. Today, positron emission tomography (PET) with 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG) is used routinely to evaluate tumor metabolism, and also to predict and monitor response to immunotherapy. Although highly sensitive, FDG-PET is, in general, rather unspecific. Novel radiopharmaceuticals (immuno-PET radiotracers) able to identify specific immune system targets are under investigation in preclinical and clinical settings. In this review, we provide an overview of the main new immuno-PET radiotracers in development. We also review the main players (immune cells, tumor cells, and molecular targets) involved in immunotherapy. Furthermore, we report current applications and the evidence for using [18F]FDG PET in immunotherapy, including the use of artificial intelligence (AI).
ARTICLE | doi:10.20944/preprints202003.0297.v1
Subject: Computer Science And Mathematics, Information Systems Keywords: Data Mining; Alzheimer’s Dementia; Composite Hybrid Feature Selection; Machine learning; Stack Hybrid Classification; AI Techniques; Classification; AD Diagnose; Clinical AD Dataset
Online: 19 March 2020 (10:52:31 CET)
Alzheimer's disease (AD) is a common type of dementia that damages brain cells. Early detection of AD plays an essential role in global health care, since the disease is often misdiagnosed, shares many clinical signs with other types of dementia, and is costly to monitor over time with magnetic resonance imaging (MRI), with the additional consideration of human error in manual reading. In the first stage, our proposed model applies the medical dataset to a composite hybrid feature selection (CHFS) step that extracts and selects the best features, improving the performance of the classification process by eliminating obscure features. In the second stage, the dataset is passed to a stacked hybrid classification system that combines JRip and random forest classifiers, each with six model evaluations as meta-classifiers, to improve the prediction of the clinical diagnosis. All experiments were conducted on a laptop with an Intel Core i7-8750H CPU at 2.2 GHz and 16 GB of RAM running Windows 10 (64-bit). The dataset was evaluated using the explorer set of the Weka data mining software for the analysis. The experiments show that the proposed CHFS feature extraction performs better than principal component analysis (PCA) and effectively reduces the false-negative rate, with a relatively high overall accuracy of 96.50% using a support vector machine (SVM) as meta-classifier, compared to 68.83%, which is considerably better than the previous state-of-the-art result. The area under the receiver operating characteristic (ROC) curve was 95.5%. An additional experiment on the Kaggle MRI image dataset with a CNN classification process yielded 80.21% accuracy. The results show that the proposed model accurately classifies Alzheimer's clinical samples against MRI neuroimaging for diagnosing AD at a low cost.
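A rough scikit-learn analogue of the stacked hybrid classifier described above: the study uses Weka's JRip rule learner, which has no direct scikit-learn equivalent, so a shallow decision tree stands in for it here, with a random forest as the second base learner and an SVM meta-classifier as in the best-reported configuration. The data are synthetic.

```python
# Stacked hybrid classification sketch (JRip approximated by a shallow tree).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=20, random_state=0)

stack = StackingClassifier(
    estimators=[("rules", DecisionTreeClassifier(max_depth=3)),  # JRip stand-in
                ("rf", RandomForestClassifier(n_estimators=200))],
    final_estimator=SVC(),   # SVM meta-classifier, as in the best-reported run
)
print("CV accuracy:", cross_val_score(stack, X, y, cv=5).mean().round(3))
```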
ARTICLE | doi:10.20944/preprints202002.0368.v1
Subject: Engineering, Civil Engineering Keywords: Aridity Index (AI); Percentage of Normal Index (PNI); Standardized Precipitation-Evapotranspiration Index (SPEI); Standardized Precipitation Index (SPI); Drought; Factor Analysis; Reliability Analysis
Online: 25 February 2020 (11:09:28 CET)
Climate covers a series of events that deeply affect human life, and these events can be understood through spatial and statistical analyses. Today, climate change, one of the most important of these events, and the factors driving its consequences have become a pressing issue. Drought is cited as one of the consequences of climate change, and it is important to examine it with various methods since it can harm both the economy and nature. In this study, the drought status of the regions around ten meteorology stations and the effects of drought on climate change were statistically calculated and evaluated using the Standardized Precipitation Index (SPI), Percentage of Normal Index (PNI), Aridity Index (AI) and Standardized Precipitation-Evapotranspiration Index (SPEI). Precipitation data from 1981 to 2010 were obtained from the Cihanbeyli, Karapınar, Çumra, Seydişehir, Kulu, Ereğli, Niğde, Karaman, Beyşehir and Aksaray meteorology stations affiliated with the Turkish State Meteorological Service. Factor analysis and validity-reliability analysis were conducted to test whether the indices used in the study can be combined into a single index and to determine the reliability of the computations: the Kaiser-Meyer-Olkin (KMO) and Bartlett tests were used for exploratory factor analysis, and Cronbach's alpha coefficient was used for reliability analysis. K-Means cluster analysis was performed to determine the cutoff values of the indices. Based on the cluster analysis for the new (common) index, new clusters were created, and an ANOVA test was conducted to determine whether there were differences between the clusters.
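For reference, the SPI computation such studies rely on can be sketched in a few lines: fit a gamma distribution to an accumulated precipitation series and map the cumulative probabilities onto a standard normal. The data below are synthetic, and the handling of zero-precipitation months is omitted for brevity.

```python
# Standardized Precipitation Index (SPI) sketch: gamma fit + normal transform.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
precip = rng.gamma(shape=2.0, scale=30.0, size=360)   # synthetic monthly totals

a, loc, scale = stats.gamma.fit(precip, floc=0)       # location fixed at zero
cdf = stats.gamma.cdf(precip, a, loc=loc, scale=scale)
spi = stats.norm.ppf(cdf)                             # SPI values, ~N(0, 1)

print("months with SPI <= -1.5 (severe drought):", int((spi <= -1.5).sum()))
```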
ARTICLE | doi:10.20944/preprints202203.0345.v1
Subject: Biology And Life Sciences, Insect Science Keywords: insecticide-treated nets (ITN); pyrethroid; mosquito; strain characterisation; insecticide resistance; method development; durability monitoring; product evaluation; quality control (QC); dual active ingredients (dual-AI); bioefficacy
Online: 25 March 2022 (14:12:38 CET)
Durability monitoring of insecticide-treated nets (ITNs) containing a pyrethroid in combination with a second active ingredient (AI) must be adapted so that the insecticidal bioefficacy of each AI can be monitored independently. An effective way to do this is to measure rapid knock down of a pyrethroid-susceptible strain of mosquitoes to assess the bioefficacy of the pyrethroid component and to use a pyrethroid-resistant strain to measure the bioefficacy of the second ingredient. To allow robust comparison of results across tests within and between test facilities, and over time, protocols for bioefficacy testing must include either characterisation of the resistant strain, standardisation of the mosquitoes used for bioassays, or a combination of the two. Through a series of virtual meetings, key stakeholders and practitioners explored different approaches to achieving these goals. Via an iterative process we decided on the preferred approach and produced a protocol consisting of characterising mosquitoes used for bioefficacy testing before and after a round of bioassays, for example at each time point in a durability monitoring study. We present the final protocol and justify our approach to establishing a standard methodology for durability monitoring of ITNs containing pyrethroid and a second AI.
ARTICLE | doi:10.20944/preprints202304.0344.v1
Subject: Medicine And Pharmacology, Epidemiology And Infectious Diseases Keywords: Deep phenotyping; AI; 2D and 3D facial scans; Genetic diseases; Early treatment; Big data; Facial traits; Healthcare; Citizen privacy; Ethical concerns; EU legal framework; Forensic medicine; Orwellian ramifications
Online: 14 April 2023 (03:56:59 CEST)
One in 12 babies is born with a rare genetic disease. Sadly, most cases go undetected until a later age, missing the window for early treatment and the opportunity to prevent complications. Humanity has entered a new era in which Big Data collected by governments, including 2D and 3D facial scans, are available. Many rare genetic diseases can be identified by artificial intelligence (AI) analysis of a facial photo, and phenotyping AI applications facilitate comprehensive and accurate genetic evaluations. AI processing of this Big Data to identify rare genetic diseases could bring unimaginable benefits to healthcare, although it would be a questionable step in terms of citizen privacy and could lead to future "Orwellian" ramifications through government abuse. Going forward, a balance must be found between protecting citizens' privacy and the enticing use of AI to flag health risks and achieve cost savings through prevention. The unimaginable potential of AI early diagnostics from facial photos also raises various ethical and legal concerns. This paper presents the concept, potential methods, and legal and other limitations within the EU legal framework, contrasted with the potential benefits, focusing on the use of AI for the early diagnosis of rare genetic diseases. A paradigm shift in population screening for rare genetic diseases through AI face analysis is expected to have a significant impact. The potential of AI algorithms similar to the face2gene app, applied to the general population or systematically to big governmental datasets recording changes in facial traits over time, could have a significant impact on public health, but at the same time gives rise to profound concerns about the violation of individual privacy.
Subject: Engineering, Civil Engineering Keywords: evolutionary model, gene-expression programming (GEP), prediction, soil compression index, estimation, soil engineering, soil informatics, civil engineering, machine learning, data science, big data, soft computing, deep learning, forecasting, subject classification codes, construction informatics, computational intelligence (CI), artificial intelligence (AI)
Online: 25 March 2019 (10:25:18 CET)
Appropriate estimation of soil settlement is of significant importance since it directly influences the performance of buildings and infrastructure built on soil. In particular, the settlement of fine-grained soils is critical because of their low permeability and continuous settlement with time. The compression index (Cc) is a key parameter for estimating the settlement of fine-grained soil layers; however, determining it is time consuming and requires skilled technicians and specific equipment. In this study, Cc was estimated from several soil parameters, namely the liquid limit (LL), plastic limit (PL), and initial void ratio (e0). Measuring these parameters in the laboratory is straightforward and needs substantially less time and cost than conventional tests for Cc such as the oedometer test. This study presents a novel prediction model for the Cc of fine-grained soils using gene-expression programming (GEP), a biologically inspired technique capable of offering a closed-form expression for the optimal solution. A database consisting of 108 data points was used to develop the model, and a closed-form equation was derived to estimate Cc from LL, PL, and e0. The performance of the developed GEP-based model was evaluated through the coefficient of determination (R2), root mean squared error (RMSE), and mean average error (MAE). High R2 and low error values indicated the good performance of the model. The model was also evaluated using additional performance measures and met all the suggested criteria, and it performed better in terms of R2, RMSE, and MAE than most existing models. It is expected that the developed model will decrease the time and cost associated with determining the Cc of fine-grained soils.
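A sketch of the closed-form derivation workflow: the paper uses gene-expression programming, while the gplearn package implements classic genetic programming, a close relative, so the snippet below approximates the approach on synthetic LL/PL/e0 data rather than reproducing the authors' model.

```python
# Symbolic regression sketch: evolve a closed-form Cc estimate from LL, PL, e0.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(4)
LL = rng.uniform(30, 80, 108)        # 108 points, matching the database size
PL = rng.uniform(15, 35, 108)
e0 = rng.uniform(0.5, 1.5, 108)
X = np.column_stack([LL, PL, e0])
# Synthetic target loosely shaped like a classic LL-based Cc correlation.
Cc = 0.009 * (LL - 10) + 0.2 * e0 + rng.normal(0, 0.01, 108)

est = SymbolicRegressor(population_size=500, generations=20,
                        function_set=("add", "sub", "mul", "div"),
                        random_state=0)
est.fit(X, Cc)
print(est._program)        # the evolved closed-form expression
```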