STUDY ON SIGNIFICANT DRIFT IN THE DOMAIN OF EXPLAINABLE ARTIFICIAL INTELLIGENCE

Artificial Intelligence (AI) is in growing demand because many resource-intensive tasks must be completed on a daily basis, and automating such routine tasks is therefore attractive: it reduces an organisation's workload while improving efficiency. Furthermore, a business can free up talented personnel for business strategy by delegating routine work to AI. Explainability in XAI derives from a combination of strategies that improve the flexibility and interpretability of machine learning models. When an AI model is trained with a large number of variables that undergo alterations, the entire process turns into a black-box model that is difficult to understand. The data for this study's quantitative analysis is gathered from the IEEE, Web of Science, and Scopus databases. This study examined the variety of fields engaged in the Explainable Artificial Intelligence (XAI) trend, the techniques most commonly employed in the domain of XAI, the locations from which these studies were conducted, the year-by-year publishing trend, and the keywords occurring most frequently in abstracts. Ultimately, the quantitative review reveals that there is ample opportunity for further research employing XAI methodologies.


INTRODUCTION
Artificial intelligence is on track to revolutionize the worldwide economy, working environments, and cultures, as well as to generate immense wealth. XAI is a new and developing field that aims to improve the transparency of AI processes. The ultimate purpose of XAI is to assist people in better understanding, trusting, and managing the outcomes of AI technology.
The basic aim of XAI is to create more explainable models while continuing to improve learning performance and prediction accuracy. Through an in-depth examination of the model and data of an existing AI system, XAI enhances the application of AI in real-world environments.
The benefits of XAI can be found across a wide range of sectors and job activities, including healthcare, insurance, marketing, financial services, the autonomous-vehicle industry, and even IT services. Most of these technologies are black boxes, which means we have no idea how they function or why they make certain decisions.

Fig. 1: Black box in Artificial Intelligence
It is risky to rely on black-box conclusions without understanding how they are made. In sectors where black-box judgments can be life-changing and have important ramifications, such as medical diagnosis, crime prevention, and autonomous driving, interpretability is essential. For example, as shown in Figure 2, assume an image of a nut is given to a black-box predictive model. It is not enough to simply state, "It's brown, thus it's a nut". At the same time, unnecessary explanations must be avoided. When discussing a black-box decision system, it is critical to provide adequate information in a clear manner; in other words, explanations should be concise but accurate.
This article [1] points to the notion of Responsible Artificial Intelligence, a methodology for large-scale AI deployment in real businesses that prioritises fairness, model interpretability, and accountability. The article also explains how the process of analysing and transforming custom code can be automated in a clear and understandable manner.
An explainability taxonomy is created and the requirements for explanations are examined in terms of their functions [2]. A new approach is offered for synthesizing counterfactuals that incorporates innovative concepts such as counterfactual potential and case-based explanatory scope. The novel method recycles characteristics of good counterfactuals from a case database to create related counterfactuals that can explain fresh issues and solutions [3]. New XAI techniques are frequently founded on an explicit statement of what constitutes a successful explanation. In this study [4], the authors look at how rule-based and example-based explanation styles affect system behaviour, contextual relevance, and work engagement in the context of decision support for diabetes management. An overview of the history of Explainable Artificial Intelligence is given in this article [5]; it also discusses how explainability was traditionally imagined, how it is currently understood, and how it might be recognised in the future. The authors of the following article have used explainable AI (XAI), a developing subdiscipline of artificial intelligence, as a toolkit for better analysing Species Distribution Models (SDMs).
The goal of XAI is to decode the properties of different statistical and machine learning models, including neural networks, random forests, and decision trees, and to create more accessible and meaningful predictions [6]. Considering the African elephant, the authors carried out a systematic SDM analysis and demonstrated several XAI tools, such as local interpretable model-agnostic explanations (LIME), to explain the model's predictions [7].
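To make the LIME idea referenced above concrete, the sketch below implements its core mechanism from scratch: perturb an instance, query the black box, and fit a proximity-weighted linear surrogate whose coefficients serve as local feature importances. The `black_box` habitat-suitability function and the rainfall/elevation feature names are hypothetical illustrations, not the model or data of the cited SDM study; a real analysis would use the `lime` library against a trained SDM.

```python
import random

def lime_style_explanation(predict, instance, num_samples=500, scale=0.5):
    """Minimal sketch of LIME's core idea: fit a proximity-weighted
    linear surrogate to a black-box predictor around one instance."""
    random.seed(0)  # deterministic perturbations for the sketch
    xs, ys, ws = [], [], []
    for _ in range(num_samples):
        # Perturb the instance, query the black box, and weight the
        # sample by its proximity to the original point (RBF kernel).
        x = [v + random.gauss(0, scale) for v in instance]
        d2 = sum((a - b) ** 2 for a, b in zip(x, instance))
        xs.append(x)
        ys.append(predict(x))
        ws.append(2.718281828459045 ** (-d2))
    coeffs = []
    for j in range(len(instance)):
        # Weighted simple regression per feature (sketch only; a full
        # surrogate would solve the joint weighted least-squares system).
        sw = sum(ws)
        mx = sum(w * x[j] for w, x in zip(ws, xs)) / sw
        my = sum(w * y for w, y in zip(ws, ys)) / sw
        cov = sum(w * (x[j] - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
        var = sum(w * (x[j] - mx) ** 2 for w, x in zip(ws, xs))
        coeffs.append(cov / var if var else 0.0)
    return coeffs

# Hypothetical black box: habitat suitability driven mostly by rainfall
# (feature 0) and only weakly by elevation (feature 1).
black_box = lambda x: 1.0 if 3 * x[0] + 0.2 * x[1] > 2 else 0.0
importances = lime_style_explanation(black_box, [0.7, 0.5])
# Feature 0 receives a much larger surrogate coefficient than feature 1,
# correctly attributing the prediction to the dominant input.
```

The surrogate is only locally faithful: the coefficients describe the black box near the queried instance, not globally, which is exactly the trade-off LIME makes.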
Many Machine Learning based computing systems are opaque; it is difficult to understand why they do what they do or how they work. The goal of the authors [8] is to create an explanatory framework, with special consideration paid to the diverse explanatory demands of various stakeholders. The framework differentiates among the multiple questions that seek a description, identifies those expected to be asked by various stakeholders, and describes the broad ways in which these questions should be addressed so that these stakeholders can fulfil their responsibilities in the Machine Learning environment. The consistency, accuracy, and trust properties of gradient-based XAI algorithms are investigated using a novel black-box attack. The authors [9] demonstrate, using three security-based data sets and models, that the proposed system meets the attacker's goal of deceiving both the classifier and the explanatory report, and that only the explainability approach affects the classifier. The two threads that have emerged in the field of XAI are sometimes harmonious and sometimes contradictory. The first is concerned with the creation of practical tools for improving the transparency of automatically learned prediction models, such as deep learning or reinforcement learning models. The second aims to foresee the adverse implications of opaque models, with the goal of regulating or restricting the repercussions of inaccurate predictions, particularly in crucial fields such as medicine and law [10].
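The "gradient-based" explanation methods discussed above all rest on one quantity: the derivative of the model's score with respect to each input feature. The sketch below approximates that quantity by central finite differences for a hypothetical smooth scoring function (`score` is an invented example, not the attacked models of [9]); attacks of the kind studied in [9] work by nudging inputs so that these gradients, and hence the explanation, change while the prediction does not.

```python
def gradient_saliency(f, x, eps=1e-6):
    """Central finite-difference approximation of df/dx_i: the core
    attribution used by gradient-based explanation methods."""
    grads = []
    for i in range(len(x)):
        up = x[:i] + [x[i] + eps] + x[i + 1:]
        dn = x[:i] + [x[i] - eps] + x[i + 1:]
        grads.append((f(up) - f(dn)) / (2 * eps))
    return grads

# Hypothetical smooth scorer in which feature 0 dominates the output.
score = lambda x: 4 * x[0] ** 2 + 0.1 * x[1]
saliency = gradient_saliency(score, [1.0, 1.0])
# saliency is approximately [8.0, 0.1]: feature 0 is attributed
# far more importance than feature 1 at this input.
```

In practice frameworks compute these gradients exactly by backpropagation; finite differences are used here only to keep the sketch dependency-free.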
The purpose of this paper is to present a bibliometric analysis of scientific effort in the field of Explainable Artificial Intelligence. The basic goal of XAI is to create more explainable models while maintaining excellent learning performance.
Explainable Artificial Intelligence needs to address the requirements summarised in Table 1: the system must be efficient enough to provide a description of its results; the description must be meaningful, i.e. users of the AI system must be able to comprehend it (Meaningful); the description should comprehensively explain how the AI system generates its results (Reliability); and the AI system should operate only within the constraints for which it was designed (Limitations).

Understanding and interpreting the outputs of AI systems is becoming increasingly vital as they become more extensively used and are applied to more significant decisions. These objectives make XAI a more reliable and better version of Artificial Intelligence.

DATA COLLECTION
The availability of information on the aforementioned topic was examined in several data repositories, namely Scopus, Web of Science, and IEEE Xplore. Table 2 lists the terms that were used to build the search queries. Figure 2 shows a slow but steady rise in the number of publications over the period 2010 to 2021.

Fig. 2: Year-wise publications
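The year-wise trend can be tabulated directly from the record lists exported by these databases. The sketch below shows the counting step on a hypothetical handful of (title, year) records; the titles and counts are invented for illustration and are not the study's actual search results, which would be read from a Scopus/IEEE/Web of Science CSV export.

```python
from collections import Counter

# Hypothetical export: (title, year) pairs as they might appear in a
# CSV download from Scopus, IEEE Xplore, or Web of Science.
records = [
    ("Survey of XAI methods", 2019),
    ("LIME for species models", 2020),
    ("Counterfactual explanations", 2020),
    ("Responsible AI at scale", 2021),
    ("Rule-based explanation styles", 2021),
    ("Attacking gradient explanations", 2021),
]

# Count publications per year, then sort by year to expose the trend.
per_year = Counter(year for _, year in records)
trend = sorted(per_year.items())
# trend → [(2019, 1), (2020, 2), (2021, 3)]
```

Plotting `trend` as a bar chart yields a figure of the same kind as the year-wise publication graph.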

Recent utilization of technologies in XAI
This information is retrieved from the IEEE database; the publications are distributed according to the recent technologies and methodologies employed, as shown in Table 6. The VOSviewer software is used to perform keyword-based analysis. In all, 96 keywords matched the threshold, with the minimum co-occurrence frequency set to 15 in VOSviewer. The keyword-based network visualisation is illustrated in Figure 5. It is observed that "intelligence" is the most frequently occurring keyword.
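The co-occurrence counting that underlies a VOSviewer map can be sketched in a few lines: for each publication's keyword set, every keyword pair appearing together is tallied, and pairs above a threshold become network edges. The three keyword sets below are invented examples, not the actual 96-keyword corpus of this study.

```python
from collections import Counter
from itertools import combinations

# Hypothetical keyword sets, one per publication abstract.
abstracts = [
    {"intelligence", "explainability", "deep learning"},
    {"intelligence", "interpretability", "deep learning"},
    {"intelligence", "explainability", "trust"},
]

# Count how often each keyword pair appears in the same abstract; this
# co-occurrence table is what tools like VOSviewer visualise as a network.
pairs = Counter()
for kws in abstracts:
    for a, b in combinations(sorted(kws), 2):
        pairs[(a, b)] += 1

# Single-keyword occurrence counts identify the most frequent keyword.
occurrences = Counter(k for kws in abstracts for k in kws)
most_common_keyword = occurrences.most_common(1)[0][0]
# "intelligence" appears in all three sets, so it ranks first.
```

In VOSviewer the same pair counts also drive node placement and the density colouring, with heavily co-occurring keywords drawn closer together.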
The density mapping of keywords is shown in Figure 6. Yellow-coloured regions represent relatively higher density.