Preprint Article · Version 2 · Preserved in Portico · This version is not peer-reviewed

Explainable Artificial Intelligence for Anomaly Detection and Prognostic of Gas Turbines using Uncertainty Quantification with Sensor-Related Data

Version 1 : Received: 1 September 2021 / Approved: 2 September 2021 / Online: 2 September 2021 (11:36:48 CEST)
Version 2 : Received: 15 September 2021 / Approved: 16 September 2021 / Online: 16 September 2021 (11:47:31 CEST)

How to cite: Nor, A.K.M.; Pedapati, S.R.; Muhammad, M.; Leiva, V. Explainable Artificial Intelligence for Anomaly Detection and Prognostic of Gas Turbines using Uncertainty Quantification with Sensor-Related Data. Preprints 2021, 2021090034 (doi: 10.20944/preprints202109.0034.v2).

Abstract

Explainable artificial intelligence (XAI) is in its assimilation phase in prognostics and health management (PHM). The PHM-XAI literature is deficient with respect to uncertainty quantification metrics and explanation evaluation. This paper proposes a new method of anomaly detection and prognostics for gas turbines using Bayesian deep learning and Shapley additive explanations (SHAP). The method explains the anomaly detection and prognostic outputs and improves prognostic performance, aspects that have not been considered in the PHM-XAI literature. The uncertainty measures considered serve to broaden the explanation scope and can also be exploited as anomaly indicators. Real-world gas turbine sensor-related data were used to test the anomaly detection, while NASA commercial modular aero-propulsion system simulation (CMAPSS) data, related to turbofan sensors, were used for the prognostics. The generated explanation is evaluated using two metrics: consistency and local accuracy. All anomalies were successfully detected using the uncertainty indicators. Meanwhile, the turbofan prognostic results showed up to a 9% improvement in root mean square error and a 43% enhancement in early prognostics due to SHAP, making the method comparable to the best existing approaches. XAI and uncertainty quantification together offer a comprehensive explanation for assisting decision-making. Additionally, SHAP's ability to increase PHM performance confirms its value in AI-based reliability research.
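The local-accuracy metric used above states that a feature attribution's contributions must sum to the difference between the model's prediction and a baseline prediction. A minimal sketch of this property, computing exact Shapley values by subset enumeration for a hypothetical three-sensor linear model (the model, readings, and baseline below are illustrative assumptions, not the paper's actual turbofan model):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x against a baseline.

    For each feature i, average the marginal contribution of switching
    feature i from its baseline value to its observed value over all
    subsets S of the remaining features, with the standard Shapley weight
    |S|! (n - |S| - 1)! / n!.
    """
    n = len(x)
    feats = list(range(n))
    phi = [0.0] * n
    for i in feats:
        others = [j for j in feats if j != i]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                def value(subset):
                    # Features in `subset` take observed values; the rest stay at baseline.
                    z = [x[j] if j in subset else baseline[j] for j in feats]
                    return f(z)
                phi[i] += weight * (value(set(S) | {i}) - value(set(S)))
    return phi

# Hypothetical sensor model: weighted sum of three sensor readings.
model = lambda z: 2.0 * z[0] - 1.0 * z[1] + 0.5 * z[2]
x = [1.0, 2.0, 4.0]        # observed (possibly anomalous) readings
base = [0.0, 0.0, 0.0]     # reference (healthy) readings

phi = shapley_values(model, x, base)
# Local accuracy: the attributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (model(x) - model(base))) < 1e-9
```

For a linear model each Shapley value reduces to the weight times the feature's deviation from baseline, so `phi` here is `[2.0, -2.0, 2.0]`; in practice the paper relies on the SHAP framework, whose approximations are designed to preserve this same additivity.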

Keywords

Artificial intelligence; CMAPSS; consistency and local accuracy; CUSUM chart; deep learning; prognostic and health management; RMSE; sensing and data extraction; SHAP; uncertainty; XAI

Subject

MATHEMATICS & COMPUTER SCIENCE, Artificial Intelligence & Robotics


