Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Application of Explainable AI (XAI) for Anomaly Detection and Prognostics of Gas Turbines with Uncertainty Quantification

Version 1 : Received: 1 September 2021 / Approved: 2 September 2021 / Online: 2 September 2021 (11:36:48 CEST)
Version 2 : Received: 15 September 2021 / Approved: 16 September 2021 / Online: 16 September 2021 (11:47:31 CEST)

How to cite: Nor, A.K.M.; Pedapati, S.R.; Muhammad, M. Application of Explainable AI (XAI) for Anomaly Detection and Prognostics of Gas Turbines with Uncertainty Quantification. Preprints 2021, 2021090034 (doi: 10.20944/preprints202109.0034.v1).

Abstract

XAI is presently in its early assimilation phase in the Prognostics and Health Management (PHM) domain. However, the handful of existing PHM-XAI articles suffer from various deficiencies, among them a lack of uncertainty quantification and of explanation evaluation metrics. This paper proposes anomaly detection and prognostics for gas turbines using a Bayesian deep learning (DL) model with SHapley Additive exPlanations (SHAP). SHAP was applied not only to explain both tasks, but also to improve prognostic performance, the latter capability being undocumented in previous PHM-XAI work. The uncertainty measure serves to broaden the explanation scope and was also exploited as an anomaly indicator. Real gas turbine data was used for the anomaly detection task, while the NASA CMAPSS turbofan datasets were used for prognostics. The generated explanations were evaluated using two metrics: Local Accuracy and Consistency. All anomalies were successfully detected thanks to the uncertainty indicator. Meanwhile, the turbofan prognostic results show up to a 9% improvement in RMSE and a 43% enhancement in early prognostics attributable to SHAP, making the method comparable to the best published approaches for this problem. XAI and uncertainty quantification together offer a comprehensive explanation package that assists decision making. Additionally, SHAP's ability to boost PHM performance solidifies its worth in AI-based reliability research.
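The Local Accuracy metric mentioned above is a standard SHAP property: the feature attributions for a prediction must sum to the difference between the model's output and its baseline (expected) output. The sketch below is a minimal, self-contained illustration of that property using an exact brute-force Shapley computation on a toy surrogate model; it is not the authors' implementation, and the model and feature values are hypothetical stand-ins for gas-turbine sensor inputs.

```python
from itertools import combinations
from math import factorial

def exact_shap(f, x, baseline):
    """Exact Shapley values for the prediction f(x) relative to f(baseline).

    Features excluded from a coalition are replaced by their baseline value.
    Brute-force over all coalitions: only feasible for a handful of features.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley kernel weight for a coalition of size |S|.
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

# Hypothetical surrogate "health score" over three sensor readings.
f = lambda v: 2.0 * v[0] + 0.5 * v[1] - 1.0 * v[2]
x, base = [1.0, 4.0, 2.0], [0.0, 0.0, 0.0]
phi = exact_shap(f, x, base)

# Local Accuracy: attributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (f(x) - f(base))) < 1e-9
```

For a linear model the attributions recover each term's contribution exactly; for the deep models used in the paper, libraries such as `shap` approximate these values, and Local Accuracy serves as a check that the approximation decomposes the prediction faithfully.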

Keywords

XAI; SHAP; Uncertainty; PHM; Anomaly Detection; Prognostics
