Version 1: Received: 1 September 2021 / Approved: 2 September 2021 / Online: 2 September 2021 (11:36:48 CEST)
Version 2: Received: 15 September 2021 / Approved: 16 September 2021 / Online: 16 September 2021 (11:47:31 CEST)
Version 3: Received: 11 January 2022 / Approved: 12 January 2022 / Online: 12 January 2022 (10:22:47 CET)
How to cite:
Nor, A. K. M.; Pedapati, S. R.; Muhammad, M. Application of Explainable AI (XAI) for Anomaly Detection and Prognostic of Gas Turbines with Uncertainty Quantification. Preprints 2021, 2021090034. https://doi.org/10.20944/preprints202109.0034.v1
APA Style
Nor, A. K. M., Pedapati, S. R., & Muhammad, M. (2021). Application of Explainable AI (XAI) for Anomaly Detection and Prognostic of Gas Turbines with Uncertainty Quantification. Preprints. https://doi.org/10.20944/preprints202109.0034.v1
Chicago/Turabian Style
Nor, A. K. M., Srinivasa Rao Pedapati, and Masdi Muhammad. 2021. "Application of Explainable AI (XAI) for Anomaly Detection and Prognostic of Gas Turbines with Uncertainty Quantification." Preprints. https://doi.org/10.20944/preprints202109.0034.v1
Abstract
XAI is presently in its early assimilation phase in the Prognostic and Health Management (PHM) domain. However, the handful of existing PHM-XAI articles suffer from various deficiencies, among them a lack of uncertainty quantification and of explanation evaluation metrics. This paper proposes anomaly detection and prognostics for gas turbines using a Bayesian deep learning (DL) model with SHapley Additive exPlanations (SHAP). SHAP was applied not only to explain both tasks but also to improve prognostic performance, a use left undocumented in previous PHM-XAI work. The uncertainty measure serves to broaden the scope of the explanation and was also exploited as an anomaly indicator. Real gas turbine data were used to test the anomaly detection task, while the NASA CMAPSS turbofan datasets were used for prognostics. The generated explanations were evaluated using two metrics: Local Accuracy and Consistency. All anomalies were successfully detected thanks to the uncertainty indicator. Meanwhile, the turbofan prognostic results show up to a 9% improvement in RMSE and a 43% enhancement in early prognostics due to SHAP, making the method comparable to the best published approaches for this problem. XAI and uncertainty quantification together offer a comprehensive explanation package that assists decision making. Additionally, SHAP's ability to boost PHM performance solidifies its worth in AI-based reliability research.
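The two ingredients the abstract combines, a probabilistic model whose predictive uncertainty serves as an anomaly indicator, and Shapley attributions checked against the Local Accuracy property, can be sketched in a small self-contained example. This is an illustrative toy, not the authors' implementation: a bootstrap ensemble of linear models stands in for the paper's Bayesian DL model, and all data, feature counts, and thresholds are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "healthy" sensor data: 200 samples, 4 features.
X = rng.normal(0.0, 1.0, size=(200, 4))
w_true = np.array([0.8, -0.5, 0.3, 1.2])
y = X @ w_true + rng.normal(0.0, 0.1, size=200)

# Ensemble stand-in for a Bayesian model: fit least-squares weights on
# bootstrap resamples; the spread of member predictions approximates
# epistemic uncertainty.
W = np.stack([
    np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
    for idx in (rng.integers(0, len(X), len(X)) for _ in range(30))
])

def predict_with_uncertainty(x):
    preds = W @ x                      # one prediction per ensemble member
    return preds.mean(), preds.std()

# Anomaly indicator: flag inputs whose predictive std exceeds a
# threshold calibrated on the healthy data (3-sigma rule, hypothetical).
healthy_std = np.array([predict_with_uncertainty(x)[1] for x in X])
threshold = healthy_std.mean() + 3 * healthy_std.std()

x_anomaly = np.array([20.0, -15.0, 25.0, -20.0])  # far outside training range
mean, std = predict_with_uncertainty(x_anomaly)
print("anomalous:", std > threshold)

# SHAP-style attribution: for a linear model the exact Shapley value of
# feature i is w_i * (x_i - E[x_i]). Local Accuracy (one of the two
# evaluation metrics named in the abstract) requires the attributions to
# sum to f(x) minus the baseline prediction f(E[x]).
w_mean = W.mean(axis=0)
baseline = X.mean(axis=0)
phi = w_mean * (x_anomaly - baseline)
print("Local Accuracy holds:",
      np.isclose(phi.sum(), w_mean @ x_anomaly - w_mean @ baseline))
```

For a linear model the Shapley values have this closed form, which is why the Local Accuracy check is exact here; for the deep models used in the paper, an approximator such as the one in the `shap` package would be used instead, and Local Accuracy becomes a property to verify rather than a given.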
Copyright:
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.