Preprint Article | Version 1 | Preserved in Portico | This version is not peer-reviewed

Explainable Approaches for Forecasting Building Electricity Consumption

Version 1: Received: 16 August 2023 / Approved: 16 August 2023 / Online: 17 August 2023 (09:06:07 CEST)

A peer-reviewed article of this Preprint also exists.

Sakkas, N.; Yfanti, S.; Shah, P.; Sakkas, N.; Chaniotakis, C.; Daskalakis, C.; Barbu, E.; Domnich, M. Explainable Approaches for Forecasting Building Electricity Consumption. Energies 2023, 16, 7210.

Abstract

Building electricity use is characterized by a significant expansion of its uses (e.g., vehicle charging), a rapidly declining cost of the related data collection, and a proliferation of smart-grid concepts, including diverse and flexible electricity pricing schemes. Not surprisingly, a growing number of approaches have been proposed for its modeling and forecasting. In this work, we focus on three forecasting-related issues. First, forecasting explainability, i.e., the ability to understand and explain to the user what shapes the forecast. To this end, we rely on concepts and approaches that are inherently explainable, such as the evolutionary approach of genetic programming (GP) and its associated symbolic expressions, as well as SHAP (SHapley Additive exPlanations) values, a well-established model-agnostic approach to explainability, particularly for feature importance. Second, we investigate the impact of the training timeframe on forecasting accuracy, motivated by the observation that fast training would allow faster deployment of forecasting in real-life solutions. Third, we explore counterfactual analysis on actionable features, i.e., features the user can actually act upon and which therefore offer an inherent advantage for decision support. We found that SHAP values can provide important insights into model explainability. GP models achieved accuracy comparable to, and in some cases better than, their neural-network and time-series counterparts, but their potential to produce crisp, insightful symbolic expressions that give better insight into model behavior proved rather questionable. We also found, and report on, an important potential for practical decision-support solutions based on counterfactuals built on actionable features and short training timeframes.
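
The sketch below is a hedged illustration, not the authors' actual pipeline, of the genetic-programming route to explainability mentioned in the abstract: gplearn's SymbolicRegressor is fitted to a synthetic hourly-load dataset and the evolved symbolic expression is printed. The feature set, data-generating process, and GP settings are assumptions made for the example.

```python
# Minimal sketch (assumed setup, not the paper's pipeline): evolving a symbolic
# expression for hourly building load with genetic programming via gplearn.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
n = 1000
hour = rng.integers(0, 24, n)          # hour of day
temp = rng.normal(15.0, 8.0, n)        # outdoor temperature (deg C)
occ = rng.uniform(0.0, 1.0, n)         # occupancy fraction
X = np.column_stack([hour, temp, occ])

# Synthetic load (kW): daily cycle + heating/cooling term + occupancy + noise
y = (20.0 + 10.0 * np.sin(2 * np.pi * hour / 24)
     + 0.5 * np.abs(temp - 18.0) + 15.0 * occ + rng.normal(0.0, 2.0, n))

gp = SymbolicRegressor(
    population_size=2000,
    generations=20,
    function_set=("add", "sub", "mul", "div", "sin", "abs"),
    parsimony_coefficient=0.001,   # penalizes bloated expressions
    random_state=0,
)
gp.fit(X, y)

# The fitted individual is itself the explanation: a closed-form expression
# over the input features (X0 = hour, X1 = temperature, X2 = occupancy).
print(gp._program)
```

Whether the printed expression is crisp enough to be insightful depends strongly on the parsimony pressure and the function set, which is consistent with the mixed findings reported in the abstract.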
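
The next sketch shows, again in a hedged form, the model-agnostic SHAP route to feature importance described in the abstract, using shap's KernelExplainer around a generic black-box regressor. The gradient-boosting model, feature names, and synthetic data are illustrative assumptions rather than the paper's configuration.

```python
# Minimal sketch (assumed setup): model-agnostic SHAP feature importance for a
# black-box load forecaster, using KernelExplainer over a small background set.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(0, 24, n),        # hour of day
    rng.integers(0, 7, n),         # day of week
    rng.normal(15.0, 8.0, n),      # outdoor temperature (deg C)
    rng.uniform(0.0, 1.0, n),      # occupancy fraction
])
feature_names = ["hour", "weekday", "temp_out", "occupancy"]
y = (20.0 + 10.0 * np.sin(2 * np.pi * X[:, 0] / 24)
     + 0.5 * np.abs(X[:, 2] - 18.0) + 15.0 * X[:, 3] + rng.normal(0.0, 2.0, n))

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# KernelExplainer treats the model as a black box; a small background sample
# keeps the Shapley-value estimation tractable.
explainer = shap.KernelExplainer(model.predict, X[:50])
shap_values = explainer.shap_values(X[:100], nsamples=100)

# Global importance: mean absolute SHAP value per feature.
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name:>10s}: {imp:.2f}")
```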
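
Finally, a hedged sketch of counterfactual analysis restricted to an actionable feature: holding non-actionable inputs (hour, weather) fixed, it searches for the smallest change in an assumed actionable feature (here, an occupancy/usage fraction) that brings the predicted load below a target. The choice of actionable feature, the model, and the threshold are assumptions for illustration only.

```python
# Minimal sketch (assumed setup): counterfactual search over a single actionable
# feature, keeping non-actionable features (time, weather) fixed.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500
hour = rng.integers(0, 24, n)
temp = rng.normal(15.0, 8.0, n)
occ = rng.uniform(0.0, 1.0, n)          # actionable: occupancy / usage fraction
X = np.column_stack([hour, temp, occ])
y = (20.0 + 10.0 * np.sin(2 * np.pi * hour / 24)
     + 0.5 * np.abs(temp - 18.0) + 15.0 * occ + rng.normal(0.0, 2.0, n))

model = GradientBoostingRegressor(random_state=0).fit(X, y)

def counterfactual_occupancy(x, target_kw, grid=np.linspace(0.0, 1.0, 101)):
    """Smallest change to the occupancy feature that pushes the forecast
    below target_kw, with hour and temperature held fixed."""
    candidates = np.tile(x, (len(grid), 1))
    candidates[:, 2] = grid
    preds = model.predict(candidates)
    feasible = grid[preds <= target_kw]
    if feasible.size == 0:
        return None                     # no counterfactual on this feature alone
    return feasible[np.argmin(np.abs(feasible - x[2]))]

x0 = X[0]
print("original forecast (kW):", model.predict(x0.reshape(1, -1))[0])
print("counterfactual occupancy:", counterfactual_occupancy(x0, target_kw=30.0))
```

Because the search only moves a feature the user can actually act upon, the resulting counterfactual translates directly into a decision-support suggestion, which is the advantage the abstract highlights.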

Keywords

electricity demand forecasting; model explainability; SHAP values; neural networks; structured time series; genetic programming (GP); symbolic expressions; training timeframe; counterfactuals; actionable features

Subject

Engineering, Energy and Fuel Technology
