5.5. Methods and Models for Electrical Load Forecasting
Although several forecasting methods and models have been created to make accurate predictions, the process of finding an appropriate forecasting model is difficult, and none of the models listed below can be generalized to all demand profiles. The main methods of predicting electrical load are multi-factor forecasting methods and time series prediction methods. The multi-factor prediction method focuses on the search for causal relationships between different influencing factors and forecast values. The time series forecasting method, on the other hand, is based primarily on the historical series itself. The most widely used time series prediction models fall into three subcategories, the first of which comprises statistical models (static and dynamic). A statistical model is a mathematical model that incorporates a set of statistical assumptions about how the sample data are generated. Some of the earliest traditional statistical models include Autoregressive (AR), Moving Average (MA), Autoregressive Moving Average (ARMA), Autoregressive Integrated Moving Average (ARIMA), ARMAX and ARIMAX models, which are described in the following paragraphs [
48].
Autoregressive (AR): The coefficients of an autoregressive model capture the information needed to predict a stationary signal, so the signal can be represented compactly by its AR constants. Short-duration acceleration data can reasonably be treated as a stationary random signal, which is why the AR model is well suited to describing acceleration signals; the AR coefficients can then be used as features for activity recognition. Selecting the model order is a delicate problem: too low an order yields an over-smoothed estimate, while too high an order introduces spurious peaks and greater statistical variability. An AR model built on such features can recognize activity from triaxial acceleration signals, since AR coefficients corresponding to different activity patterns are discriminative. AR coefficients can therefore provide sufficient separation between different types of human activity and offer new feature options for activity recognition [
49].
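To make this concrete, the following minimal Python sketch (not taken from the cited study) fits an AR model to a synthetic stationary signal using the statsmodels package and reads out its coefficients as a feature vector; the signal, the order of 4 and every numeric value are illustrative assumptions.

# Minimal sketch: fit an AR model and read its coefficients as features.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
n = 500
signal = np.zeros(n)
for t in range(2, n):
    # Hypothetical stationary signal standing in for a short acceleration/load record.
    signal[t] = 0.6 * signal[t - 1] - 0.3 * signal[t - 2] + rng.normal(scale=0.1)

order = 4                                  # model order: too low smooths, too high adds spurious peaks
model = AutoReg(signal, lags=order).fit()
ar_coefficients = model.params[1:]         # skip the intercept; usable as a feature vector
print("AR coefficients:", np.round(ar_coefficients, 3))

preds = model.predict(start=n - 20, end=n - 1)   # one-step predictions over the last 20 samples
print("last prediction:", round(preds[-1], 3))

Raising or lowering order in this sketch illustrates the trade-off described above between over-smoothed estimates and spurious peaks.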
The MA model is a linear regression model that expresses the deviation of the process as a finite, weighted sum of white noise terms. The mean model assumes that the best predictor of what will happen tomorrow is the average of everything that has happened up until now. The random walk model assumes that the best predictor of what will happen tomorrow is what happened today, and that all previous history can be ignored. Intuitively, there is a range of possibilities between these two extremes: why not take an average of what has happened in some window of the recent past? That is the idea of a "moving" average [
50].
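A minimal sketch of the three predictors contrasted above, assuming a short hypothetical series of hourly loads in a numpy array and an illustrative window length of three:

import numpy as np

load = np.array([310., 295., 305., 320., 340., 330., 325., 315.])  # hypothetical history

mean_forecast        = load.mean()            # "mean model": average of everything so far
random_walk_forecast = load[-1]               # "random walk": tomorrow looks like today
window = 3
moving_avg_forecast  = load[-window:].mean()  # "moving" average over a recent window

print(mean_forecast, random_walk_forecast, moving_avg_forecast)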
The procedure for building an ARMA model is somewhat involved and requires a deep knowledge of the method. Accordingly, constructing an ARMA model is often a demanding task for the user, requiring training in statistical analysis, a good knowledge of the field of application, and the availability of an easy-to-use yet versatile specialized computer program. The number of series to be analyzed is often large. It is important to note that, today, the most widely used commercial tools for time series forecasting (Statgraphics, SPSS, etc.) require the intervention of a human expert to define the ARMA model [
51].
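As an illustration only (not one of the commercial tools mentioned above), an ARMA model can be fitted in a few lines with the open-source statsmodels package by setting the differencing order of its ARIMA class to zero; the simulated ARMA(1, 1) series and the chosen orders are assumptions for the example:

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
n = 300
eps = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + eps[t] + 0.4 * eps[t - 1]   # a true ARMA(1, 1) process

res = ARIMA(y, order=(1, 0, 1)).fit()                    # ARMA(1, 1): no differencing (d = 0)
print("estimated parameters:", res.params)
print("next 5 steps:", res.forecast(steps=5))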
The AR, MA and ARMA models mentioned above can only be used for stationary time series data; in practice, the ARMA model is insufficient to properly describe non-stationary time series. ARIMA modelling, or the Box-Jenkins method, was named after the two statisticians who introduced this approach in 1976. ARIMA is the combination of the autoregressive and moving average models. There are a few essential concepts in ARIMA modelling, such as stationarity, invertibility and parsimony. Stationarity means that the mean, variance, and covariance of the series remain constant over time; this can be achieved by logarithmic transformation and by differencing, typically of order one or two. Box and Jenkins argued that parsimonious models give better forecasts than an over-parameterized model whose additional parameters would reduce the degrees of freedom. Invertibility is another implicit condition in ARIMA, under which the estimated variables must exhibit a convergent autoregressive process or be represented by a finite-order moving average. The three stages in ARIMA modelling as advocated by Box and Jenkins are (a) identification; (b) estimation; and (c) diagnostic checking. Seasonal variants of the ARIMA model are known as SARIMA models. Another useful generalization of ARIMA models is the autoregressive fractionally integrated moving average (ARFIMA) model, which allows non-integer values of the differencing parameter d. ARFIMA has useful applications in modelling time series with long memory [
52].
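A minimal sketch of a seasonal ARIMA (SARIMA) fit, assuming statsmodels and a synthetic hourly load with a 24-hour cycle; the orders (1, 1, 1)(1, 1, 1)24 are illustrative choices, not values prescribed by the text:

import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(2)
hours = np.arange(24 * 14)                       # two weeks of hourly data
load = 500 + 80 * np.sin(2 * np.pi * hours / 24) + rng.normal(scale=10, size=hours.size)

model = SARIMAX(load,
                order=(1, 1, 1),                 # non-seasonal (p, d, q)
                seasonal_order=(1, 1, 1, 24))    # seasonal (P, D, Q, s) with a daily period
res = model.fit(disp=False)
print("24-hour-ahead forecast:", res.forecast(steps=24)[:4], "...")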
In the ARMAX model, the current value of the time series is expressed linearly in terms of its previous values, of current and previous noise values, and, in addition, of present and past values of the exogenous variable(s). The forecasting precision of the ARIMA model is increased by adding the weekday/weekend relationship, which leads to the ARIMAX model. Due to the dynamic nature of the load, there is a very strong dependence on the input variables selected. The forecasting performance of the ARIMAX model therefore depends heavily on the input variables chosen, and these should be selected in such a way that the forecasting error of the developed model is minimized. Hourly load changes from one day of the week to another affect the load pattern and therefore become an important influence to be measured and included in the recommended ARIMAX model [
53].
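A minimal ARIMAX sketch under the same assumptions, adding a weekday/weekend dummy as the exogenous input in line with the relationship mentioned above; the dummy construction and orders are illustrative:

import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(3)
days = np.arange(120)
weekend = ((days % 7) >= 5).astype(float).reshape(-1, 1)   # 1 on Saturday/Sunday
load = 600 - 60 * weekend[:, 0] + rng.normal(scale=15, size=days.size)

res = SARIMAX(load, exog=weekend, order=(1, 0, 1)).fit(disp=False)

future_weekend = np.array([[0.], [0.], [1.], [1.], [0.]])  # exogenous values for the horizon
print(res.forecast(steps=5, exog=future_weekend))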
The following models are also used to achieve further accuracy:
The Kalman filter (KF) is established as an optimal recursive estimator for a linear system. Based on the unscented transformation (UT), the UKF algorithm has been developed as a recursive state estimator for nonlinear systems. The unscented transform is a deterministic sampling technique that uses a set of 2n+1 sample points (called "sigma points") to approximate the statistical characteristics of the transformed variable. Forecasts, especially long-term ones, are marked by a high degree of uncertainty due to their strong dependence on socio-economic factors, so an error level of up to 10% is permissible. Applying a Kalman algorithm can significantly reduce the average model error. The KF is a set of mathematical equations in the state space that provides an efficient computational means for estimating the state of an observed process. In addition, this filter is very powerful in several other respects, such as supporting the estimation of past, present and future states, and controlling noisy systems [
47,
48].
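A minimal sketch of a scalar Kalman filter applied to a noisy load measurement, using a random-walk state model; the process and measurement noise variances are illustrative assumptions rather than tuned values:

import numpy as np

rng = np.random.default_rng(4)
true_load = 500 + np.cumsum(rng.normal(scale=2.0, size=100))   # slowly drifting "true" load
measurements = true_load + rng.normal(scale=20.0, size=100)    # noisy observations

q, r = 4.0, 400.0            # process and measurement noise variances (illustrative)
x, p = measurements[0], 1.0  # initial state estimate and its variance

estimates = []
for z in measurements:
    # Predict step (random-walk state model: x_k = x_{k-1} + w).
    p = p + q
    # Update step: blend the prediction with the new measurement.
    k = p / (p + r)          # Kalman gain
    x = x + k * (z - x)
    p = (1.0 - k) * p
    estimates.append(x)

print("last raw measurement:", round(measurements[-1], 1),
      "| filtered estimate:", round(estimates[-1], 1))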
A system is called a white system if all the information associated with it is known and, conversely, a black system if all the information is unknown. The grey system is therefore a system with partially known and partially unknown information. There are many systems in the world for which the available information is either incomplete or difficult to gather. The simplest form of the grey modelling approach is the Grey Model GM(1, 1). The first '1' signifies the order of the differential equation, and the second '1' indicates the number of variables. This theory can deal with observed systems that have partially unknown parameters, as grey models need only a small amount of data to estimate the behaviour of the unknown system. GMs are suitable for all four types of load forecast [
54].
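A minimal sketch of GM(1, 1) in Python, using the standard accumulate–fit–forecast steps (cumulative sum, least squares estimation of the development coefficient a and grey input b, exponential response); the five-point input series is a hypothetical example of the small data sets grey models are designed for:

import numpy as np

x0 = np.array([520., 535., 548., 560., 575.])           # original (small) data series
x1 = np.cumsum(x0)                                       # accumulated generating sequence
z1 = 0.5 * (x1[1:] + x1[:-1])                            # mean sequence of consecutive x1 values

B = np.column_stack([-z1, np.ones_like(z1)])
Y = x0[1:]
a, b = np.linalg.lstsq(B, Y, rcond=None)[0]              # development coefficient and grey input

def gm11_forecast(k: int) -> float:
    """Forecast x0 at step k (k = len(x0) gives the first out-of-sample value)."""
    x1_hat = lambda i: (x0[0] - b / a) * np.exp(-a * i) + b / a
    return x1_hat(k) - x1_hat(k - 1)

print("next value forecast:", round(gm11_forecast(len(x0)), 1))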
The exponential smoothing (ES) method describes a class of forecasting methods. Each has the property that forecasts are weighted combinations of past observations, where recent observations are given relatively more weight than older ones. Double exponential smoothing (DES) is an extension of ES designed for trended time series. Exponential smoothing is a practical approach to prediction, according to which the forecast is formed from the exponentially weighted average of previous observations. ES models are among the most common and widespread methods of statistical forecasting because of their accuracy, simplicity, and low cost [
55].
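A minimal sketch of double exponential smoothing (Holt's linear method) written directly in Python; the smoothing constants alpha and beta and the short trended series are illustrative assumptions:

import numpy as np

load = np.array([500., 510., 515., 530., 542., 551., 565.])   # hypothetical trended series
alpha, beta = 0.5, 0.3                                         # level and trend smoothing factors

level, trend = load[0], load[1] - load[0]
for y in load[1:]:
    prev_level = level
    level = alpha * y + (1 - alpha) * (level + trend)          # smooth the level
    trend = beta * (level - prev_level) + (1 - beta) * trend   # smooth the trend

h = 3                                                          # forecast horizon
forecasts = [level + (i + 1) * trend for i in range(h)]
print("next", h, "forecasts:", np.round(forecasts, 1))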
Traditional/statistical models are restricted and can occasionally lead to inadequate solutions. The cause is the extremely high computational effort, which leads to long solution times, together with the complexity of some nonlinear data patterns. Machine learning and artificial intelligence methods therefore offer a promising and attractive alternative for intelligent energy networks. An important ML model is the artificial neural network (ANN). This system is an interconnection of "neurons" that compute values from inputs by passing information through the network. The ANN method was originally used as an alternative mechanism for predicting time series and has been successfully applied in several different areas, for both prediction and classification purposes. Several guidelines are available in the literature to provide future modellers with a systematic way of developing ANN models. The model development process is divided into eight main steps: (1) data collection, (2) data pre-processing, (3) selection of input variables (or predictors), (4) data splitting, (5) selection of model architecture, (6) determination of model structure, (7) model training, and (8) model validation [
41]. Such a model takes real-time records as input in an error-adjustment scheme, and the simulation findings indicate a Mean Absolute Percentage Error (MAPE) of 0.72%, which is more reliable than conventional systems.
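The eight steps above can be illustrated compactly with scikit-learn's MLPRegressor on a synthetic hourly series; the lag length, network size and split ratio are illustrative assumptions, and the resulting MAPE is specific to this toy example, not the 0.72% reported above:

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
hours = np.arange(24 * 60)
load = 500 + 80 * np.sin(2 * np.pi * hours / 24) + rng.normal(scale=10, size=hours.size)

# Input selection: the previous 24 hourly values predict the next hour.
lags = 24
X = np.array([load[t - lags:t] for t in range(lags, load.size)])
y = load[lags:]

# Data splitting: chronological train/test split.
split = int(0.8 * len(y))
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

# Pre-processing: scale inputs; then train a small two-hidden-layer network.
scaler = StandardScaler().fit(X_train)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

# Validation: mean absolute percentage error (MAPE) on the held-out period.
pred = model.predict(scaler.transform(X_test))
mape = np.mean(np.abs((y_test - pred) / y_test)) * 100
print(f"MAPE: {mape:.2f}%")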
The extreme learning machine (ELM) is a training algorithm for single-hidden-layer feedforward neural networks (SLFNs) that converges much faster than traditional methods and yields promising performance. ELMs typically use a single hidden layer of a feedforward neural network. In the ELM, the weights of the hidden-layer nodes are selected randomly, and the output weights are then determined analytically by a least squares solution. ELM models have a positive effect on next-day load forecasting because they achieve better forecast accuracy than other models [
47].
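A minimal ELM sketch in plain numpy: the hidden-layer weights are drawn at random and never trained, and the output weights are obtained in closed form with a pseudo-inverse; the data and layer size are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(6)
hours = np.arange(24 * 30)
load = 500 + 80 * np.sin(2 * np.pi * hours / 24) + rng.normal(scale=10, size=hours.size)

lags = 24
X = np.array([load[t - lags:t] for t in range(lags, load.size)])
y = load[lags:]
split = int(0.8 * len(y))

n_hidden = 100
W = rng.normal(size=(lags, n_hidden))        # random input-to-hidden weights (never trained)
b = rng.normal(size=n_hidden)                # random hidden biases

def hidden(A):
    return np.tanh(A @ W + b)                # hidden-layer activations

beta = np.linalg.pinv(hidden(X[:split])) @ y[:split]   # analytic output weights

pred = hidden(X[split:]) @ beta
mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100
print(f"ELM test MAPE: {mape:.2f}%")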
The Support Vector Machine (SVM) has been proposed to solve the system-level electricity load prediction problem. Support vector machines are regression and classification mechanisms first introduced by Vapnik in 1992. A binary SVM model is developed using three different data sampling methods and nineteen predictor variables, four of which were first introduced in that study. The model is configured by tuning the penalty parameter, selecting the most appropriate kernel function, and setting the best value for the kernel function's parameter. A novel combination of goodness-of-fit metrics is used to evaluate more realistically the model's accuracy in predicting built and unbuilt land cells, as well as changed and unchanged land cells, in the whole study area. This approach uses weather forecasts and historical electricity usage data as inputs and predicts the next day's electricity load in the system (air conditioning, lighting, electricity, and other equipment) [
56].
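A minimal sketch of support vector regression for next-day load with scikit-learn, using yesterday's load and a temperature forecast as inputs; the synthetic data, RBF kernel and penalty parameter C are illustrative assumptions, not the configuration of the cited study:

import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
days = np.arange(365)
temp = 20 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(scale=2, size=days.size)
load = 600 + 8 * temp + rng.normal(scale=20, size=days.size)

# Features: yesterday's load and today's forecast temperature; target: today's load.
X = np.column_stack([load[:-1], temp[1:]])
y = load[1:]
split = int(0.8 * len(y))

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=1.0))
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100
print(f"SVR test MAPE: {mape:.2f}%")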
Fuzzy set theory can be considered a generalization of classical set theory. In classical set theory, an element either belongs to a particular set or does not, so the degree of membership of that set is a crisp value. In fuzzy set theory, however, the degree of membership of an element can vary continuously. A fuzzy set maps from the universe of discourse to the closed interval [0, 1]. In the field of electrical load prediction, fuzzy logic is used for modelling and forecasting with a particular focus on computational and artificial intelligence approaches, including fuzzy logic models. The use of fuzzy logic models gives quite promising results, as they reduce the error to at least half of that of classical methodologies.
Figure 8 shows the basic block diagram of the fuzzy logic methodology for short-term load forecasting. The inputs to the fuzzy system depend on the classification, i.e., hourly data of the predicted temperature and time are supplied to the fuzzy inference system through a fuzzification block. The inference system performs the forecasting task using the fuzzy rule base derived for the forecast. The inference system then produces its output, and the defuzzification block converts the fuzzy output into a crisp output, which can further be represented in a chart known as the load curve.
Figure 9 demonstrates the flow of short-term load forecasting using fuzzy logic. The output obtained is compared with the actual load, and the error in the load forecast is used to improve the rule base for future forecasts. This refinement increases the accuracy of the load forecast [
57].
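The fuzzification–inference–defuzzification pipeline of Figures 8 and 9 can be sketched with hand-written triangular membership functions; the rule base (mild temperature gives medium load, hot temperature gives high load) and all numeric ranges are illustrative assumptions:

import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

forecast_temp = 31.0                                  # crisp input (degrees Celsius)

# Fuzzification: degree to which the temperature is "mild" or "hot".
mu_mild = tri(forecast_temp, 10, 20, 30)
mu_hot  = tri(forecast_temp, 25, 35, 45)

# Rule base: mild temperature -> medium load, hot temperature -> high load.
load_axis = np.linspace(300, 900, 601)                # universe of discourse for the load (MW)
medium = np.minimum(tri(load_axis, 400, 550, 700), mu_mild)   # clip each consequent
high   = np.minimum(tri(load_axis, 600, 750, 900), mu_hot)
aggregate = np.maximum(medium, high)                  # combine the fired rules

# Defuzzification: centroid of the aggregated fuzzy output gives the crisp forecast.
crisp_load = np.sum(load_axis * aggregate) / np.sum(aggregate)
print(f"forecast load: {crisp_load:.0f} MW")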
Fuzzy logic applies primarily to device-mode controllers. A typical example familiar to the public is the operation control of washing machines with fuzzy control, where the machine regulates the speed, the type and quantity of detergent, and the water temperature, depending on the weight, type and quality of the clothes measured by appropriate sensor systems, achieving savings in water and energy consumption; the same principles are applied to load and energy prediction.
Wavelet neural networks (WNNs) are powerful tools for approximating nonlinear functions. To accurately capture the characteristics of the load at multiple frequencies, a WNN technique is used to decompose the load into various frequency components. Each component is then transformed, normalized, and fed, together with time and date indicators, into a neural network so that the characteristics of the individual components are correctly captured. The predictions from the individual neural networks are then transformed back and combined into the final predictions. The WNN method is used to predict very short-term loads for a time horizon of one hour into the future at 5-minute intervals.
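A minimal sketch of the decomposition step, assuming the PyWavelets (pywt) package and a synthetic 5-minute load series; the wavelet family db4 and the decomposition level are illustrative choices. Each resulting component would then be normalized and fed, with time and date indicators, into its own neural network before the partial forecasts are recombined:

import numpy as np
import pywt

rng = np.random.default_rng(8)
t = np.arange(24 * 12 * 7)                                 # one week at 5-minute resolution
load = (500
        + 80 * np.sin(2 * np.pi * t / (24 * 12))           # daily cycle
        + 15 * np.sin(2 * np.pi * t / 12)                  # hourly ripple
        + rng.normal(scale=5, size=t.size))                # noise

coeffs = pywt.wavedec(load, wavelet="db4", level=3)        # [approx, detail3, detail2, detail1]
for name, c in zip(["approx (low freq)", "detail 3", "detail 2", "detail 1 (high freq)"], coeffs):
    print(f"{name:22s} length={len(c):4d} energy={np.sum(c**2):10.0f}")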
A new approach to very short-term forecasting could be Advanced Wavelet Neural Networks (AWNNs). The AWNN uses an advanced wavelet transform with an entropy-based cost function to choose the best wavelet basis for data decomposition, mutual information for feature selection, and neural networks for prediction. The performance of the AWNN is systematically assessed using power load data for one-step and multi-step forecasting and compared with several reference algorithms and benchmarks. Through wavelet decomposition it is possible to find a set of optimal frequency components to represent the data and then make a suitable forecast for each of these elements separately. By identifying these elements and predicting them separately, there is a good chance of creating more accurate forecasting models [
58]. Genetic algorithms are often suitable for nonlinear systems and perform an optimization based on the natural selection of optimal solutions from a wide range of candidate prediction models. This kind of optimization based on genetic algorithms is usually applied during the model selection process, when the most appropriate parameters of the prediction model need to be found.
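As an illustration of this GA-driven parameter search, the following sketch evolves the smoothing constants of a double exponential smoothing forecaster so as to minimize the in-sample one-step error; the series, population size and genetic operators are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(10)
t = np.arange(200)
load = 500 + 0.5 * t + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(scale=5, size=t.size)

def one_step_error(params):
    """Mean squared one-step-ahead error of Holt smoothing with the given (alpha, beta)."""
    alpha, beta = np.clip(params, 0.01, 0.99)
    level, trend, err = load[0], load[1] - load[0], 0.0
    for y in load[1:]:
        err += (y - (level + trend)) ** 2
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return err / (load.size - 1)

pop = rng.uniform(0.01, 0.99, size=(30, 2))                    # population of (alpha, beta) candidates
for _ in range(60):                                            # generations
    fitness = np.array([one_step_error(p) for p in pop])
    parents = pop[np.argsort(fitness)[:15]]                    # selection: keep the best half
    mates = parents[rng.integers(0, 15, size=15)]
    children = (parents + mates) / 2.0                         # crossover: averaging
    children += rng.normal(scale=0.05, size=children.shape)    # mutation
    pop = np.vstack([parents, np.clip(children, 0.01, 0.99)])

best = pop[np.argmin([one_step_error(p) for p in pop])]
print("best alpha, beta:", np.round(best, 3))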
Because objective functions that contain discontinuous functions are very difficult to solve with traditional methods, and because the load may shift and change, the genetic algorithm (GA) is used, as it enables such functions to be solved more effectively. The aim is optimal transmission and distribution with the lowest possible production costs. A considerable amount of research shows that, using polynomial cost functions together with the genetic algorithm, end users benefit greatly in terms of electricity bills [
59]. Expert systems are a field that has emerged as a result of developments in artificial intelligence (AI). An expert system is a computer program that can explain, understand, and expand its knowledge base as new information becomes available. Expert systems combine the rules and procedures used by human experts to create appropriate software, making them able to make predictions automatically, without human assistance. Numerous hybrid methods combine the expert system with additional load forecasting models; for example, fuzzy logic and an expert system can be combined into a hybrid scheme.
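Returning to the cost-minimization use of the GA mentioned above, a similarly small sketch can search for a low-cost split of a given demand between two generators with polynomial (quadratic) cost curves; the cost coefficients, demand and limits are hypothetical values chosen only for illustration:

import numpy as np

rng = np.random.default_rng(9)
demand = 700.0                                  # MW to be served

def cost(p1):
    """Total production cost when generator 1 produces p1 and generator 2 the rest."""
    p2 = demand - p1
    c1 = 0.004 * p1**2 + 8.0 * p1 + 100.0       # polynomial cost curves (hypothetical)
    c2 = 0.006 * p2**2 + 7.0 * p2 + 120.0
    penalty = 1e6 if not (100 <= p1 <= 600 and 100 <= p2 <= 600) else 0.0
    return c1 + c2 + penalty

pop = rng.uniform(100, 600, size=40)            # initial population of candidate p1 values
for _ in range(200):                            # generations
    fitness = np.array([cost(p) for p in pop])
    parents = pop[np.argsort(fitness)[:20]]     # selection: keep the best half
    children = (parents[rng.integers(0, 20, 20)] +
                parents[rng.integers(0, 20, 20)]) / 2.0       # crossover: averaging
    children += rng.normal(scale=5.0, size=20)                # mutation
    pop = np.concatenate([parents, children])

best = pop[np.argmin([cost(p) for p in pop])]
print(f"generator 1: {best:.1f} MW, generator 2: {demand - best:.1f} MW, cost: {cost(best):.0f}")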
Hybrid or combined models and methods can achieve better prediction performance than a single model by integrating the advantages of diverse individual forecasting models, and they are therefore used in many forecasting areas. Recent studies have thus shifted their primary research focus to the development of efficient hybrid models in the hope of improving predictive performance. The following figure shows the cataloging of models for electrical load forecasting.
Figure 10.
Cataloging of models for electrical load forecasting.