Forecasting Performances of the Reduced Form VAR and Sims-Zha Bayesian VAR Models when the Multiple Time Series are Jointly Influenced by Collinearity and Autocorrelated Error

The goal of VAR or BVAR modeling is the characterization of the dynamics and endogenous relationships among time series; VAR models are also known for their applications to forecasting and policy analysis. This paper compares the performance of VAR and Sims-Zha Bayesian VAR models when the multiple time series are jointly influenced by different levels of collinearity and autocorrelation in the short term (T = 16, 32, 64 and 128). Five levels (-0.9, -0.5, 0, +0.5, +0.9) of collinearity and autocorrelation were considered. The results from the simulation study revealed that the VAR(2) model dominated at no and moderate levels of autocorrelation (-0.5, 0, +0.5), irrespective of the collinearity level, except in a few cases when T = 16, while the BVAR models dominated at high autocorrelation levels (-0.9 and +0.9), irrespective of the collinearity level, except in a few cases when T = 128. The performance of the models varies with the levels of collinearity and autocorrelated error, and also varies with the short term periods. Furthermore, the values of the RMSE and MAE criteria decrease as the time series length increases. In conclusion, the performance of the forecasting models depends on the time series data structure and the time series length. It is therefore recommended that the data structure and series length be considered when choosing an appropriate model for forecasting.


Introduction
The field of time series analysis consists of techniques that are applied to time series in order to understand past behavior, make comparisons and produce forecasts for such series. The concepts and fields related to time series include longitudinal data, growth curves, repeated measures, econometric methods, multivariate analysis, signal processing and systems analysis (Brillinger, 2000). Some of the objectives for studying time series include analysis of the past behavior of a variable, forecasting, evaluation of current achievement, and comparative studies.
Modeling multivariate time series data effectively is important for decision-making activities in the fields of economics, medicine, finance, agriculture, science and engineering.
The statistical multivariate time series modeling methods include the Vector Autoregressive (VAR) and Bayesian VAR (BVAR) processes. The goal of VAR or BVAR modeling is the characterization of the dynamics and endogenous relationships among time series; the approach is to show whether various variables are endogenously related, with what dynamics, and over what time periods (Brandt & Williams, 2007). One of the major advantages of reduced form multiple equation time series models such as VARs is their application to forecasting and policy (Sims, 1980). In recent times, it has been discovered that unrestricted VAR models tend to overfit the data, attribute unrealistic portions of the variance in time series to their deterministic components, and overestimate the magnitude of the coefficients of distant lags of variables as a result of sampling error (Brandt and Freeman, 2006; Canova, 2007; Caraiani, 2010). Bayesian econometrics and Bayesian time series analysis were intended to solve these many problems associated with unrestricted VARs.
The Bayesian VAR (BVAR) was originally devised to improve macroeconomic forecasts (Litterman, 1986a; Sims and Zha, 1998, 1999). In addition, the Bayesian method was intended to solve the problems associated with unrestricted VAR models: BVAR makes in-sample fitting less dramatic and improves out-of-sample performance. These advantages make BVAR useful for forecasting short-term macroeconomic series, both in central banks and in other international financial institutions.
The Bayesian approach has been especially effective in dealing with the specification uncertainty inherent in time series modeling. A final strength of the BVAR has been the emergence of a consistent method for specifying the Bayesian prior, including formal statistical criteria for examining the performance of alternative specifications (Park, 1990). Another advantage of BVAR is that it does not ponder too much on any of the parameters of the model; rather, emphasis is laid on the use of prior distributions for the parameters, and the prior distributions are the key factor in the BVAR approach. Another feature of the Bayesian VAR framework is that it allows for the presence of trends in the variables (Caraiani, 2010). Gujarati (2003) observed that multicollinearity is a problem that usually afflicts VAR models. Elsewhere, it has been reported that a correlation coefficient r > 0.7 is an appropriate indicator of when collinearity begins to severely distort model estimation and subsequent prediction (Dormann et al., 2003). Also, Tahir (2014) reported that the Bayesian methods in BVAR provide an inherent solution that circumvents the problems of multicollinearity and overparameterization. In a recent work, Garba et al. (2013) observed that the autocorrelation problem usually afflicts time series data. Our work is therefore motivated by these recent studies.
The aim of this study is to compare the forecasting performances of the reduced form VAR and the reduced form Sims-Zha Bayesian VAR (with harmonic decay) for bivariate time series data that have autocorrelated error terms and variables that are correlated (collinear time series data).

Review of Related Literature
A lot of work has been carried out under the classical VAR and Bayesian VAR frameworks, and in the Sims-Zha Bayesian VAR in particular. For example, Uhlig (1997) proposed a Bayesian approach to a vector autoregression with stochastic volatility, where the multiplicative evolution of the precision matrix is driven by a multivariate Beta variate. He suggested that the estimation of the autoregressive parameters requires a numerical method, namely an importance-sampling based approach. Kadiyala and Karlsson (1997) considered Bayesian analysis of vector autoregression models, and compared the forecasts from Diffuse, Normal-Wishart, Normal-Diffuse and Extended Natural Conjugate priors to the Minnesota prior. In their numerical methods they found that the preferred choice of prior is the Normal-Wishart when the prior beliefs are of the Litterman type.
They also found that for more general prior beliefs, or when the computational effort is of minor importance, the Normal-Diffuse and the Extended Natural Conjugate priors are strong alternatives to the Normal-Wishart. Sims and Zha (1998) revealed that if dynamic multivariate models are to be used to guide decision making, it is important that probability assessments of forecasts or policy projections be provided. They used Bayesian methods to introduce prior information in both reduced form and structural VAR models without introducing substantial new computational burdens, and concluded that it is possible to extend Bayesian methods to larger models and to models with over-identifying restrictions. Sims and Zha (1999) further showed how to correctly extend known methods for generating error bands in reduced form VARs to over-identified models. They explained that classical confidence regions can be misleading and can lead to conceptual and computational problems, and suggested that likelihood-based bands, rather than approximate confidence bands based on asymptotic theory, be standard in reporting results for VAR models. Phillips and Ploberger (1999) developed an asymptotic theory of Bayesian inference for time series. They obtained the limiting representation of the Bayesian data density and showed that it takes the same general exponential form for a wide class of likelihoods and prior distributions, in both the continuous and discrete time cases.
They suggested that no assumptions concerning stationarity or rates of convergence are required in the asymptotics. Shoesmith and Pinder (2001) compared demand forecasts computed using the time series forecasting techniques of VAR and BVAR with forecasts computed using exponential smoothing and seasonal decomposition, as applied to inventory management. The study revealed that improvements in forecast accuracy can be gained when VAR and BVAR models are used. Sims (2002) compared restricted and unrestricted VAR models and then discussed the role of models and probabilities in the monetary policy process. The work dwelt on the way data relate to decision making in central banks; he used the VAR and the BVAR to characterize inflation and GDP forecast accuracy.
McNelis and Neftci (2006) considered financial market data to assess the likelihood of renminbi revaluation and its implications for Chinese share price increases, given the continuing appreciation of the Euro against the US dollar. Their results revealed that the Bayesian VAR methods performed much more accurately than the standard vector autoregressive (VAR) methods. Brandt and Freeman (2006) reviewed recent developments in Bayesian multi-equation time series modeling in testing, forecasting and policy analysis. They explained methods for constructing Bayesian measures of the uncertainty of impulse responses (Bayesian shape error bands), and their work further revealed that a reference prior for the VAR models has proven useful in short and medium term forecasting, both in macroeconomics and in the study of politics. Brandt and Freeman (2009) used the BVAR model to model U.S. macroeconomic and political data (see the Appendix for further related studies). Other work has applied Vector Autoregression and Bayesian VAR models to forecasting Turkish GDP, and the results confirmed the accuracy of Bayesian Vector Autoregression models for forecasting Turkish GDP.
Adenomon et al. (2015a) compared the VAR and Sims-Zha Bayesian VAR (with quadratic decay) in the presence of highly autocorrelated errors (-0.99, -0.95, -0.9, -0.85, -0.8, 0.8, 0.85, 0.9, 0.95, 0.99). Their results from 10,000 iterations revealed that the BVAR models were suitable for the short and long terms, while the classical VAR was suitable for the long term, at the different levels of autocorrelation.
Adenomon et al. (2015b) compared Sims-Zha Bayesian VAR models (with quadratic decay) for a stable data generation process over the short, medium and long terms. Their results from 10,000 iterations in the simulation study revealed that BVAR models with a tight prior were suitable for short term forecasts, while BVAR models with a loose prior were suitable for long term forecasts.
Dahem (2016) compared the standard VAR and Bayesian VAR to assess three models for predicting inflation (the mark-up model, the monetary model and the Phillips curve) over the period 1990Q1-2013Q4. The results revealed that the Bayesian Vector Error Correction Model (BVECM) mark-up model was best suited to forecasting inflation for Tunisia. Some authors have compared the performance of hybrid DSGE models to classical econometric models (such as the Vector Autoregressive (VAR), Factor Augmented VAR and Bayesian VAR); these authors include Gupta and Kabundi (2009) and Paccagnini (2012).

Vector Autoregression (VAR) Model
VAR methodology superficially resembles simultaneous equation modeling in that we consider several endogenous variables together.But each endogenous variable is explained by its lagged values and the lagged values of all other endogenous variables in the model; usually, there are no exogenous variables in the model (Gujarati, 2003).
A VAR(p) model for an m-dimensional series y_t is y_t = c + A_1 y_(t-1) + ... + A_p y_(t-p) + e_t, with e_t ~ N(0, Σ). It can be written alternatively as vec(Y) = (X ⊗ I_m) β + vec(E), where ⊗ denotes the Kronecker product and vec the vectorization of the matrix Y. The multivariate least squares estimator is consistent and asymptotically efficient; it furthermore equals the conditional Maximum Likelihood Estimator (MLE) (Hamilton, 1994). As the explanatory variables are the same in each equation, multivariate least squares is equivalent to the Ordinary Least Squares (OLS) estimator applied to each equation separately, as was shown by Zellner (1962).
In the standard case, the MLE of the covariance matrix differs from the OLS estimator: the MLE divides the residual sum of squares by T, while the OLS estimator for a model with a constant, k variables and p lags divides by T - kp - 1. In matrix notation, the OLS estimator is B̂ = (X'X)^(-1) X'Y, and the covariance matrix of the parameters can be estimated as Cov(vec(B̂)) = (X'X)^(-1) ⊗ Σ̂.
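The equivalence of multivariate least squares and equation-by-equation OLS noted above can be illustrated with a short sketch. The paper's computations are done in R; this Python version (the function name and setup are our own, purely illustrative) fits a VAR(p) by stacking a constant and p lags of every variable:

```python
import numpy as np

def estimate_var_ols(Y, p):
    """Estimate a VAR(p) by multivariate least squares.

    Y : (T, m) array of observations.
    Returns the (1 + m*p, m) coefficient matrix (constant first)
    and the OLS residual covariance estimate.
    """
    T, m = Y.shape
    # Stack a constant and p lags of every variable as regressors.
    rows = []
    for t in range(p, T):
        row = [1.0]
        for l in range(1, p + 1):
            row.extend(Y[t - l])
        rows.append(row)
    X = np.array(rows)              # (T - p, 1 + m*p)
    Ydep = Y[p:]                    # (T - p, m)
    # Multivariate OLS: identical to OLS applied to each equation.
    B, *_ = np.linalg.lstsq(X, Ydep, rcond=None)
    resid = Ydep - X @ B
    Sigma = resid.T @ resid / (len(Ydep) - X.shape[1])
    return B, Sigma
```

Because the regressor matrix X is shared across equations, column j of B is exactly the OLS fit of equation j on its own.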

Bayesian Vector Autoregression with the Sims-Zha Prior
The most popular BVAR model is that of Litterman (1986b), although other priors have been studied. For instance, Ni and Sun (2005) explored the properties of Bayesian estimates of Vector Autoregression (VAR) models under several possible choices of sampling distribution of the data (normal and Student-t errors), loss functions (for Φ and Σ) and priors (Jeffreys prior, RAT prior, Yang and Berger's prior, shrinkage prior and constant prior). They concluded that the choice of prior has a stronger effect on the Bayesian estimates than the choice of loss function. In this line also, Ni, Sun and Sun (2007) investigated the properties of the Bayesian estimates of impulse responses through an information-theoretic approach. They derived Bayesian estimators from an intrinsic entropy loss function and showed that they are distinct from the usual posterior-mean estimates; they also proposed an algorithm that uses generated data as latent variables in numerical simulation of Bayesian estimates under entropy loss. Again, for the Litterman BVAR, the estimation of the VAR coefficients is done on an equation-by-equation basis as in the reduced form version, while the Sims-Zha BVAR estimates the parameters for the full system in a multivariate regression.
The procedure for the BVAR with Sims-Zha prior is as follows. First, consider the following (identified) dynamic simultaneous equation model,

y_t' A_0 = d + y_(t-1)' A_1 + ... + y_(t-p)' A_p + ε_t',  t = 1, 2, ..., T.

This is an m-dimensional VAR for a sample of size T, with y_t a vector of observations at time t, A_l the coefficient matrix for the l-th lag, p the maximum number of lags (assumed known), d a vector of constants, and ε_t a vector of i.i.d. normal structural shocks such that E[ε_t ε_t'] = I. The structural model can be transformed into a multivariate regression, by defining A_0 as the matrix of contemporaneous relations of the series and A_+ as the matrix of coefficients on the lagged variables, as YA_0 + XA_+ = E, where Y is T×m, A_0 is m×m, X is T×(mp+1), A_+ is (mp+1)×m and E is T×m.
To define the VAR in a compact form, let Z = [Y X] and let A be the conformable stacking of the parameters in A_0 and A_+, so that the VAR model can be written as a linear projection of the residuals, ZA = E. In order to derive the Bayesian estimator for this structural equation model, we have to examine the (conditional) likelihood function for normally distributed residuals.
The overall prior for the structural parameters is a multivariate normal density φ(·), where ã_+ denotes the mean parameters in the prior for a_+ and Ψ is the prior covariance for ã_+.
The posterior for the coefficients is then conditional multivariate normal, since the prior has a conjugate form. In this case, the posterior can be estimated by a multivariate seemingly unrelated regression (SUR) model.
The forecasts and inferences can be generated by exploiting the multivariate normality of the posterior distribution of the coefficients. The normal conditional prior for the mean of the structural parameters is specified through ã_+ and its prior covariance matrix Ψ. Though complicated, Ψ is specified to reflect the following general beliefs and facts about the series being modeled: 1. The standard deviations around the first lag coefficients are proportionate to those of all the other lags.
2. The weight of each variable's own lags is the same as those of other variables' lags.
3. The standard deviations of the coefficients of longer lags are proportionately smaller than those of earlier lags.(Lag coefficients shrink to zero over time and have smaller variance at higher lags.) 4. The standard deviation of the intercept is proportionate to the standard deviation of the residuals for the equation.
5. The standard deviation of the sums of the autoregressive coefficients should be proportionate to the standard deviation of the residuals for the respective equation (consistent with the possibility of cointegration).
6. The variance of the initial conditions should be proportionate to the mean of the series. These are "dummy initial observations" that capture trends or beliefs about stationarity and are correlated across the equations. A summary of the Sims-Zha prior is given in Table 3 of Brandt and Freeman (2006). Each diagonal element of Ψ therefore corresponds to the variance of a VAR parameter. The variance of each of these coefficients is assumed to have the form (λ0 λ1 / (σ_j l^λ3))^2 for the element corresponding to the l-th lag of variable j in equation i.
The overall coefficient covariances are scaled by the error variances from m univariate AR(p) OLS regressions of each variable on its own lagged values, σ_j² for j = 1, 2, ..., m. The parameter λ0 sets an overall tightness across the elements of the prior. Given the matrix representation of the reduced form, Y = XB + U, we can then construct a reduced form Bayesian SUR with the Sims-Zha prior as follows. The prior means for the reduced form coefficients are that B_1 = I and B_2 = ... = B_p = 0 (a random walk prior). We assume that the prior has a conditional structure that is multivariate Normal-inverse Wishart for the parameters of the model, and the system coefficients are then estimated with the corresponding posterior estimators. This representation translates the prior proposed by Sims and Zha from the structural model to the reduced form (Brandt and Freeman, 2006, 2009; Sims and Zha, 1998, 1999).
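For concreteness, the diagonal prior standard deviations described above might be computed as in the following sketch, assuming the common Sims-Zha parameterization sd(l, j) = λ0·λ1 / (σ_j·l^λ3); the function and its signature are our own illustration, not the paper's code:

```python
import numpy as np

def sims_zha_prior_sd(sigma, lam0, lam1, lam3, p):
    """Prior standard deviations for VAR lag coefficients, assuming the
    Sims-Zha form sd(l, j) = lam0 * lam1 / (sigma[j] * l**lam3).

    sigma : per-variable AR(p) residual scales sigma_j; lam0 is the
    overall tightness, lam1 the tightness on the lag coefficients, and
    lam3 the lag-decay rate (larger lam3 = faster shrinkage with lag).
    """
    m = len(sigma)
    sd = np.empty((p, m))
    for l in range(1, p + 1):        # lag index l = 1..p
        for j in range(m):           # variable index j
            sd[l - 1, j] = lam0 * lam1 / (sigma[j] * l ** lam3)
    return sd
```

With λ3 > 0 the standard deviations shrink as the lag grows, encoding belief 3 above (longer lags have smaller coefficients).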
Lastly, we consider an h-step forecast equation for the reduced form VAR model, where we use the convention that B_j = 0 for j > p, C_l are the impulse response matrices for lag l, K_i describe the evolution of the constants in the forecasts, and N_l(h) defines the evolution of the autoregressive coefficients over the forecast horizon. This h-step forecast equation gives the dynamic forecasts produced by a model with structural innovations.
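The iterated h-step forecasts described above can be sketched as follows. This is an illustrative Python version (the paper works in R); the convention B_j = 0 for j > p is implemented by simply iterating the reduced form equations and feeding each forecast back in as a lag:

```python
import numpy as np

def forecast_var(Y, c, B, h):
    """Iterated h-step forecasts from a reduced form VAR.

    Y : (T, m) history; c : (m,) constants; B : list of p (m, m)
    lag-coefficient matrices.  Forecasts beyond the sample reuse
    earlier forecasts as lags.
    """
    hist = [row.copy() for row in Y]
    p = len(B)
    out = []
    for _ in range(h):
        y = c.copy()
        for l in range(1, p + 1):
            y = y + B[l - 1] @ hist[-l]   # add contribution of lag l
        hist.append(y)                     # forecast becomes a lag
        out.append(y)
    return np.array(out)
```

For example, with a single lag matrix B = [0.5·I], zero constants and last observation (1, 2), the one- and two-step forecasts are (0.5, 1.0) and (0.25, 0.5).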

Setting of Hyperparameters for the BVAR Model with Sims-Zha Prior
The setting of hyperparameters for the BVAR model has received a lot of attention in the Bayesian time series literature. For instance, in the work of Kadiyala & Karlsson (1997), the values of the hyperparameters were chosen based on forecast performance over a calibration period; hyperparameter choices are also discussed in Sims & Zha (1998, 1999) and in Leeper, Sims & Zha (1996). Brandt & Freeman (2006, 2009) also exploited the Sims-Zha prior in forecasting macro political dynamics, and found that it performed well in forecasting macro political time series data. In addition, Brandt, Colaresi and Freeman (2008) set the hyperparameter values for the Sims-Zha prior based on experience with events data and discussions with leading international relations scholars. Supper (2007) recommended that such priors from experience provide better fits.

Simulation Procedure
We considered a VAR(2) process, with the coefficient values set in the R code below, such that y_1t and y_2t are jointly influenced by the chosen combinations of collinearity and autocorrelation levels. This form was chosen to obtain a stable process with a known true lag length and to avoid a VAR process that is affected by overparameterization (Caraiani, 2010). An autocorrelated error term of order 1, u_t = δ u_(t-1) + e_t, was implanted in the model; the choice here is similar to the work and illustration of Cowpertwait (2006). Five levels of autocorrelation were considered, δ = (0, -0.5, +0.5, -0.9, +0.9), and the autocorrelation levels were classified as none (0), moderate (±0.5) and high (±0.9). The Cholesky decomposition was then used to create bivariate time series data so that y_1 and y_2 are collinear at different collinearity levels. This study considered five collinearity levels, ρ = (0, -0.5, +0.5, -0.9, +0.9).
The choice and classification of the collinearity levels is in line with the work and illustration of Carsey & Harden (2011). In the present work, the desired correlation matrix is P = (1 ρ; ρ 1), and the simulated data are pre-multiplied by the Cholesky factor of P so that they are scaled to have the desired correlation level (Diebold & Mariano, 2002).
The simulated data were generated for time series lengths of 16, 32, 64 and 128, with N = 5000 simulations. These lengths were chosen so as to study the models in the short run (Diebold & Mariano, 2002).
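The error-generation step just described (an AR(1) error process, then scaling by the Cholesky factor of the target correlation matrix) might be sketched as follows. This is an illustrative Python analogue of the R procedure, not the paper's code; the function name and arguments are our own:

```python
import numpy as np

def simulate_errors(T, delta, rho, rng):
    """Bivariate AR(1) errors with autocorrelation `delta`, scaled by
    the Cholesky factor of the target correlation matrix so the two
    components are correlated at approximately `rho`."""
    e = rng.standard_normal((T, 2))
    u = np.zeros((T, 2))
    u[0] = e[0]
    for t in range(1, T):
        u[t] = delta * u[t - 1] + e[t]      # AR(1): u_t = d*u_{t-1} + e_t
    P = np.array([[1.0, rho], [rho, 1.0]])  # desired correlation matrix
    L = np.linalg.cholesky(P)
    return u @ L.T                           # impose the target correlation
```

For a long sample the empirical cross-correlation of the two columns is close to ρ and the lag-1 autocorrelation of each column is close to δ, which is the structure the simulation study manipulates.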

A sample of the R code for the data generation is presented below.

    set.seed(200)
    ## Simulate VAR(2) data
    library(dse)
    library(vars)
    n <- 16        # time series length
    p <- 0.9       # collinearity level
    p1 <- 0.9      # autocorrelation level
    ## AR(1) error process: z[t] = p1 * z[t-1] + e[t]
    z <- e <- rnorm(32)
    for (t in 2:32) z[t] <- p1 * z[t - 1] + e[t]
    ## Setting the lag-polynomial A(L)
    Apoly <- array(c(1.0, -0.5, 0.3, 0, 0.2, 0.1,
                     0, -0.2, 0.7, 1, 0.5, -0.3), c(3, 2, 2))
    ## Setting covariance to identity-matrix
    B <- diag(2)
    ## Setting constant term to 5 and 10
    TRD <- c(5, 10)
    ## Generating the VAR(2) model and simulating n observations
    var2 <- ARMA(A = Apoly, B = B, TREND = TRD)
    varsim <- simulate(var2, sampleT = n,
                       noise = list(w = matrix(rnorm(2 * n), nrow = n, ncol = 2)))
    vardat <- matrix(varsim$output, nrow = n, ncol = 2)

The above simulation method is well reported in Pfaff (2008a).

Model Specification
A VAR model with lag 2 was used for the generation of the data, so that the data-generating process has a known true lag value; VAR and BVAR models with a lag length of 2 were then used for modeling and forecasting purposes.
The BVAR models employed the Sims-Zha prior, with the range of hyperparameter values given below, together with the Normal-inverse Wishart prior.
This study considered two tight priors and two loose priors, specified as follows, where nμ is the prior degrees of freedom, given as m + 1, with m the number of variables in the multiple time series data. In this work nμ = 3 (two time series variables plus one).
Our choice of the Normal-inverse Wishart prior for the BVAR models follows the work of Kadiyala & Karlsson (1997), who found that the Normal-Wishart prior tends to perform better than other priors. In addition, Sims and Zha (1998) proposed the Normal-inverse Wishart prior because of its suitability for large systems, while Breheny (2013) reported that the main advantage of the Wishart distribution is that it is guaranteed to produce positive-definite draws. The choice of the overall tightness values λ0 = 0.6 and 0.8 is in line with the work of Brandt, Colaresi and Freeman (2008).

Methods of Estimation
The methods of estimation for the models are as follows. The equation-by-equation seemingly unrelated regression (SUR) method is used to estimate the reduced form VAR model.
The multivariate seemingly unrelated regression (SUR) method is used in the estimation of the Bayesian VAR models for just-identified VARs (Sims and Zha, 1998).

Forecast Assessment
The following criteria were used for forecast assessment:
1. The Mean Absolute Error or Deviation (MAE or MAD) has the formula MAE = (1/n) Σ |y_i - y_f|. This criterion measures deviation from the series in absolute terms, and measures how much the forecast is biased. This measure is one of the most common ones used for analyzing the quality of different forecasts.
2. The Root Mean Squared Error (RMSE) is given as RMSE = sqrt((1/n) Σ (y_i - y_f)²), where y_i is the time series data and y_f is the forecast value of y (Caraiani, 2010).
For the two measures above, the smaller the value, the better the fit of the model (Cooray, 2008). In our simulation study the criteria were averaged over the N = 5000 replications, and the model with the minimum RMSE and MAE is taken as the preferred model.
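For concreteness, the two criteria can be computed as in the following sketch (illustrative Python; the paper's computations are in R):

```python
import numpy as np

def mae(y, y_f):
    """Mean absolute error between series y and forecasts y_f."""
    y, y_f = np.asarray(y, float), np.asarray(y_f, float)
    return np.mean(np.abs(y - y_f))

def rmse(y, y_f):
    """Root mean squared error between series y and forecasts y_f."""
    y, y_f = np.asarray(y, float), np.asarray(y_f, float)
    return np.sqrt(np.mean((y - y_f) ** 2))
```

In the simulation study these quantities would be computed for each replication and averaged over the N = 5000 replications before comparing models.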
The simulation results revealed the following:
1. At no and moderate levels (both negative and positive) of autocorrelation, irrespective of the collinearity level, the VAR(2) model was preferred, except in a few cases when T = 16.
2. At high levels (both negative and positive) of autocorrelation, irrespective of the collinearity level, the BVAR models were preferred, except in a few cases when T = 128.

Conclusion and Recommendation
This paper compared the forecasting performance of the reduced form VAR and the reduced form Sims-Zha Bayesian VAR (with harmonic decay) for bivariate time series data that have autocorrelated error terms and variables that are correlated (collinear time series data). The paper revealed that when multiple time series are jointly influenced by autocorrelation and collinearity at different levels, the performances of the models vary with the levels of collinearity and autocorrelation, and also vary with the short term periods.
It is therefore recommended that the data structure and series length should be considered in using an appropriate model for forecasting.

Appendix
Brandt and Freeman (2009) used the BVAR model to model U.S. macroeconomic and political data. They also extended their work with the BVAR model to carry out structural analysis on macro political dynamics with the Bayesian Structural Vector Autoregression (BSVAR). Brandt, Colaresi and Freeman (2008) used the structural Bayesian time series approach to evaluate Bystander, Follower, Accountability and Credibility in a macropolitical economy in relation to the Israel-Palestine conflict and intervention on the part of the United States. Their approach further addresses the problems of model scale, endogeneity and specification uncertainty, and they used reduced form Bayesian VAR models to forecast the Israeli-Palestinian conflict. Njenga and Sherris (2011) used the unrestricted VAR and Sims-Zha Bayesian VAR to model mortality among males and females in a given population; their results revealed that the Sims-Zha Bayesian VAR was superior. Dua and Ranjan (2011) developed vector autoregressive and Bayesian vector autoregressive models to forecast the Indian Re/US dollar exchange rate, which was governed by a managed floating exchange rate regime; their study reported that the Bayesian vector autoregressive models generally outperformed their corresponding VAR variants. Chama-Chiliba et al. (2012) compared the forecasting performances of the classical and the Minnesota-type Bayesian Vector Autoregressive (VAR) models with those of linear (fixed-parameter) and non-linear (time-varying parameter) VARs involving a stochastic search algorithm for variable selection, estimated using Markov Chain Monte Carlo methods. In general, the study revealed that variable selection, whether imposed on a time-varying VAR or a fixed-parameter VAR, and non-linearity in VARs play an important part in improving predictions when compared to the fixed-coefficient classical VAR. Adenomon and Oyejola (2014) compared the forecasting performances of the reduced form Vector Autoregression (VAR) and
Sims-Zha Bayesian VAR (BVAR) in a situation where the endogenous variables are collinear at different levels and at different short-term time series lengths. Their simulation study revealed that the BVAR forecasts seem to be superior. Sacakli-Sacildi (2015) compared the out-of-sample forecasting accuracy of unrestricted and Bayesian VARs. In recent times, the BVAR model of Sims and Zha (1998) has gained popularity in both economic time series and political analysis. As stated in Brandt and Freeman (2006), Litterman proposed the BVAR for the reduced form of the model, while Sims and Zha specified a prior for the simultaneous equations of the model. They further noted that the Sims-Zha prior has more advantages than the BVAR proposed by Litterman: the Sims-Zha BVAR allows a more general specification and can produce a tractable multivariate normal posterior distribution.

The parameter λ1 controls the tightness of the beliefs about the random walk prior, i.e. the standard deviation of the first lags. The l^λ3 term allows the variance of the coefficients on higher order lags to shrink as the lag length increases. The constant in the model receives a prior that adds dummy observations to account for unit roots, trends, and cointegration, which was not possible with the Litterman prior.

Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 20 November 2018

Table 4.0: Sample of Generated Data for Collinearity Level of 0.9 and Autocorrelation Level of 0.9

Table 5.1A: Preferred Forecasting Models at Different Levels of Collinearity and Autocorrelation Using the MAE Criterion for All the Time Series Lengths

Table 4.2B: The Ranks of the Forecasting Performances of the Models When Collinearity Level is Negatively Moderate and Autocorrelation Level is Negatively High (ρ = -0.5, δ = -0.9)

Table 4.4B: The Ranks of the Forecasting Performances of the Models When Collinearity Level is Positively Moderate and Autocorrelation Level is Negatively High (ρ = 0.5, δ = -0.9)

Table 4.6B: The Ranks of the Forecasting Performances of the Models When Collinearity Level is Negatively High and Autocorrelation Level is Moderately Negative (ρ = -0.9, δ = -0.5)

Table 4.9B: The Ranks of the Forecasting Performances of the Models When Collinearity Level is Moderately Positive and Autocorrelation Level is Moderately Negative (ρ = 0.5, δ = -0.5)

Table 4.10B: The Ranks of the Forecasting Performances of the Models When Collinearity Level is Positively High and Autocorrelation Level is Moderately Negative (ρ = 0.9, δ = -0.5)

Table 4.11B: The Ranks of the Forecasting Performances of the Models When Collinearity Level is Negatively High and There is No Autocorrelation (ρ = -0.9, δ = 0)

Table 4.13A: Forecasting Performances of the Models When There is No Collinearity and No Autocorrelation (ρ = 0, δ = 0)

Table 4.14B: The Ranks of the Forecasting Performances of the Models When Collinearity Level is Moderately Positive and There is No Autocorrelation (ρ = 0.5, δ = 0)

Table 4.15B: The Ranks of the Forecasting Performances of the Models When Collinearity is Positively High and There is No Autocorrelation (ρ = 0.9, δ = 0)

Table 4.16B: The Ranks of the Forecasting Performances of the Models When Collinearity Level is Negatively High and Autocorrelation Level is Moderately Positive (ρ = -0.9, δ = 0.5)

Table 4.17B: The Ranks of the Forecasting Performances of the Models When Collinearity Level is Moderately Negative and Autocorrelation Level is Moderately Positive (ρ = -0.5, δ = 0.5)

Table 4.18A: Forecasting Performances of the Models When There is No Collinearity and Autocorrelation Level is Moderately Positive (ρ = 0, δ = 0.5)

Table 4.18B: The Ranks of the Forecasting Performances of the Models When There is No Collinearity and Autocorrelation Level is Moderately Positive (ρ = 0, δ = 0.5)

Table 4.19B: The Ranks of the Forecasting Performances of the Models When Collinearity Level and Autocorrelation Level are Both Moderately Positive (ρ = 0.5, δ = 0.5)

Table 4.20B: The Ranks of the Forecasting Performances of the Models When Collinearity Level is Positively High and Autocorrelation Level is Moderately Positive (ρ = 0.9, δ = 0.5)

Table 4.21B: The Ranks of the Forecasting Performances of the Models When Collinearity is Negatively High and Autocorrelation Level is Positively High (ρ = -0.9, δ = 0.9)

Table 4.22B: The Ranks of the Forecasting Performances of the Models When Collinearity Level is Moderately Negative and Autocorrelation Level is Positively High (ρ = -0.5, δ = 0.9)

Table 4.23A: Forecasting Performances of the Models When There is No Collinearity and Autocorrelation Level is Positively High (ρ = 0, δ = 0.9)

Table 4.24B: The Ranks of the Forecasting Performances of the Models When Collinearity Level is Moderately Positive and Autocorrelation Level is Positively High (ρ = 0.5, δ = 0.9)

Table 4.25B: The Ranks of the Forecasting Performances of the Models When Collinearity and Autocorrelation Levels are Both Positively High (ρ = 0.9, δ = 0.9)