Preprint Article Version 1 This version is not peer-reviewed

On the Tuning Parameter Selection in Model Selection and Model Averaging: A Monte Carlo Study

Version 1 : Received: 24 May 2019 / Approved: 27 May 2019 / Online: 27 May 2019 (10:28:22 CEST)

How to cite: Xiao, H.; Sun, Y. On the Tuning Parameter Selection in Model Selection and Model Averaging: A Monte Carlo Study. Preprints 2019, 2019050311 (doi: 10.20944/preprints201905.0311.v1).

Abstract

Model selection and model averaging are popular approaches to handling model uncertainty. Fan and Li (2006) laid out a unified framework for variable selection via penalized likelihood. In this optimization problem, the choice of tuning parameter is vital for the penalized estimators to achieve consistent selection and optimal estimation. Since the OLS post-LASSO estimator of Belloni and Chernozhukov (2013), few studies have examined the finite-sample performance of the class of OLS post-penalty estimators when the tuning parameter is chosen by different selection approaches. We aim to supplement the existing model selection literature by studying this class of OLS post-selection estimators. Inspired by the Shrinkage Averaging Estimator (SAE) of Schomaker (2012) and the Mallows Model Averaging (MMA) criterion of Hansen (2007), we further propose a Shrinkage Mallows Model Averaging (SMMA) estimator for averaging high-dimensional sparse models. Building on the Monte Carlo design of Wang et al. (2009), which features a sparse parameter space that expands with the sample size, our design additionally considers the effects of the effective sample size and the degree of model sparsity on the finite-sample performance of model selection and model averaging estimators. In our data examples, the OLS post-SCAD(BIC) estimator outperforms most current penalized least squares estimators in finite samples as long as the number of parameters does not exceed the sample size. In addition, the SMMA performs better the sparser the model is, which supports using the SMMA estimator when averaging high-dimensional sparse models.
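The post-selection idea summarized above can be sketched as follows: fit a LASSO path over a grid of tuning parameters, pick the tuning parameter by BIC, and then refit OLS on the selected support. This is a minimal illustrative sketch only, not the authors' implementation; it assumes scikit-learn is available, and the data-generating process, the λ grid, and all variable names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical sparse data-generating process: 3 active out of 10 regressors.
rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [3.0, 1.5, 2.0]
y = X @ beta + rng.standard_normal(n)

def bic(y, yhat, df, n):
    """BIC for a least-squares fit with df selected parameters."""
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + df * np.log(n)

# Step 1: choose the LASSO tuning parameter by minimizing BIC over a grid.
best_score, best_support = np.inf, np.array([], dtype=int)
for lam in np.logspace(-2, 1, 30):
    fit = Lasso(alpha=lam).fit(X, y)
    support = np.flatnonzero(fit.coef_)
    score = bic(y, fit.predict(X), support.size, n)
    if score < best_score:
        best_score, best_support = score, support

# Step 2: OLS refit on the selected support (the "post-LASSO" step).
support = best_support
beta_post = np.zeros(p)
if support.size:
    beta_post[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]

print("selected support:", support)
print("post-LASSO OLS coefficients:", np.round(beta_post, 2))
```

The OLS refit removes the shrinkage bias that the LASSO penalty imposes on the retained coefficients, which is the motivation the abstract gives for studying the class of OLS post-penalty estimators.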

Subject Areas

Mallows criterion; Model averaging; Model selection; Shrinkage; Tuning parameter choice.


