Starting from a very simple economic scenario, we build a game on it and then introduce a general strategy that reduces a regression problem to an equivalent binary classification problem. This reduction scheme (which we call adaptive, or dynamic, reduction) can also be used to derive a new boosting algorithm for regression problems, named bOOstd. The bOOstd algorithm is very simple to implement, and it can use any learning algorithm with no a priori assumptions. We present a conjecture on the performance of bOOstd, which guarantees a small error on the training set. More importantly, we also provide a strong theoretical upper bound on the generalization error. We give a set of preliminary experimental results that seem to confirm both our conjecture on the training-set performance of bOOstd and the theoretical assumptions on the generalization error. We also offer a possible explanation of why boosting often does not overfit. Finally, we leave some open problems and argue that, in the future, a single adaptive boosting algorithm (with a unique code) for binary, multi-class, and regression problems could be derived.
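The abstract does not spell out the adaptive reduction itself, but as a point of reference the sketch below shows the classic *static* reduction of regression to binary classification by thresholding the target: one classifier per threshold predicts whether the target lies above it, and a regression estimate is reconstructed by counting positive votes over the threshold grid. This is a generic illustration, not the paper's adaptive scheme or bOOstd; the threshold grid, base classifier, and reconstruction rule are all illustrative choices.

```python
# Minimal sketch of a static regression-to-binary-classification reduction
# (illustrative only; NOT the paper's adaptive reduction or bOOstd).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_threshold_reduction(X, y, thresholds):
    """Fit one binary classifier per threshold t, each predicting 1[y > t]."""
    models = []
    for t in thresholds:
        clf = DecisionTreeClassifier(max_depth=3)  # base learner: an assumption
        clf.fit(X, (y > t).astype(int))            # binary label: is y above t?
        models.append(clf)
    return models

def predict_threshold_reduction(models, X, y_min, y_max):
    """Reconstruct a regression estimate from the classifiers' votes."""
    step = (y_max - y_min) / len(models)
    # Each positive vote says "the target lies above this threshold",
    # so the vote count positions the estimate within [y_min, y_max].
    votes = sum(clf.predict(X) for clf in models)
    return y_min + step * votes

# Toy usage: noisy sine regression.
rng = np.random.default_rng(0)
X = rng.uniform(0, 6, size=(400, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=400)
thresholds = np.linspace(y.min(), y.max(), 20)
models = fit_threshold_reduction(X, y, thresholds)
y_hat = predict_threshold_reduction(models, X, y.min(), y.max())
print("train MAE:", np.abs(y_hat - y).mean())
```

A static grid like this fixes the thresholds once before training; an adaptive (dynamic) reduction, as the name used in the abstract suggests, would instead adjust the reduction during learning.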
Subject: Computer Science and Mathematics - Applied Mathematics
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.