Preprint

The Adaptive Reductions in Game Theory and their Applications to bOOsting


This version is not peer-reviewed

Submitted: 25 March 2022

Posted: 28 March 2022

Abstract
Starting from a very simple economic scenario, we build a game on it and then introduce a general strategy that reduces a regression problem to an equivalent binary classification problem. This reduction scheme (which we call adaptive reduction, or dynamic reduction) can also be used to derive a new boosting algorithm for regression problems, named bOOstd. The bOOstd algorithm is very simple to implement, and it can use any learning algorithm with no a priori assumptions. We present a conjecture on the performance of bOOstd which ensures a small error on the training set. More importantly, we also provide a very good theoretical upper bound on the generalization error. We give a set of preliminary experimental results that seem to confirm our conjecture on the training-set performance of bOOstd and the theoretical assumptions for the generalization error. We also provide a possible justification of why boosting often does not overfit. Finally, we leave some open problems and argue that in the future a single adaptive boosting algorithm (with a unique code) for binary, multi-class, and regression problems can be derived.
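
To illustrate the general flavour of reducing regression to binary classification, the following is a minimal sketch of the classic thresholding idea: each threshold on the target induces a binary problem, and a real-valued prediction is recovered by aggregating the binary votes. This is only an illustrative example; it is not the paper's adaptive reduction nor the bOOstd algorithm, and the base learner (a shallow decision tree) and the aggregation rule are assumptions made here for the sake of a runnable example.

# Hypothetical sketch only: the classic "thresholding" reduction from
# regression to a family of binary classification problems. It is NOT
# the paper's adaptive reduction or bOOstd; the base learner and the
# aggregation rule below are illustrative choices.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_threshold_reduction(X, y, n_thresholds=20):
    """Train one binary classifier per threshold t, each predicting 1[y > t]."""
    thresholds = np.quantile(y, np.linspace(0.05, 0.95, n_thresholds))
    classifiers = []
    for t in thresholds:
        clf = DecisionTreeClassifier(max_depth=3)
        clf.fit(X, (y > t).astype(int))  # binary problem induced by threshold t
        classifiers.append(clf)
    return thresholds, classifiers

def predict_threshold_reduction(X, thresholds, classifiers):
    """Recover a real-valued prediction by integrating the binary votes
    over the thresholds (a discretised layer-cake aggregation)."""
    votes = np.column_stack([clf.predict(X) for clf in classifiers])
    widths = np.diff(thresholds)  # spacing between consecutive thresholds
    return thresholds[0] + votes[:, :-1] @ widths

# Toy usage on synthetic data
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
ths, clfs = fit_threshold_reduction(X, y)
print(predict_threshold_reduction(X[:5], ths, clfs))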
Keywords: 
Subject: Computer Science and Mathematics - Applied Mathematics
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.