Preprint Article, Version 1. Preserved in Portico. This version is not peer-reviewed.

The Adaptive Reductions in Game Theory and their Applications to bOOsting

Version 1 : Received: 25 March 2022 / Approved: 28 March 2022 / Online: 28 March 2022 (12:11:49 CEST)

How to cite: Azzini, I. The Adaptive Reductions in Game Theory and their Applications to bOOsting. Preprints 2022, 2022030362. https://doi.org/10.20944/preprints202203.0362.v1

Abstract

Starting from a very simple economic scenario, we build a game on it and then introduce a general strategy able to reduce a regression problem to an equivalent binary classification problem. This reduction scheme (which we call adaptive reduction, or also dynamic reduction) can also be used to derive a new boosting algorithm for regression problems, named bOOstd. The bOOstd algorithm is very simple to implement, and it can use any learning algorithm with no a priori assumptions. We present a conjecture for bOOstd's performance, which ensures a small error on the training set. More importantly, we can also provide a very good theoretical upper bound on the generalization error. We give a set of preliminary experimental results that seem to confirm our conjecture about bOOstd's performance on the training set and the theoretical assumptions for the generalization error. We also provide a possible justification of why boosting often does not overfit. Finally, we leave some open problems and argue that in the future a single adaptive boosting algorithm (with a unique code) for binary, multi-class, and regression problems can be derived.
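The abstract describes a reduction of regression to binary classification but does not spell out the scheme here; the sketch below is only a minimal, generic illustration of the idea of such a reduction (thresholding the target and training one binary learner per threshold), not the paper's adaptive/dynamic reduction or the bOOstd algorithm. All names, the decision-stump base learner, the uniform threshold grid, and the vote-counting reconstruction are assumptions made for illustration.

```python
# Illustrative sketch only: reduce a 1-D regression target to binary
# classification problems by thresholding y, then recover a real-valued
# prediction by counting the thresholds each point is predicted to exceed.
import numpy as np

def fit_stump(X, z):
    """Fit a depth-1 decision stump for binary labels z in {-1, +1}."""
    best = (np.inf, 0, 0.0, 1)                 # (error, feature, split, sign)
    for j in range(X.shape[1]):
        for s in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] <= s, sign, -sign)
                err = np.mean(pred != z)
                if err < best[0]:
                    best = (err, j, s, sign)
    return best[1:]

def predict_stump(stump, X):
    j, s, sign = stump
    return np.where(X[:, j] <= s, sign, -sign)

def regression_via_binary(X, y, n_thresholds=20):
    """One binary problem per threshold t: label_i = sign(y_i - t)."""
    ts = np.linspace(y.min(), y.max(), n_thresholds)
    stumps = [fit_stump(X, np.where(y > t, 1, -1)) for t in ts]
    step = ts[1] - ts[0]

    def predict(Xq):
        # Count how many thresholds each point is predicted to exceed,
        # then map that count back to the target scale.
        votes = sum((predict_stump(s, Xq) > 0).astype(float) for s in stumps)
        return y.min() + votes * step
    return predict

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 2))
    y = X[:, 0] ** 2 + 0.1 * rng.normal(size=200)
    model = regression_via_binary(X, y)
    print("train MAE:", np.mean(np.abs(model(X) - y)))
```

In this toy version the thresholds are fixed in advance; the paper's scheme is called adaptive (dynamic) precisely because the reduction is not fixed this way, so the sketch should be read only as a baseline illustration of regression-to-binary reductions.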

Keywords

game theory; economic relation; problem reductions; binary and regression problems; machine learning; boosting; neural network

Subject

Computer Science and Mathematics, Applied Mathematics
