Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

AutoMH: Automatically Create Evolutionary Metaheuristic Algorithms Using Reinforced Learning

Version 1 : Received: 31 December 2020 / Approved: 4 January 2021 / Online: 4 January 2021 (13:31:05 CET)

How to cite: Almonacid, B. AutoMH: Automatically Create Evolutionary Metaheuristic Algorithms Using Reinforced Learning. Preprints 2021, 2021010048 (doi: 10.20944/preprints202101.0048.v1).

Abstract

Machine learning research has solved problems in many domains. One open area of research is the use of machine learning to solve optimisation problems. An optimisation problem can be solved with a metaheuristic algorithm, which is able to find a good solution in a reasonable amount of time. However, finding a metaheuristic algorithm whose configuration is suitable for a given set of optimisation problems can itself be time-consuming. This work presents an approach that automatically creates metaheuristic algorithms with the aid of reinforcement learning. In the experiments performed, the approach created a metaheuristic algorithm that solved a large number of different continuous-domain optimisation problems. The implications are immediate: the approach provides a basis for generating metaheuristic algorithms in real time.
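The abstract does not describe the method in detail, so the following Python sketch is only a hypothetical illustration of the general idea it states: an outer reinforcement-style loop that assembles a candidate metaheuristic from simple search operators and rewards (keeps) the changes that improve results on a continuous benchmark. All names used here (sphere, OPERATORS, run_metaheuristic, automh_like_search) are assumptions for illustration and are not taken from the AutoMH paper.

# Hypothetical sketch only: not the AutoMH algorithm, just the idea of
# reinforcement-guided construction of a metaheuristic from simple operators.
import random

def sphere(x):
    # Continuous benchmark function: f(x) = sum of squares, minimum at 0.
    return sum(v * v for v in x)

def random_reset(x, lo=-5.0, hi=5.0):
    # Operator 1: restart the solution uniformly at random.
    return [random.uniform(lo, hi) for _ in x]

def gaussian_step(x, sigma=0.1):
    # Operator 2: small Gaussian perturbation around the current solution.
    return [v + random.gauss(0.0, sigma) for v in x]

OPERATORS = [random_reset, gaussian_step]

def run_metaheuristic(recipe, dim=10, iters=200):
    # Run one candidate metaheuristic (a fixed sequence of operators)
    # on the benchmark and return the best objective value found.
    best = [random.uniform(-5.0, 5.0) for _ in range(dim)]
    best_f = sphere(best)
    for _ in range(iters):
        for op in recipe:
            cand = op(best)
            f = sphere(cand)
            if f < best_f:          # keep only improving moves
                best, best_f = cand, f
    return best_f

def automh_like_search(episodes=30, recipe_len=3, eps=0.2):
    # Outer loop: mutate the recipe of operators and adopt the change
    # when it is rewarded (better benchmark result), with occasional
    # exploratory acceptance controlled by eps.
    recipe = [random.choice(OPERATORS) for _ in range(recipe_len)]
    score = run_metaheuristic(recipe)
    for _ in range(episodes):
        new_recipe = list(recipe)
        i = random.randrange(recipe_len)
        new_recipe[i] = random.choice(OPERATORS)
        new_score = run_metaheuristic(new_recipe)
        if new_score < score or random.random() < eps:
            recipe, score = new_recipe, new_score
    return recipe, score

if __name__ == "__main__":
    best_recipe, best_value = automh_like_search()
    print([op.__name__ for op in best_recipe], best_value)

In this toy version the "reward" is simply an improvement of the benchmark value; the paper's approach presumably uses a richer reinforcement learning formulation and a larger operator set, which the abstract does not specify.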

Subject Areas

Artificial intelligence; Machine Learning; Reinforced Learning; Optimisation; Metaheuristic; Metaheuristic Generation
