Preserved in Portico. This version is not peer-reviewed.
Deep Reinforcement Learning for Energy Management in a Microgrid with Flexible Demand
Received: 6 October 2020 / Approved: 7 October 2020 / Online: 7 October 2020 (11:21:03 CEST)
A peer-reviewed article of this Preprint also exists.
Journal reference: Sustainable Energy, Grids and Networks 2021, 25, 100413
In this paper, we study the performance of various deep reinforcement learning algorithms for enhancing the energy management system of a microgrid. We propose a novel microgrid model that consists of a wind turbine generator, an energy storage system, a set of thermostatically controlled loads, a set of price-responsive loads, and a connection to the main grid. The proposed energy management system is designed to coordinate the different flexible sources by defining resource priorities, direct demand-control signals, and electricity prices. Seven deep reinforcement learning algorithms are implemented and empirically compared in this paper. The numerical results show that the deep reinforcement learning algorithms differ widely in their ability to converge to optimal policies. By adding an experience replay and a semi-deterministic training phase to the well-known asynchronous advantage actor-critic algorithm, we achieved the highest model performance as well as convergence to near-optimal policies.
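To make the two extensions mentioned above concrete, the sketch below illustrates, under our own assumptions, what an experience replay buffer and a "semi-deterministic" action-selection rule could look like in isolation. The class and function names, the uniform sampling, and the `greedy_prob` parameter are illustrative choices, not the authors' implementation details.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of past transitions, sampled uniformly for updates.

    Illustrative sketch: the paper's replay mechanism may differ in
    capacity, sampling scheme, and stored fields.
    """
    def __init__(self, capacity):
        # deque with maxlen evicts the oldest transition once full
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        # transition is assumed to be (state, action, reward, next_state)
        self.buffer.append(transition)

    def sample(self, batch_size):
        # uniform sampling without replacement from the stored transitions
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

def semi_deterministic_action(policy_probs, greedy_prob=0.8):
    """With probability greedy_prob pick the argmax (deterministic) action;
    otherwise sample from the policy distribution, preserving exploration.

    greedy_prob is a hypothetical knob for the "semi-deterministic" phase.
    """
    if random.random() < greedy_prob:
        return max(range(len(policy_probs)), key=lambda a: policy_probs[a])
    return random.choices(range(len(policy_probs)), weights=policy_probs)[0]
```

In an actor-critic training loop, such a buffer would feed mini-batches of stored transitions into the critic update, while the action selector would gradually shift the worker threads from stochastic policy sampling toward greedy behavior.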
Keywords: Artificial intelligence; Deep reinforcement learning; Demand response; Dynamic pricing; Energy management system; Microgrid; Neural networks; Price-responsive loads; Smart grid; Thermostatically controlled loads
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.