Preprint Article Version 1 Preserved in Portico This version is not peer-reviewed

Reinforcement Learning for Electric Vehicle Charging using Dueling Neural Networks

Version 1 : Received: 22 March 2021 / Approved: 24 March 2021 / Online: 24 March 2021 (13:44:36 CET)

How to cite: Gokhale, G.; Claessens, B.; Develder, C. Reinforcement Learning for Electric Vehicle Charging using Dueling Neural Networks. Preprints 2021, 2021030592. https://doi.org/10.20944/preprints202103.0592.v1

Abstract

We consider the problem of coordinating the charging of an entire fleet of electric vehicles (EVs) using a model-free approach, i.e., purely data-driven reinforcement learning (RL). The objective of the RL-based control is to optimize charging actions while fulfilling all EV charging constraints (e.g., timely completion of charging). In particular, we focus on batch-mode learning and adopt fitted Q-iteration (FQI). A core component of FQI is approximating the Q-function using a regression technique, from which the policy is derived. Recently, a dueling neural network architecture was proposed and shown to lead to better policy evaluation in the presence of many similar-valued actions, as applied in a computer game context. The main research contributions of the current paper are that (i) we develop a dueling neural network approach for the setting of joint coordination of an entire EV fleet, and (ii) we evaluate its performance and compare it to an all-knowing benchmark and an FQI approach using the extra-trees regression technique, a popular approach in EV-related works. We present a case study where RL agents are trained with an epsilon-greedy approach for different objectives: (a) cost minimization, and (b) maximization of self-consumption of local renewable energy sources. Our results indicate that RL agents achieve significant cost reductions (70–80%) compared to a business-as-usual scenario without smart charging. Comparing the dueling neural network regression to extra-trees indicates that, for our case study's EV fleet parameters and training scenario, the extra-trees-based agents achieve higher performance in terms of both lower costs (or higher self-consumption) and stronger robustness, i.e., less variation among trained agents. This suggests that adopting dueling neural networks in this EV setting is not particularly beneficial, as opposed to the Atari game context from which this idea originated.
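The dueling architecture mentioned in the abstract splits the Q-function into a state-value stream V(s) and an advantage stream A(s,a), which are then recombined. As a minimal illustration (not the authors' implementation; array shapes and values here are hypothetical), the standard aggregation subtracts the mean advantage for identifiability, so that Q(s,a) = V(s) + (A(s,a) − mean over actions of A(s,·)):

```python
import numpy as np

def dueling_q_values(value, advantages):
    """Combine the two dueling streams into Q-values.

    value:      array of shape (batch, 1), the state-value stream V(s)
    advantages: array of shape (batch, n_actions), the advantage stream A(s,a)

    Uses the mean-subtraction aggregation: Q(s,a) = V(s) + (A(s,a) - mean_a A(s,a)),
    which makes the decomposition identifiable.
    """
    return value + (advantages - advantages.mean(axis=-1, keepdims=True))

# Hypothetical example: one state, three charging actions.
v = np.array([[2.0]])              # V(s)
a = np.array([[0.5, 1.5, 1.0]])    # A(s, a) for each action
q = dueling_q_values(v, a)         # Q-values: [1.5, 2.5, 2.0]
greedy_action = int(q.argmax())    # greedy policy picks action 1
```

In an FQI setting, a regressor with this output structure would be refit at each iteration on the bootstrapped targets; the aggregation itself is the only part specific to the dueling idea.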

Keywords

Electric Vehicles; batch reinforcement learning; dueling neural networks; fitted Q-iteration

Subject

Engineering, Electrical and Electronic Engineering

