Preprint · Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

User Anxiety-aware Electric Vehicle Charging Scheduling: An Episodic Deep Reinforcement Learning Approach

Version 1 : Received: 5 April 2024 / Approved: 5 April 2024 / Online: 9 April 2024 (09:35:02 CEST)

How to cite: Zaman, I.; He, M. User Anxiety-aware Electric Vehicle Charging Scheduling: An Episodic Deep Reinforcement Learning Approach. Preprints 2024, 2024040598. https://doi.org/10.20944/preprints202404.0598.v1

Abstract

The transportation industry is rapidly transitioning from Internal Combustion Engine (ICE) vehicles to Electric Vehicles (EVs) to promote clean energy. However, large-scale adoption of EVs can compromise the reliability of power grids by introducing large uncertainty into demand. Demand response with a controlled charge scheduling strategy for EVs can mitigate such issues. In this paper, a deep reinforcement learning-based charge scheduling strategy is developed for individual EVs by considering the user's dynamic driving behavior and charging preferences. The temporal dynamics of the user's anxiety about charging the EV battery are rigorously addressed. A dynamic weight allocation technique is applied to continuously tune the user's priorities for charging and cost-saving over the charging duration. The sequential charging control problem is formulated as a Markov decision process, and an episodic approach to the deep deterministic policy gradient (DDPG) algorithm with target policy smoothing and delayed policy update techniques is applied to develop the optimal charge scheduling strategy. A real-world dataset that captures the user's driving behavior, such as arrival time, departure time, and charging duration, is utilized in this study. Extensive simulation results reveal the effectiveness of the proposed algorithm in minimizing energy cost while satisfying the user's charging requirements.
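Two ingredients of the abstract can be made concrete with a short sketch: (i) a time-varying weight that shifts the reward's emphasis from cost-saving toward charging completion as the departure time nears, and (ii) target policy smoothing, i.e., adding clipped noise to the target actor's action before evaluating the target critic (the technique the abstract borrows alongside delayed policy updates). The paper's exact weight schedule and reward form are not given here, so the exponential ramp, the quadratic state-of-charge penalty, and all parameter values below are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def anxiety_weight(t, T, k=3.0):
    """Hypothetical dynamic weight in [0, 1]: the user's charging priority
    grows as elapsed time t approaches the departure horizon T. The paper's
    exact schedule is unspecified; an exponential ramp is assumed here."""
    x = np.clip(t / T, 0.0, 1.0)
    return (np.exp(k * x) - 1.0) / (np.exp(k) - 1.0)

def reward(price, power, soc_gap, t, T, dt=1.0):
    """Illustrative two-term reward: negative energy cost blended against a
    penalty on the remaining state-of-charge gap, using the dynamic weight."""
    w = anxiety_weight(t, T)
    return -(1.0 - w) * price * power * dt - w * soc_gap ** 2

def smoothed_target_action(mu_action, noise_std=0.2, noise_clip=0.5,
                           a_low=-1.0, a_high=1.0, rng=None):
    """Target policy smoothing: perturb the target actor's output with
    clipped Gaussian noise, then clip to the valid charging-power range,
    before the target critic evaluates the next-state action value."""
    rng = rng if rng is not None else np.random.default_rng()
    eps = np.clip(rng.normal(0.0, noise_std), -noise_clip, noise_clip)
    return float(np.clip(mu_action + eps, a_low, a_high))
```

Early in the plug-in session the weight is near zero, so the agent chases low electricity prices; close to departure the weight approaches one and the state-of-charge penalty dominates, which is one plausible reading of "user anxiety" in the reward.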

Keywords

deep deterministic policy gradient (DDPG), deep reinforcement learning, EV charge scheduling, Markov decision process (MDP)

Subject

Engineering, Electrical and Electronic Engineering

