Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Safety Constraint-Guided Reinforcement Learning with Linear Temporal Logic

Version 1 : Received: 30 October 2023 / Approved: 30 October 2023 / Online: 30 October 2023 (08:25:05 CET)

A peer-reviewed article of this Preprint also exists.

Kwon, R.; Kwon, G. Safety Constraint-Guided Reinforcement Learning with Linear Temporal Logic. Systems 2023, 11, 535.

Abstract

In the context of reinforcement learning (RL), ensuring both safety and performance is crucial, especially in real-world scenarios where mistakes can lead to severe consequences. This study aims to address this challenge by integrating temporal logic constraints into RL algorithms, thereby providing a formal mechanism for safety verification. We employ a combination of theoretical and empirical methods, including the use of temporal logic for formal verification and extensive simulations to validate our approach. Our results demonstrate that the proposed method not only maintains high levels of safety but also achieves comparable performance to traditional RL algorithms. Importantly, our approach fills a critical gap in existing literature by offering a solution that is both mathematically rigorous and empirically validated. The study concludes that the integration of temporal logic into RL offers a promising avenue for developing algorithms that are both safe and efficient. This work lays the foundation for future research aimed at generalizing this approach to various complex systems and applications.
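To make the core idea concrete, the following toy sketch (ours, not the paper's implementation) shows one common way LTL safety constraints can guide an RL agent: the safety formula is compiled into a small monitor automaton, and a "shield" filters out any action whose successor state would drive the monitor into a violating state. The corridor environment, the hazard cell, and all names here are illustrative assumptions, not taken from the article.

```python
import random

# Hypothetical toy setup: a 1-D corridor where cell 4 is a hazard.
# The LTL safety property G(!hazard) ("always avoid the hazard") compiles
# to a two-state monitor: "safe" until the hazard is visited, after which
# it moves to an absorbing "violated" state.

HAZARD = 4
ACTIONS = (-1, +1)  # move left / move right


def monitor_step(mon_state, env_state):
    """Monitor automaton for G(!hazard): 'violated' is absorbing."""
    if mon_state == "violated" or env_state == HAZARD:
        return "violated"
    return "safe"


def shielded_actions(env_state, mon_state):
    """Keep only actions whose successor keeps the monitor non-violating."""
    return [a for a in ACTIONS
            if monitor_step(mon_state, env_state + a) == "safe"]


def run_episode(start=0, steps=20, seed=0):
    """Roll out a random policy through the shield; stand-in for a learner."""
    rng = random.Random(seed)
    s, mon = start, "safe"
    for _ in range(steps):
        allowed = shielded_actions(s, mon)
        if not allowed:  # no safe action left: stop (a real shield may back off)
            break
        a = rng.choice(allowed)  # the learner would pick from this filtered set
        s += a
        mon = monitor_step(mon, s)
    return s, mon
```

In this sketch the learning algorithm itself is unchanged; safety enters only through the restricted action set, which is why such approaches can retain performance comparable to unconstrained RL while the monitor provides a formal safety guarantee.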

Keywords

RL; safety constraint; linear temporal logic; formal verification

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
