Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Explaining the Behaviour of Reinforcement Learning Agents in a Multi-Agent Cooperative Environment Using Policy Graphs

Version 1 : Received: 17 January 2024 / Approved: 18 January 2024 / Online: 19 January 2024 (04:41:49 CET)

A peer-reviewed article of this Preprint also exists.

Domenech i Vila, M.; Gnatyshak, D.; Tormos, A.; Gimenez-Abalos, V.; Alvarez-Napagao, S. Explaining the Behaviour of Reinforcement Learning Agents in a Multi-Agent Cooperative Environment Using Policy Graphs. Electronics 2024, 13, 573.

Abstract

The adoption of algorithms based on Artificial Intelligence (AI) has increased rapidly in recent years. However, some aspects of AI techniques are under heavy scrutiny. For instance, in many use cases, it is not clear whether the decisions of an algorithm are well-informed and conform to human understanding. Having ways to address these concerns is crucial in many domains, especially whenever humans and intelligent (physical or virtual) agents must cooperate in a shared environment. In this paper, we introduce an application of an explainability method based on the creation of a Policy Graph (PG) built from discrete predicates, which represents and explains a trained agent’s behaviour in a multi-agent cooperative environment. We show that from these policy graphs, policies for surrogate interpretable agents can be automatically generated. These policies can be used to measure the reliability of the explanations enabled by the PGs, through a fair behavioural comparison between the original opaque agent and the surrogate one. The contributions of this paper represent the first application of policy graphs in the context of explaining agent behaviour in collaborative multi-agent scenarios, and present experimental results that set this kind of scenario apart from previous applications in single-agent scenarios: when collaborative behaviour is required, predicates that represent observations about the other agents are crucial to replicate the opaque agent’s behaviour and increase the reliability of explanations.
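The core idea described above can be sketched in a few lines: a tabular policy graph is estimated by counting which actions the trained agent takes in each discrete predicate state, and a surrogate interpretable policy is then derived from those empirical frequencies. This is a minimal illustration, not the paper's implementation; the predicate names and data layout below are purely hypothetical.

```python
from collections import defaultdict, Counter

def build_policy_graph(trajectories):
    """Estimate P(action | predicate state) from (state, action) pairs
    logged while observing the trained (opaque) agent."""
    counts = defaultdict(Counter)
    for episode in trajectories:
        for state, action in episode:
            counts[state][action] += 1
    # Normalise visit counts into per-state action distributions.
    return {
        s: {a: n / sum(c.values()) for a, n in c.items()}
        for s, c in counts.items()
    }

def surrogate_action(pg, state):
    """Greedy surrogate policy: the most probable recorded action."""
    return max(pg[state], key=pg[state].get)

# Toy trajectories; the boolean-style predicates (including one about
# the other agent, "near_other") are illustrative placeholders.
episodes = [
    [(("near_other", "holding"), "drop"), (("near_other", "empty"), "move")],
    [(("near_other", "holding"), "drop")],
]
pg = build_policy_graph(episodes)
print(surrogate_action(pg, ("near_other", "holding")))  # -> drop
```

Comparing the surrogate's chosen actions against the opaque agent's on held-out states gives the kind of behavioural similarity measure the abstract refers to.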

Keywords

Explainable AI; Reinforcement Learning; Policy Graphs; Multi-agent Reinforcement Learning; Cooperative Environments

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
