Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Achievable Minimally Contrastive Counterfactual Explanations

Version 1: Received: 10 July 2023 / Approved: 11 July 2023 / Online: 12 July 2023 (07:52:00 CEST)

A peer-reviewed article of this Preprint also exists.

Barzekar, H.; McRoy, S. Achievable Minimally-Contrastive Counterfactual Explanations. Mach. Learn. Knowl. Extr. 2023, 5, 922–936.

Abstract

Decision support systems based on machine learning models should be able to help users identify opportunities and threats. Popular model-agnostic explanation methods can identify the factors that support various predictions, answering questions such as “What factors affect sales?” or “Why did sales decline?”, but they do not highlight what a person should or could do to obtain a more desirable outcome. Counterfactual explanation approaches address intervention, and some even consider feasibility, but their suitability for real-time applications, such as question answering, has not been evaluated. Here we address this gap by introducing a novel model-agnostic method that provides specific, feasible changes that would alter the outcome of a complex black-box AI model for a given instance, and we assess its real-world utility by measuring its real-time performance and its ability to find achievable changes. The method uses the instance of concern to generate high-precision explanations and then applies a secondary method to find minimally contrastive and maximally probable high-precision counterfactual explanations, while limiting the contrasted features to changes that are achievable. We demonstrate that, with this method, such explanations can be found quickly enough for use in real-time systems. High-precision, achievable, minimally contrastive explanations would be useful in applications where people seek remedial actions or ask how effective a proposed remedy is likely to be.
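
As a rough illustration of the kind of search the abstract describes (a sketch, not the authors' implementation), the Python snippet below perturbs only user-designated "achievable" features of a single instance, prefers the smallest number of changed features, and ranks any prediction-flipping candidates by the model's probability for the desired outcome. The function name, the achievable_values mapping, and the scikit-learn-style predict_proba interface are assumptions made for this example.

import itertools
import numpy as np

def achievable_counterfactuals(model, instance, achievable_values,
                               desired_class, max_changes=2):
    # model             -- black-box classifier exposing predict_proba(X) (assumed interface)
    # instance          -- 1-D NumPy array, the instance of concern
    # achievable_values -- dict {feature_index: iterable of candidate values};
    #                      only these features may be changed (achievability constraint)
    # desired_class     -- index of the outcome the user wants
    # max_changes       -- cap on the number of altered features (minimality)
    found = []
    # Try the smallest edits first: single-feature changes, then pairs, and so on.
    for k in range(1, max_changes + 1):
        for feats in itertools.combinations(achievable_values, k):
            grids = [achievable_values[f] for f in feats]
            for values in itertools.product(*grids):
                candidate = instance.astype(float).copy()
                candidate[list(feats)] = values
                proba = model.predict_proba(candidate.reshape(1, -1))[0]
                if int(np.argmax(proba)) == desired_class:
                    found.append((dict(zip(feats, values)), float(proba[desired_class])))
        if found:
            # Stop at the smallest edit size that flips the prediction.
            break
    # Most probable counterfactuals first.
    return sorted(found, key=lambda pair: -pair[1])

For example, with a trained tabular classifier, achievable_values might restrict the search to modifiable features such as medication dose or activity level while leaving demographic attributes untouched; under these assumptions, the first non-empty edit size that flips the prediction corresponds to a minimally contrastive, achievable change.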

Keywords

Machine Learning; Interpretability; Feasibility; Counterfactual and Contrastive Explanation

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
