Preprint (Review) · Version 1 · Preserved in Portico · This version is not peer-reviewed

Explainability of Automated Fact Verification Systems: A Comprehensive Review

Version 1: Received: 5 October 2023 / Approved: 6 October 2023 / Online: 9 October 2023 (15:08:33 CEST)

A peer-reviewed article of this Preprint also exists.

Vallayil, M.; Nand, P.; Yan, W.Q.; Allende-Cid, H. Explainability of Automated Fact Verification Systems: A Comprehensive Review. Appl. Sci. 2023, 13, 12608.

Abstract

The rapid growth of Artificial Intelligence (AI) has led to considerable progress in Automated Fact Verification (AFV). This process involves collecting evidence for a statement, assessing its relevance, and predicting the statement's veracity. Recently, research has begun to explore automatically generated explanations as an integral part of the verification process. However, explainability within AFV lags behind the wider field of explainable AI (XAI), which aims to make AI decisions more transparent. This study examines the notion of explainability as a topic within XAI, with a focus on how it applies to the specific task of AFV. It considers architectural, methodological, and dataset-related elements, with the aim of making AI more comprehensible and acceptable to society at large. Although there is broad consensus on the need for AI systems to be explainable, there is a dearth of systems and processes to achieve it. In this research, we investigate the concept of explainable AI in general and illustrate its various aspects through the particular task of AFV. We explore the topic of faithfulness in the context of local and global explainability and how these correspond to architectural, methodological, and data-based ways of achieving it. We examine these concepts for the specific case of AFV, analyze the datasets currently used for AFV, and discuss how they can be adapted to further the identified aims of XAI. The paper concludes by highlighting gaps and limitations in current data science practices and by recommending modifications to architectural and data curation processes that would further the goals of XAI.
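
To make the pipeline described in the abstract concrete, the following minimal Python sketch walks through the three stages: evidence retrieval, relevance assessment, and verdict prediction with an attached rationale. It is purely illustrative and not drawn from the paper; the function names and the word-overlap scoring are hypothetical stand-ins for the trained retrievers, rankers, and entailment models a real AFV system would use.

# Illustrative sketch of the three-stage AFV pipeline the abstract describes:
# evidence retrieval -> relevance assessment -> verdict prediction with a
# human-readable rationale. All names and scoring logic here are hypothetical.
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str        # e.g. "SUPPORTED", "REFUTED", "NOT ENOUGH INFO"
    rationale: str    # the explanation surfaced alongside the verdict

def retrieve_evidence(claim: str, corpus: list[str]) -> list[str]:
    # Hypothetical retrieval step: keep passages sharing any term with the claim.
    terms = set(claim.lower().split())
    return [p for p in corpus if terms & set(p.lower().split())]

def score_relevance(claim: str, passage: str) -> float:
    # Hypothetical relevance score: word-overlap ratio. A real system would
    # use a trained ranker rather than lexical overlap.
    terms = set(claim.lower().split())
    overlap = terms & set(passage.lower().split())
    return len(overlap) / max(len(terms), 1)

def verify(claim: str, corpus: list[str]) -> Verdict:
    evidence = retrieve_evidence(claim, corpus)
    if not evidence:
        return Verdict("NOT ENOUGH INFO", "No relevant evidence retrieved.")
    best = max(evidence, key=lambda p: score_relevance(claim, p))
    # A real verifier would apply a trained entailment model; here the
    # rationale simply cites the evidence that drove the decision.
    label = "SUPPORTED" if score_relevance(claim, best) > 0.5 else "NOT ENOUGH INFO"
    return Verdict(label, f"Top-ranked evidence: {best!r}")

print(verify("The Eiffel Tower is in Paris",
             ["The Eiffel Tower is located in Paris, France.",
              "Mount Everest is the tallest mountain."]))

Attaching the rationale to the verdict object reflects the idea, discussed in the review, of treating the explanation as an integral output of the verification process rather than a post-hoc add-on.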

Keywords

automated fact verification; AFV; explainable artificial intelligence; XAI; explainable AFV

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
