The rapid growth of Artificial Intelligence (AI) has led to considerable progress in Automated Fact Verification (AFV), the task of collecting evidence for a claim, assessing its relevance, and predicting the claim's veracity. Recent research has begun to treat automatically generated explanations as an integral part of this veracity-prediction process. Nevertheless, explainability within AFV lags behind the wider field of Explainable AI (XAI), which aims to make AI decisions more transparent, and although there is broad consensus that AI systems should be explainable, there is a dearth of systems and processes for achieving it. This study examines explainability as a topic within XAI and demonstrates its various facets through the specific task of AFV, taking into account architectural, methodological, and dataset-related factors, with the broader goal of making AI more comprehensible and acceptable to society. We explore the notion of faithfulness in the context of local and global explainability, and show how these correspond to architectural, methodological, and data-based means of achieving it. We then examine these concepts for the specific case of AFV, analyzing the datasets currently used for the task and how they can be adapted to further the identified aims of XAI. The paper concludes by highlighting gaps and limitations in current data-science practice and recommending modifications to architectural and data-curation processes that would advance the goals of XAI.