Preprint · Version 1 · Preserved in Portico · This version is not peer-reviewed

A Survey on Explainability: Why Should We Believe the Accuracy of A Model?

Version 1 : Received: 23 April 2020 / Approved: 25 April 2020 / Online: 25 April 2020 (02:57:06 CEST)

How to cite: Dutta, P.; Muppalaneni, N.B.; Patgiri, R. A Survey on Explainability: Why Should We Believe the Accuracy of A Model?. Preprints 2020, 2020040456. https://doi.org/10.20944/preprints202004.0456.v1

Abstract

The world is evolving with new technologies and advances day by day. With the advent of learning technologies in every field, the research community can now offer solutions to many aspects of life through applications of Artificial Intelligence, Machine Learning, Deep Learning, Computer Vision, etc. Despite these achievements, however, such systems lag behind in their ability to explain their predictions. The current situation is that these modern technologies can predict and decide on various cases more accurately and quickly than a human, yet they fail to provide an answer when asked why their predictions should be trusted. To gain a deeper understanding of this rising trend, we explore a recent and widely discussed line of work that provides rich insight into the predictions being made -- ``Explainability.'' The main premise of this survey is to provide an overview of research explored in the domain and to convey the current state of the field along with the advancements published to date. This survey is intended to provide a comprehensive background on the broad spectrum of Explainability.

Keywords

Artificial Intelligence; Explainability; Deep Learning; Machine Learning

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
