Preprint Article, Version 1 (not peer-reviewed; preserved in Portico)

Neurosymbolic Knowledge Representation for Explainable and Trustworthy AI

Version 1: Received: 15 January 2020 / Approved: 16 January 2020 / Online: 16 January 2020 (10:49:10 CET)

How to cite: Di Maio, P. Neurosymbolic Knowledge Representation for Explainable and Trustworthy AI. Preprints 2020, 2020010163. https://doi.org/10.20944/preprints202001.0163.v1

Abstract

AI research and implementations are growing, and so are the risks associated with AI (Artificial Intelligence) developments, especially when it is difficult to understand exactly what such systems do and how they work, both at a localized level and at deployment, particularly when they are distributed and operate on a large scale. Governments are pouring massive funding into promoting AI research and education, yet research results and claims, as well as the effectiveness of educational programmes, can be difficult to evaluate given the limited reproducibility of computations based on ML (machine learning) and their poor explainability, which in turn limits the accountability of these systems and can cause cascading systemic problems, including poor reliability and an overall lack of trustworthiness. This paper addresses some of the issues in Knowledge Representation (KR) for AI at the system level, identifies a number of knowledge gaps and epistemological challenges as root causes of risks and challenges for AI, and proposes that neurosymbolic and hybrid KR approaches can serve as mechanisms to address some of these challenges. The paper concludes with a postulate and points to related and future research.
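As a concrete illustration of the hybrid pattern the abstract alludes to, the minimal sketch below (not taken from the paper; all names, rules, and thresholds are illustrative assumptions) pairs a sub-symbolic component that proposes a label with a symbolic rule base that either entails the label, yielding a human-auditable explanation, or flags the prediction as unexplained.

```python
# Minimal neurosymbolic sketch: a sub-symbolic model proposes a label,
# and a symbolic knowledge base checks it against explicit rules,
# producing a traceable explanation. Everything here is illustrative.

from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float  # opaque score from the sub-symbolic component

def neural_component(features: dict) -> Prediction:
    # Stand-in for a trained network: returns a label and a confidence.
    score = 0.9 if features.get("wings") and features.get("feathers") else 0.4
    return Prediction(label="bird", confidence=score)

# Symbolic layer: explicit, human-auditable rules (premises -> conclusion).
RULES = [
    ({"wings", "feathers"}, "bird"),
    ({"fins", "gills"}, "fish"),
]

def symbolic_check(features: dict, prediction: Prediction):
    # Verify the sub-symbolic output against the explicit rule base.
    observed = {name for name, present in features.items() if present}
    for premises, conclusion in RULES:
        if conclusion == prediction.label and premises <= observed:
            # The prediction is entailed by a rule: it can be explained.
            return True, f"{sorted(premises)} -> {conclusion}"
    return False, "no supporting rule; prediction is unexplained"

if __name__ == "__main__":
    features = {"wings": True, "feathers": True, "fins": False}
    pred = neural_component(features)
    ok, explanation = symbolic_check(features, pred)
    print(f"label={pred.label} conf={pred.confidence:.2f} "
          f"verified={ok} because {explanation}")
```

The design choice this sketch reflects is the one the abstract motivates: the symbolic layer does not replace the learned model but wraps it, so every accepted output carries an explicit inference chain that can be audited for accountability and reproducibility.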

Keywords

symbolic; neurosymbolic; explainable AI; trustworthy

Subject

Computer Science and Mathematics, Computer Science
