Preprint
Article

This version is not peer-reviewed.

A Research Programme for Continual and Neurosymbolic Learning in Epistemic Artificial Intelligence

Submitted:

13 May 2026

Posted:

15 May 2026


Abstract
Artificial intelligence has achieved enormous visibility in recent years, largely thanks to the success of deep learning and its generative AI applications. Still, current state-of-the-art models remain brittle and struggle to provide reliable predictions in settings that differ, often even marginally, from those that generate their training data. The issue is only compounded when one attempts to enable machines to learn continually from data, in imitation of humans' lifelong learning experience. While recognizing this issue under various names (e.g., 'overfitting' or 'model adaptation'), traditional machine learning seems unable to address it in radical ways. We argue that a real breakthrough requires a proper mathematical treatment of the 'epistemic' uncertainty stemming from a necessarily partial knowledge of the world, which in turn links to both continual learning from new data and the injection of knowledge in a neurosymbolic sense. Our position supports the creation of a new continual learning paradigm designed to provide worst-case guarantees on model predictions throughout the learning process, coupled with the extension of neurosymbolic AI under epistemic uncertainty, as the two main channels for reducing that uncertainty via additional knowledge.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.

