Artificial intelligence has achieved enormous visibility in recent years, mostly thanks to the success of deep learning and its generative AI applications. Still, current state-of-the-art models remain brittle and struggle to provide reliable predictions in settings that differ, often even marginally, from those under which their training data were generated. The issue is only compounded when one attempts to enable machines to continually learn from data, in imitation of humans’ lifelong learning experience. Although traditional machine learning recognizes this issue under various names (e.g., ‘overfitting’ or ‘model adaptation’), it seems unable to address it at a fundamental level. We argue that a real breakthrough requires a proper mathematical treatment of the ‘epistemic’ uncertainty stemming from our necessarily partial knowledge of the world, which in turn links to both continual learning from new data and the injection of knowledge in a neurosymbolic sense. Our position supports the creation of a new continual learning paradigm designed to provide worst-case guarantees on model predictions throughout the learning process, coupled with an extension of neurosymbolic AI that accounts for epistemic uncertainty, as the two main channels for reducing such uncertainty via additional knowledge.