This paper addresses the critical issue of trust in Artificial Intelligence systems, especially when users find it challenging to comprehend the internal decision-making processes of such systems. A relevant line of research in this respect is Theory of Mind, which involves understanding these systems as if they possessed beliefs, desires, and intentions. We focus on the last of these, intentions, and examine how producing explanations based on them can improve \emph{understandability} while enabling a better assessment of how such systems align with human values. We also review existing methods for identifying intentions in AI systems, and we conclude with a discussion of possible future directions in this line of research.