This chapter examines the ontological assumptions, epistemological challenges, and ethical implications involved in using Agentic AI to assist, guide, or potentially replace human agents in the making of moral and legal decisions. It argues that different metaphysical assumptions regarding the ontology of ethics, justice, cognition, and AI decisively shape the framework within which such systems are evaluated. In this respect, the analysis distinguishes between two levels of severity in the moral issues raised, corresponding to two distinct degrees of AI autonomy: first, AI systems operating as advisory systems with limited autonomy; and second, AI systems operating as regulatory systems with full autonomy, that is, as entities entrusted with final decision-making authority. The text adopts a critical perspective on the use of Agentic AI in contexts of moral and legal judgement, highlighting both the conceptual fragility and the epistemological challenges that accompany proposals for such applications. At the same time, it considers the conditions under which such systems could genuinely contribute to human flourishing. Particular attention is given to the risk that ostensibly advisory systems may, in practice, become tacitly regulatory, especially under the pressure of widespread assumptions concerning AI objectivity and effectiveness. The chapter's structure follows an algorithmic logic, in which a series of key questions serves as branching yes/no nodes, with each possible answer leading to a distinct line of philosophical analysis.