Background/Objectives: Artificial intelligence is reshaping clinical practice, and its effect on the physician-patient relationship requires a reconsideration of the frameworks that have shaped modern medical ethics. When physicians delegate expertise to algorithms they cannot verify, it becomes unclear who bears clinical responsibility. Methods: This article applies a theoretically grounded normative approach to explore the ethical conditions under which artificial intelligence can be integrated into clinical practice without compromising the moral foundations of medicine. The analysis is primarily based on Pellegrino and Thomasma’s concept of the internal morality of medicine and the physician’s act of profession. It further draws on Kantian ethics of human dignity, Levinasian relational ethics, virtue ethics, and Vallor’s concept of technomoral wisdom. Results: AI systems do not satisfy the conditions under which moral responsibility can be ascribed to them. Clinical moral agency lies in the capacity to bear three distinct responsibilities (epistemic, relational, and phronetic), none of which can be fulfilled by AI. The implementation of AI in healthcare must therefore occur strictly under the condition of Meaningful Human Control (MHC), understood as more than a merely technical function of human oversight over algorithmic outputs. To ensure that MHC can function as an effective and ethically grounded safeguard, we propose five normative requirements: primacy of clinical judgement, prohibition of forced automation, traceability and explainability, transparency towards patients, and clinical authority over diagnostic tools. Conclusions: Dialogue between physician and patient should remain the foundation of clinical decision-making. The proposed normative requirements aim to preserve the internal morality of medicine in a form that reconciles technological progress with established medical ethics.