Artificial intelligence is expanding rapidly in clinical research and medicines development. In response, a considerable governance literature has arisen, characterised by ambitious theoretical frameworks but persistent gaps in practical implementation. This critical analysis examines the underlying assumptions, organisational constraints, and institutional flaws that undermine responsible AI governance in healthcare and clinical research. The analysis combines findings from AI ethics, organisational governance, computational toxicology, clinical trial methodology, and patient safety science. The core thesis is that, despite significant agreement among governments, corporations, and academia on stated objectives, the responsible AI field has persistently failed to bridge the gap between normative goals and organisational realities. In clinical settings, this failure has direct consequences for patient safety. The analysis is structured around four interconnected critiques: the conceptual inadequacy of the performance-centric evaluation paradigm, which conflates statistical reliability with clinical safety; the inadequacy of explainability methods as substitutes for genuine accountability; the practical unimplementability of principled governance frameworks in most healthcare research institutions; and the political economy underlying regulatory fragmentation. Drawing on a large body of research, the review argues that closing the governance gap in clinical AI requires confronting more fundamental assumptions than current scholarship acknowledges.