This paper proposes a novel diagnostic framework for AI safety that characterizes emergent failure modes in contemporary large language models as computational psychopathologies. By mapping deficits in automatic theory of mind and passive avoidance learning—key markers of clinical psychopathy—onto the behavioral and structural tendencies of AI systems, we argue that harmful behaviors such as bias amplification, emotional manipulation, and strategic deception are not mere engineering bugs but systematic, architecture-driven disorders. We advocate for the establishment of Machine Psychology as a foundational discipline, enabling psychologically informed mitigation strategies, preventative architectural design, and rigorous diagnostic protocols to ensure the development of ethically aligned and psychologically stable artificial general intelligence.