The rapid diffusion of Artificial Intelligence (AI) across public sector institutions is reshaping governance practices and service delivery worldwide. However, most existing AI governance frameworks have been developed within technologically advanced and institutionally stable environments, limiting their applicability to developing and fragile states characterised by institutional volatility, regulatory gaps, and socio-political complexity. Addressing this gap, this study proposes an empirically grounded Ethical AI Governance Framework designed to support responsible AI adoption in public sector institutions operating within fragile governance contexts. The research adopts a sequential explanatory mixed-methods design, integrating quantitative and qualitative approaches to ensure methodological rigour and contextual validity. The quantitative phase involved a structured survey assessing organisational readiness for AI adoption, including the dimensions of institutional capacity, technological infrastructure, governance preparedness, and stakeholder trust; the results provided baseline indicators of AI readiness across public sector entities. Building on these findings, the qualitative phase employed semi-structured interviews with policymakers, digital transformation experts, and institutional stakeholders to explore deeper governance challenges surrounding AI deployment, including ethical accountability, regulatory constraints, and institutional trust. Integrating the two datasets enabled methodological triangulation and a comprehensive understanding of AI governance dynamics.
Drawing on these empirical insights, the study develops an Ethical AI Governance Framework structured around three interdependent pillars: Institutional Readiness, emphasising organisational capacity, regulatory alignment, and technical infrastructure; Ethical Oversight, ensuring transparency, accountability, fairness, and responsible algorithmic governance; and Participatory Design, promoting citizen engagement, stakeholder inclusion, and trust-building mechanisms throughout the AI lifecycle. The framework contributes to the emerging literature on AI governance in fragile and developing states by providing a context-sensitive governance model grounded in empirical evidence from the Libyan public sector. By bridging theoretical governance principles with practical policy considerations, the study offers a scalable blueprint for responsible, trustworthy, and inclusive AI-driven digital transformation in complex institutional environments.