Explainable Artificial Intelligence (XAI) has emerged as a critical enabler for the adoption of machine learning models in high-stakes domains such as healthcare. While significant progress has been made in XAI for computer vision and natural language processing, tabular data, the predominant format of electronic health records, presents unique challenges and opportunities. This systematic review provides a comprehensive analysis of XAI methods applied specifically to tabular healthcare data for classification tasks. We examine 15 representative studies published between 2025 and 2026, covering three complementary perspectives: (1) intrinsically interpretable models such as Concept and Argumentation Models (CAM), (2) post-hoc methods including LIME, SHAP, and variants such as TransLIME, and (3) evaluation frameworks that assess both model-centered fidelity and human-centered clinical alignment. Our analysis shows that SHAP remains the dominant post-hoc method for tabular healthcare data, achieving strong model fidelity but inconsistent alignment with clinical expert reasoning. Intrinsically interpretable models such as CAM offer transparency by design but require semantic feature descriptions. Emerging trends include integrating XAI with federated learning to preserve privacy, applying transfer learning to improve explanations in data-scarce settings, and deploying real-time XAI systems in occupational health. We identify critical gaps: limited adoption of XAI in automated machine learning pipelines (only 30.7% of studies), the lack of standardized evaluation metrics combining technical fidelity with clinical utility, and the predominance of single-institution validation studies. This review offers researchers and practitioners a structured roadmap for selecting, evaluating, and deploying XAI methods for trustworthy tabular healthcare classification.