Deep learning classifiers deployed in scientific and industrial settings face a fundamental yet unrecognized problem: they cannot distinguish between clean inputs and corrupted data that violates physical laws. When a medical CT scanner produces images with motion artifacts, or a reservoir sensor transmits pressure readings that violate Darcy’s law, standard neural networks process these physically impossible inputs with unwarranted confidence—a silent failure mode with potentially catastrophic consequences. Existing approaches address robustness in isolation: normalization methods adapt to noise but cannot detect physics violations; Bayesian networks quantify uncertainty without leveraging domain knowledge; physics-informed learning embeds constraints during training but offers no rejection mechanism at inference. What is missing is a unified framework that synthesizes these advances into a coherent whole.

We introduce Uncertainty-Aware Classifier with Physics-Based Rejection (UA-PBR), a novel framework that combines physics-informed filtering with Bayesian uncertainty quantification and decision-theoretic rejection. The key novelty lies in the principled integration of two orthogonal signals—PDE residuals and predictive entropy—with theoretical guarantees on the joint rejection rule. UA-PBR operates in two stages: a physics-informed autoencoder detects inputs violating governing partial differential equations using PDE residuals, while a Bayesian neural network with Monte Carlo Dropout quantifies predictive entropy. Inputs are rejected if either the physics score exceeds a threshold or the entropy surpasses an optimally selected value. We provide three theoretical guarantees: (1) the PDE residual bounds the reconstruction error; (2) a novel risk bound for joint rejection under Lipschitz continuity; and (3) existence of optimal thresholds via grid search.
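The two-signal rejection rule can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and threshold values are assumptions, and the predictive entropy is computed from the mean of the Monte Carlo Dropout forward passes, a standard choice for this setup.

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Entropy of the mean predictive distribution over T stochastic
    forward passes (Monte Carlo Dropout). mc_probs has shape (T, C)."""
    p = mc_probs.mean(axis=0)
    return float(-(p * np.log(p + 1e-12)).sum())

def ua_pbr_reject(physics_score, mc_probs, tau_phys, tau_ent):
    """Joint rejection rule: reject the input if the PDE-residual score
    exceeds tau_phys OR the predictive entropy exceeds tau_ent."""
    return physics_score > tau_phys or predictive_entropy(mc_probs) > tau_ent

# Confident prediction on a physically consistent input -> accepted
clean = np.tile([0.97, 0.02, 0.01], (20, 1))
print(ua_pbr_reject(0.01, clean, tau_phys=0.1, tau_ent=0.5))   # False

# Uncertain prediction -> rejected via the entropy channel
noisy = np.tile([0.40, 0.35, 0.25], (20, 1))
print(ua_pbr_reject(0.01, noisy, tau_phys=0.1, tau_ent=0.5))   # True
```

The disjunctive OR structure is what makes the two signals orthogonal in practice: an input is accepted only when it passes both the physics check and the uncertainty check.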
On the Darcy flow benchmark with realistic permeability fields, UA-PBR achieves statistically significant risk reduction (p < 0.0001) across 10 independent seeds. The framework maintains 89.7% acceptance rate on clean data with 99.99% accuracy on accepted samples. Under severe corruption (severity 0.9), UA-PBR reduces risk by 92.1% for Gaussian noise, 88.0% for salt-pepper noise, and 93.2% for physics-violating perturbations compared to standard CNNs. Ablation studies confirm that both components contribute synergistically: the full framework outperforms either physics-only or uncertainty-only variants. UA-PBR serves as a drop-in safety layer for any scientific ML pipeline, providing both theoretical guarantees and practical robustness for real-world deployment. The complete open-source implementation is available at: https://github.com/UA-PBR/UA-PBR.