The choice of activation function is a fundamental design decision in deep learning, yet most popular options, such as ReLU, GELU, and Swish, are static and treat all inputs uniformly. This one-size-fits-all approach can be suboptimal when data contain heterogeneous noise, where the ideal non-linearity may depend on the input's statistical context. In this paper, we introduce Bayesian Probabilistic Adaptive Sigmoidal Activation (Bayesian PASA), an activation function that adapts its behavior to input uncertainty. The method frames activation selection as a Bayesian model averaging problem, adaptively mixing sigmoidal, linear, and noise-aware behaviors. Mixing weights are derived from a variational evidence lower bound (ELBO) and regularized by a ψ-function that bounds the influence of local noise estimates. We provide theoretical analysis establishing Lipschitz continuity, gradient bounds, and convergence under standard assumptions. Due to computational constraints (Google Colab, limited epochs), we evaluate Bayesian PASA on CIFAR-100 (50 epochs, 3 seeds) and CIFAR-10-C (100 epochs, 3 seeds). Despite these limitations, Bayesian PASA achieves 76.38% accuracy on CIFAR-100, slightly outperforming ReLU (75.68%) and GELU (75.98%) under the same constrained conditions. On corrupted CIFAR-10-C, Bayesian PASA combined with Bayesian R-LayerNorm achieves an average accuracy of 53.91%, a +1.87% improvement over the ReLU+LayerNorm baseline. These results, though modest, are consistent across seeds and suggest that Bayesian PASA is a promising direction for uncertainty-aware activation functions, particularly when training resources are limited. Code is available at: https://github.com/BayesianPASA/BayesianPASA/
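To make the idea concrete, the following is a minimal NumPy sketch of the general pattern the abstract describes: an activation formed as a convex mixture of sigmoidal, linear, and noise-attenuated behaviors, where the mixing weights respond to a bounded local noise estimate. This is an illustrative assumption of ours, not the paper's actual implementation; in particular, the softmax heuristic below stands in for the ELBO-derived variational weights, and the clipping step stands in for the ψ-function regularizer. The function name and all parameters (`tau`, `psi_cap`) are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_mixture_activation(x, tau=1.0, psi_cap=2.0):
    """Illustrative adaptive activation: a convex mixture of sigmoidal,
    linear, and noise-attenuated behaviors, with mixing weights driven
    by a local noise estimate. Names and formulas here are assumptions,
    not the paper's exact method."""
    # Local noise estimate (here, simply the std of the whole batch).
    sigma = np.std(x)
    # psi-style bounding: clip the noise estimate so extreme local
    # statistics cannot dominate the mixture (stand-in for the paper's
    # psi-function regularizer).
    sigma = np.minimum(sigma, psi_cap)
    # Softmax mixing weights over three candidate behaviors; higher
    # noise shifts mass toward the smoother, bounded components.
    logits = np.array([sigma, 1.0 - sigma, 0.5 * sigma]) / tau
    w = np.exp(logits - logits.max())
    w /= w.sum()
    # Candidate behaviors: bounded sigmoidal, identity (linear), and
    # noise-attenuated (shrunk toward zero as sigma grows).
    candidates = np.stack([sigmoid(x), x, x / (1.0 + sigma)])
    # Weighted sum over the candidate axis -> same shape as x.
    return np.tensordot(w, candidates, axes=1)
```

In the paper's framing, the weights would come from a variational posterior over candidate non-linearities rather than this deterministic heuristic, but the structure (bounded noise estimate in, convex mixture of behaviors out) is the same.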