Preprint
Article

This version is not peer-reviewed.

Lightweight Explainable Physics-Informed Neural Networks by Learnable Activation Function and Contextual Modulation

Submitted: 09 May 2026

Posted: 09 May 2026


Abstract
Physics-informed neural networks (PINNs) provide a data-efficient frameworkfor solving partial differential equations, but improving their accuracy often requires enlarging multilayer perceptron backbones, which increases parameter countand computational cost. This study investigates whether PINN performance canbe improved while keeping the underlying MLP lightweight. We introduce the Cannistraci-Muscoloni-Gu Generalized Logistic-Logit Function (CMG-GLLF) as a learnable activation function for compact PINNs. To make CMG practicalfor PINN training, we reformulate its implicit logit-phase approximation into anexplicit differentiable form using a one-step Newton approximation, reducing numerical instability and computational overhead. Empirical validation on Burgers’equation shows that the explicit CMG formulation substantially outperforms boththe implicit CMG implementation and fixed tanh activation. We further show that alayer-wise CMG design achieves a favorable accuracy-parameter trade-off, addingonly two trainable parameters per hidden layer while improving over vanilla MLPsin most settings. In addition, we evaluate transponder-based contextual modula-tion, which adaptively modulates hidden-layer representations according to thenetwork input. Across Burgers, Allen-Cahn, and diffusion-reaction benchmarks,Transponder-NS consistently improves over parameter-matched vanilla MLPs andachieves the best overall ranking, with approximately order-of-magnitude errorreductions on Burgers and Allen-Cahn. Combining CMG with transponder modu-lation further improves performance on Allen-Cahn and remains competitive acrosstasks. 
Finally, parameter-level analysis on Allen-Cahn shows that the learned CMG parameters differ from the fixed tanh and that transponder modulation varies across both layers and nodes, providing explainability as to why CMG and transponder can outperform vanilla networks through depth-dependent modulation behavior. These results suggest that learnable activation functions and contextual modulation offer a practical route toward lightweight, accurate, and explainable PINNs.
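The transponder mechanism itself is not specified in this abstract; a minimal sketch of the general idea of input-conditioned modulation, assuming a simple per-node affine scale/shift computed from the raw network input (all names and the affine form are our assumptions), might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_with_modulation(x, weights, mod_weights):
    """Tiny MLP forward pass where each hidden representation h is
    modulated as h * (1 + scale(x)) + shift(x), with scale and shift
    computed linearly from the network input x (e.g., (x, t) for a PDE).
    Because the modulation depends on x, each layer's nodes are gated
    differently at different points of the domain."""
    h = x
    for (W, b), (Ws, bs, Wt, bt) in zip(weights, mod_weights):
        h = np.tanh(W @ h + b)          # ordinary hidden layer
        scale = Ws @ x + bs             # per-node gain from the input
        shift = Wt @ x + bt             # per-node offset from the input
        h = h * (1.0 + scale) + shift   # contextual modulation
    return h

# Toy instantiation: 2-D input, two hidden layers of width 8.
d_in, d_h = 2, 8
weights = [(rng.standard_normal((d_h, d_in)) * 0.5, np.zeros(d_h)),
           (rng.standard_normal((d_h, d_h)) * 0.5, np.zeros(d_h))]
mod_weights = [(rng.standard_normal((d_h, d_in)) * 0.1, np.zeros(d_h),
                rng.standard_normal((d_h, d_in)) * 0.1, np.zeros(d_h))
               for _ in range(2)]
```

Inspecting the learned `scale` and `shift` maps per layer and per node is what enables the kind of depth-dependent explainability analysis the abstract reports for Allen-Cahn.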
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.
Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2026 MDPI (Basel, Switzerland) unless otherwise stated