Article
Preserved in Portico. This version is not peer-reviewed.
Trainable Activations for Image Classification
Version 1: Received: 22 January 2023 / Approved: 26 January 2023 / Online: 26 January 2023 (02:59:29 CET)
How to cite: Pishchik, E. Trainable Activations for Image Classification. Preprints.org 2023, 2023010463. https://doi.org/10.20944/preprints202301.0463.v1
Abstract
Non-linear activation functions are one of the main components of deep neural network architectures. The choice of activation function can affect model speed, performance, and convergence. Most popular activation functions have no trainable parameters and do not change during training. We propose several activation functions, both with and without trainable parameters, each with its own advantages and disadvantages. We test the performance of these activation functions and compare the results with the widely used ReLU activation function. We assume that activation functions with trainable parameters can outperform those without them, because the trainable parameters allow the model to "select" the shape of each activation function itself; however, this strongly depends on the architecture of the deep neural network and on the activation function itself.
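To make the idea of a trainable activation concrete, the sketch below shows one possible parameterization in PyTorch: a ReLU scaled and shifted by two learnable scalars, in the spirit of the ShiLU entry in the keyword list. This is a minimal illustration under assumed definitions, not the paper's exact formulations; the class name `TrainableReLU` and the initial values of `alpha` and `beta` are choices made for the example.

```python
import torch
import torch.nn as nn


class TrainableReLU(nn.Module):
    """Illustrative trainable activation: alpha * ReLU(x) + beta.

    alpha and beta are learnable scalars updated by backpropagation
    together with the rest of the network's weights. The exact
    parameterizations studied in the paper (ShiLU, CosLU, DELU, ...)
    may differ; this is only a sketch of the general idea.
    """

    def __init__(self, alpha: float = 1.0, beta: float = 0.0):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(alpha))
        self.beta = nn.Parameter(torch.tensor(beta))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.alpha * torch.relu(x) + self.beta


# Usage: drop-in replacement for nn.ReLU in a small image classifier.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    TrainableReLU(),
    nn.Linear(128, 10),
)
```

Because `alpha` and `beta` are registered as `nn.Parameter`, any standard optimizer will update them alongside the convolutional or linear weights, which is what lets the network "select" the activation shape during training.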
Keywords
Trainable Activations, Trainable Activation Functions, CosLU, DELU, LinComb, NormLinComb, ReLUN, ScaledSoftSign, ShiLU
Subject
Computer Science and Mathematics, Computer Vision and Graphics
Copyright: This is an open-access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.