Article
Version 1
Preserved in Portico. This version is not peer-reviewed.
Backpropagation and F-adjoint
Version 1: Received: 26 April 2023 / Approved: 27 April 2023 / Online: 27 April 2023 (05:01:36 CEST)
How to cite: Boughammoura, A. Backpropagation and F-adjoint. Preprints 2023, 2023041046. https://doi.org/10.20944/preprints202304.1046.v1
Abstract
This paper presents a concise mathematical framework for investigating both the feed-forward and backward passes of an artificial neural network (ANN) during the training used to learn the model weights. Inspired by the idea of the two-step rule for backpropagation, we define the notion of an F-adjoint, aimed at a better description of the backpropagation algorithm. In particular, by introducing the notions of F-propagation and F-adjoint through a deep neural network architecture, we prove that the backpropagation associated with a cost/loss function is completely characterized by the F-adjoint of the corresponding F-propagation, relative to the partial derivative of the cost function with respect to the inputs.
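To make the two-pass structure the abstract refers to concrete, the following is a minimal sketch of standard backpropagation for a small fully connected network: the forward pass records the layer data (pre-activations and activations), and the backward pass propagates the loss gradient layer by layer. This is only an illustration of the general forward/backward mechanism that the F-propagation/F-adjoint formalism describes; the function names, the sigmoid activation, and the squared-error loss are assumptions for the example, not the paper's formal definitions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(weights, x):
    """Forward pass: record each layer's (pre-activation, activation) pair.

    record[0] holds the input; record[l] holds layer l's data. This stored
    trajectory plays the role of the forward-propagation data that the
    backward pass consumes.
    """
    record = [(None, x)]
    a = x
    for W in weights:
        z = W @ a
        a = sigmoid(z)
        record.append((z, a))
    return record

def backward(weights, record, grad_a_last):
    """Backward pass: starting from dL/da at the output, propagate the
    gradient back through the layers and return dL/dW for each layer."""
    grads = [None] * len(weights)
    delta_a = grad_a_last
    for l in range(len(weights) - 1, -1, -1):
        z, a = record[l + 1]
        a_prev = record[l][1]
        delta_z = delta_a * a * (1.0 - a)   # sigmoid'(z) = a * (1 - a)
        grads[l] = np.outer(delta_z, a_prev)
        delta_a = weights[l].T @ delta_z    # pass gradient to layer below
    return grads

# Tiny demo: a 3-4-2 network with squared-error loss L = 0.5 * ||a_L - y||^2,
# checked against a central finite difference on one weight entry.
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
x = rng.normal(size=3)
y = rng.normal(size=2)

rec = forward(Ws, x)
grads = backward(Ws, rec, rec[-1][1] - y)   # dL/da_L = a_L - y

loss = lambda ws: 0.5 * np.sum((forward(ws, x)[-1][1] - y) ** 2)
eps = 1e-6
Wp = [w.copy() for w in Ws]; Wp[0][1, 2] += eps
Wm = [w.copy() for w in Ws]; Wm[0][1, 2] -= eps
numeric = (loss(Wp) - loss(Wm)) / (2 * eps)
```

The gradient check at the end is the usual sanity test for a backward pass: the analytically propagated gradient should agree with a finite-difference estimate of the loss.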
Keywords
Artificial neural networks; Backpropagation; Two-step rule; F-propagation; F-adjoint
Subject
Computer Science and Mathematics, Applied Mathematics
Copyright: This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.