Article
Version 1
Preserved in Portico. This version is not peer-reviewed.
Deep Radial Basis Function Networks
: Received: 18 February 2024 / Approved: 21 February 2024 / Online: 21 February 2024 (04:32:47 CET)
A peer-reviewed article of this Preprint also exists.
Wurzberger, F.; Schwenker, F. Learning in Deep Radial Basis Function Networks. Entropy 2024, 26, 368.
Abstract
Radial basis function (RBF) networks are often viewed as unstable when used in multi-layered architectures and are therefore mostly used in a single-layered manner.
Universal approximation theorems for single-layered RBF networks further suggest that deeper architectures are unnecessary.
However, deep neural networks have proven their effectiveness on many different tasks.
We show that deeper RBF architectures with multiple radial basis function layers are achievable and able to learn. We introduce an initialization scheme for deep RBF networks based on k-means clustering and covariance estimation. We further show how to make use of convolutions to speed up the calculation of a Mahalanobis distance in a partially connected fashion, similar to convolutional neural networks (CNNs).
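The initialization described above can be illustrated with a minimal sketch. The following is an assumption about the general shape of such a scheme, not the authors' actual implementation: centers come from k-means on the layer's inputs, each cluster's covariance is estimated and regularized, and activations are Gaussians of the squared Mahalanobis distance. All function names here (`kmeans`, `init_rbf_layer`, `rbf_forward`) are hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: returns cluster centers and point assignments."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            pts = X[labels == j]
            if len(pts) > 0:
                centers[j] = pts.mean(axis=0)
    return centers, labels

def init_rbf_layer(X, k, reg=1e-3):
    """Initialize one RBF layer: k-means centers plus a regularized
    per-cluster inverse covariance for Mahalanobis distances."""
    centers, labels = kmeans(X, k)
    dim = X.shape[1]
    inv_covs = []
    for j in range(k):
        pts = X[labels == j]
        cov = np.cov(pts.T) if len(pts) > 1 else np.eye(dim)
        # Ridge regularization keeps the covariance invertible.
        cov = np.atleast_2d(cov) + reg * np.eye(dim)
        inv_covs.append(np.linalg.inv(cov))
    return centers, np.stack(inv_covs)

def rbf_forward(X, centers, inv_covs):
    """Gaussian RBF activations from squared Mahalanobis distances."""
    out = np.empty((len(X), len(centers)))
    for j, (c, P) in enumerate(zip(centers, inv_covs)):
        diff = X - c
        # Squared Mahalanobis distance (x - c)^T P (x - c) per sample.
        out[:, j] = np.exp(-0.5 * np.einsum('ni,ij,nj->n', diff, P, diff))
    return out
```

Stacking such layers, with each layer initialized on the previous layer's activations, is one plausible reading of the deep RBF construction; the convolutional Mahalanobis variant mentioned in the abstract would additionally restrict each unit to a local input patch.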
Finally, we evaluate our approach on image classification as well as speech emotion recognition tasks.
Our results show that deep RBF networks perform similarly to simple CNNs.
Keywords
radial basis function networks; Mahalanobis distance; partially connected
Subject
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright: This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.