Radial basis function (RBF) networks are often viewed as unstable when used in multi-layered architectures and are therefore mostly deployed as single-layer models.
Universal approximation theorems for single-layer RBF networks further suggest that deeper architectures offer no benefit.
However, deep neural networks have proven their effectiveness on many different tasks.
We show that deep RBF architectures with multiple radial basis function layers are feasible and trainable. We introduce an initialization scheme for deep RBF networks based on k-means clustering and covariance estimation. We further show how convolutions can be used to speed up the computation of Mahalanobis distances in a partially connected fashion, similar to convolutional neural networks (CNNs).
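To make the initialization idea concrete, the following is a minimal sketch (not the paper's exact scheme) of how an RBF layer could be initialized with k-means centers and per-cluster covariance estimates, with activations computed from the Mahalanobis distance; the function names and the ridge term are illustrative assumptions.

```python
import numpy as np

def init_rbf_layer(X, n_centers, n_iters=20, seed=0):
    """Hypothetical init: k-means centers plus per-cluster covariances.

    A sketch of the general idea, not the authors' exact procedure.
    """
    rng = np.random.default_rng(seed)
    # start centers at randomly chosen data points
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    for _ in range(n_iters):
        # assign each sample to its nearest center (Lloyd's algorithm)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        for k in range(n_centers):
            pts = X[labels == k]
            if len(pts):
                centers[k] = pts.mean(axis=0)
    # estimate one covariance per cluster; a small ridge (assumed
    # hyperparameter) keeps the matrix invertible
    dim = X.shape[1]
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
    labels = d.argmin(axis=1)
    covs = []
    for k in range(n_centers):
        pts = X[labels == k]
        if len(pts) > 1:
            covs.append(np.cov(pts.T) + 1e-3 * np.eye(dim))
        else:
            covs.append(np.eye(dim))
    return centers, np.stack(covs)

def rbf_activations(X, centers, covs):
    """Gaussian RBF activations from squared Mahalanobis distances."""
    acts = np.empty((len(X), len(centers)))
    for k, (mu, cov) in enumerate(zip(centers, covs)):
        diff = X - mu
        prec = np.linalg.inv(cov)
        # squared Mahalanobis distance per sample
        m2 = np.einsum('ni,ij,nj->n', diff, prec, diff)
        acts[:, k] = np.exp(-0.5 * m2)
    return acts
```

In a deep RBF network, the output of `rbf_activations` would feed the next RBF layer; the convolutional variant described above replaces the dense distance computation with a partially connected one over local patches.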
Finally, we evaluate our approach on image classification as well as speech emotion recognition tasks.
Our results show that deep RBF networks perform comparably to simple CNNs.