Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Why Dilated Convolutional Neural Networks: A Proof of Their Optimality

Version 1 : Received: 18 April 2021 / Approved: 19 April 2021 / Online: 19 April 2021 (15:00:30 CEST)

How to cite: Contreras, J.; Ceberio, M.; Kreinovich, V. Why Dilated Convolutional Neural Networks: A Proof of Their Optimality. Preprints 2021, 2021040501 (doi: 10.20944/preprints202104.0501.v1).

Abstract

One of the most effective image processing techniques is the use of convolutional neural networks, in which we combine intensity values at grid points in the vicinity of each point. To speed up computations, researchers have developed a dilated version of this technique, in which only some of these points are processed. It turns out that the most efficient case is when we select the points from a sub-grid. In this paper, we explain this empirical efficiency by proving that the sub-grid is indeed optimal, in some reasonable sense. To be more precise, we prove that for all reasonable optimality criteria, the optimal subset of the original grid is either a sub-grid or a sub-grid-like set.
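To make the idea concrete, here is a minimal sketch (not from the paper) of a dilated 2-D convolution: instead of combining all intensity values in a kernel-sized neighborhood, it samples only the points of a sub-grid with spacing equal to the dilation factor. The function name and the averaging kernel are illustrative choices, not notation from the article.

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation=1):
    """2-D 'valid' correlation whose kernel taps are spread over a
    sub-grid with spacing `dilation` (dilation=1 is the ordinary case)."""
    kh, kw = kernel.shape
    # Effective receptive field once taps are spaced `dilation` apart.
    eh = (kh - 1) * dilation + 1
    ew = (kw - 1) * dilation + 1
    H, W = image.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Combine intensity values only at the sub-grid points.
            patch = image[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

img = np.arange(36, dtype=float).reshape(6, 6)
k = np.ones((3, 3)) / 9.0  # simple averaging kernel (illustrative)
print(dilated_conv2d(img, k, dilation=2).shape)  # → (2, 2)
```

With dilation 2, the 3×3 kernel covers a 5×5 neighborhood while still reading only 9 points — the sub-grid selection whose optimality the paper establishes.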

Subject Areas

convolutional neural networks; dilated neural networks; optimality


