Preprint (Communication), Version 1. Preserved in Portico. This version is not peer-reviewed.

Systematic Model Complexity Reduction by Elimination of Irrelevant Layers in Convolutional Neural Networks

Version 1: Received: 6 June 2023 / Approved: 7 June 2023 / Online: 7 June 2023 (05:42:05 CEST)

How to cite: Gadepally, K.C.; Dhal, S.B.; Kalafatis, S.; Nowka, K. Systematic Model Complexity Reduction by Elimination of Irrelevant Layers in Convolutional Neural Networks. Preprints 2023, 2023060492. https://doi.org/10.20944/preprints202306.0492.v1

Abstract

Neural networks were long treated as black boxes. Previous work, using deconvolutional networks, has uncovered which aspects of an image matter to convolutional layers at different positions in a network. In this paper, we examine how well a convolutional neural network performs when the convolutional layers that are relatively unimportant for a particular image (i.e., layers for which the image does not produce one of the strongest activations) are skipped during training, validation, and testing.
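
The per-image skipping idea lends itself to a simple gating sketch. The following PyTorch snippet is a minimal illustration, not the authors' implementation: it assumes a layer's importance for an image is scored by the mean absolute activation the layer produces, that blocks preserve tensor shape so an identity path can stand in for a skipped layer, and that a fixed threshold tau decides which images skip which blocks. The model, the threshold, and the scoring rule are all illustrative assumptions.

    import torch
    import torch.nn as nn

    class SkippableCNN(nn.Module):
        """Toy CNN whose conv blocks can be skipped per image (a sketch,
        assuming activation-magnitude scoring with threshold `tau`)."""

        def __init__(self, tau: float = 0.1, num_classes: int = 10):
            super().__init__()
            self.tau = tau  # assumed importance threshold
            self.stem = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
            # Shape-preserving blocks, so a skipped block reduces to identity.
            self.blocks = nn.ModuleList([
                nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
                for _ in range(4)
            ])
            self.head = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, num_classes)
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.stem(x)
            for block in self.blocks:
                y = block(x)
                # Per-image importance score: mean |activation| over C, H, W.
                score = y.abs().mean(dim=(1, 2, 3))            # shape: (batch,)
                # Hard gate (not differentiated): 1 keeps the block's output,
                # 0 routes the image around the block via the identity path.
                keep = (score > self.tau).float().view(-1, 1, 1, 1)
                x = keep * y + (1.0 - keep) * x
            return self.head(x)

    model = SkippableCNN()
    logits = model(torch.randn(8, 3, 32, 32))
    print(logits.shape)  # torch.Size([8, 10])

Shape-preserving blocks are what make per-image skipping workable here: because the block's input and output tensors match, an identity path can replace a skipped layer for some images in a batch while others still use it.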

Keywords

strongest activations; image complexity; convolution

Subject

Engineering, Electrical and Electronic Engineering
