Preprint Article, Version 1, Preserved in Portico. This version is not peer-reviewed.

PrimeNet: Adaptive Multi-Layer Deep Neural Structure for Enhanced Feature Selection in Early Convolution Stage

Version 1 : Received: 28 July 2021 / Approved: 30 July 2021 / Online: 30 July 2021 (12:25:45 CEST)

A peer-reviewed article of this Preprint also exists.

Khan, F.U.; Aziz, I. PrimeNet: Adaptive Multi-Layer Deep Neural Structure for Enhanced Feature Selection in Early Convolution Stage. Appl. Sci. 2022, 12, 1842.

Abstract

Very deep neural networks sometimes suffer from ineffective backpropagation of gradients through their full depth. In contrast, the strong performance of shallower multilayer neural structures shows their ability to strengthen gradient signals in the early stages of training, which are then easily backpropagated for global loss correction. Shallow neural structures are also a good starting point for extracting robust feature characteristics from the input. In this research, a shallow deep neural structure called PrimeNet is proposed. PrimeNet dynamically identifies and encourages high-quality visual indicators in the input for use by the subsequent deep network layers, and increases the gradient signals in the lower stages of the training pipeline. In addition, layerwise training is performed with locally generated errors: the gradient is not backpropagated to previous layers, and the hidden-layer weights are updated during the forward pass, making the structure a backpropagation-free variant. PrimeNet obtains state-of-the-art results on various image datasets, attaining the dual objective of (1) a compact, dynamic deep neural structure that (2) eliminates the problem of backward locking. The PrimeNet unit is proposed as an alternative to traditional convolution and dense blocks for faster and more memory-efficient training, outperforming previously reported results for adaptive methods in parallel and multilayer deep neural systems.
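To make the layerwise, locally supervised training idea concrete, the sketch below shows one way a block can be trained from a local error signal so that no gradient crosses block boundaries and weights are updated during the forward sweep. This is a minimal illustration under assumptions, not the paper's implementation; the names LocalBlock and aux_head, the auxiliary classifier design, and the optimizer settings are all hypothetical.

```python
# Minimal sketch of layerwise training with locally generated errors (assumed design,
# not the authors' code). Each block has its own auxiliary head and optimizer; the
# input is detached so no gradient is backpropagated to previous layers.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalBlock(nn.Module):
    """Conv block trained by a local auxiliary loss only."""
    def __init__(self, in_ch, out_ch, num_classes):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        # Auxiliary head producing the local (layerwise) error signal.
        self.aux_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(out_ch, num_classes)
        )
        self.opt = torch.optim.SGD(self.parameters(), lr=0.01)

    def forward(self, x, target):
        # Detach the input: the local loss gradient never reaches earlier blocks,
        # which removes the backward-locking dependency between layers.
        x = x.detach()
        h = self.conv(x)
        loss = F.cross_entropy(self.aux_head(h), target)
        self.opt.zero_grad()
        loss.backward()   # backpropagates only inside this block
        self.opt.step()   # weights are updated during the forward sweep
        return h.detach(), loss.item()

# Usage: stack blocks and run one forward sweep; each block trains itself locally.
blocks = nn.ModuleList([LocalBlock(3, 32, 10), LocalBlock(32, 64, 10)])
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
for blk in blocks:
    x, local_loss = blk(x, y)
```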

Keywords

adaptive computing; dynamic deep neural structure; adaptive convolution; dynamic training

Subject

Computer Science and Mathematics, Algebra and Number Theory
